Dear Knot Resolver users,
I have an early notification for those who rely on our upstream package
repositories for Debian 9 and Ubuntu 16.04.
Ubuntu 16.04: We'll no longer provide packages after April 2021 when the
distribution reaches end of life.
Debian 9: Knot Resolver 5.x is the last supported major version for
Debian 9. Once we release a new major version, we'll no longer provide
packages for Debian 9. We expect to release our next major version in
the second half of 2021.
Packages for other distributions remain unaffected and are supported as
usual.
Thanks for understanding.
--
Tomas Krizek
PGP: 4A8B A48C 2AED 933B D495 C509 A1FB A5F7 EF8C 4869
Hello Tomas,
You are right, it was just those silly configuration mistakes. Everything works now :)
Thank you so much!
Best regards,
--Manuel
________________________________
From: Tomas Krizek
Sent: Friday, 11 December 2020 13:25
To: Knot Resolver Users List; Urueña-Pascual Manuel
Subject: Re: [knot-resolver-users] Is policy.rpz a non-chain action?
Hi, the use-case you're trying to achieve is possible, but there are
some issues with your configuration.
On 10/12/2020 17.29, Urueña-Pascual Manuel wrote:
> policy.add(policy.rpz(policy.DENY_MSG('domain blocked'),
> '/etc/knot-resolver/blocklist.rpz', true))
> policy.add(policy.rpz(policy.PASS(), '/etc/knot-resolver/allowlist.rpz', true))
You want to specify "policy.PASS" without the brackets.
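For clarity, the corrected rule would look like this (a minimal sketch, reusing the file path from the configuration below):

```lua
-- policy.PASS is the action value itself; it must not be called with ()
policy.add(policy.rpz(policy.PASS, '/etc/knot-resolver/allowlist.rpz', true))
```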
> and these are the RPZ zones:
>
> $ cat '/etc/knot-resolver/allowlist.rpz':
> www.google.com 600 IN CNAME rpz-passthrough.
> www.bing.com 600 IN CNAME rpz-passthrough.
When you provide kresd these RPZ zones, it will complain:
[poli] RPZ /tmp/kr_dev/etc/knot-resolver/allowlist.rpz:1: CNAME with
custom target in RPZ is not supported yet (ignored)
This is because you're trying to use an unsupported CNAME target; see the
table in our docs [1]. What you're probably looking for is "rpz-passthru."
instead. However, since you're using a separate allowlist with the
policy.PASS action (which is your case), "." would also work here.
You should also be able to combine the blocklist and allowlist into just
a single rpz file, using policy.DENY_MSG("...") and controlling whether
domain is blocked ("CNAME .") or allowed ("CNAME rpz-passthru.") with
the RPZ rules themselves.
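As a sketch, a combined file using the domains from the original message might look like this ("CNAME ." triggers the configured DENY_MSG action, "CNAME rpz-passthru." lets the query through):

```
examplemalwaredomain.com    600 IN CNAME .
*.examplemalwaredomain.com  600 IN CNAME .
www.google.com              600 IN CNAME rpz-passthru.
www.bing.com                600 IN CNAME rpz-passthru.
```

It would then be loaded with a single policy.rpz(policy.DENY_MSG('domain blocked'), ...) rule instead of two.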
[1] -
https://knot-resolver.readthedocs.io/en/stable/modules-policy.html#response…
--
Tomas Krizek
PGP: 4A8B A48C 2AED 933B D495 C509 A1FB A5F7 EF8C 4869
Hello,
I'm trying to set up a knot-resolver 5.2.0-1 instance where all DNS queries return a fixed IPv4 address, except that domains on an allow list should return their real IPv4 address (by using a policy.PASS action) and domains on a block list should return a SERVFAIL response (by using a policy.DROP action).
I'm using Response Policy Zones (RPZ), and with an explicit list of domains to redirect the traffic to (plus the explicit allow and block lists) everything works fine. However, when trying to redirect all queries by default using a policy.all rule, the allow list no longer works and all queries (except blocked ones) are answered with the fixed IPv4 address.
This is my '/etc/knot-resolver/kresd.conf':
-- turns off DNSSEC validation
trust_anchors.remove('.')
-- Network interface configuration
net.listen('127.0.0.1', 53, { kind = 'dns' })
net.listen('10.127.0.20', 53, { kind = 'dns' })
net.ipv6 = false
net.listen('/tmp/kres.control', nil, { kind = 'control'})
-- Load useful modules
modules = {
'hints > iterate', -- Load /etc/hosts and allow custom root hints
'stats', -- Track internal statistics
'predict', -- Prefetch expiring/frequent records
}
-- Cache size
cache.size = 100 * MB
policy.add(policy.rpz(policy.DENY_MSG('domain blocked'), '/etc/knot-resolver/blocklist.rpz', true))
--policy.add(policy.rpz(policy.ANSWER(), '/etc/knot-resolver/redirectlist.rpz', true))
policy.add(policy.rpz(policy.PASS(), '/etc/knot-resolver/allowlist.rpz', true))
policy.add(policy.all(policy.ANSWER({ [kres.type.A] = { rdata=kres.str2ip('10.127.0.10'), ttl=300 } })))
and these are the RPZ zones:
$ cat '/etc/knot-resolver/allowlist.rpz':
www.google.com 600 IN CNAME rpz-passthrough.
www.bing.com 600 IN CNAME rpz-passthrough.
$ cat /etc/knot-resolver/blocklist.rpz
examplemalwaredomain.com 600 IN CNAME .
*.examplemalwaredomain.com 600 IN CNAME .
Thus, www.examplemalwaredomain.com is blocked:
$ dig www.examplemalwaredomain.com @127.0.0.1
; <<>> DiG 9.16.1-Ubuntu <<>> www.examplemalwaredomain.com @127.0.0.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 26657
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; QUESTION SECTION:
;www.examplemalwaredomain.com. IN A
;; AUTHORITY SECTION:
www.examplemalwaredomain.com. 10800 IN SOA www.examplemalwaredomain.com. nobody.invalid. 1 3600 1200 604800 10800
;; ADDITIONAL SECTION:
explanation.invalid. 10800 IN TXT "domain blocked"
And any other domain is redirected:
$ dig nic.cz @127.0.0.1
; <<>> DiG 9.16.1-Ubuntu <<>> nic.cz @127.0.0.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 60891
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;nic.cz. IN A
;; ANSWER SECTION:
nic.cz. 300 IN A 10.127.0.10
But the domains in the allow list are also redirected :(
$ dig www.google.com @127.0.0.1
; <<>> DiG 9.16.1-Ubuntu <<>> www.google.com @127.0.0.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 49284
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;www.google.com. IN A
;; ANSWER SECTION:
www.google.com. 300 IN A 10.127.0.10
Interestingly, if I add a policy.PASS rule with a list of explicit domains before the policy.all one, it does work properly:
policy.add(policy.suffix(policy.PASS, policy.todnames({'example.com', 'example.net'})))
$ dig www.example.com @127.0.0.1
; <<>> DiG 9.16.1-Ubuntu <<>> www.example.com @127.0.0.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 59322
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;www.example.com. IN A
;; ANSWER SECTION:
www.example.com. 120 IN A 93.184.216.34
Thus, it looks as if the allowed domains first hit the allow RPZ list, but then they hit the policy.ANSWER policy anyway. Is this the expected behaviour? Is there any way to implement such a default redirect but for a list of allowed or blocked domains?
Regards,
--Manuel
The knot-resolver documentation only appears to support Linux. Any plans to
make the resolver available, or useful, on systems other than Linux? While I
could put Linux in a VM, or BE, and use the resolver from there, it wouldn't
really be very effective. Any insight into using this on any of the BSDs
would be greatly appreciated (I'm currently using unbound alongside knot
authoritative).
Thanks!
--Chris
Hi,
I’ve stumbled across knot-resolver because I have an issue with my current DNS solution.
What is the best way to block a large number of domains?
I've been trying to work with the below, but it's not functioning.
Part of /etc/knot-resolver/kresd.conf
-- Domain Blocking
policy.add(
policy.rpz(policy.DENY_MSG('domain blocked by your IT department'),'/etc/knot-resolver/blacklist.rpz', true))
policy.add (
policy.rpz(policy.DENY, '/etc/knot-resolver/blacklist.rpz'))
/etc/knot-resolver/backlist.rpz
007bets.com,
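For comparison, an RPZ file uses DNS zone-file syntax (one record per line, no trailing commas), so a working blocklist covering the domain above might look like this sketch, loaded by a single policy.rpz rule:

```
007bets.com   600 IN CNAME .
*.007bets.com 600 IN CNAME .
```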
Rrds,
Mike
Hello,
recently we upgraded a few servers (CentOS 7 and Raspbian) from 5.1 to 5.2
and all seems to be working fine, but on Raspbian (yes, I know, not
recommended or supported) I can see an issue with metrics, and in the log I
can see this error:
Nov 13 10:21:33 dns-cache-2 kresd[15232]: map() error while connecting to
control socket /run/knot-resolver/control/H#003: socket:connect: No such
file or directory (ignoring this socket)
Nov 13 10:21:33 dns-cache-2 kresd[15232]: map() error while connecting to
control socket /run/knot-resolver/control/H: socket:connect: No such file
or directory (ignoring this socket)
Nov 13 10:21:33 dns-cache-2 kresd[15232]: map() error while connecting to
control socket /run/knot-resolver/control/: socket:connect: Connection
refused (ignoring this socket)
The error is triggered by opening "ip:8453/metrics", which shows an almost
empty response:
# TYPE resolver_latency histogram
resolver_latency_count 0.000000
resolver_latency_sum 0.000000
root@dns-cache-2:/# apt -qq list knot* --installed
knot-resolver-module-http/unknown,now 5.2.0-1 all [installed]
knot-resolver-release/unknown,now 1.7-1 all [installed]
knot-resolver/unknown,now 5.2.0-1 armhf [installed]
root@dns-cache-2:/#
root@dns-cache-2:/# uname -a
Linux dns-cache-2 4.19.97-v7l+ #1294 SMP Thu Jan 30 13:21:14 GMT 2020
armv7l GNU/Linux
Could it be a bug, or something related to mixing armv7l and armhf? I know of
the limitation, as we discussed this setup on the Raspberry Pi some time ago.
I just want to share my experience, as it may be useful, and if you have any
tips I will appreciate them.
Dear Knot Resolver users,
Knot Resolver 5.2.0 has been released!
One of the notable features is a new DNS-over-HTTPS implementation
which is more scalable and stable than the old one. It also has fewer
dependencies and a simpler configuration.
Another new feature is experimental eXpress Data Path (XDP) support for
UDP. With support from both the network card and the kernel, it can
provide superior performance and lower latency for UDP answers.
Some of the improvements and bugfixes required a few backward
incompatible changes, mainly regarding control sockets or module API.
Please refer to our upgrading guide for details:
https://knot-resolver.readthedocs.io/en/v5.2.0/upgrading.html#to-5-2
Improvements
------------
- doh2: add native C module for DNS-over-HTTPS (#600, !997)
- xdp: add server-side XDP support for higher UDP performance (#533,
!1083)
- lower default EDNS buffer size to 1232 bytes (#538, #300, !920);
see https://dnsflagday.net/2020/
- net: split the EDNS buffer size into upstream and downstream (!1026)
- lua-http doh: answer to /dns-query endpoint as well as /doh (!1069)
- improve resiliency against UDP fragmentation attacks (disable PMTUD)
(!1061)
- ta_update: warn if there are differences between statically configured
keys and upstream (#251, !1051)
- human readable output in interactive mode was improved (!1020)
- doc: generate info page (!1079)
- packaging: improve sysusers and tmpfiles support (!1080)
Bugfixes
--------
- avoid an assert() error in stash_rrset() (!1072)
- fix emergency cache locking bug introduced in 5.1.3 (!1078)
- migrate map() command to control sockets; fix systemd integration
(!1000)
- fix crash when sending back errors over control socket (!1000)
- fix SERVFAIL while processing forwarded CNAME to a sibling zone (#614,
!1070)
Incompatible changes
--------------------
- see upgrading guide:
https://knot-resolver.readthedocs.io/en/v5.2.0/upgrading.html#to-5-2
- minor changes in module API
- control socket API commands have to be terminated by "\n"
- graphite: default prefix now contains instance identifier (!1000)
- build: meson >= 0.49 is required (!1082)
- planned changes in future versions:
https://knot-resolver.readthedocs.io/en/v5.2.0/upgrading.html#upcoming-chan…
Full changelog:
https://gitlab.nic.cz/knot/knot-resolver/raw/v5.2.0/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-5.2.0.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-5.2.0.tar.xz.asc
Documentation:
https://knot-resolver.readthedocs.io/en/v5.2.0/
--
Tomas Krizek
PGP: 4A8B A48C 2AED 933B D495 C509 A1FB A5F7 EF8C 4869
Hello team,
most probably we hit issue !1070 "fix SERVFAIL in *FORWARD modes with
certain CNAME setup" and I would like to ask when we can expect the release
of knot-resolver 5.2.0 (with the fix), or whether there will be a hot fix in 5.1.x?
example of affected domain: www.action.foundation.total
Thank you.
These two patches modify Knot Resolver's build process to check for
Texinfo utilities during configuration and, if they're found, to build
and install documentation in Info format (the standard for GNU
distributions such as Guix[0]) as well as HTML. (If the Texinfo
utilities aren't found, the build works as before.)
The first patch implements these changes and additionally updates the
manual to list Texinfo as an optional dependency.
The second patch isn't required but genericizes a few references
within the manual to documentation in specific formats, in the
interest of readability. Note I've left unchanged the "build-html-doc"
ref-ID to help ensure external links to the documentation continue to
function.
[0] https://guix.gnu.org/
----------
With these patches applied you'll find makeinfo generates a large
number of warnings of the form
knotresolver.texi:11811: warning: @anchor should not appear in @deffn
These warnings are harmless but result from the way Breathe processes
C declarations. It generates output with this structure (here for the
"kr_request_selected" macro):
<desc_signature ids="c.kr_request_selected" is_multiline="True">
<desc_signature_line add_permalink="True" xml:space="preserve">
<target ids="resolve_8h_1a9ba8c6cef807cdf580e3249675a2589c"
names="resolve_8h_1a9ba8c6cef807cdf580e3249675a2589c"></target>
<desc_name xml:space="preserve">kr_request_selected</desc_name>
<desc_parameterlist xml:space="preserve">
<desc_parameter noemph="True" xml:space="preserve"><emphasis>req</emphasis></desc_parameter>
</desc_parameterlist>
</desc_signature_line>
</desc_signature>
The issue is the "target" node's being placed within
desc_signature_line. I assume this is done for convenience when
generating HTML, but it also leads to this Texinfo output:
@deffn {Define} @anchor{lib resolve_8h_1a9ba8c6cef807cdf580e3249675a2589c}@anchor{25f}kr_request_selected (req)
which is invalid, since (as the warning says) @anchor is not supposed
to appear within @deffn.
Sphinx's Texinfo writer doesn't account for this, since there's no way
(that I can see) for normal ReST input to generate a "target" tag in
this position. However it's not clear Breathe is at fault here either,
since I can't find anything that defines where target is allowed to
appear or which tags desc_signature_line is meant to contain.
A straightforward approach to fixing this is implemented by this patch
to Sphinx, which causes its Texinfo writer to reorder target nodes
embedded within function descriptions:
diff --git a/sphinx/writers/texinfo.py b/sphinx/writers/texinfo.py
index 5ad2831dd..bd76acdc3 100644
--- a/sphinx/writers/texinfo.py
+++ b/sphinx/writers/texinfo.py
@@ -1383,6 +1383,11 @@ class TexinfoTranslator(SphinxTranslator):
if objtype != 'describe':
for id in node.get('ids'):
self.add_anchor(id, node)
+ # embedded anchors need to go in front
+ for n in node.traverse(nodes.target, include_self=False):
+ if isinstance(n, nodes.target):
+ n.walkabout(self)
+ n.parent.remove(n)
# use the full name of the objtype for the category
try:
domain = self.builder.env.get_domain(node.parent['domain'])
However, whether this is the correct approach or if a fix would better
be applied to Breathe isn't clear to me. Engaging with the community
of either project means signing up for services I'd rather not use, so
there's little I can do to carry this forward. Perhaps someone else is
interested?
---
Simon South
simon(a)simonsouth.net
Simon South (2):
doc: generate Info manual
doc: use non-format-specific references to documentation
doc/build.rst | 11 +++++++----
doc/meson.build | 7 +++++++
scripts/install-doc-info.sh | 8 ++++++++
scripts/make-doc.sh | 6 ++++++
4 files changed, 28 insertions(+), 4 deletions(-)
create mode 100644 scripts/install-doc-info.sh
--
2.28.0
Hi,
I am using knot-resolver with the default configuration on Arch Linux. I am
not able to resolve some domains (like costco.ca). It works fine with other
resolvers like Cloudflare/Google. Is there a way to debug this?
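One way to investigate, assuming kresd 5.x, is the policy.DEBUG_ALWAYS action (mentioned in the 5.1.0 release notes further below), which logs verbose resolution details only for matching queries; a sketch for kresd.conf:

```lua
-- log detailed resolution steps, but only for costco.ca and its subdomains
policy.add(policy.suffix(policy.DEBUG_ALWAYS, policy.todnames({'costco.ca'})))
```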
Thanks,
Bala
Hello,
I'd like to have the output of socat commands in my shell script.
Currently, I connect with socat interactively, for example:
socat - UNIX-CONNECT:/run/knot-resolver/control/1
There, I can issue cache.stats() and read the result.
How can I capture the output of these commands in a shell script?
Appreciate your help.
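A non-interactive variant should work here: pipe the command into socat and capture standard output (requires the running daemon; socket path taken from the message above), e.g.:

```shell
#!/bin/sh
# send one command to kresd's control socket and capture the reply
stats=$(echo 'cache.stats()' | socat - UNIX-CONNECT:/run/knot-resolver/control/1)
echo "$stats"
```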
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
Dear Knot Resolver users,
we would like to inform you about certain changes in upcoming Knot Resolver 5.2.0:
- - Users of Control sockets API need to terminate each command sent to resolver with newline character (ASCII \n). Correct usage: "cache.stats()\n". Newline terminated commands are accepted by all resolver versions >= 1.0.0.
- - Human readable output from Control sockets is not stable and changes from time to time. Users who need machine readable output for scripts should use Lua function "tojson()" to convert Lua values into standard JSON format instead of attempting to parse the human readable output. For example API call "tojson(cache.stats())\n" will return JSON string with "cache.stats()" results represented as dictionary. Function "tojson()" is available in all resolver versions >= 1.0.0.
- - DNS-over-HTTPS (DoH) served with the http module will be marked as legacy and won’t receive any more bugfixes. A more reliable and scalable DoH module will be available instead. The new DoH module will only support HTTP/2 over TLS.
- - New releases since October 2020 will contain changes for DNS Flag Day 2020. Please double-check your firewall, it has to allow DNS traffic on UDP and also TCP port 53. See https://dnsflagday.net/2020/ for more information.
This text with links to further documentation can be found here:
https://knot-resolver.readthedocs.io/en/latest/upgrading.html#upcoming-chan…
We plan to use the same URL to publish planned changes in not-yet-released versions.
To make our version numbers easier to understand we have documented how we version new releases:
https://knot-resolver.readthedocs.io/en/latest/NEWS.html#version-numbering
Feedback is more than welcome. Do not hesitate to reach out on the usual channels:
https://www.knot-resolver.cz/support/
- --
Petr Špaček @ CZ.NIC
-----BEGIN PGP SIGNATURE-----
iQIzBAEBCgAdFiEEvibrucvgWbORDKNbzo3WoaUKIeQFAl9+9kcACgkQzo3WoaUK
IeR7Jg/5AT4QLP1ZJBR8Vkoa5DirCnTBrBx3GPkiW+hVEXZPl55AXWMu54i9ZTuY
4R38+I1PypAM924n5xk98AaX7iRbISVZ87kOEfeJOKR2tzB8TiYwzufl2Y3PUYo8
IlazzcUyNFcQw4ARE3BqiQF6DLKl/s8U812XMEUlRdryE7lURWVEJyo711XNLPPy
6PWuGpsNboMR2V6z/7yrX/i+/0XGpxoSL8S9Fv4hQrOJB3c0lIZNSfaj23GywDao
ZgkqlxwBgPkOGAhSIq3r5Xqitl9fTh2Gb1565Lrbj+4OY5WbL8YqaNArZMp3poUD
T2QCH1bUypa0EP+smzdcriuFDWwl3L0zwODsnH2ofLAnosWeGfxtHXHI5W7rwskF
gAAZJJe+UhxSeBzwr01r6YwZbDYlHiDA9+AkGkYGjYwQmfLCTftXRhlqmJ3BvpeR
Nr97D8lSuct9XugIN2pUJfR1cVIr40ViiO3TG4m2eL2dHExELB4OOf/ZCOTX5dQk
KJuQSspz5zQvVFWZZh17L6iT9WUdgwmi0TkvR8aMmPlr75vokFvioQMAOHfHgGtq
7s10W9TtRGX5o9ioHtTLrDaqEC/0P79bMTVU5HCap1dSCzgk1AYXtUDu4AVc0pWC
Aui8tih9ykESC8rPUaEBc6wpR3wgKSPclfIWgiJira4xB8eRCUs=
=9lwy
-----END PGP SIGNATURE-----
Trying to use Knot DNS + Knot Resolver for a small business network.
So far it's working great, however there's one function that's still
needed: short hostname resolution suffixing.
Meaning for 'server.example.com' I can `dig server.example.com` and get the
desired result, but wondering how to attach the 'example.com' suffix so I
can simply query `dig server` and have Knot auto-append 'example.com'
suffix.
I believe I saw it in the documentation but I can't find it again for the
life of me...
Thanks in advance,
-KM
Dear Knot Resolver users,
Knot Resolver 5.1.3 has been released!
Improvements
------------
- capabilities are no longer constrained when running as root (!1012)
- cache: add percentage usage to cache.stats() (#580, !1025)
- cache: add number of cache entries to cache.stats() (#510, !1028)
- aarch64 support again, as some systems still didn't work (!1033)
- support building against Knot DNS 3.0 (!1053)
Bugfixes
--------
- tls: fix compilation to support net.tls_sticket_secret() (!1021)
- validator: ignore bogus RRSIGs present in insecure domains (!1022,
#587)
- build if libsystemd version isn't detected as integer (#592, !1029)
- validator: more robust reaction on missing RRSIGs (#390, !1020)
- ta_update module: fix broken RFC5011 rollover (!1035)
- garbage collector: avoid keeping multiple copies of cache (!1042)
Full changelog:
https://gitlab.nic.cz/knot/knot-resolver/raw/v5.1.3/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-5.1.3.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-5.1.3.tar.xz.asc
Documentation:
https://knot-resolver.readthedocs.io/en/v5.1.3/
--
Tomas Krizek
PGP: 4A8B A48C 2AED 933B D495 C509 A1FB A5F7 EF8C 4869
I am a newbie with kresd and Lua.
The IPv4 format in the documentation, 127.0.0.1 as "\127\0\0\1", is new to me.
I mean that "fe80::56e6::f188:75d6" as "\fe80\\56e6\f188\75d6" is not the right way (it does not work for me).
I will try kres.str2ip('........')
On 13. 08. 20 8:35, knot-resolver-users-bounces(a)lists.nic.cz wrote:
> what is the correct syntax for an IPv6 address in policy.ANSWER?
Hello,
and thank you for pointing out our insufficient documentation.
I've attempted to clarify it in the following change:
https://gitlab.nic.cz/knot/knot-resolver/-/merge_requests/1037/diffs?view=p…
Does the text on the right side (green part) answer your question? If not, what is unclear?
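For the archives, a sketch of an AAAA answer using kres.str2ip, in the same style as the policy.ANSWER examples elsewhere in this thread (domain and address are placeholders; note the address must be valid IPv6 textual form, with at most one "::"):

```lua
policy.add(policy.suffix(
  policy.ANSWER({ [kres.type.AAAA] = { rdata = kres.str2ip('fe80::f188:75d6'), ttl = 300 } }),
  policy.todnames({'example.net'})))
```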
--
Petr Špaček @ CZ.NIC
Hello all,
I have a question regarding cache size and its usage. We are running
knot-resolver on a Raspberry Pi 4 (as a secondary cache). Previously we had a
256MB cache as tmpfs and hit an issue where the process crashed due to "No
space left on device (workdir '/var/lib/knot-resolver')". I doubled the size
to 512MB, but the same issue occurs. My question is why it crashes even
though the logs show cache usage of about 80 percent (the same as in the
256MB case). I thought that kres-cache-gc takes care of the cache size and
prevents the cache from filling up to the point that the main process
crashes. Should we increase the cache size, or did we hit some bug? Any other
suggestion as to what could cause the cache to be full and not "cleared"?
Thank you for any tips, and have a nice day. Petr Kyselak
Config:
/etc/fstab
tmpfs /var/cache/knot-resolver tmpfs
rw,size=512m,uid=knot-resolver,gid=knot-resolver,nosuid,nodev,noexec,mode=0700
0 00
-- Cache size
cache.size = cache.fssize() - 10*MB
Logs:
Jun 4 16:32:05 dns-cache-2 kres-cache-gc[548]: Cache analyzed in 1.41
secs, 1038386 records, limit category is 100.
Jun 4 16:32:05 dns-cache-2 kres-cache-gc[548]: 0 records to be deleted
using 0.00 MBytes of temporary memory, 0 records skipped due to memory
limit.
Jun 4 16:32:05 dns-cache-2 kres-cache-gc[548]: Deleted 0 records (0
already gone) types
Jun 4 16:32:05 dns-cache-2 kres-cache-gc[548]: It took 0.00 secs, 0
transactions (OK)
Jun 4 16:32:05 dns-cache-2 kres-cache-gc[548]: Usage: 81.32% (428077056 /
526385152)
Jun 4 16:32:05 dns-cache-2 kres-cache-gc[548]: Cache analyzed in 1.41
secs, 1038420 records, limit category is 100.
Jun 4 16:32:05 dns-cache-2 kres-cache-gc[548]: 0 records to be deleted
using 0.00 MBytes of temporary memory, 0 records skipped due to memory
limit.
Jun 4 16:32:05 dns-cache-2 kres-cache-gc[548]: Deleted 0 records (0
already gone) types
Jun 4 16:32:05 dns-cache-2 kres-cache-gc[548]: It took 0.00 secs, 0
transactions (OK)
Jun 4 16:32:05 dns-cache-2 kres-cache-gc[548]: Usage: 81.32% (428077056 /
526385152)
Jun 4 16:32:05 dns-cache-2 kres-cache-gc[548]: Cache analyzed in 1.41
secs, 1038420 records, limit category is 100.
Jun 4 16:32:05 dns-cache-2 kres-cache-gc[548]: 0 records to be deleted
using 0.00 MBytes of temporary memory, 0 records skipped due to memory
limit.
Jun 4 16:32:05 dns-cache-2 kres-cache-gc[548]: Deleted 0 records (0
already gone) types
Jun 4 16:32:05 dns-cache-2 kres-cache-gc[548]: It took 0.00 secs, 0
transactions (OK)
Jun 4 16:32:05 dns-cache-2 kres-cache-gc[548]: Usage: 81.32% (428077056 /
526385152)
Jun 4 16:32:05 dns-cache-2 kres-cache-gc[548]: Cache analyzed in 1.41
secs, 1038421 records, limit category is 100.
Jun 4 16:32:32 dns-cache-2 kresd[672]: [cache] MDB_BAD_TXN, probably
overfull
Jun 4 16:32:32 dns-cache-2 kresd[672]: [cache] clearing error, falling back
Jun 4 16:32:32 dns-cache-2 kresd[672]: [cache] MDB_BAD_TXN, probably
overfull
Jun 4 16:32:32 dns-cache-2 kresd[675]: [cache] MDB_BAD_TXN, probably
overfull
Jun 4 16:32:32 dns-cache-2 kresd[675]: [cache] clearing error, falling back
Jun 4 16:32:32 dns-cache-2 kresd[675]: [cache] MDB_BAD_TXN, probably
overfull
Jun 4 16:32:32 dns-cache-2 kresd[675]: [cache] clearing failed to get
./.cachelock; retry later
Jun 4 16:32:32 dns-cache-2 kresd[675]: [cache] clearing because overfull,
ret = -17
Jun 4 16:32:32 dns-cache-2 kresd[675]: [cache] MDB_BAD_TXN, probably
overfull
Jun 4 16:32:32 dns-cache-2 kresd[675]: [cache] clearing error, falling back
Jun 4 16:32:32 dns-cache-2 kresd[675]: [cache] MDB_BAD_TXN, probably
overfull
Jun 4 16:32:32 dns-cache-2 kresd[675]: [cache] clearing failed to get
./.cachelock; retry later
Jun 4 16:32:32 dns-cache-2 kresd[675]: [cache] clearing because overfull,
ret = -17
Jun 4 16:32:32 dns-cache-2 kresd[675]: [cache] MDB_BAD_TXN, probably
overfull
Jun 4 16:32:32 dns-cache-2 kresd[675]: [cache] clearing error, falling back
Jun 4 16:32:32 dns-cache-2 kresd[675]: [cache] MDB_BAD_TXN, probably
overfull
Jun 4 16:32:32 dns-cache-2 kresd[675]: [cache] clearing failed to get
./.cachelock; retry later
Jun 4 16:32:32 dns-cache-2 kresd[675]: [cache] clearing because overfull,
ret = -17
…
Jun 4 16:32:32 dns-cache-2 systemd[1]: kresd(a)1.service: Main process
exited, code=killed, status=11/SEGV
Jun 4 16:32:32 dns-cache-2 kresd[675]: [cache] clearing failed to get
./.cachelock; retry later
Jun 4 16:32:32 dns-cache-2 kresd[675]: [cache] clearing because overfull,
ret = -17
Jun 4 16:32:32 dns-cache-2 kresd[675]: [cache] MDB_BAD_TXN, probably
overfull
Jun 4 16:32:32 dns-cache-2 kresd[675]: [cache] clearing error, falling back
Jun 4 16:32:32 dns-cache-2 kresd[675]: [cache] MDB_BAD_TXN, probably
overfull
Jun 4 16:32:32 dns-cache-2 kresd[675]: [cache] clearing failed to get
./.cachelock; retry later
Jun 4 16:32:32 dns-cache-2 kresd[675]: [cache] clearing because overfull,
ret = -17
…
Jun 4 16:32:32 dns-cache-2 systemd[1]: kresd(a)1.service: Failed with result
'signal'.
…
Jun 4 16:32:32 dns-cache-2 systemd[1]: kresd(a)2.service: Main process
exited, code=killed, status=11/SEGV
Jun 4 16:32:32 dns-cache-2 kresd[675]: [cache] clearing failed to get
./.cachelock; retry later
Jun 4 16:32:32 dns-cache-2 kresd[675]: [cache] clearing because overfull,
ret = -17
Jun 4 16:32:32 dns-cache-2 kresd[675]: [cache] MDB_BAD_TXN, probably
overfull
Jun 4 16:32:32 dns-cache-2 kresd[675]: [cache] clearing error, falling back
…
Jun 4 16:32:32 dns-cache-2 systemd[1]: kresd(a)2.service: Failed with result
'signal'.
Jun 4 16:32:32 dns-cache-2 kresd[675]: [cache] clearing because overfull,
ret = -17
Jun 4 16:32:32 dns-cache-2 kresd[675]: [cache] MDB_BAD_TXN, probably
overfull
Jun 4 16:32:32 dns-cache-2 kresd[675]: [cache] clearing error, falling back
…
Jun 4 16:32:32 dns-cache-2 kresd[675]: [cache] MDB_BAD_TXN, probably
overfull
Jun 4 16:32:32 dns-cache-2 kresd[675]: [cache] clearing because overfull,
ret = -28
Jun 4 16:32:32 dns-cache-2 systemd[1]: kresd(a)1.service: Service
RestartSec=100ms expired, scheduling restart.
Jun 4 16:32:32 dns-cache-2 systemd[1]: kresd(a)1.service: Scheduled restart
job, restart counter is at 1.
Jun 4 16:32:32 dns-cache-2 systemd[1]: kresd(a)2.service: Service
RestartSec=100ms expired, scheduling restart.
Jun 4 16:32:32 dns-cache-2 systemd[1]: kresd(a)2.service: Scheduled restart
job, restart counter is at 1.
Jun 4 16:32:32 dns-cache-2 kresd[30052]: [system] error while loading
config: /usr/lib/knot-resolver/sandbox.lua:400: can't open cache path
'/var/cache/knot-resolver'; working directory '/var/lib/knot-resolver'; No
space left on device (workdir '/var/lib/knot-resolver')
Jun 4 16:32:32 dns-cache-2 systemd[1]: kresd(a)1.service: Main process
exited, code=exited, status=1/FAILURE
Jun 4 16:32:32 dns-cache-2 systemd[1]: kresd(a)1.service: Failed with result
'exit-code'.
Jun 4 16:32:32 dns-cache-2 kresd[30051]: [system] error while loading
config: /usr/lib/knot-resolver/sandbox.lua:400: can't open cache path
'/var/cache/knot-resolver'; working directory '/var/lib/knot-resolver'; No
space left on device (workdir '/var/lib/knot-resolver')
Jun 4 16:32:32 dns-cache-2 systemd[1]: kresd(a)2.service: Main process
exited, code=exited, status=1/FAILURE
Jun 4 16:32:32 dns-cache-2 systemd[1]: kresd(a)2.service: Failed with result
'exit-code'.
Hello all,
I am trying to configure my local PC (an always-on server) to serve DNS
resolution for all other computers in my LAN. I installed and enabled the
service and attempted configuration. Knot-resolver correctly resolves DNS
queries on the localhost machine, but fails to reply to queries from other
LAN machines.
Can someone please share a configuration I should use to enable LAN
resolving?
I've tried various net.listen() and net() settings in the config file, to no
avail. I've studied this page
https://knot-resolver.readthedocs.io/en/stable/daemon-bindings-net_server.h…
but I have no idea which option I should put in the config to enable my
server PC (192.168.1.44) to allow all PCs in the LAN (192.168.1.xxx) to use
knot-resolver.
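A minimal sketch of the rule usually involved, using the addresses from this message (other machines can only reach kresd if it listens on the LAN-facing address, and the host firewall must also allow UDP/TCP port 53):

```lua
-- keep localhost, and additionally listen on the LAN interface
net.listen('127.0.0.1', 53, { kind = 'dns' })
net.listen('192.168.1.44', 53, { kind = 'dns' })
```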
I'd appreciate it if someone with knowledge would help a newbie like me.
Thank you!
pioruns
Hi all,
Is there a way to dynamically reload the knot-resolver configuration
without stopping knot-resolver?
e.g I have to add this in the config file
extraTrees = policy.todnames({'foo.example.net'})
policy.add(policy.suffix(policy.FLAGS({'NO_CACHE'}), extraTrees))
policy.add(policy.suffix(policy.FORWARD({'192.168.0.15'}), extraTrees))
Is there a sort of CLI command to take it into account?
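One approach, assuming a default systemd setup: the control socket accepts the same Lua statements as kresd.conf, so a rule can be injected into a running instance without a restart, e.g.:

```shell
# inject one of the policy rules from above into running kresd instance 1
echo "policy.add(policy.suffix(policy.FORWARD({'192.168.0.15'}), policy.todnames({'foo.example.net'})))" \
  | socat - UNIX-CONNECT:/run/knot-resolver/control/1
```

Note that each kresd instance has its own control socket, and a change made this way does not persist across restarts; the config file still needs updating.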
Regards
______________________________
*Pierrick CHOVELON I *Systems Engineer
*Advanced Software I IT*
+33 4 77 43 27 05
pierrick.chovelon(a)savoye.com
*SAVOYE - 8, rue de la Richelandière - 42100 - Saint-Etienne - France*
*Savoye is recruiting! <https://careers.savoye.com/#!/fr/index> - *www.savoye.com
Dear Knot Resolver users,
Knot Resolver 5.1.0 has been released!
We also have two important announcements:
1) Ubuntu and Debian repositories
Ubuntu and Debian users have to manually set up the upstream repository
once again by following the instructions at:
https://www.knot-resolver.cz/download/
We apologize for the inconvenience, tests are now in place to prevent
this from happening again.
2) The upcoming major version will contain reworked
hints/policy/prefill/rebinding/view modules and related functionalities.
Please participate in the following survey to ensure we do not forget
about your particular use-case:
https://www.knot-resolver.cz/survey/
It will help us to improve Knot Resolver. Thank you!
Our upstream repositories now also provide packages for CentOS 8,
Ubuntu 20.04 and Fedora 32.
And finally, the release notes for version 5.1.0:
Improvements
------------
- cache garbage collector: reduce filesystem operations when idle (!946)
- policy.DEBUG_ALWAYS and policy.DEBUG_IF for limited verbose logging
(!957)
- daemon: improve TCP query latency under heavy TCP load (!968)
- add policy.ANSWER action (!964, #192)
- policy.rpz support fake A/AAAA (!964, #194)
Bugfixes
--------
- cache: missing filesystem support for pre-allocation is no longer
fatal (#549)
- lua: policy.rpz() no longer watches the file when watch is set to
false (!954)
- fix a strict aliasing problem that might've led to "miscompilation"
  (!962)
- fix handling of DNAMEs, especially signed ones (#234, !965)
- lua resolve(): correctly include EDNS0 in the virtual packet (!963)
Custom modules might have been confused by that.
- do not leak bogus data into SERVFAIL answers (#396)
- improve random Lua number generator initialization (!979)
- cache: fix CNAME caching when validation is disabled (#472, !974)
- cache: fix CNAME caching in policy.STUB mode (!974)
- prefill: fix crash caused by race condition with resolver startup
(!983)
- webmgmt: use javascript scheme detection for websockets' protocol
(#546)
- daf module: fix del(), deny(), drop(), tc(), pass() functions
(#553, !966)
- policy and daf modules: expose initial query when evaluating
postrules (#556)
- cache: fix some cases of caching answers over 4 KiB (!976)
- docs: support sphinx 3.0.0+ (!978)
Incompatible changes
--------------------
- minor changes in module API; see upgrading guide:
https://knot-resolver.readthedocs.io/en/stable/upgrading.html
Full changelog:
https://gitlab.labs.nic.cz/knot/knot-resolver/raw/v5.1.0/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-5.1.0.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-5.1.0.tar.xz.asc
Documentation:
https://knot-resolver.readthedocs.io/en/v5.1.0/
--
Tomas Krizek
PGP: 4A8B A48C 2AED 933B D495 C509 A1FB A5F7 EF8C 4869
Hello,
today, after many days (months) of trouble-free operation, knot-resolver
4.2.2 suddenly stopped returning answers to queries.
It is used by several thousand clients, so there was not much time for
experiments, and a reboot of the whole machine helped... but only for
about 15 minutes. Then it again stopped returning records, as if it did
not have them. Another reboot, and it kept working for about 1.5 hours
before it stopped communicating again. That was getting serious, so we
redirected a lot of DNS queries on our routers to 8.8.8.8. In the
meantime another restart, and now it has been working without problems
for almost 2 hours.
I know that knot-resolver version 5.0.1 already exists, but I would like
to know whether this problem is known (I just had not run into it until
today) and fixed in a later version, or whether it is an unrecorded
anomaly. In the logs I found only this:
Apr 27 18:17:56 knot-resolver systemd[1]: kresd(a)4.service: Watchdog
timeout (limit 10s)!
Apr 27 18:17:56 knot-resolver systemd[1]: kresd(a)4.service: Killing
process 382 (kresd) with signal SIGABRT.
Apr 27 18:17:56 knot-resolver systemd[1]: kresd(a)4.service: Main process
exited, code=killed, status=6/ABRT
Apr 27 18:17:56 knot-resolver systemd[1]: kresd(a)4.service: Failed with
result 'watchdog'.
Apr 27 18:17:56 knot-resolver systemd[1]: kresd(a)4.service: Service
RestartSec=100ms expired, scheduling restart.
Apr 27 18:17:56 knot-resolver systemd[1]: kresd(a)4.service: Scheduled
restart job, restart counter is at 1.
I assume this unpleasant reaction of knot-resolver was caused by some
"bad" query, either accidental or deliberate.
We have now removed the redirection to 8.8.8.8 again, and it almost
immediately stopped responding:
dig @192.168.100.100 -t AAAA www.seZNAm.cz
; <<>> DiG 9.11.5-P4-5.1-Debian <<>> @192.168.100.100 -t AAAA www.seZNAm.cz
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 64547
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;www.seZNAm.cz. IN AAAA
;; Query time: 0 msec
;; SERVER: 192.168.100.100#53(192.168.100.100)
;; WHEN: Po dub 27 22:32:04 CEST 2020
;; MSG SIZE rcvd: 42
Thank you for your help
Zdeněk Janiš
Hi,
the package signing key for our upstream repositories has expired and
Debian and Ubuntu users with existing installation need to manually
update the knot-resolver-release package to fix the issue:
wget https://secure.nic.cz/files/knot-resolver/knot-resolver-release.deb
dpkg -i knot-resolver-release.deb
apt update
Otherwise, apt update will report the following errors and no new updates
will be installed.
W: GPG error:
http://download.opensuse.org/repositories/home:/CZ-NIC:/knot-resolver-lates…
InRelease: The following signatures were invalid: EXPKEYSIG
74062DB36A1F4009 home:CZ-NIC OBS Project <home:CZ-NIC@build.opensuse.org>
E: The repository
'http://download.opensuse.org/repositories/home:/CZ-NIC:/knot-resolver-lates…
InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is
therefore disabled by default.
I apologize for the inconvenience.
--
Tomas Krizek
PGP: 4A8B A48C 2AED 933B D495 C509 A1FB A5F7 EF8C 4869
Hello,
Would it be possible to maintain the Fedora 29 knot-resolver repo for a few
more months? It recently dropped off the mirrors and it's unclear if some
F29 machines will be updated for a few more months.
Thanks!
R
Dear Knot Resolver users,
Knot Resolver 5.0.0 has been released!
This version has a few backward incompatible changes. Most notably, the
network interface configuration has been moved back to the configuration
file and configuration via systemd sockets is no longer supported.
Unfortunately, this change requires a manual modification of your
configuration file if you've previously configured your network
interfaces via systemd sockets. Please follow the instructions printed
out during package upgrade and check out our upgrading guide:
https://knot-resolver.readthedocs.io/en/stable/upgrading.html
We apologize for the inconvenience.
Incompatible changes
--------------------
- see upgrading guide:
https://knot-resolver.readthedocs.io/en/stable/upgrading.html
- systemd sockets are no longer supported (#485)
- net.listen() throws an error if it fails to bind; use freebind option
if needed
- control socket location has changed (!922)
- -f/--forks is deprecated (#529, !919)
Improvements
------------
- logging: control-socket commands don't log unless --verbose (#528)
- use SO_REUSEPORT_LB if available (FreeBSD 12.0+)
- lua: remove dependency on lua-socket and lua-sec, use lua-http and
  cqueues (#512, #521, !894)
- lua: remove dependency on lua-filesystem (#520, !912)
- net.listen(): allow binding to non-local address with freebind
option (!898)
- cache: pre-allocate the file to avoid SIGBUS later (not macOS;
!917, #525)
- lua: be stricter around nonsense returned from modules (!901)
- user documentation was reorganized and extended (!900, !867)
- multiple config files can be used with --config/-c option (!909)
- lua: stop trying to tweak lua's GC (!201)
- systemd: add SYSTEMD_INSTANCE env variable to identify different
instances (!906)
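As a small illustration of the freebind option mentioned in the list above, a listener can be bound to an address that is not (yet) configured locally; this is a hedged sketch and the address is an illustrative placeholder:

```lua
-- Hedged sketch: bind a DNS listener on an address that need not exist
-- on any local interface yet; 192.0.2.53 is a placeholder address.
net.listen('192.0.2.53', 53, { kind = 'dns', freebind = true })
```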
Bugfixes
--------
- correctly use EDNS(0) padding in failed answers (!921)
- policy and daf modules: fix postrules and reroute rules (!901)
- renumber module: don't accidentally zero-out request's .state (!901)
Full changelog:
https://gitlab.labs.nic.cz/knot/knot-resolver/raw/v5.0.0/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-5.0.0.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-5.0.0.tar.xz.asc
Documentation:
https://knot-resolver.readthedocs.io/en/v5.0.0/
--
Tomas Krizek
PGP: 4A8B A48C 2AED 933B D495 C509 A1FB A5F7 EF8C 4869
Hello Petr,
many thanks for your hint regarding modules.unload('bogus_log')
I'm still facing crashes of the kresd@1 service without any obvious reason.
Today I made a second attempt to upgrade Knot Resolver to version 4.2.2, and
the upgrade seems to be OK; the service starts without any difficulties. It
ran as expected for more than 3.5 hours, but unfortunately it then started
to write the same messages to the log as reported in my previous post, and
the service restarted by itself. Every restart causes a new service PID in
/var/cache/knot-resolver/tty; the old one is not finished correctly, and the
whole operating system visibly slows down. I don't know how to produce an
exact service crashdump file, but I can provide any log messages if needed.
I unloaded bogus_log.
The service has now been running for just a while. My usual DNS query
throughput is around 700 qps, and I have only one PID in
/var/cache/knot-resolver/tty as usual.
Regards,
--
Smil Milan Jeskyňka Kazatel
---------- Original e-mail ----------
From: knot-resolver-users-request(a)lists.nic.cz
To: knot-resolver-users(a)lists.nic.cz
Date: 25. 10. 2019 11:53:00
Subject: knot-resolver-users Digest, Vol 93, Issue 1
"Send knot-resolver-users mailing list submissions to
knot-resolver-users(a)lists.nic.cz
To subscribe or unsubscribe via the World Wide Web, visit
https://lists.nic.cz/mailman/listinfo/knot-resolver-users
or, via email, send a message with subject or body 'help' to
knot-resolver-users-request(a)lists.nic.cz
You can reach the person managing the list at
knot-resolver-users-owner(a)lists.nic.cz
When replying, please edit your Subject line so it is more specific
than "Re: Contents of knot-resolver-users digest..."
Today's Topics:
1. Re: DNSSEC validation failure logging on Centos 7 Knot
Resolver, version 4.2.0 (Milan Jeskynka Kazatel)
2. Re: DNSSEC validation failure logging on Centos 7 Knot
Resolver, version 4.2.0 (Petr Špaček)
3. HTTP module after upgrade from Knot DNS Resolver, version
2.3.0 to Knot Resolver, version 4.2.0 (Milan Jeskynka Kazatel)
----------------------------------------------------------------------
Message: 1
Date: Tue, 22 Oct 2019 14:27:30 +0200 (CEST)
From: "Milan Jeskynka Kazatel" <KazatelM(a)seznam.cz>
To: <knot-resolver-users(a)lists.nic.cz>
Subject: Re: [knot-resolver-users] DNSSEC validation failure logging
on Centos 7 Knot Resolver, version 4.2.0
Message-ID: <1XT.6dmr.1vQ4HW9UcyY.1ThlMo(a)seznam.cz>
Content-Type: text/plain; charset="utf-8"
Hello Team,
I found it, it is described in the Upgrading guide,
DNSSEC validation is now turned on by default. If you need to disable it,
see Trust anchors and DNSSEC
(https://knot-resolver.readthedocs.io/en/stable/daemon.html#dnssec-config).
***
Since version 4.0, DNSSEC validation is enabled by default. This is a secure
default and should not be changed unless absolutely necessary.
Options in this section are intended only for expert users and normally
should not be needed.
If you really need to turn DNSSEC off and are okay with lowering security of
your system by doing so, add the following snippet to your configuration
file.
-- turns off DNSSEC validation
trust_anchors.remove('.')
***
Anyway, if it is enabled by default, how do I prevent the "DNSSEC validation
failure" messages from spamming the log and increasing the I/O load on the
system?
The service is now in an unstable condition for me. My kresd@1 is crashing
and restarting repeatedly. Please, any advice?
I modified the server name and the domain, but otherwise this is live log
output.
Oct 22 14:02:51 dnstestserver kresd[15877]: DNSSEC validation failure
example.com DNSKEY
Oct 22 14:02:58 dnstestserver kresd[15877]: DNSSEC validation failure
example.com DNSKEY
Oct 22 14:03:08 dnstestserver kresd[15877]: DNSSEC validation failure
example.com DNSKEY
Oct 22 14:03:18 dnstestserver systemd[1]: kresd(a)1.service watchdog timeout
(limit 10s)!
Oct 22 14:03:22 dnstestserver systemd[1]: kresd(a)1.service: main process
exited, code=killed, status=6/ABRT
Oct 22 14:03:22 dnstestserver systemd[1]: Unit kresd(a)1.service entered
failed state.
Oct 22 14:03:22 dnstestserver systemd[1]: kresd(a)1.service failed.
Oct 22 14:03:22 dnstestserver systemd[1]: kresd(a)1.service holdoff time over,
scheduling restart.
Oct 22 14:03:22 dnstestserver systemd[1]: Cannot add dependency job for unit
kresd.service, ignoring: Unit not found.
Oct 22 14:03:22 dnstestserver systemd[1]: Stopped Knot Resolver daemon.
Oct 22 14:03:22 dnstestserver systemd[1]: Starting Knot Resolver daemon...
Oct 22 14:04:07 dnstestserver kresd[16468]: [http] created new ephemeral TLS
certificate
Oct 22 14:04:07 dnstestserver systemd[1]: Started Knot Resolver daemon.
Oct 22 14:04:07 dnstestserver kresd[16468]: [ta_update] refreshing TA for .
Oct 22 14:04:07 dnstestserver kresd[16468]: [ta_update] key: 20326 state:
Valid
Oct 22 14:04:07 dnstestserver kresd[16468]: [ta_update] next refresh for .
in 24 hours
Oct 22 14:04:09 dnstestserver kresd[16468]: DNSSEC validation failure
example.com DNSKEY
...
Best regards.
--
Smil Milan Jeskyňka Kazatel
---------- Original e-mail ----------
From: Milan Jeskynka Kazatel <KazatelM(a)seznam.cz>
To: knot-resolver-users(a)lists.nic.cz
Date: 22. 10. 2019 13:33:46
Subject: DNSSEC validation failure logging on Centos 7 Knot Resolver,
version 4.2.0
"Hello Team,
I would like to know if the "DNSSEC validation failure logging" is enabled
by DEFAULT in version 4.2.0 on CentOS 7.
I do not have any explicit call for this module, as is described in the
documentation like this: modules.load('bogus_log'); nevertheless, I'm facing
a huge number of reports in the system log regarding DNSSEC validation
failure somedomainname. DNSKEY
In the configuration I'm using the 'http' module and the 'stats' module; can
that be relevant?
kresd.conf
-- Load Useful modules
modules = {
'policy', -- Block queries to local zones/bad sites
'view', -- Handle requests by source IP
'stats', -- Track internal statistics
'hints', -- Add static records to resolver
}
-- load HTTP module with defaults (self-signed TLS cert)
modules.load('http')
http.config()
How can I disable DNSSEC validation failure logging?
best regards,
--
Smil Milan Jeskyňka Kazatel
"
Hello Team,
I would like to ask why my new version of Knot Resolver produces many log
records of "DNSSEC validation failure szn-broken-dnssec.cz. DNSKEY".
I tried to compare results with my second resolver on Unbound 1.9.4, where
I'm able to receive an answer with the command
#unbound-control lookup szn-broken-dnssec.cz
but no answer via the dig command #dig szn-broken-dnssec.cz
The same result via dig occurs on the server with the new Knot Resolver 4.2.2.
As was mentioned in your documentation
https://knot-resolver.readthedocs.io/en/stable/modules.html#dnssec-validation-failure-logging
I tried https://dnsviz.net/d/szn-broken-dnssec.cz/dnssec/
and the result is BOGUS,
so should I be worried about this message in my log?
Thanks for any answer,
best regards
--
Smil Milan Jeskyňka Kazatel
I wonder if this is possible. The Knot Resolver certainly looks
powerful, and I use it on my Omnia Router of course. But the
documentation
<https://knot-resolver.readthedocs.io/en/stable/index.html>, which I have
perused and read quickly, hasn't helped me understand this, alas.
It leaves me with a couple of interesting questions:
1. How are nameservers configured? Interestingly, my running instance of
   kresd is on the Omnia, and it receives nameservers via a DHCP request
   from my ISP, I imagine, though I don't see them in /etc/resolv.conf.
2. Is it possible to configure a number of nameservers so that they are
   all queried (akin to dnsmasq's --all-servers) and the first
   affirmative response is returned?
My interest is acutely related to:
https://superuser.com/questions/1505755/can-one-configure-name-resolution-t…
And I'd happily use kresd on my local machine(s) as well as on my LAN
DNS (The Omnia) to help resolve names on my .lan while on a VPN!
Regards,
Bernd.
Now that Firefox blocks 3rd-party cookies by default, many sites try to
hide the fact that a cookie is 3rd-party by using CNAMEs.
% kdig +short A ftn.fortuneo.fr
ftn-fr.eulerian.net.
ftn.eulerian.net.
109.232.194.56
I want to block all the requests to *.eulerian.net.
policy.add(policy.suffix(policy.DENY, {todname('eulerian.net.')}))
This does not work since, to quote the documentation, "The policy module
currently only looks at whole DNS requests. The rules won't be
re-applied e.g. when following CNAMEs."
% kdig A ftn.eulerian.net
;; ->>HEADER<<- opcode: QUERY; status: NXDOMAIN; id: 42477
...
ftn.eulerian.net. 10800 IN SOA ftn.eulerian.net. nobody.invalid. 1 3600 1200 604800 10800
% kdig A ftn.fortuneo.fr
;; ->>HEADER<<- opcode: QUERY; status: NOERROR; id: 59262
...
ftn.fortuneo.fr. 600 IN CNAME ftn-fr.eulerian.net.
ftn-fr.eulerian.net. 5799 IN CNAME ftn.eulerian.net.
ftn.eulerian.net. 5799 IN A 109.232.194.56
This seriously limits the usefulness of policy.DENY. What are the
possible solutions?
Hello Knot Resolver team,
I tried to upgrade to the latest stable version of Knot Resolver on CentOS 7,
where I'm facing a configuration issue with the HTTP module.
I used it in version 2.3.0 with this configuration:
-- Load Useful modules
modules = {
'policy', -- Block queries to local zones/bad sites
'view', -- Handle requests by source IP
'stats', -- Track internal statistics
'hints', -- Add static records to resolver
http = {
host = '10.0.0.1',
port = 8053,
cert = false,
}
}
and it worked as expected; I could reach /stats/ from the browser.
Unfortunately, after the upgrade to version 4.2.0 I'm not able to correctly
start the module, even though I follow the migration hints in the
documentation. I still receive a startup failure message:
Oct 04 11:32:09 dns systemd[1]: Starting Knot Resolver daemon...
Oct 04 11:32:54 dns kresd[9540]: error: /usr/lib64/knot-resolver/kres_
modules/http.lua:35: attempt to call global 'moduledir' (a nil value)
Oct 04 11:32:54 dns kresd[9540]: [system] failed to load module 'http'
Oct 04 11:32:54 dns kresd[9540]: error occured here (config filename:lineno
is at the bottom, if config is involved):
Oct 04 11:32:54 dns kresd[9540]: stack traceback:
Oct 04 11:32:54 dns kresd[9540]: [C]: in function 'load'
Oct 04 11:32:54 dns kresd[9540]: /etc/knot-resolver/kresd.conf:14: in main
chunk
Oct 04 11:32:54 dns kresd[9540]: ERROR: No such file or directory
Oct 04 11:32:54 dns systemd[1]: kresd(a)1.service: main process exited, code=
exited, status=1/FAILURE
Oct 04 11:32:54 dns systemd[1]: Failed to start Knot Resolver daemon.
Current configuration is:
-- interfaces
...
net.listen('10.0.0.1', 8453, { kind = 'webmgmt' })
-- load HTTP module with defaults (self-signed TLS cert)
modules.load('http')
http.config()
-- Load Useful modules
modules = {
'policy', -- Block queries to local zones/bad sites
'view', -- Handle requests by source IP
'stats', -- Track internal statistics
'hints', -- Add static records to resolver
}
Where should the path to the http module be set? What did I do wrong?
Regards,
--
Smil Milan Jeskyňka Kazatel
Dear Knot Resolver users,
Knot Resolver 4.2.1 has been released!
Note for Debian users: If you have previously installed
knot-resolver-dbgsym package on Debian, please remove it and install
knot-resolver-dbg instead.
Bugfixes
--------
- rebinding module: fix handling some requests, respect ALLOW_LOCAL flag
- fix incorrect SERVFAIL on cached bogus answer for +cd request (!860)
(regression since 4.1.0 release, in less common cases)
- prefill module: allow a different module-loading style (#506)
- validation: trim TTLs by RRSIG's expiration and original TTL (#319,
#504)
- NS choice algorithm: fix a regression since 4.0.0 (#497, !868)
- policy: special domains home.arpa. and local. get NXDOMAIN (!855)
Improvements
------------
- add compatibility with (future) libknot 2.9
Full changelog:
https://gitlab.labs.nic.cz/knot/knot-resolver/raw/v4.2.1/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-4.2.1.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-4.2.1.tar.xz.asc
Documentation:
https://knot-resolver.readthedocs.io/en/v4.2.1/
--
Tomas Krizek
PGP: 4A8B A48C 2AED 933B D495 C509 A1FB A5F7 EF8C 4869
Hello Knot resolver folks, and especially the packagers,
I've noticed that the CentOS 7 packages published by CZNIC ship with
/etc/knot-resolver writable by the "knot-res" user (the directory mode
is 0775).
It seems that the directory is writable, because kresd (running as user
knot-res) runs a lua script to manage the /etc/knot-resolver/root.keys file.
My sysadmin mind is suspicious of this setup. If any other modules of
kresd have a bug, they have the potential to modify config files in
/etc/knot-resolver. My thinking is that the root.keys file should be
installed in /var/cache/knot-resolver, and that is writable by "knot-res".
Could someone please explain to me why the config directory is writable
by an unprivileged user? Is there a good reason I'm not seeing for this
setup?
Regards,
Anand Buddhdev
Hello,
Could someone help me to understand the behavior of Knot Resolver?
Can I somehow process a change in the list of static records for Knot
Resolver, like a command for reloading the configuration?
If I have in the kresd.conf file a module
modules = {
'hints > iterate', -- Add static records to resolver
...
...
and the list is located at
-- Load static records
hints.add_hosts('/etc/knot-resolver/static_records.txt')
how can I reload a change in the static_records.txt list for the kresd@1
service?
Or should I always restart the daemon kresd@1:
systemctl restart kresd@1
Thanks for any advice,
Regards,
--
Smil Milan Jeskyňka Kazatel
Hello,
first note the change to the correct mailing-list.
On 7/29/19 10:20 AM, Balakrishnan B wrote:
> I am trying to add wildcard static hints to catch all local domains like
> below. But does not seem to work.
>
> hints['nextcloud.local'] = '127.0.0.1' # This works fine
> hints['*.local'] = '127.0.0.1'
> hints['.local'] = '127.0.0.1'
>
> DNSMasq supports this like https://stackoverflow.com/a/22551303
>
> Is there way to do this in knot?
No, I don't think there's a good way currently. The hints module only
supports exact names with A and AAAA and corresponding PTR, like in
/etc/hosts files. With our [RPZ] you can do wildcards, but there's no
support for positive answers (so you need to do NXDOMAIN or similar
denials).
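A hedged sketch of the RPZ approach described above; the file path and zone contents are illustrative assumptions, not from the thread:

```lua
-- Load a hypothetical RPZ file that denies everything under .local;
-- in RPZ, a wildcard rule like `*.local CNAME .` yields NXDOMAIN.
policy.add(policy.rpz(policy.DENY, '/etc/knot-resolver/local-deny.rpz'))
```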
BTW, note that .local is reserved for [mDNS] protocol, in particular
kresd might never get to such requests because it's "only" a DNS server:
> 3. Name resolution APIs and libraries SHOULD recognize these names as
> special and SHOULD NOT send queries for these names to their
> configured (unicast) caching DNS server(s).
>
[RPZ]
https://knot-resolver.readthedocs.io/en/stable/modules.html#c.policy.rpz
[mDNS] https://tools.ietf.org/html/rfc6762#section-22.1
--Vladimir
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
Dear Knot Resolver users,
Knot Resolver 4.1.0 has been released!
This is a minor release with a couple of improvements and many bugfixes,
including security fixes for CVE-2019-10190 and CVE-2019-10191.
Packages for supported distributions are now available from
https://www.knot-resolver.cz/download/
Highlights
==========
- security fixes, we encourage all users to upgrade as soon as possible
- new garbage collector improves cache utilization on busy machines
- ARM64 (aarch64) is now experimentally supported, please report issues
- compatibility with non-standard DoH clients was improved
Full release notes:
Knot Resolver 4.1.0 (2019-07-10)
================================
Security
--------
- fix CVE-2019-10190: do not pass bogus negative answer to client (!827)
- fix CVE-2019-10191: do not cache negative answer with forged
  QNAME+QTYPE (!839)
Improvements
------------
- new cache garbage collector is available and enabled by default (#257)
  This improves cache efficiency on big installations.
- DNS-over-HTTPS: unknown HTTP parameters are ignored to improve
  compatibility with non-standard clients (!832)
- DNS-over-HTTPS: answers include `access-control-allow-origin: *`,
  which allows JavaScript to use the DoH endpoint (!823)
- http module: support named AF_UNIX stream sockets (again)
- aggressive caching is disabled on minimal NSEC* ranges (!826)
  This improves cache effectiveness with DNSSEC black lies and also
  accidentally works around a bug in proofs-of-nonexistence from F5
  BIG-IP load-balancers.
- aarch64 support, even on kernels with ARM64_VA_BITS >= 48 (#216, !797)
  This is done by working around a LuaJIT incompatibility.
  Please report bugs.
- lua tables for C modules are more strict by default, e.g. `nsid.foo`
  will throw an error instead of returning `nil` (!797)
- systemd: a basic watchdog is now available and enabled by default (#275)
Bugfixes
--------
- TCP to upstream: fix unlikely case of sending out wrong message length
  (!816)
- http module: fix problems around maintenance of ephemeral certs (!819)
- http module: also send intermediate TLS certificate to clients,
  if available and luaossl >= 20181207 (!819)
- send EDNS with SERVFAILs, e.g. on validation failures (#180, !827)
- prefill module: avoid crash on empty zone file (#474, !840)
- rebinding module: avoid excessive iteration on blocked attempts (!842)
- rebinding module: fix crash caused by race condition (!842)
- rebinding module: log each blocked query only in verbose mode (!842)
- cache: automatically clear stale reader locks (!844)
Module API changes
------------------
- lua modules may omit casting parameters of layer functions (!797)
--
Petr Špaček @ CZ.NIC
Hello,
this is a pre-release announcement. We are going to release Knot Resolver
4.1.0 with two security fixes on Wednesday 2019-07-10. This release
includes fixes for CVE-2019-10190 and CVE-2019-10191.
We advise all users to upgrade to version 4.1.0 as soon as possible.
Version 4.1.0 is fully compatible with version 4.0.0 and no manual
steps are required during the upgrade.
Pre-built software packages and source code will be made available from
https://www.knot-resolver.cz/download/
during Wednesday 2019-07-10.
Customers with formal support contracts with CZ.NIC can receive fixes
immediately.
Software packages provided by Linux distributions (i.e. not supplied by
CZ.NIC) will follow the usual release cycles of the respective vendors.
CZ.NIC cannot guarantee the availability and timeline of fixes in these
packages. Nevertheless, the following vendors received security patches
in advance:
ALT Linux, Amazon Linux AMI, Arch Linux, Chrome OS, CloudLinux, CoreOS,
Debian, Gentoo, Openwall, Oracle, Red Hat, Slackware, SUSE, Ubuntu, Wind
River.
Please send your questions to mailing list
knot-resolver-users(a)lists.nic.cz.
--
Petr Špaček @ CZ.NIC
Hi,
I'd like to ask your opinion about knot-resolver's systemd integration -
in particular, about the network configuration which is currently done
via drop-in files for systemd sockets - kresd.socket, kresd-tls.socket,
kresd-doh.socket and kresd-webmgmt.socket. [network-config-doc]
Have you had any issues while trying to configure network interfaces for
kresd?
Do you find the benefits of socket activation worth the extra complexity
of network configuration? (Socket activation enables kresd to be
executed under a non-privileged user, and the process doesn't require
any extra capabilities.)
All the network configuration could take place in kresd.conf via
net.listen() [net-listen-exmaple] if we decided to drop the socket
activation feature. However, this would require us to start the service
with the CAP_NET_BIND_SERVICE capability, which could be dropped once
kresd binds to the ports.
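For illustration, the kresd.conf-only approach might look like this (a sketch with placeholder addresses; the exact option table depends on the kresd version, so check the net.listen() documentation for yours):

```lua
-- Sketch: network setup done purely in kresd.conf (placeholder addresses).
-- With socket activation dropped, these calls would bind the sockets directly.
net.listen('127.0.0.1', 53)                     -- plain DNS on localhost
net.listen('192.0.2.1', 853, { kind = 'tls' })  -- DNS-over-TLS on a public
                                                -- interface (older versions
                                                -- used { tls = true })
```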
Any feedback is welcome!
[network-config-doc] -
https://knot-resolver.readthedocs.io/en/stable/daemon.html#network-configur…
[net-listen-exmaple] -
https://knot-resolver.readthedocs.io/en/stable/daemon.html#c.net.listen
--
Tomas Krizek
PGP: 4A8B A48C 2AED 933B D495 C509 A1FB A5F7 EF8C 4869
Hi,
to see whether kresd 4.0.0 would do a comparable or better job
than doh-httpproxy, we analyzed failure rates by looking at HTTP
response codes for each setup and found some significant differences
for HTTP response code 400:
sample size: 1 000 000 HTTP requests for each run
HTTP Code [1] [2] [3]
--------------------------------
200 93.96 97.04 96.33
499 3.28 2.45 3.11
400 2.07 0.18 0.22 <<<
415 0.68 0.32 0.34
408 0.002 0.002 0.002
413 0.001 0 0
Numbers are percentages.
setups:
[1] nginx -> (http, no tls) kresd
[2] nginx -> (http, no tls) doh-httpproxy -> (udp/53) unbound
To reduce the likelihood of measuring unrelated issues (like issues
caused by qname minimization differences between unbound and kresd) we
also used kresd as DNS resolver without touching its DoH code path:
[3] nginx -> (http, no tls) doh-httpproxy -> (udp/53) kresd
The HTTP request rate for [1] was slightly lower when compared with [2]
and [3].
To be more precise, the doh-httpproxy service was configured with two
running instances, as described here:
https://facebookexperimental.github.io/doh-proxy/tutorials/nginx-dohhttppro…
version information:
kresd 4.0.0
nginx 1.14.2
unbound 1.9.0
https://github.com/facebookexperimental/doh-proxy @
9f943a4c232bd018ae155b7839a6b4e13181a5fd
This information on its own is not very useful but it might help
motivate further tests in a test-environment with no real end-user
traffic that allows for more verbose logging.
We would also like to hear from other kresd DoH adopters if they
observe similar failure rates on their setups.
Some open questions for further tests:
- How reproducible are these results?
- Does the HTTP method (GET vs. POST) change the error rate?
- Can the error rate be reduced by running multiple kresd backends?
(nginx sending requests to two kresd backends)
- Is the DNS transaction ID always being 0 (as per RFC 8484) an issue
when kresd is not also terminating the initial HTTPS connection from the
DoH client? (From a backend's perspective, two distinct queries might
look identical when a reverse proxy sits between the DoH client and
kresd.)
regards,
Christoph
I'm new to knot resolver (kresd).
Running kresd in runit and logging via svlogd.
I'm trying to run kresd on a LAN with internal IP addresses and
internal domains.
I can currently do this with dnsmasq and unbound, but I wanted to see
how kresd would do on the client facing edge.
I have an Active Directory domain which I've inherited (domain.local)
and I've made a building.domain dns infrastructure for the different
buildings. (building-red.domain, building-orange.domain,
building-green.domain, etc..)
There were two AD dns servers doing all the DNS.. There is now a
pihole server and dnsmasq helping to offset the queries.
I'm looking to put up a kresd on the :53 and move the current dnsmasq
installs to :57 and have kresd forward to them.
When I do this my building.domain and domain.local are not resolvable.
What am I missing?
Unbound has a private-address and private-domain which handles this.
Does knot resolver have something similar?
egrep -v "\-\-" /etc/knot-resolver/config
<code>
net.listen('10.20.0.43', 53)
trust_anchors.remove('.')
modules = {
'policy',
'stats',
'predict'
}
cache.size = 100*MB
cache.storage = 'lmdb:///var/cache/knot-resolver/'
user( 'knot-resolver', 'knot-resolver' )
predict.config({ window = 20, period = 72 })
policy.add( policy.all( policy.FORWARD(
{ '10.20.0.43@57', '10.20.0.53@57' }
)))
</code>
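A common way to handle internal-only trees like domain.local and building.domain is to forward just those suffixes to the internal servers and skip DNSSEC validation for them, rather than forwarding everything. The following is only a sketch, not a confirmed fix for the problem below; the domain names and the NO_CACHE flag are illustrative:

```lua
-- Sketch: forward only the internal suffixes to the internal servers.
-- Internal zones have no valid chain of trust, so caching/validation
-- behaviour may need tuning (flags here are illustrative).
internalDomains = policy.todnames({'domain.local.', 'building-red.domain.'})
policy.add(policy.suffix(policy.FLAGS({'NO_CACHE'}), internalDomains))
policy.add(policy.suffix(policy.STUB({'10.20.0.43@57', '10.20.0.53@57'}),
                         internalDomains))
```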
Below is an excerpt from the kresd logs captured via svlogd showing
the nxdomain return..
2019-05-22_18:19:53.06750 [00000.00][plan] plan 'squid.tech.pcsd.'
type 'A' uid [35568.00]
2019-05-22_18:19:53.06756 [35568.00][iter] 'squid.tech.pcsd.' type
'A' new uid was assigned .01, parent uid .00
2019-05-22_18:19:53.06758 [35568.01][cach] => trying zone: ., NSEC, hash 0
2019-05-22_18:19:53.06759 [35568.01][cach] => NSEC sname: covered
by: pccw. -> pe., new TTL 83379
2019-05-22_18:19:53.06760 [35568.01][cach] => NSEC wildcard: covered
by: . -> aaa., new TTL 84454
2019-05-22_18:19:53.06761 [35568.01][cach] => writing RRsets: +++
2019-05-22_18:19:53.06762 [35568.01][iter] <= rcode: NXDOMAIN
2019-05-22_18:19:53.06768 [35568.01][resl] AD: request NOT
classified as SECURE
2019-05-22_18:19:53.06773 [35568.01][resl] finished: 0, queries: 1,
mempool: 16400 B
and this is another request that worked successfully
2019-05-22_18:19:53.07382 [00000.00][plan] plan
'r6---sn-8xgp1vo-2iae.googlevideo.com.' type 'A' uid [24882.00]
2019-05-22_18:19:53.07384 [24882.00][iter]
'r6---sn-8xgp1vo-2iae.googlevideo.com.' type 'A' new uid was assigned
.01, parent uid .00
2019-05-22_18:19:53.07388 [24882.01][cach] => skipping unfit CNAME
RR: rank 020, new TTL -144
2019-05-22_18:19:53.07389 [24882.01][cach] => no NSEC* cached for
zone: googlevideo.com.
2019-05-22_18:19:53.07389 [24882.01][cach] => skipping zone:
googlevideo.com., NSEC, hash 0;new TTL -123456789, ret -2
2019-05-22_18:19:53.07389 [24882.01][cach] => skipping zone:
googlevideo.com., NSEC, hash 0;new TTL -123456789, ret -2
2019-05-22_18:19:53.07390 [ ][nsre] score 21 for 10.20.0.43#00057;
cached RTT: 19
2019-05-22_18:19:53.07391 [ ][nsre] score 40001 for
10.20.0.53#00057; cached RTT: 12666
2019-05-22_18:19:53.07391 [24882.01][resl] => id: '07414' querying:
'10.20.0.43#00057' score: 21 zone cut: '.' qname:
'R6---sN-8Xgp1VO-2iae.goOgLeVIDeO.CoM.' qtype: 'A' proto: 'udp'
here is dig showing that the entry does exist..
; <<>> DiG 9.14.2 <<>> @10.20.0.43 -p 53 -t any squid.tech.pcsd
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17967
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;squid.tech.pcsd. IN ANY
;; ANSWER SECTION:
squid.tech.pcsd. 300 IN A 10.20.0.69
;; Query time: 3 msec
;; SERVER: 10.20.0.43#53(10.20.0.43)
;; MSG SIZE rcvd: 60
As I said I'm still new to kresd and it's logging format so please
excuse my ignorance if the answer is obvious.. I've been reading all
that I can, and I couldn't find a use case that was like mine in the
documentation.. (I did find two things that were causing me problems
with upstream providers in unbound because of the docs.. which is why
I'm looking to try it on the lan and see what happens.. )
Thank you for taking the time to read this.
I'm looking to fortify and make things more resilient with the network
so that I can focus on finishing other projects.. and from what I can
see.. knot will totally help me with that.
Again, thanks for this piece of software - greatly appreciated.
After squidguard and pihole this is what I'm sending to the outside
world (thanks to kresd):
https://imgur.com/a/tlcC6Jx
--
This message may contain confidential information and is intended only for
the individual(s) named. If you are not an intended recipient you are not
authorized to disseminate, distribute or copy this e-mail. Please notify
the sender immediately if you have received this e-mail by mistake and
delete this e-mail from your system.
(I'm including the mailing list so others can benefit from this
discussion as well since we had the proxy/no-proxy topic there before)
Vaclav Steiner wrote:
> Our KRESd daemons are configured with lua-http module for DoH, not any reverse proxy.
Is this a temporary setup, or what is the motivation behind not using
any HTTP frontend, given that the knot-resolver developers actively
recommend against a "naked" kresd DoH endpoint without a reverse proxy?
> And about list [3] we know. We want to be there :-)
Great to hear that.
kind regards,
Christoph
> Regarding issue [2], I'll tell the guys from the knot-resolver team.
Probably Firefox doesn't have a problem with it.
a _fresh_ Firefox 66.0.5 actually has a problem with it:
"
Warning: Potential Security Risk Ahead
Firefox detected a potential security threat and did not continue to
odvr.nic.cz. If you visit this site, attackers could try to steal
information like your passwords, emails, or credit card details.
[...]
"
>> 20. 5. 2019 v 19:36, Christoph wrote:
>>
>> Hi Vaclav,
>>
>> thanks for running a public DoH service [1].
>> Would be great if you could add your DoH server to the
>> public DNS resolver lists [3].
>>
>> There is a TLS misconfiguration that results in a TLS error
>> because the certificate chain is incomplete [2].
>>
>> Is this a kresd without any HTTP reverse proxy like nginx in front of it?
>>
>>
>> kind regards,
>> Christoph
>>
>>
>> [1] https://blog.nic.cz/2019/05/20/na-odvr-podporujeme-take-dns-over-https/
>> [2]
>> https://www.ssllabs.com/ssltest/analyze.html?d=odvr.nic.cz&hideResults=on
>> [3] https://github.com/curl/curl/wiki/DNS-over-HTTPS
>> https://github.com/DNSCrypt/dnscrypt-resolvers
>>
>
>
Hi,
how big should the cache filesystem (tmpfs) be relative to the cache.size?
If cache.size is N MB
is N + 50 MB a fine value?
I'm asking because apparently 50 MB of
additional space is not enough:
> SIGBUS received; this is most likely due to filling up the filesystem
where cache resides.
https://knot-resolver.readthedocs.io/en/stable/daemon.html#envvar-cache.size
writes:
> Note that this is only a hint to the backend, which may or may not
respect it. See cache.open().
If cache.size does not set a maximum size maybe remove "Set the cache
maximum size in bytes." from its documentation? :)
Does that mean I have to use cache.open() as well to enforce a limit,
or is "max_size" also just a hint that is not respected/enforced?
If so, is there a way to enforce a max_size for the cache?
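For reference, the two configuration forms under discussion look like this (a sketch; the size and storage path are illustrative, and per the documentation quoted above the size may only be a hint to the backend):

```lua
-- Sketch: cache.size is shorthand that reopens the cache with a new max_size.
cache.size = 100 * MB
-- Roughly equivalent lower-level call (illustrative path):
cache.open(100 * MB, 'lmdb:///var/cache/knot-resolver')
```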
thanks,
Christoph
Hi,
I have a quick update about the upstream package repositories in
OpenBuild Service (OBS).
New repositories
----------------
- Fedora_30 - x86_64, armv7l
- xUbuntu_19.04 - x86_64
- Debian_Next - x86_64 (Debian unstable rolling release)
New repositories for Debian 10 and CentOS 8 should be available shortly
after these distros are released, depending on their buildroot
availability in OBS.
Deprecated repositories
-----------------------
- Arch - x86_64
Due to many issues with Arch packaging in OBS (invalid package size,
incorrect signatures) and the fast pace of Arch updates, please consider
this repository deprecated in favor of the AUR package [1] that I keep
up-to-date. The Arch OBS repository will most likely be removed in the
future.
Also, please note I'll be periodically deleting repositories for distros
that reach their official end of life. In the coming months, this
concerns Ubuntu 18.10 and Fedora 28.
[1] - https://aur.archlinux.org/packages/knot-resolver/
--
Tomas Krizek
PGP: 4A8B A48C 2AED 933B D495 C509 A1FB A5F7 EF8C 4869
Hello!
This occurs if for some reason the prefill file happens to be empty:
kresd[11812]: [prefill] root zone file valid for 17 hours 01 minutes,
reusing data from disk
kresd[11812]: segfault at 0 ip 00007f9b06017436 sp 00007ffc3142bb58
error 4 in libc-2.28.so[7f9b05fa1000+148000]
Apr 30 20:26:13 scruffy kernel: Code: 0f 1f 40 00 66 0f ef c0 66 0f ef
c9 66 0f ef d2 66 0f ef db 48 89 f8 48 89 f9 48 81 e1 ff 0f 00 00 48 81
f9 cf 0f 00 00 77 6a <f3> 0f 6f 20 66 0f 74 e0 66 0f d7 d4 85 d2 74 04
0f bc c2 c3 48 83
This happens in a loop until systemd gives up trying to start kresd.
Solved by removing /var/cache/knot-resolver/root.zone (0 bytes).
kresd version: 4.0.0
We use your example config from:
https://knot-resolver.readthedocs.io/en/stable/modules.html#cache-prefilling
kind regards,
Christoph
Hi,
we are a non-profit running privacy enhancing services for the public,
among those services are also public DNS resolvers supporting DoH and DoT.
We started the process to migrate our DoH/DoT endpoints to kresd 4
replacing our temporary setup using doh-httpproxy/unbound.
Here are a few questions that came up, the first ones about caching
and safe logging are the most important ones.
- kresd writes the cache to disk by default. Is there an easy way to
disable that and switch to an in-memory cache only, without workarounds
like a ramdisk? (We didn't find an answer to this in the documentation
[6].) We want to avoid writing any cache data to disk.
We haven't found much documentation about logging. We would like to
ensure that no sensitive data (IP addresses, domain names) is written to
the logs. If verbose() is false, is that enough to avoid logging any IP
addresses and domains?
- Is the DoH URI configurable? (change /doh to our currently used URI)
or does that require something like
https://knot-resolver.readthedocs.io/en/stable/modules.html#how-to-expose-c…
?
- Is it possible to enable multiple DoH endpoints (URIs)
via a single kresd instance where every endpoint
has a distinct upstream configuration?
- Does kresd 4 (in the client position) support OOOR? [7]
- Are there any known kresd munin plugins
that produce graphs similar to unbound's munin plugin? [1]
- Does kresd need a reload/restart after
the TLS certificate got renewed (by letsencrypt)?
- Is there a recommended way to configure the interaction between
certbot and kresd? (the defaults would not work since kresd, running
as the knot-resolver user, will not be able to read certificates owned
by root)
- What is the canonical way to report security issues? (if [4] does not
work)
- Do you run a security bug bounty program?
thanks!
Christoph
The documentation under [5] does not cover all fields
in the output of "worker.stats()", it would be great if those
missing fields could be added to the documentation.
[2] links to https://rocks.moonscript.org but the certificate is for
https://luarocks.org
[3] gives this example:
```
> print(cache.storage)
error occured here (config filename:lineno is at the bottom, if config
is involved):
stack traceback:
[C]: in function 'get'
/usr/lib/knot-resolver/sandbox.lua:265: in function '__index'
[string "return table_print(print(cache.storage))"]:1: in main chunk
ERROR: Function not implemented
```
The document probably meant
"print(cache.current_storage)"?
After solving about 20 reCAPTCHAs and errors like the following we gave
up trying to submit issues via [4].
"There was an error with the reCAPTCHA. Please solve the reCAPTCHA again."
[1] https://angristan.xyz/configure-unbound-plugin-munin/
[2] https://knot-resolver.readthedocs.io/en/stable/daemon.html
[3]
https://knot-resolver.readthedocs.io/en/stable/daemon.html#envvar-cache.cur…
[4] https://gitlab.labs.nic.cz/knot/knot-resolver/issues
[5]
https://knot-resolver.readthedocs.io/en/stable/daemon.html#c.worker.stats
[6]
https://knot-resolver.readthedocs.io/en/stable/daemon.html#cache-configurat…
[7]
https://dnsprivacy.org/wiki/display/DP/DNS+Privacy+Implementation+Status#DN…
Hi, below is a continuation of a discussion regarding the default port
for DNS-over-HTTPS (DoH) in Knot Resolver. My argument is to use 44353
as a packaging default, since using port 443 by default makes an
unexpected collision likely. The DoH service requires configuration to
be exposed on public interfaces anyway, so users are free to override
this value as they see fit.
On 18/04/2019 15.16, Daniel Kahn Gillmor wrote:
> On Thu 2019-04-18 10:01:39 +0200, Tomas Krizek wrote:
>> thanks for these constructive suggestions. I've tried to use port 443
>> for kresd-doh.socket and have the socket masked (simply disabling it
>> isn't enough, since kresd@*.service requests it via Sockets=).
>
> I think the answer there is, don't put it in Sockets= -- if it's
> enabled, it will get associated properly because of the Service= line,
> and if it's not enabled, it won't get associated.
Sockets= is required, otherwise the socket won't be passed to any
instance other than kresd@1, which goes against our use-case of scaling
up using multiple independent systemd processes.
>> However, I wasn't able to find a reasonable way to do it across all
>> ditros in a way that would work reliably in all cases (including
>> upgrades).
>
> Do you have a pointer to documentation about what mechanisms you tried,
> and how they failed on different distros? I'd like to understand what
> you ran into, so i can try to help reason about it without having to
> replicate all the work.
No documentation, mostly just trial and error (and a lot of swearing :)
If kresd-doh.socket is masked by default, this places a symlink in
/etc/systemd/system, which is where users usually make their changes
(as opposed to /usr/lib/systemd/system). Is this acceptable with the
various packaging policies across distros?
Let's say it is. The user unmasks this socket file and then uninstalls
the package. This will issue a warning, since the symlink was part of
the package.
A more problematic case is a package upgrade. If
/etc/systemd/system/kresd-doh.socket -> /dev/null is part of the
package and the user removes this file via unmasking to actually use
the socket, it will get re-masked during the next package upgrade.
>> I also think that it'd be confusing for users that kresd-doh.socket
>> would require extra command to use, as opposed to kresd-webmgmt.socket
>> (both used by http module).
>
> I don't think that's confusing. These are different interfaces, and
> it's entirely reasonable that they would be controlled separately.
I disagree. knot-resolver-module-http now provides two socket files:
kresd-doh.socket
kresd-webmgmt.socket
Now both are enabled by default after the installation, on localhost
only, and users simply need to load http module in their config.
It seems unnecessarily complicated to require additional handling of
kresd-doh.socket as opposed to kresd-webmgmt.socket (which has no reason
not to be enabled by default).
>> In the end, we've decided to go with 127.0.0.1:44353 as the default for
>> kresd-doh.socket in upstream and Fedora/EPEL packages. The documented
>> use case is how to remove this default and listen on port 443 on all
>> interfaces.
>
> Hm, that makes me sad for a number of reasons (both the ephemeral-range
> port *and* the 127.0.0.1, which doesn't make a lot of sense). I hope we
> can figure out the details together about these tradeoffs, so that we
> can make the default configuration something more useful.
I don't consider the localhost default to be an issue. All other
sockets, i.e. plain DNS and DNS-over-TLS, are configured like this and
I'd like to keep it this way. If a user decides to expose the DNS
service, it should require a manual action.
The port, of course, is a matter of lively debate. However, perhaps the
default port in kresd-doh.socket file doesn't matter all that much. It's
easy to override and documented how to do that.
> happy to talk about this on-list as well, if you prefer (presumably
> knot-resolver-users(a)lists.nic.cz).
--
Tomas Krizek
PGP: 4A8B A48C 2AED 933B D495 C509 A1FB A5F7 EF8C 4869
Dear Knot Resolver users,
Knot Resolver 4.0.0 has been released!
This is a major release with many improvements and also some breaking
changes, please see our upgrading guide:
https://knot-resolver.readthedocs.io/en/stable/upgrading.html
Those interested in DNS-over-HTTPS are welcome to look for unintentional
Easter Bugs we may have accidentally hidden in our experimental
implementation. Upstream packages with DNS-over-HTTPS support are
available for Debian 9, CentOS 7, Ubuntu 18, Fedora and Arch.
Incompatible changes
--------------------
- see upgrading guide:
https://knot-resolver.readthedocs.io/en/stable/upgrading.html
- configuration: trust_anchors aliases .file, .config() and .negative
were removed (!788)
- configuration: trust_anchors.keyfile_default is no longer accessible
(!788)
- daemon: -k/--keyfile and -K/--keyfile-ro options were removed
- meson build system is now used for builds (!771)
- build with embedded LMDB is no longer supported
- default modules dir location has changed
- DNSSEC is enabled by default
- upstream packages for Debian now require systemd
- libknot >= 2.8 is required
- net.list() output format changed (#448)
- net.listen() reports error when address-port pair is in use
- bind to DNS-over-TLS port by default (!792)
- stop versioning libkres library
- default port for web management and APIs changed to 8453
Improvements
------------
- policy.TLS_FORWARD: if hostname is configured, send it on wire (!762)
- hints module: allow configuring the TTL and change default from 0 to 5s
- policy module: policy.rpz() will watch the file for changes by default
- packaging: lua cqueues added to default dependencies where available
- systemd: service is no longer auto-restarted on configuration errors
- always send DO+CD flags upstream, even in insecure zones (#153)
- cache.stats() output is completely new; see docs (!775)
- improve usability of table_print() (!790, !801)
- add DNS-over-HTTPS support (#280)
- docker image supports and exposes DNS-over-HTTPS
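As an illustration of the TLS_FORWARD hostname improvement above (a sketch with a placeholder address and name, validated against the system CA store):

```lua
-- Sketch: the configured hostname is now also sent on the wire (SNI).
policy.add(policy.all(policy.TLS_FORWARD({
  {'192.0.2.53', hostname = 'dns.example.com'}
})))
```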
Bugfixes
--------
- predict module: load stats module if config didn't specify period (!755)
- trust_anchors: don't do 5011-style updates on anchors from files
that were loaded as unmanaged trust anchors (!753)
- trust_anchors.add(): include these TAs in .summary() (!753)
- policy module: support '#' for separating port numbers, for consistency
- fix startup on macOS+BSD when </dev/null and cqueues installed
- policy.RPZ: log problems from zone-file level of parser as well (#453)
- fix flushing of messages to logs in some cases (notably systemd) (!781)
- fix fallback when SERVFAIL or REFUSED is received from upstream (!784)
- fix crash when dealing with unknown TA key algorithm (#449)
- go insecure due to algorithm support even if DNSKEY is NODATA (!798)
- fix mac addresses in the output of net.interfaces() command (!804)
- http module: fix too early renewal of ephemeral certificates (!808)
Module API changes
------------------
- kr_straddr_split() changed API a bit (compiler will catch that)
- C modules defining `*_layer` or `*_props` symbols need to change a bit
See the upgrading guide for details. It's detected on module load.
Full changelog:
https://gitlab.labs.nic.cz/knot/knot-resolver/raw/v4.0.0/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-4.0.0.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-4.0.0.tar.xz.asc
Documentation:
https://knot-resolver.readthedocs.io/en/v4.0.0/
--
Tomas Krizek
PGP: 4A8B A48C 2AED 933B D495 C509 A1FB A5F7 EF8C 4869
Hi,
I created new certs with Let's Encrypt and knot-resolver just throws
"invalid argument" when starting. The old ones work fine, so at first I
thought it was related to them being wildcard certs now, but newly
created certs just for "dns.mydomain.tld" also lead to the error.
Any ideas?
BR
Bjoern
Message "moved" from [knot-dns-users] to [knot-resolver-users]:
-------- Forwarded Message --------
Subject: [knot-dns-users] running knot-resolver with only kresd-tls.socket
Date: Sat, 16 Mar 2019 13:36:50 -0000 (UTC)
From: Daniel Lublin <daniel(a)lublin.se>
To: knot-dns-users(a)lists.nic.cz
Ahoj,
I'm tinkering with running knot-resolver for DNS-over-TLS (only). My kresd@1
is listening on the public interface by using a drop-in override file,
following kresd.systemd(7) (kresd-tls.socket.d/override.conf).
Thing is, I would like knot to not listen on port 53 on any interface, not
even localhost. But this is precisely what it does.
I naively tried to stop it from doing so, first with a
kresd.socket.d/override.conf with:
[Socket]
ListenStream=
But that failed with journalctl -u kresd.socket containing `kresd.socket:
Unit has no Listen setting (ListenStream=, ListenDatagram=, ListenFIFO=,
...). Refusing.`
And also by trying to disable that "socket-unit", with a
kresd@1.service.d/override.conf containing:
[Service]
Sockets=
Sockets=kresd-tls.socket
Sockets=kresd-control@%i.socket
But that did nothing.
Finally, using `systemctl mask kresd.socket` I get it to stop listening
on (localhost) port 53. But then systemd instead finds itself in
"degraded" mode...
Any tip on how to accomplish this cleanly?
Hello, Knot * users!
Please note that Knot DNS 2.8.x breaks all Knot Resolver versions
released so far (<=3.2.1). The breakage is a bit subtle, and the build
system won't detect it.
We plan to release an update soon. If you don't want to wait, you can
apply a simple patch:
https://gitlab.labs.nic.cz/knot/knot-resolver/commit/186f2639
--Vladimir
Hi,
When I send a query for a domain name to the knot-resolver server, I
see in the log records that the uppercase and lowercase letters of the
name are randomly mixed.
--
Best Regards!!
champion_xie
Hello,
I have at the moment a network with 3 DNS servers. They do
recursive/caching as well as authoritative zones (BIND 9). I want to
build a structure with 2 authoritative servers using Knot DNS and 3
resolvers using Knot Resolver.
As our authoritative servers are in our network, and as I want to get
any changes in their zones to our users as fast as possible, I use this
from the documentation:
Replacing part of the DNS tree
Interesting is this line:
policy.add(policy.suffix(policy.STUB({'$our ip'}), extraTrees))
Can I put more than one IP on this line to forward the queries? We have
2 authoritative servers, so I want to ask both of them, round-robin
style.
In my config I have comma-separated two servers; kresd is running with
it, but it seems it uses only the first.
Is there a way to get my desired behavior?
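For reference, policy.STUB (like policy.FORWARD) takes a list of target addresses, so the configuration shape being discussed would be along these lines (a sketch; the suffix and IPs are placeholders, and how kresd distributes queries among the targets is the open question here):

```lua
-- Sketch: STUB with two internal authoritative servers (placeholder IPs).
extraTrees = policy.todnames({'example.internal.'})
policy.add(policy.suffix(policy.STUB({'192.0.2.1', '192.0.2.2'}), extraTrees))
```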
Regards
Thomas
P.S.: Versions are Knot-Resolver 3.2.1 and OS is Ubuntu 18.04.02
--
Thomas Belián
Fachhochschule Erfurt
Fak. GTI / FR AI & Hochschulrechenzentrum
Postfach 45 01 55, 99051 Erfurt
Telefon: +49361 6700 - 647
E-Mail: thomas.belian(a)fh-erfurt.de
Dear Knot Resolver users,
Knot Resolver 3.2.1 has been released.
Bugfixes
--------
- trust_anchors: respect validity time range during TA bootstrap (!748)
- fix TLS rehandshake handling (!739)
- make TLS_FORWARD compatible with GnuTLS 3.3 (!741)
- special thanks to Grigorii Demidov for his long-term work on Knot
Resolver!
Improvements
------------
- improve handling of timeouted outgoing TCP connections (!734)
- trust_anchors: check syntax of public keys in DNSKEY RRs (!748)
- validator: clarify message about bogus non-authoritative data (!735)
- dnssec validation failures contain more verbose reasoning (!735)
- new function trust_anchors.summary() describes state of DNSSEC TAs (!737),
and logs new state of trust anchors after start up and automatic changes
- trust anchors: refuse revoked DNSKEY even if specified explicitly,
and downgrade missing the SEP bit to a warning
Full changelog:
https://gitlab.labs.nic.cz/knot/knot-resolver/raw/v3.2.1/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-3.2.1.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-3.2.1.tar.xz.asc
Documentation:
https://knot-resolver.readthedocs.io/en/v3.2.1/
--
Tomas Krizek
PGP: 4A8B A48C 2AED 933B D495 C509 A1FB A5F7 EF8C 4869
Dear Knot Resolver users,
Knot Resolver 3.2.0 has been released.
New features
------------
- module edns_keepalive to implement server side of RFC 7828 (#408)
- module nsid to implement server side of RFC 5001 (#289)
- module bogus_log provides .frequent() table (!629, credit Ulrich Wisser)
- module stats collects flags from answer messages (!629, credit Ulrich
Wisser)
- module view supports multiple rules with identical address/TSIG
specification
and keeps trying rules until a "non-chain" action is executed (!678)
- module experimental_dot_auth implements a DNS-over-TLS to auth protocol
(!711, credit Manu Bretelle)
- net.bpf bindings allow advanced users to use eBPF socket filters
Bugfixes
--------
- http module: only run prometheus in parent process if using --forks=N,
as the submodule collects metrics from all sub-processes as well.
- TLS fixes for corner cases (!700, !714, !716, !721, !728)
- fix build with -DNOVERBOSELOG (#424)
- policy.{FORWARD,TLS_FORWARD,STUB}: respect net.ipv{4,6} setting (!710)
- avoid SERVFAILs due to a certain kind of NS dependency cycles, again
(#374); this time seen as 'circular dependency' in verbose logs
- policy and view modules do not overwrite the result of finished
requests (!678)
Improvements
------------
- Dockerfile: rework, basing on Debian instead of Alpine
- policy.{FORWARD,TLS_FORWARD,STUB}: give advantage to IPv6
when choosing whom to ask, just as for iteration
- use pseudo-randomness from gnutls instead of internal ISAAC (#233)
- tune the way we deal with non-responsive servers (!716, !723)
- documentation clarifies interaction between policy and view modules
(!678, !730)
Module API changes
------------------
- new layer is added: answer_finalize
- kr_request keeps ::qsource.packet beyond the begin layer
- kr_request::qsource.tcp renamed to ::qsource.flags.tcp
- kr_request::has_tls renamed to ::qsource.flags.tls
- kr_zonecut_add(), kr_zonecut_del() and kr_nsrep_sort() changed
parameters slightly
Full changelog:
https://gitlab.labs.nic.cz/knot/knot-resolver/raw/v3.2.0/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-3.2.0.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-3.2.0.tar.xz.asc
Documentation:
https://knot-resolver.readthedocs.io/en/v3.2.0/
--
Tomas Krizek
PGP: 4A8B A48C 2AED 933B D495 C509 A1FB A5F7 EF8C 4869
Hello,
I am using knot-resolver ver. 3.1.0 and found that it does not return A records for:
www.cezdistribuce.cz
When I clear the cache:
cache.clear('cezdistribuce.cz')
knot-resolver behaves itself again for an undefined period of time and returns
the A records normally. Meanwhile Google DNS 8.8.8.8 returns the records fine.
I have not found any other anomaly. What could the problem be? Configuration?
Broken output:
$ dig @192.168.100.100 -t A www.cezdistribuce.cz
; <<>> DiG 9.11.4-P2-3~bpo9+1-Debian <<>> @192.168.100.100 -t A
www.cezdistribuce.cz
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 26696
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;www.cezdistribuce.cz. IN A
;; AUTHORITY SECTION:
cezdistribuce.cz. 41952 IN SOA ns10.cez.cz.
netmaster.cezdata.cz. 2018112701 28800 7200 864000 86400
;; Query time: 1 msec
;; SERVER: 192.168.100.100#53(192.168.100.100)
;; WHEN: Po pro 10 07:58:55 CET 2018
;; MSG SIZE rcvd: 112
Working output, after clearing the cache:
$ dig @192.168.100.100 -t A www.cezdistribuce.cz
; <<>> DiG 9.11.4-P2-3~bpo9+1-Debian <<>> @192.168.100.100 -t A
www.cezdistribuce.cz
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 89
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;www.cezdistribuce.cz. IN A
;; ANSWER SECTION:
www.cezdistribuce.cz. 24 IN A 89.111.76.140
;; Query time: 0 msec
;; SERVER: 192.168.100.100#53(192.168.100.100)
;; WHEN: Po pro 10 08:09:59 CET 2018
;; MSG SIZE rcvd: 65
Thank you for the help.
--
Zdeněk Janiš
Hi,
my idea is to use Knot Resolver as a DNS forwarder/cache instead of
dnsmasq. I am using an old PC with Arch Linux as a router.
I changed the dnsmasq config so it listens on port 5353,
following the steps here:
https://wiki.archlinux.org/index.php/Knot_Resolver
I changed the systemd unit so my kresd listens on both local interfaces. I
checked that with the ss command and it is OK.
Here is my kresd config:
cat /etc/knot-resolver/kresd.conf
-- vim:syntax=lua:
-- Refer to manual:
http://knot-resolver.readthedocs.org/en/latest/daemon.html#configuration
-- Load useful modules
modules = {
'policy', -- Block queries to local zones/bad sites
'hints > iterate', -- Load /etc/hosts and allow custom root hints
'stats', -- Track internal statistics
'predict', -- Prefetch expiring/frequent records
}
-- See kresd.systemd(7) about configuring network interfaces when using systemd
-- Listen on localhost (default)
-- net = { '127.0.0.1', '::1'}
-- Enable DNSSEC validation
-- trust_anchors.file = '/etc/knot-resolver/root.keys'
hints.root_file = '/etc/knot-resolver/root.hints'
-- Cache size
cache.size = 100 * MB
After starting it I can see the following errors in the journal:
Nov 08 22:49:43 skriatok systemd[1]: Starting Knot Resolver daemon...
Nov 08 22:49:43 skriatok systemd[1]: Started Knot Resolver daemon.
Nov 08 22:49:53 skriatok kresd[9012]: [priming] cannot resolve address 'b.root-
servers.net.', type: 1
Nov 08 22:49:53 skriatok kresd[9012]: [priming] cannot resolve address 'b.root-
servers.net.', type: 28
Nov 08 22:49:53 skriatok kresd[9012]: [priming] cannot resolve address 'h.root-
servers.net.', type: 28
Nov 08 22:49:53 skriatok kresd[9012]: [priming] cannot resolve address 'j.root-
servers.net.', type: 1
Nov 08 22:49:53 skriatok kresd[9012]: [priming] cannot resolve address 'm.root-
servers.net.', type: 1
Nov 08 22:49:53 skriatok kresd[9012]: [priming] cannot resolve address 'l.root-
servers.net.', type: 1
Nov 08 22:49:53 skriatok kresd[9012]: [priming] cannot resolve address 'j.root-
servers.net.', type: 28
Nov 08 22:49:53 skriatok kresd[9012]: [priming] cannot resolve address 'i.root-
servers.net.', type: 28
Nov 08 22:49:53 skriatok kresd[9012]: [priming] cannot resolve address 'g.root-
servers.net.', type: 28
Nov 08 22:49:53 skriatok kresd[9012]: [priming] cannot resolve address 'f.root-
servers.net.', type: 28
Nov 08 22:49:53 skriatok kresd[9012]: [priming] cannot resolve address 'e.root-
servers.net.', type: 28
Nov 08 22:49:53 skriatok kresd[9012]: [priming] cannot resolve address 'd.root-
servers.net.', type: 28
Nov 08 22:49:53 skriatok kresd[9012]: [priming] cannot resolve address 'a.root-
servers.net.', type: 28
Nov 08 22:49:53 skriatok kresd[9012]: [priming] cannot resolve address 'h.root-
servers.net.', type: 1
Nov 08 22:49:53 skriatok kresd[9012]: [priming] cannot resolve address 'c.root-
servers.net.', type: 1
Nov 08 22:49:53 skriatok kresd[9012]: [priming] cannot resolve address 'k.root-
servers.net.', type: 1
Nov 08 22:49:53 skriatok kresd[9012]: [priming] cannot resolve address 'e.root-
servers.net.', type: 1
Nov 08 22:49:53 skriatok kresd[9012]: [priming] cannot resolve address 'f.root-
servers.net.', type: 1
...
Thank you for your help.
Hi,
I am not able to force knot-resolver to forward some queries.
I have a real DNS zone, and on the internal network I have a few 3rd-level
subzones. For those I would like my kresd to forward queries to our internal
DNS server (bind9).
My computer is not inside the company network; it is connected via OpenVPN.
The system is Ubuntu 18.04.1 (up-to-date) with knot-resolver 3.0.0.
Relevant part of kresd.conf is:
policy.add(policy.suffix(
policy.FORWARD('10.0.0.1'),{
todname('sub1.company.cz'),
todname('sub2.company.cz')
}
))
dig machine.sub1.company.cz @127.0.0.53 does NOT work,
dig machine.sub1.company.cz @10.0.0.1 DOES work.
I have set verbose(true), but it did not help.
kresd queries 10.0.0.1 for 'company.cz' only, and that's all.
I am just working on it on my Ubuntu workstation,
but the real target will be a Turris Omnia with its kresd,
which connects via OpenVPN to the company network.
--
Sincerely
Ivo Panacek
Dear Knot Resolver users,
Is it possible to create a view with more than one policy?
If I do for example something like this:
view:addr('192.168.1.0/24',
policy.suffix(
policy.PASS,
policy.todnames({'good.com','better.com','best.com'})
)
)
view:addr('192.168.1.0/24',
policy.suffix(
policy.REFUSE,{todname('bad.com')}
)
)
Knot Resolver will obey only the first view, and users will still be able to resolve the bad.com domain. Obviously the first view is the match, and no other views are considered afterwards.
Without multiple policies, views are pretty much useless, so I believe I must be doing something wrong here. Could anybody help me with this?
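One possible workaround is to wrap several policy rules in a single custom action so that one view rule can evaluate them all. This is an untested sketch only: it assumes rules built with policy.suffix can be invoked as functions taking the request and query, which is how the policy module applies them internally.

```lua
local rules = {
	policy.suffix(policy.PASS, policy.todnames({'good.com', 'better.com', 'best.com'})),
	policy.suffix(policy.REFUSE, policy.todnames({'bad.com'})),
}
view:addr('192.168.1.0/24', function (req, qry)
	-- return the first action produced by any of the wrapped rules
	for _, rule in ipairs(rules) do
		local action = rule(req, qry)
		if action ~= nil then
			return action
		end
	end
end)
```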
Best regards,
Bratislav ILIC
Dear Knot Resolver users,
Knot Resolver 3.0.0 has been released.
Incompatible changes
--------------------
- cache: fail lua operations if cache isn't open yet (!639)
By default cache is opened *after* reading the configuration,
and older versions were silently ignoring cache operations.
Valid configuration must open cache using `cache.open()` or
`cache.size =` before executing cache operations like `cache.clear()`.
- libknot >= 2.7.1 is required, which brings also larger API changes
- in case you wrote custom Lua modules, please consult
https://knot-resolver.readthedocs.io/en/latest/lib.html#incompatible-change…
- in case you wrote custom C modules, please compile against
Knot DNS 2.7 and adjust your module according to messages from the C
compiler
- DNS cookie module (RFC 7873) is not available in this release;
it will be reworked later to reflect development in the IETF dnsop
working group
- version module was permanently removed because it was not really used
by users; if you want to receive notifications about new releases
please subscribe to
https://lists.nic.cz/cgi-bin/mailman/listinfo/knot-resolver-announce
Bugfixes
--------
- fix multi-process race condition in trust anchor maintenance (!643)
- ta_sentinel: also consider static trust anchors not managed via
RFC 5011
Improvements
------------
- reorder_RR() implementation is brought back
- bring in performance improvements provided by libknot 2.7
- cache.clear() has a new, more powerful API
- cache documentation was improved
- old name "Knot DNS Resolver" is replaced by unambiguous "Knot
Resolver" to prevent confusion with "Knot DNS" authoritative server
Full changelog:
https://gitlab.labs.nic.cz/knot/knot-resolver/raw/v3.0.0/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-3.0.0.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-3.0.0.tar.xz.asc
Documentation:
https://knot-resolver.readthedocs.io/en/v3.0.0/
--
Tomas Krizek
PGP: 4A8B A48C 2AED 933B D495 C509 A1FB A5F7 EF8C 4869
Hello,
I set the config as below, but an error is output intermittently.
Although it is an error message related to policy.lua, reading the documentation did not reveal the cause.
"""
error: /usr/local/lib/kdns_modules/policy.lua:526: attempt to call local 'action' (a table value)
error: /usr/local/lib/kdns_modules/policy.lua:526: attempt to call local 'action' (a table value)
error: /usr/local/lib/kdns_modules/policy.lua:526: attempt to call local 'action' (a table value)
error: /usr/local/lib/kdns_modules/policy.lua:526: attempt to call local 'action' (a table value)
"""
"""
net.listen(net.eth0, 53, false)
-- Load Useful modules
modules = {
'policy', -- Block queries to local zones/bad sites
'view', -- Views for certain clients
'hints > iterate', -- Hints AFTER iterate
'priming', -- Initialize the resolver with priming queries (RFC 8109)
'detect_time_skew', -- System time skew detector
'detect_time_jump', -- Detect discontinuous jumps in the system time
'daf',
predict = {
window = 180, -- 180 minutes sampling window
period = 24*(60/15) -- track last 24 hours
},
'bogus_log',
}
modules.list() -- Check module call order
-- stub forward
policy.add(policy.suffix(policy.PASS({'192.168.1.3@10053', '192.168.1.4@10053'}), {todname('kometch.private')}))
policy.add(policy.suffix(policy.PASS({'192.168.1.3@10053', '192.168.1.4@10053'}), {todname('168.192.in-addr.arpa') }))
--forward policy
policy.add(policy.all(policy.FORWARD({'1.1.1.1', '1.0.0.1'})))
policy.del(0)
"""
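For what it's worth, the error likely comes from calling policy.PASS with arguments: PASS is a plain action that takes no parameters, so policy.PASS({...}) does not yield a callable action, matching the "attempt to call local 'action' (a table value)" message. If stub forwarding was the intent (as the comment suggests), a hedged sketch using policy.STUB with the addresses from the config above:

```lua
-- policy.STUB takes the target servers; policy.PASS takes no arguments.
policy.add(policy.suffix(policy.STUB({'192.168.1.3@10053', '192.168.1.4@10053'}),
	{todname('kometch.private')}))
policy.add(policy.suffix(policy.STUB({'192.168.1.3@10053', '192.168.1.4@10053'}),
	{todname('168.192.in-addr.arpa')}))
```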
Would you please advise me?
Best regards.
Hello,
Since version 2.4.1, has the version module been deleted?
I checked NEWS, but I could not find a sentence mentioning that the "version" module was deleted.
https://gitlab.labs.nic.cz/knot/knot-resolver/blob/master/NEWS
I have confirmed that it has been deleted from Gitlab.
Best regards.
Hi,
I'm trying to build knot-resolver with Redis support. I was wondering
why the memcache and Redis stuff is commented out in the Makefile,
so I uncommented the Redis part, also in modules.mk.
Now it fails because it does not find its own cache.h:
modules/redis/redis.c:24:10: fatal error: lib/cache.h: No such file or
directory
Regards
Bjoern
Hello Petr,
This is good news that query source IP statistics are planned for a future version of the module.
I would like to answer your related notes:
- How many 'most frequent' IP addresses you want to get? - It is better to make the list size variable; someone needs the top 10 and someone else the top 1000.
- Should the number of addresses be configurable? Yes.
- Do you consider a query from the same IP but different port as a 'different client' or not? (E.g. clients behind NAT?) In my case that distinction does not matter.
- Should IP addresses be somehow tied to most frequent query names or not? Yes, it would be better to also know the frequent queries per domain name.
- Do you need a way to flush the table on the fly? An option to clear the statistics list and count since a specific point in time sounds like a good idea.
Since I am not so familiar with Lua, where should your code snippet be added?
Best regards,
Milan Sýkora
-----Original Message-----
From: knot-resolver-users [mailto:knot-resolver-users-bounces@lists.nic.cz] On Behalf Of knot-resolver-users-request(a)lists.nic.cz
Sent: Tuesday, July 24, 2018 12:00 PM
To: knot-resolver-users(a)lists.nic.cz
Subject: knot-resolver-users Digest, Vol 31, Issue 1
Today's Topics:
1. Knot resolver module stats - query source ip (Sýkora Milan)
2. Re: Knot resolver module stats - query source ip (Petr Špaček)
----------------------------------------------------------------------
Message: 1
Date: Tue, 24 Jul 2018 07:32:52 +0000
From: Sýkora Milan <milan.sykora(a)cetin.cz>
To: "knot-resolver-users(a)lists.nic.cz"
<knot-resolver-users(a)lists.nic.cz>
Subject: [knot-resolver-users] Knot resolver module stats - query
source ip
Message-ID: <365048c9e5ae48c699c079fee6343a5f(a)cewexch402.ad.cetin>
Content-Type: text/plain; charset="iso-8859-2"
Hello,
I am using your cool DNS resolver in version 2.3.0; I know a newer version has been released.
My question is: is it possible to explore the most frequent IPs (query sources) in the stats module? Or is there any other way to achieve it?
Many thanks in advance for your answer. Best regards.
Milan Sýkora
Obsah této zprávy má výlučně komunikační charakter. Nepředstavuje návrh na uzavření smlouvy či na její změnu ani přijetí případného návrhu. Smlouvy či jejich změny jsou společností Česká telekomunikační infrastruktura a.s. uzavírány v písemné formě nebo v podobě a postupem podle příslušných všeobecných podmínek společnosti Česká telekomunikační infrastruktura a.s., a pokud jsou dohodnuty všechny náležitosti. Smlouvy jsou uzavírány oprávněnou osobou na základě písemného pověření. Smlouvy o smlouvě budoucí jsou uzavírány výhradně v písemné formě, vlastnoručně podepsané nebo s uznávaným elektronickým podpisem. Podmínky, za nichž Česká telekomunikační infrastruktura a.s. přistupuje k jednání o smlouvě a jakými se řídí, jsou dostupné zde<https://www.cetin.cz/cs/jak-cetin-vyjednava-o-smlouve>.
The content of this message is intended for communication purposes only. It does neither represent any contract proposal, nor its amendment or acceptance of any potential contract proposal. Česká telekomunikační infrastruktura a.s. concludes contracts or amendments thereto in a written form or in the form and the procedure in accordance with relevant general terms and conditions of Česká telekomunikační infrastruktura a.s., if all requirements are agreed. Contracts are concluded by an authorized person entitled on the basis of a written authorization. Contracts on a future contract are concluded solely in a written form, self-signed or signed by means of an advanced electronic signature. The conditions under which Česká telekomunikační infrastruktura a.s. negotiates contracts and under which it proceeds are available here<https://www.cetin.cz/en/jak-cetin-vyjednava-o-smlouve>.
------------------------------
Message: 2
Date: Tue, 24 Jul 2018 11:05:51 +0200
From: Petr Špaček <petr.spacek(a)nic.cz>
To: knot-resolver-users(a)lists.nic.cz
Subject: Re: [knot-resolver-users] Knot resolver module stats - query
source ip
Message-ID: <2a769de2-3b70-7d6f-5476-4bc7bd48fa8b(a)nic.cz>
Content-Type: text/plain; charset=UTF-8
Hello,
at the moment there is no built-in module with these statistics, but it can be hacked around (see below).
BTW, we plan to rework stats, so it would be very valuable to get your requirements!
To make sure a future version contains what you need, can you specify what kind of data + what configuration you want to get? For example:
- How many 'most frequent' IP addresses you want to get?
- Should the number of addresses be configurable?
- Do you consider query from the same IP but different port as 'different client' or not? (E.g. clients behind NAT?)
- Should IP addresses be somehow tied to most frequent query names or not?
- Do you need a way to flush the table on the fly?
For now you can use the following Lua config snippet to log client IP addresses.
-- start of config snippet
function LOG_IP(state, req)
	req = kres.request_t(req)
	if req.qsource == nil or req.qsource.addr == nil then
		-- internal request, no source
		return state
	end
	print('query from IP ' .. tostring(req.qsource.addr))
	return -- continue with other policy rules
end
policy.add(policy.all(LOG_IP))
-- end of config snippet
Output looks like this:
"query from IP ::1#56927"
This can be further processed by your log processing system to get aggregate numbers over all resolvers, or alternatively it can be extended using an LRU library in Lua to get stats for a single resolver.
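Continuing that idea, the snippet could be extended along these lines. This is a hedged, untested sketch: it keeps per-client counts in a plain Lua table, which grows without bound, so a production setup would want the LRU approach instead.

```lua
local client_hits = {}
local function COUNT_IP(state, req)
	req = kres.request_t(req)
	if req.qsource == nil or req.qsource.addr == nil then
		return state -- internal request, no source
	end
	local ip = tostring(req.qsource.addr)
	client_hits[ip] = (client_hits[ip] or 0) + 1
	return -- chain to the following policy rules
end
policy.add(policy.all(COUNT_IP))
```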
I hope it helps.
Petr Špaček @ CZ.NIC
On 24.7.2018 09:32, Sýkora Milan wrote:
> Hello,
>
> I have your cool DNS resolver in version 2.3.0, I know that was
> released newest version.
>
> My question is – is it possible to explore the most frequented IP
> (queries source) in the module stats? Or exist any other way how to
> achieve it?
>
> Many thanks for your answer in the future,
>
> Best regards.
>
> Milan Sýkora
Hello,
I am using your cool DNS resolver in version 2.3.0; I know a newer version has been released.
My question is: is it possible to explore the most frequent IPs (query sources) in the stats module? Or is there any other way to achieve it?
Many thanks in advance for your answer,
Best regards.
Milan Sýkora
Dear Knot Resolver users,
Knot Resolver 2.4.0 has been released.
Incompatible changes
--------------------
- minimal libknot version is now 2.6.7 to pull in latest fixes (#366)
Security
--------
- fix a rare case of zones incorrectly downgraded to insecure status
(!576)
New features
------------
- TLS session resumption (RFC 5077), both server and client (!585, #105)
(disabled when compiling with gnutls < 3.5)
- TLS_FORWARD policy uses system CA certificate store by default (!568)
- aggressive caching for NSEC3 zones (!600)
- optional protection from DNS Rebinding attack (module rebinding, !608)
- module bogus_log to log DNSSEC bogus queries without verbose logging
(!613)
Bugfixes
--------
- prefill: fix ability to read certificate bundle (!578)
- avoid turning off qname minimization in some cases, e.g. co.uk. (#339)
- fix validation of explicit wildcard queries (#274)
- dns64 module: more properties from the RFC implemented (incl.
bug #375)
Improvements
------------
- systemd: multiple enabled kresd instances can now be started using
kresd.target
- ta_sentinel: switch to version 14 of the RFC draft (!596)
- support for glibc systems with a non-Linux kernel (!588)
- support per-request variables for Lua modules (!533)
- support custom HTTP endpoints for Lua modules (!527)
Full changelog:
https://gitlab.labs.nic.cz/knot/knot-resolver/raw/v2.4.0/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-2.4.0.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-2.4.0.tar.xz.asc
Documentation:
https://knot-resolver.readthedocs.io/en/v2.4.0/
--
Tomas Krizek
PGP: 4A8B A48C 2AED 933B D495 C509 A1FB A5F7 EF8C 4869
Hi,
today I wanted to try the ta_sentinel functionality.
Unfortunately I do not get the expected results.
Maybe somebody could show me some example queries?
I tried:
dig +dnssec kskroll-sentinel-is-ta-20326.iis.se
dig +dnssec kskroll-sentinel-not-ta-20326.iis.se
dig +dnssec kskroll-sentinel-is-ta-19036.iis.se
dig +dnssec kskroll-sentinel-not-ta-19036.iis.se
All with the same result. No answer section.
dig +dnssec iis.se
iis.se. 60 IN A 91.226.37.214
iis.se. 60 IN RRSIG A 5 2 60 20180615103001 20180605103001 6150 iis.se.
XAbt3nM5DmpsDsvZN6gRW94mGwTsCYpAQxurTJYp4FHZXaXt82o+3SMD
ZEZjWsDswy3WsYp087Wx7Y4sH6HbuP0VYuwg1yLsBegrj6xNlcd5JarN
wi8ZGbTytnDBG5rvpqwUp8fat3ZZ6kko4t7BtsD4cFzdL2xpw4MPbBKz p0s=
What am I doing wrong?
/Ulrich
--
Ulrich Wisser
ulrich(a)wisser.se
Hi,
over the last weekend I configured Knot Resolver on a Turris Omnia to
write statistics to my InfluxDB instance.
Now that I have statistics, I am not entirely sure what all the data means.
Everything under answer is easy to understand,
but all data under cache is only 0, although answer.cached is not 0.
Please find some stats below. I would be very grateful for an explanation
of what the data means.
My next step is to try the RPZ functionality. Are there statistics
for RPZ too?
/Ulrich
> stats.list()
[answer.nxdomain] => 360546
[answer.100ms] => 64682
[answer.1500ms] => 2740
[answer.slow] => 12137
[answer.servfail] => 204137
[answer.250ms] => 84208
[answer.cached] => 430091
[answer.nodata] => 1533
[answer.1ms] => 462921
[answer.total] => 922236
[answer.10ms] => 58963
[answer.noerror] => 356015
[answer.50ms] => 195917
[answer.500ms] => 29794
[answer.1000ms] => 7525
[predict.learned] => 54
[predict.epoch] => 19
[predict.queue] => 520
[query.edns] => 152223
[query.dnssec] => 211
> cache.stats()
[hit] => 0
[delete] => 0
[miss] => 0
[insert] => 0
--
Ulrich Wisser
ulrich(a)wisser.se
Hello knot-resolver users,
I have a question about the design of the systemd service for knot-resolver. I
installed knot-resolver from the OpenSUSE repository
<https://software.opensuse.org//download.html?project=home%3ACZ-NIC%3Aknot-r…>
on Ubuntu 16 and 18.
The systemd service uses the user *knot-resolver*. But this user cannot bind to
privileged ports, so with a configuration like the one below, where I bind to
a network interface on a privileged port and change the user context, it fails with
"*[system] bind to '10.20.30.118@853' Permission denied*":
```
net.listen({'10.20.30.118'}, 853, { tls = true })
user('knot-resolver', 'knot-resolver')
```
To fix this I changed the User from *knot-resolver* to *root* in the systemd service.
Now the service starts as root, binds to the network interface and then
changes context.
My question is: is this solution fine security-wise? Why is the systemd
service designed to run as user knot-resolver, when I guess many people
will need to override this in order to use knot-resolver properly? What is
the main idea? Or is there a different approach to overcome this (such as
Linux capabilities)?
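For reference, one capability-based approach worth testing is a systemd drop-in that grants the service permission to bind privileged ports while staying under the unprivileged user. This is a sketch only; the unit name kresd@.service is assumed from the packaging and may differ:

```ini
# /etc/systemd/system/kresd@.service.d/override.conf
[Service]
AmbientCapabilities=CAP_NET_BIND_SERVICE
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
```

After `systemctl daemon-reload` and a service restart, the process should be able to bind port 853 without ever running as root.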
Thank you for your responses, and please correct me if I am wrong about anything.
Ondrej Vaško
Hi,
I would like to run Knot Resolver with DNS-over-TLS on my laptop, but I
need to configure 'policy.FORWARD' whenever I connect to our corporate
network. The information about a new connection is provided by
NetworkManager, which is not a problem, but then I need to reconfigure the
resolver somehow. I was thinking about creating a new configuration file
and simply restarting the server, but that fails with "Start request
repeated too quickly".
Is there a way to add/remove policy rules "on the fly"?
The HTTP/2 module seems like a good candidate for doing this. Can this
module be used to accomplish this task?
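A hedged sketch of the add/remove idea, untested: it assumes policy.add() returns a rule table whose .id can later be passed to policy.del(), as the policy module documentation describes, and the domain name is a placeholder:

```lua
-- add the corporate forwarding rule when the VPN comes up
local vpn_rule = policy.add(policy.suffix(
	policy.FORWARD('10.0.0.1'), policy.todnames({'corp.example'})))
-- remove it again when the VPN goes down
policy.del(vpn_rule.id)
```

These statements could be issued at runtime through kresd's interactive console or control socket rather than by restarting the service.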
Best regards,
Martin Sehnoutka
PS: If there is anyone using dnssec-trigger, this would be similar, but
less complicated.
--
Martin Sehnoutka | Associate Software Engineer
PGP: 5FD64AF5
UTC+1 (CET)
RED HAT | TRIED. TESTED. TRUSTED.
Hi, Folks!
I want to use Google DNS 8.8.x.x as a fallback forwarder when a domain's
nameservers are down or inaccessible (for example, when network connectivity
between me and the nameserver is broken).
Is it possible?
policy.add(policy.all(policy.FORWARD({ '8.8.8.8', '8.8.4.4' }))) works
well, but it disables recursion entirely. How can I invoke it only after/when
recursion has failed?
WBR, Ilya
Dear Knot Resolver users,
Knot Resolver 2.3.0 has been released. This is a security release that
fixes CVE-2018-1110.
We're also introducing a new mailing list, knot-resolver-announce. Only
notifications about new releases or important announcements will be
posted there.
https://lists.nic.cz/cgi-bin/mailman/listinfo/knot-resolver-announce
Security
--------
- fix CVE-2018-1110: denial of service triggered by malformed DNS
messages (!550, !558, security!2, security!4)
- increase resilience against slow loris attack (security!5)
Bugfixes
--------
- validation: fix SERVFAIL in case of CNAME to NXDOMAIN in a single
zone (!538)
- validation: fix SERVFAIL for DS . query (!544)
- lib/resolve: don't send unnecessary queries to parent zone (!513)
- iterate: fix validation for zones where parent and child share
NS (!543)
- TLS: improve error handling and documentation (!536, !555, !559)
Improvements
------------
- prefill: new module to periodically import root zone into cache
(replacement for RFC 7706, !511)
- network_listen_fd: always create an endpoint for a supervisor-supplied
file descriptor
- use CPPFLAGS build environment variable if set (!547)
Full changelog:
https://gitlab.labs.nic.cz/knot/knot-resolver/raw/v2.3.0/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-2.3.0.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-2.3.0.tar.xz.asc
Documentation:
https://knot-resolver.readthedocs.io/en/v2.3.0/
--
Tomas Krizek
PGP: 4A8B A48C 2AED 933B D495 C509 A1FB A5F7 EF8C 4869
Hi,
I have a fresh installation of the Knot Resolver on my Fedora 27:
*Package version*:
$ rpm -q knot-resolver
knot-resolver-2.2.0-1.fc27.x86_64
but it does not work out of the box. The problem is that the user
"knot-resolver" cannot bind to a privileged port. Why is the systemd
service file using the knot-resolver user? It works just fine when I remove
the "User=" option from the service file and add this line to the
kres.conf file:
user('knot-resolver', 'knot-resolver')
The resulting process runs as knot-resolver and it binds successfully.
Best regards,
--
Martin Sehnoutka | Associate Software Engineer
PGP: 5FD64AF5
UTC+1 (CET)
RED HAT | TRIED. TESTED. TRUSTED.
Hello,
There is a website I need to use on a daily basis that uses DNSSEC;
however, their keys have expired, which causes validation to fail. I have
contacted their support, but they have failed to resolve the issue so far.
Since I can resolve the name when using `dig +cd`, I was hoping I could
configure `kresd` to skip validation when resolving that specific
domain. It seems that I should be able to do so by using the `policy`
module and the `FLAGS` action:
https://knot-resolver.readthedocs.io/en/stable/modules.html#actions
I am not sure which flag/flags to use. I inspected the source and tried
the following:
policy.add(policy.suffix(policy.FLAGS('DNSSEC_CD'),{todname('example.org.')}))
But this apparently had no effect. I also tried without the trailing dot
and played with other flags, but no success.
Does anybody know which flag I could set to bypass DNSSEC validation for
the specified domain? Or, if the policy module is not the way to achieve
that goal, is there any other way?
# kresd --version
Knot DNS Resolver, version 1.5.1
Any help will be greatly appreciated,
// Leonardo.
Greetings. I want to build Knot Resolver 2.1 on Ubuntu 16.04.4 LTS. However, the
libknot from the packages is too old:
Makefile:87: *** libknot >= 2.6.4 required. Stop.
As compared to:
libknot-dev/xenial,now 2.1.1-1build1 amd64 [installed]
Is it possible to get the libknot package on Ubuntu 16.04.4 LTS updated
soon? If not, how do I install the latest libknot by hand?
--Paul Hoffman
Greetings. My kresd config file is:
net.listen('192.241.207.161', 5364)
trust_anchors.file = 'root.keys'
modules.load('ta_sentinel')
I wanted to test this with a zone I set up at this-is-signed.com. However,
I'm getting a positive result back for both the is-ta and the not-ta
records (it is properly giving me the SERVFAIL for the bogus record).
Have I configured Knot Resolver incorrectly? For Knot 2.2.1, do I need a
different form for the names in order to get the kskroll-sentinel effect to
kick in? From the DNSOP WG mailing list traffic, I thought I needed the
4f66 tag, but could have misinterpreted that.
--Paul Hoffman
# dig @192.241.207.161 -p 5364
kskroll-sentinel-is-ta-4f66.this-is-signed.com a
; <<>> DiG 9.10.3-P4-Ubuntu <<>> @192.241.207.161 -p 5364 kskroll-sentinel-is-ta-4f66.this-is-signed.com a
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 36153
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;kskroll-sentinel-is-ta-4f66.this-is-signed.com. IN A
;; ANSWER SECTION:
kskroll-sentinel-is-ta-4f66.this-is-signed.com. 60 IN CNAME this-is-signed.com.
this-is-signed.com. 60 IN A 192.241.207.161
;; Query time: 283 msec
;; SERVER: 192.241.207.161#5364(192.241.207.161)
;; WHEN: Sun Feb 25 00:09:54 UTC 2018
;; MSG SIZE rcvd: 105
# dig @192.241.207.161 -p 5364 kskroll-sentinel-not-ta-4f66.this-is-signed.com a
; <<>> DiG 9.10.3-P4-Ubuntu <<>> @192.241.207.161 -p 5364 kskroll-sentinel-not-ta-4f66.this-is-signed.com a
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 14466
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;kskroll-sentinel-not-ta-4f66.this-is-signed.com. IN A
;; ANSWER SECTION:
kskroll-sentinel-not-ta-4f66.this-is-signed.com. 60 IN CNAME this-is-signed.com.
this-is-signed.com. 54 IN A 192.241.207.161
;; Query time: 5 msec
;; SERVER: 192.241.207.161#5364(192.241.207.161)
;; WHEN: Sun Feb 25 00:10:00 UTC 2018
;; MSG SIZE rcvd: 106
# dig @192.241.207.161 -p 5364 bogus.this-is-signed.com a
; <<>> DiG 9.10.3-P4-Ubuntu <<>> @192.241.207.161 -p 5364 bogus.this-is-signed.com a
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 20810
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;bogus.this-is-signed.com. IN A
;; Query time: 9 msec
;; SERVER: 192.241.207.161#5364(192.241.207.161)
;; WHEN: Sun Feb 25 00:10:08 UTC 2018
;; MSG SIZE rcvd: 42
Dear Knot Resolver users,
Knot Resolver 2.1.0 is released.
Incompatible changes
--------------------
- stats: remove tracking of expiring records (predict uses another way)
- systemd: re-use a single kresd.socket and kresd-tls.socket
- ta_sentinel: implement protocol draft-ietf-dnsop-kskroll-sentinel-01
(our draft-ietf-dnsop-kskroll-sentinel-00 implementation had inverted
logic)
- libknot: require version 2.6.4 or newer to get bugfixes for DNS-over-TLS
Bugfixes
--------
- detect_time_jump module: don't clear cache on suspend-resume (#284)
- stats module: fix stats.list() returning nothing, regressed in 2.0.0
- policy.TLS_FORWARD: refusal when configuring with multiple IPs (#306)
- cache: fix broken refresh of insecure records that were about to expire
- fix the hints module on some systems, e.g. Fedora (came back on 2.0.0)
- build with older gnutls (conditionally disable features)
- fix the predict module to work with insecure records & cleanup code
Full changelog:
https://gitlab.labs.nic.cz/knot/knot-resolver/raw/v2.1.0/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-2.1.0.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-2.1.0.tar.xz.asc
Documentation:
https://knot-resolver.readthedocs.io/en/v2.1.0/
--
Tomas Krizek
PGP: 4A8B A48C 2AED 933B D495 C509 A1FB A5F7 EF8C 4869
Hi,
I am running knot-resolver version 2.0.0 and I would like to redirect
(STUB) queries from a subnet to a specific nameserver. For that I
created a view configuration for that subnet (see below). A query made
from outside that subnet caches the answer, but unfortunately that
cached answer then overrides the STUB answer when the same query is
made from inside the view's subnet.
My configuration file looks like this:
---
modules = {
'hints > iterate',
'policy > hints',
'view < cache'
}
view:addr('${SUBNET}', policy.suffix(policy.STUB('${IP}'), {todname('${DOMAIN_SUFFIX}')}))
---
Testing procedure:
1) Try to resolve A record being in the special subnet.
$kdig @${KNOT-RESOLVER} A www.${DOMAIN_SUFFIX}
-> STUB works and answer is correct
2) Try to resolve A record without being in the special subnet.
$kdig @${KNOT-RESOLVER} ${KNOT-RESOLVER}
-> view is not triggered and answer is correct
3) Try to resolve A record again within the special subnet.
$kdig @${KNOT-RESOLVER} A www.${DOMAIN_SUFFIX}
-> The answer is served from cache, which is not the right answer.
Is it even possible to do that?
Best wishes,
Sakirnth
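One approach worth trying for the cache-mixing problem above, sketched as a kresd config fragment: attach a NO_CACHE flag to matching queries from the special subnet so their answers are neither served from nor stored into the shared cache. This assumes policy.FLAGS('NO_CACHE') behaves this way in 2.0.0 and that a second view rule for the same subnet is still evaluated after a chain action; both assumptions should be verified against the policy and view docs for that version. The ${...} placeholders are as in the original mail:

```lua
-- Chain action: mark queries from the subnet for the given suffix
-- so they bypass the shared cache entirely.
view:addr('${SUBNET}', policy.suffix(policy.FLAGS('NO_CACHE'),
          {todname('${DOMAIN_SUFFIX}')}))
-- Non-chain action: STUB those queries to the internal nameserver.
view:addr('${SUBNET}', policy.suffix(policy.STUB('${IP}'),
          {todname('${DOMAIN_SUFFIX}')}))
```

The trade-off is that clients inside the subnet lose caching for that suffix, which may be acceptable for a small internal zone.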