Hi,
I'm receiving this error:
ERROR: write access needed to keyfile dir '/etc/knot-resolver/root.keys'
Current permissions are 775, owner knot-resolver:knot-resolver. I've also
tried 0:0.
The contents of the dir have the same owner and mode 664.
Suggestions?
Thanks,
Mike Wright
Hi,
on a fresh Debian system I followed this installation guide:
https://www.knot-resolver.cz/documentation/stable/quickstart-install.html
The package installed successfully, but after that things get a bit more
difficult.
The installed GPG key has expired:
> /etc/apt/trusted.gpg.d/cznic-obs.gpg
> ------------------------------------
> pub rsa2048 2018-02-15 [SC] [verfallen: 2024-08-15]
> 4573 7F9C 8BC3 F3ED 2791 8182 7406 2DB3 6A1F 4009
> uid [ verfallen ] home:CZ-NIC OBS Project
> <home:CZ-NIC@build.opensuse.org>
>
>
"verfallen" means expired. Sorry that system speaks german (german hoster).
Makes it kind of hard to install kresd. :-)
And while we are at it, why are there no kresd packages for the
raspberry pi? Please!!!
Kind regards
/Ulrich
Dear Knot Resolver users,
Knot Resolver 6.0.12 (early-access) has been released!
Security:
- DoS: fix rare crashes with either of the lines below (!1682)
[system] requirement "h && h->end > h->begin" failed in queue_pop_impl
[system] requirement "val == task" failed in session2_tasklist_del
Bugfixes:
- daemon: fix DoH with multiple "parallel" queries in one connection
(#931, !1677)
- /management/unix-socket: revert to absolute path (#926, !1664)
- fix `tags` when used in /local-data/rules/*/records (!1670)
- stats: request latency was very incorrect in some cases (!1676)
Improvements:
- /local-data/rpz/*/watchdog: new configuration to enable watchdog for
RPZ files (!1665)
Full changelog:
https://gitlab.nic.cz/knot/knot-resolver/raw/v6.0.12/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-6.0.12.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-6.0.12.tar.xz.asc
Documentation:
https://www.knot-resolver.cz/documentation/v6.0.12/
--
Ales Mrazek
PGP: 3057 EE9A 448F 362D 7420 5A77 9AB1 20DA 0A76 F6DE
I'm trying to set up a resolver with the addition of an invented TLD
(this is an experiment, no need to explain to me that it may be a bad
idea). I have authoritative name servers for the dummy TLD, which is
signed with DNSSEC and I want DNSSEC validation.
The documentation says that policy.FORWARD requires forwarding to a
resolver :-(
policy.STUB disables validation so it is a no-no.
If I configure with policy.add + policy.FORWARD, and trust_anchors.add
for the key of the dummy TLD, it works for the TLD apex and for
subdomains of the TLD which are NOT signed, but for signed subdomains
of the TLD I get SERVFAIL + "EDE: 12 (NSEC Missing): (AHXI)".
Querying the authoritative name servers directly with the DO bit, I
get all the RRSIG and NSEC records I need. But apparently, Knot Resolver cannot get them.
Knot-resolver 5.7.4
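For reference, the setup described above presumably boils down to something like this (a sketch; 'test.' and both addresses are invented placeholders, and the DS digest is a dummy value):
-- trust anchor for the dummy TLD, in zone-file format (hypothetical values)
trust_anchors.add('test. 3600 IN DS 12345 13 2 0123456789ABCDEF0123456789ABCDEF0123456789ABCDEF0123456789ABCDEF')
-- forward queries under the dummy TLD to its authoritative servers
policy.add(policy.suffix(policy.FORWARD({'192.0.2.53'}), policy.todnames({'test.'})))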
On 02/04/2025 23.19, oui.mages_0w(a)icloud.com wrote:
> So knot-resolver 6.0.8 with libknot15 seems to also trigger the memory
> leak I was experiencing with knot-resolver 6.0.9+ by the unidentified
> traffic pattern (or whatever is causing this).
Thanks, this is very interesting. I confirm that (for our Ubuntu 24.04
packages), libknot15 (i.e. knot 3.4) is used exactly since 6.0.9, so the
timing checks out, too. That's just a matter of binary builds; even
the latest versions can still be built with libknot14 (3.3.x).
Have you looked into which libdnssec and libzscanner you have there?
The thing is that these two didn't change soname between knot 3.3 and
3.4, so here I see larger risks than with libknot itself.
Hello,
In short, we use the stats module with the http module exposing webmgmt in
every kresd process started by systemd (we run 30 on one machine).
Unfortunately each webmgmt instance shows the same stats in the /stats or
/metrics (Prometheus) endpoint.
Checking metrics with socat, going into each process and running
stats.list(), correctly shows per-process stats.
Is there any way to automatically export statistics per kresd process?
Regards,
Marcel Adamski, DevOps Engineer
Redge Technologies, Ostrobramska 86 | 04-163 Warsaw
www.redge.com
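One direction that might help (a sketch, not a verified solution: it assumes systemd template instances where the env table exposes SYSTEMD_INSTANCE, and the port arithmetic is invented) is to give every instance its own webmgmt listener, so each /metrics endpoint maps to exactly one process:
modules.load('http')
-- kresd@1 listens on 8001, kresd@2 on 8002, and so on
local n = tonumber(env.SYSTEMD_INSTANCE) or 0
net.listen('127.0.0.1', 8000 + n, { kind = 'webmgmt' })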
Hello
We are using knot-resolver 5.7.4 with multiple independent instances - 32. We also use tmpfs.
We start the processes via systemd.
The problem we encountered is that when systemd starts 32 processes, they hit a timeout and are restarted by systemd. As a result, we have problems starting all the processes. The problem does not occur when we do not use tmpfs.
How can we solve this problem?
Should we add something like this to systemd?
[Service]
ExecStartPre=/bin/sh -c 'sleep $((RANDOM % 5))'
StartLimitBurst=5
StartLimitIntervalSec=10
Or should we add something to the kresd configuration?
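For illustration, another knob that might be worth trying before touching kresd itself (an assumption on my part, not a verified fix) is raising the unit's start timeout in a drop-in, e.g. via systemctl edit kresd@.service:
[Service]
TimeoutStartSec=300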
Hi,
I've found this warning in my journal:
... kresd[1071788]: [taupd ] you need to update package with trust
anchors in "/usr/share/dns/root.key" before it breaks
I don't know how to do that.
I think my system is current, but I just ran: apt update; apt list
--upgradable, and it shows nothing regarding knot.
Thanks for any pointers,
Mike Wright
Hello,
On my local network, I have a computer with a fixed IP for which I would
like to use a specific TLS_FORWARD.
The generic TLS_FORWARD is simply configured as such:
policy.add(policy.all(policy.TLS_FORWARD({
{'9.9.9.9', hostname='dns9.quad9.net'},
{'1.1.1.1', hostname='cloudflare-dns.com'},
{'1.0.0.1', hostname='cloudflare-dns.com'},
})))
I tried to embed a specific, different TLS_FORWARD with
view:addr('10.10.10.10', ... )
but I cannot manage to restrict this TLS_FORWARD and to keep it from
poisoning the cache.
Is there an example somewhere of such a setup, with ACLs ending up on two
different TLS_FORWARDs, one of them with no cache?
Regards,
--
Mathieu Roy
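For what it's worth, a minimal sketch of the direction described above (untested; it assumes that a chaining FLAGS rule and a TLS_FORWARD rule can be stacked per view this way, and the upstream is a placeholder):
-- client 10.10.10.10: never cache, and use its own TLS upstream
view:addr('10.10.10.10/32', policy.all(policy.FLAGS({'NO_CACHE'})))
view:addr('10.10.10.10/32', policy.all(policy.TLS_FORWARD({
    {'192.0.2.1', hostname='dns.example.net'},
})))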
Dear Knot Resolver users,
Knot Resolver 6.0.9 (early-access) has been released!
Improvements:
- rate-limiting: add these options, mechanism, docs (!1624)
- manager: secret for TLS session resumption via ticket (RFC5077) (!1567)
The manager creates and sets the secret for all running 'kresd' workers.
The secret is created automatically if the user does not configure
their own secret in the configuration.
This means that the workers will be able to resume each other's TLS
sessions, regardless of whether the user has configured it to do so.
- answer NOTIMPL for meta-types and non-IN RR classes (!1589)
- views: improve interaction with old-style policies (!1576)
- stats: add stale answer counter 'answer.stale' (!1591)
- extended_errors: answer with EDE in more cases (!1585, !1588, !1590,
!1592)
- local-data: make DNAMEs work, i.e. generate CNAMEs (!1609)
- daemon: use connected UDP sockets by default (#326, !1618)
- docker: multiplatform builds (#922, !1623)
- docker: shared VOLUMEs are prepared for configuration and cache
(!1625, !1627)
Configuration path was changed to standard
'/etc/knot-resolver/config.yaml'.
Bugfixes:
- daemon/proxyv2: fix informing the engine about TCP/TLS from the actual
client (!1578)
- forward: fix wrong pin-sha256 length; also log pins on mismatch
(!1601, #813)
Incompatible changes:
- -f/--forks is removed (#631, !1602)
- gnutls < 3.4 support is dropped, released over 9 years ago (!1601)
- libuv < 1.27 support is dropped, released over 5 years ago (!1618)
Full changelog:
https://gitlab.nic.cz/knot/knot-resolver/raw/v6.0.9/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-6.0.9.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-6.0.9.tar.xz.asc
Documentation:
https://www.knot-resolver.cz/documentation/v6.0.9/
--
Ales Mrazek
PGP: 3057 EE9A 448F 362D 7420 5A77 9AB1 20DA 0A76 F6DE
Issue Summary:
I need to forward DNS queries to a secondary DNS server if a specific value (IP address) is returned in the DNS response. Specifically, if the answer contains 192.168.1.1, I want the request to be forwarded to 10.10.10.1 for re-resolution.
Expected Behavior:
A user queries for a domain (e.g., dig alibaba.com).
If the result contains the IP address 192.168.1.1, the query should be automatically forwarded to another DNS server (e.g., 10.10.10.1) for further resolution.
Current Attempt:
policy.add(policy.all(function (state, req)
log("info Policy function triggered")
-- Get the DNS answer section
local answer = req:answer()
if answer then
for _, record in ipairs(answer) do
-- Check if the response is an A record and contains the IP 192.168.1.1
if record.stype == kres.type.A and tostring(record.rdata) == '192.168.1.1' then
log("info IP is 192.168.1.1, forwarding to 10.10.10.1")
-- Forward the query to the specified DNS server
return policy.FORWARD({'10.10.10.1'})
end
end
else
log("info No answer found")
end
return kres.DONE
end), true)
Issue:
The function triggers correctly, but the query is not being forwarded to the specified DNS server when the condition (record.rdata == '192.168.1.1') is met.
Steps to Reproduce:
Add the above Lua code to the Knot Resolver configuration.
Query for a domain (dig alibaba.com).
If the result contains the IP 192.168.1.1, the query should be forwarded, but it does not.
Environment:
Knot Resolver Version: [Include version]
Operating System: [Your OS]
Configuration: [Any relevant additional configuration]
Desired Solution:
I would like the query to forward correctly to 10.10.10.1 whenever the answer contains 192.168.1.1. Any guidance on why the forward might not be triggered or if additional configurations are needed would be appreciated.
Hello,
we are observing that Knot Resolver is refusing certain queries to
subdomains beneath apple.com because of the enabled DNS rebinding
protection. IMHO there is no reason for that - they do not point to a
private address range. For instance:
Unbound:
dig init.ess.apple.com @127.0.0.1 -p 53
; <<>> DiG 9.18.24-1-Debian <<>> init.ess.apple.com @127.0.0.1 -p 53
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 38044
;; flags: qr rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;init.ess.apple.com. IN A
;; ANSWER SECTION:
init.ess.apple.com. 81 IN CNAME
init-cdn-lb.ess-apple.com.akadns.net.
init-cdn-lb.ess-apple.com.akadns.net. 27 IN CNAME init.ess.g.aaplimg.com.
init.ess.g.aaplimg.com. 12 IN A 17.253.73.204
init.ess.g.aaplimg.com. 12 IN A 17.253.73.205
init.ess.g.aaplimg.com. 12 IN A 17.253.73.203
init.ess.g.aaplimg.com. 12 IN A 17.253.73.201
;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1) (UDP)
;; WHEN: Wed Sep 18 10:42:46 CEST 2024
;; MSG SIZE rcvd: 194
Knot-resolver 5.7.4-cznic.1 freshly re-installed:
dig init.ess.apple.com @127.0.0.1 -p 2053
; <<>> DiG 9.18.24-1-Debian <<>> init.ess.apple.com @127.0.0.1 -p 2053
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 17074
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 2
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; EDE: 18 (Prohibited): (EIM4)
;; QUESTION SECTION:
;init.ess.apple.com. IN A
;; ADDITIONAL SECTION:
explanation.invalid. 10800 IN TXT "blocked by DNS
rebinding protection"
;; Query time: 8 msec
;; SERVER: 127.0.0.1#2053(127.0.0.1) (UDP)
;; WHEN: Wed Sep 18 10:45:40 CEST 2024
;; MSG SIZE rcvd: 124
I have also tried to remove the cache under /var/cache/knot-resolver but
without any effect. There are more domain names with this behavior:
query.ess.apple.com, comm-cohort.ess.apple.com, kt-prod.ess.apple.com
Thanks.
Ales Rygl
Hi,
I am running knot-resolver-5.7.4 in a FreeBSD service jail (14.1-STABLE).
Note: Because I am still pretty new to using knot resolver, I may miss something important besides [1].
MWN> sum /usr/home/jails/test/var/run/kresd/data.mdb
268 10240 /usr/home/jails/test/var/run/kresd/data.mdb
MWN> ./_STATS [2]
cache('['count_entries']'): 4953
cache('['usage_percent']'): 16.953125
### stopping jail simulates shutdown server [3]:
MWN> service jail stop test
Stopping jails: test.
MWN> sum /usr/home/jails/test/var/run/kresd/data.mdb
268 10240 /usr/home/jails/test/var/run/kresd/data.mdb
### Thus, data.mdb is preserved after shutdown!
### starting jail simulates booting server:
MWN> service jail start test
Starting jails: test.
MWN> sum /usr/home/jails/test/var/run/kresd/data.mdb
15059 10240 /usr/home/jails/test/var/run/kresd/data.mdb
MWN> ./_STATS
cache('['count_entries']'): 87
cache('['usage_percent']'): 0.15625
1) After having stopped that jail, data.mdb is still available and hasn't been modified, as shown by the checksum.
2) After starting the jail, including the start of kresd, data.mdb has been modified (checksum).
3) cache.stats() shows significantly lower numbers.
Questions:
#) does cache.stats() get reset after a reboot?
#) what am I missing?
Thanks in advance and regards,
Michael
[1] https://knot-resolver.readthedocs.io/en/stable/daemon-bindings-cache.html#p…
[2] _STATS (based on https://knot-resolver.readthedocs.io/en/stable/daemon-bindings-cache.html#c…)
echo -n "cache('['count_entries']'): " ; echo "cache.stats()" | nc -NU /var/run/kresd/control/kresd.sock | grep count_entries
echo -n "cache('['usage_percent']'): " ; echo "cache.stats()" | nc -NU /var/run/kresd/control/kresd.sock | grep usage_percent
[3] a real server reboot shows the same issue with the cache
Dear Knot Resolver users,
due to an internal infrastructure change, released sources for Knot
Resolver have been moved from
<https://secure.nic.cz/files/knot-resolver/> to
<https://knot-resolver.nic.cz/release/>.
Apart from this movement, the rest of the directory structure remains
unchanged. Proper redirects (HTTP 301 Moved Permanently) have been put
in place to make this change as painless and transparent as possible.
These redirects can be expected to stay in place for the foreseeable
future. Still, we do recommend changing any of your direct links from
the secure.nic.cz server to knot-resolver.nic.cz, to avoid the extra
indirection step and/or unforeseen issues in the future.
Should any of you run into any issues or have any questions about this
change, please do let us know, we will be happy to help you out.
Best regards
--
Oto Šťáva | Knot Resolver team lead | CZ.NIC z.s.p.o.
PGP: 6DC2 B0CB 5935 EA7A 3961 4AA7 32B2 2D20 C9B4 E680
Hello.
In case you're using our upstream repositories for Debian or Ubuntu, as
suggested on https://www.knot-resolver.cz/download/
you'll be running into its signing key having expired today. As we
didn't update it in time, you'll have to update it manually by re-running:
wget https://secure.nic.cz/files/knot-resolver/knot-resolver-release.deb
dpkg -i knot-resolver-release.deb
Ticket: https://gitlab.nic.cz/knot/knot-resolver/-/issues/747
We also forgot to add Ubuntu 22.04, so that is fixed now, too.
--Vladimir
Hello,
I am attempting to migrate some internal bind servers to
knot+knot-resolver. We have knot acting as the authoritative server for
internal views of our public zones and we're attempting to use Knot
Resolver to forward to Knot to provide recursive resolution on the internal
network.
In this configuration, when Knot Resolver resolves a CNAME record from an
internal stub zone, it doesn't chase the target of the CNAME for the client.
To demonstrate, here are two example zones served by knot:
# zone1.com
test-a 30 IN A 192.168.1.1
# zone2.com
test-cname 30 CNAME test-a.zone1.com.
With these configured as stubs in Knot Resolver, if you query
test-cname.zone2.com, Knot Resolver won't chase it across zones to get the
A record value like it would for non-stub zones.
Both Bind9 and Unbound chase this as expected. Is there any way to
configure Knot Resolver to do the same?
Thanks!
- Paul
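For context, a stub setup like the one described is presumably configured along these lines (a sketch; 192.0.2.53 stands in for the internal Knot authoritative address):
-- send both internal zones to the authoritative server as stubs
policy.add(policy.suffix(policy.STUB({'192.0.2.53'}),
    policy.todnames({'zone1.com', 'zone2.com'})))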
Dear Knot Resolver users,
Knot Resolver versions 5.7.4 (stable) and 6.0.8 (early-access) have been
released!
Both releases include important security fixes, an update is strongly
advised!
Version 6.0.8 additionally brings some improvements, like faster policy
reloads using a new policy-loader process, respecting system-wide crypto
policies, JSON metrics, separate IPv6 and IPv4 metrics, and more.
---
Knot Resolver 5.7.4:
Security:
- reduce buffering of transmitted data, especially TCP-based in userspace
Also expose some of the new tweaks in lua:
(require 'ffi').C.the_worker.engine.net.tcp.user_timeout = 1000
(require 'ffi').C.the_worker.engine.net.listen_{tcp,udp}_buflens.{snd,rcv}
Improvements:
- add the fresh DNSSEC root key "KSK-2024" already, Key ID 38696 (!1556)
Incompatible changes:
- libknot 3.0.x support is dropped (!1558)
Upstream last maintained 3.0.x in spring 2022.
Full changelog:
https://gitlab.nic.cz/knot/knot-resolver/raw/v5.7.4/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-5.7.4.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-5.7.4.tar.xz.asc
Documentation:
https://www.knot-resolver.cz/documentation/v5.7.4/
---
Knot Resolver 6.0.8:
Security:
- reduce buffering of transmitted data, especially TCP-based in userspace
Also expose some of the new tweaks in lua:
(require 'ffi').C.the_worker.engine.net.tcp.user_timeout = 1000
(require 'ffi').C.the_worker.engine.net.listen_{tcp,udp}_buflens.{snd,rcv}
Packaging:
- all packages:
- remove unused dependency on `libedit` (!1553)
- deb packages:
- packages ``knot-resolver-core`` and ``knot-resolver-manager`` have
been merged into a single ``knot-resolver6`` package. Suffix packages
``knot-resolver-*`` have been renamed to ``knot-resolver6-*``. This
change _should_ be transparent, but please do let us know if you
encounter any issues while updating. (!1549)
- package ``python3-prometheus-client`` is now only an optional
dependency
- rpm packages:
- packages ``knot-resolver-core`` and ``knot-resolver-manager`` have
been merged into a single ``knot-resolver`` package. This change
_should_ be transparent, but please do let us know if you encounter
any issues while updating. (!1549)
- bugfix: do not overwrite config.yaml (!1525)
- package ``python3-prometheus_client`` is now only an optional
dependency
- arch package:
- fix after they renamed a dependency (!1536)
Improvements:
- TLS (DoT, DoH): respect crypto policy overrides in OS (!1526)
- manager: export metrics to JSON via management HTTP API (!1527)
* JSON is the new default metrics output format
* the ``prometheus-client`` Python package is now an optional dependency,
required only for Prometheus export to work
- cache: prefetching records
* predict module: prefetching expiring records moved to prefetch module
* prefetch module: new module to prefetch expiring records
- stats: add separate metrics for IPv6 and IPv4 (!1545)
- add the fresh DNSSEC root key "KSK-2024" already, Key ID 38696 (!1556)
- manager: policy-loader: new component for separate loading of policy
rules (!1540)
The ``policy-loader`` ensures that configured policies are loaded
into the rules database, where they are made available to all running
kresd workers. This loading is no longer done by all kresd workers as
it was before, so this should significantly improve the resolver's
startup/reload time when loading large sets of policy rules, e.g.
large RPZs.
Incompatible changes:
- cache: the ``cache.prediction`` configuration property has been
reorganized into ``cache.prefetch.expiring`` and
``cache.prefetch.prediction``, changing the default behaviour as well.
See the `relevant documentation section
<https://www.knot-resolver.cz/documentation/v6.0.8/config-cache-predict.html>`_
for more.
- libknot <=3.2.x support is dropped (!1565)
Bugfixes:
- arch package: fix after they renamed a dependency (!1536)
- fix startup with `dnssec: false` (!1548)
- rpm packages: do not overwrite config.yaml (!1525)
- fix NSEC3 records missing in answer for positive wildcard expansion
with the NSEC3 having over-limit iteration count (#910, !1550)
- views: fix a bug in subnet matching (!1562)
Full changelog:
https://gitlab.nic.cz/knot/knot-resolver/raw/v6.0.8/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-6.0.8.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-6.0.8.tar.xz.asc
Documentation:
https://www.knot-resolver.cz/documentation/v6.0.8/
--
Oto Šťáva | Knot Resolver team lead | CZ.NIC z.s.p.o.
PGP: 6DC2 B0CB 5935 EA7A 3961 4AA7 32B2 2D20 C9B4 E680
On 31/05/2024 19.00, oui.mages_0w(a)icloud.com wrote:
> we have different TLS domains/certificates for dns64 and non dns64
Oh, OK. Such a thing hasn't occurred to us, so it's not possible. In
that case I expect you'll need to stay on 5.x for now, with separate
processes for dns64 and non-dns64 (but they can share the cache).
Overall I don't think the current code can support multiple certificates.
On 31/05/2024 13.04, oui.mages_0w(a)icloud.com wrote:
> Unless the policy module allows to filter by listened IP, I will still
> need to use split instances: we don’t select on the server side which
> client is to use dns64 or not, but as an ISP, we leave the choice to
> the clients to decide which dns resolver they want to use (one of ours
> with or without dns64, or a third party).
That is possible, but with 5.x I probably wouldn't recommend it, as it's
been left out of documentation (by mistake) and overall I don't have
much trust in that part of 5.x policies.
But after you migrate to >= 6.x, I would recommend just having a single
shared configuration. And in views you can select dst-subnet paired
with dns64 options. Such deployments were taken into account when
designing 6.x views.
https://www.knot-resolver.cz/documentation/latest/config-views.html#config-…
In 6.x it would probably also be harder for you to run multiple
configurations at once on a single machine, so that's another reason to
unify this when you migrate.
--Vladimir
On 14/05/2024 08.10, Josef Karliak via knot-dns-users wrote:
>
> ISC bind is strict about CNAME of NS server:
>
> skipping nameserver 'aa.bb.cz' *because it is a CNAME*, while
> resolving '9.4/4.3.2.1.in-addr.arpa/PTR'.
>
> How about Knot resolver ?
>
Hello. I believe there is neither effort to disallow them nor to make
them work. Off the top of my head I'm not sure how it will be in
practice; you could just try. Either way, please don't use them.
(I replied to the correct mailing-list.)
--Vladimir
On 17/05/2024 22.43, Peter Thomassen wrote:
> I think the question is whether Knot Resolver follows the letter of
> the RFC, like BIND, or whether it is less strict.
I'm not aware of RFCs saying that resolvers should fail in such
situations. My understanding is more like it's allowed not to work.
Anyway, it would be better to continue this thread on
knot-resolver-users(a)lists.nic.cz
(I already have posted there about this a couple days ago.)
--Vladimir
Trying to implement policy-based routing.
Looking for HOWTOs/snippets covering two API features:
1) Tracking the cache TTL for some records and the ability to make requests against the cache.
2) Printing some text properties in request/response (kres) functions - like the domain name, whether it is cached (yes/no), TTL, etc.
As I understand it, the docs regarding the Lua/C bindings are not finished yet.
But I'm not an experienced developer, so reading .c files is not an option for me ;-)
Hi,
for some time now I have a problem running kresd on my raspberry pi.
I am running pihole and use kresd as resolver behind pihole. Everything
works fine until some day where kresd "decides" to crash. It is always
the same error message (please see below).
I then have to manually stop the garbage collector, remove all cache
files and restart kresd (which automatically starts the garbage collector).
My Pi boots via PXE and the root partition is on an NFS-mounted volume.
The volume has several terabytes of space left.
After a restart it will run flawlessly for two or three weeks, just to
crash with the same error again.
Any idea how this happens?
/Ulrich
root@pi-hole1:~# systemctl status kresd
● kresd.service - Knot Resolver daemon
Loaded: loaded (/lib/systemd/system/kresd.service; enabled; vendor
preset: enabled)
Active: failed (Result: exit-code) since Fri 2024-04-05 02:31:47
CEST; 6h ago
Docs: man:kresd.systemd(7)
man:kresd(8)
Process: 5106 ExecStart=/usr/sbin/kresd -c
/usr/lib/arm-linux-gnueabihf/knot-resolver/distro-preconfig.lua -c
/etc/knot-resolver/kresd.conf -n (code=exited, status=1/FAILURE)
Process: 5110 ExecStopPost=/usr/bin/env rm -f
/run/knot-resolver/control/ (code=exited, status=1/FAILURE)
Main PID: 5106 (code=exited, status=1/FAILURE)
CPU: 648ms
Apr 05 02:31:45 pi-hole1 kresd[5106]: [C]: at 0x0001b2d8
Apr 05 02:31:45 pi-hole1 kresd[5106]: [C]: in function 'pcall'
Apr 05 02:31:45 pi-hole1 kresd[5106]:
...b/arm-linux-gnueabihf/knot-resolver/distro-preconfig.lua:9: in main chunk
Apr 05 02:31:45 pi-hole1 kresd[5106]: ERROR: net.listen() failed to bind
Apr 05 02:31:46 pi-hole1 kresd[5106]: [system] error while loading
config: /usr/lib/arm-linux-gnueabihf/knot-resolver/sandbox.lua:402:
can't open cache path '/var/cache/knot-resolver'; working directory
'/var/lib/knot-resolver'; No space left on device (workdir
'/var/lib/knot-resolver')
Apr 05 02:31:47 pi-hole1 systemd[1]: kresd.service: Main process exited,
code=exited, status=1/FAILURE
Apr 05 02:31:47 pi-hole1 env[5110]: rm: cannot remove
'/run/knot-resolver/control/': Is a directory
Apr 05 02:31:47 pi-hole1 systemd[1]: kresd.service: Control process
exited, code=exited, status=1/FAILURE
Apr 05 02:31:47 pi-hole1 systemd[1]: kresd.service: Failed with result
'exit-code'.
Apr 05 02:31:47 pi-hole1 systemd[1]: Failed to start Knot Resolver daemon.
root@pi-hole1:~# ps aux Op | grep kres
knot-re+ 569 0.1 0.1 114480 4140 ? Ss Apr02 5:15
/usr/sbin/kres-cache-gc -c /var/cache/knot-resolver -d 1000
root 23835 0.0 0.0 10292 504 pts/0 S+ 09:00 0:00 grep kres
root@pi-hole1:~# systemctl status | grep kres
│ └─23861 grep kres
├─system-kresd.slice
│ └─kres-cache-gc.service
│ └─569 /usr/sbin/kres-cache-gc -c
/var/cache/knot-resolver -d 1000
root@pi-hole1:~# systemctl stop kres-cache-gc
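As an aside, the "No space left on device" error points at the cache volume filling up; the kresd documentation suggests keeping the cache on tmpfs, which would also take NFS out of the picture. An /etc/fstab line along these lines (the 2G size is illustrative):
tmpfs /var/cache/knot-resolver tmpfs rw,size=2G,uid=knot-resolver,gid=knot-resolver,nosuid,nodev,noexec,mode=0700 0 0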
Dear Knot Resolver users,
Knot Resolver versions 5.7.2 (stable) and 6.0.7 (early-access) have been
released! Both fix running on 32-bit systems with 64-bit time; 6.0.7
additionally brings fixes to RPZ, cache clearing via kresctl, and more.
---
Knot Resolver 5.7.2:
Bugfixes:
- fix on 32-bit systems with 64-bit time_t (!1510)
Full changelog:
https://gitlab.nic.cz/knot/knot-resolver/raw/v5.7.2/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-5.7.2.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-5.7.2.tar.xz.asc
Documentation:
https://www.knot-resolver.cz/documentation/artifacts/1056229/index.html
---
Knot Resolver 6.0.7:
Improvements:
- manager: clear the cache via management HTTP API (#876, !1491)
- manager: added support for Python 3.12 and removed for 3.7 (!1502)
- manager: use build-time install prefix to execute `kresd` instead of
PATH (!1511)
- docs: documentation is now separated into user and developer parts (!1514)
- daemon: ignore UDP requests from ports < 1024 (!1507)
- manager: increase startup timeout for processes (!1518, !1520)
- local-data: increase default DB size to 2G on 64-bit platforms (!1518)
Bugfixes:
- fix listening by interface name containing dashes (#900, !1500)
- fix kresctl http request timeout (!1505)
- fix RPZ if it contains apex NS record (!1516)
- fix RPZ if SOA is repeated, as usual in AXFR output (!1521)
- avoid RPZ overriding the root SOA (!1521)
- fix on 32-bit systems with 64-bit time_t (!1510)
- fix paths to knot-dns libs if exec_prefix != prefix (!1503)
- manager: add missing early check that neither a custom port nor TLS is
set for authoritative server forwarding (#902, !1505)
Full changelog:
https://gitlab.nic.cz/knot/knot-resolver/raw/v6.0.7/NEWS
Documentation:
https://www.knot-resolver.cz/documentation/artifacts/1056245/index.html
--
Oto Šťáva | Knot Resolver team leader | CZ.NIC z.s.p.o.
PGP: 6DC2 B0CB 5935 EA7A 3961 4AA7 32B2 2D20 C9B4 E680
Hi,
I am in the process of migrating from Unbound to knot-resolver.
This is on FreeBSD 14-STABLE, knot-resolver 5.7.1, on a very small instance serving a handful of users, around 100 mails a day and such.
The resolver is up and running, but I still have some questions left that I cannot answer myself after reading the documentation et al.
1) I managed to run 'kres-cache-gc -c /var/run/kresd' but I am unsure whether I need the garbage collector at all.
I read that after '/var/run/kresd/data.mdb' fills up, that file gets reset to 0 bytes - correct?
FYI: After 3 days, '/var/run/kresd/data.mdb' currently uses less than 1 MB.
2) Does knot-resolver automatically update 'root.hints' and 'root.keys', or do I have to install a script in crontab doing the updates instead?
FYI: I didn't unload the modules 'ta_signal_query' and 'ta_sentinel'.
3) I am still struggling to understand how to get access to the statistics produced by the 'stats' module.
FYI: If I try to use kresc (I know, it's experimental), I'll get:
|dns> kresc /var/run/kresd/control/17158
|Warning! kresc is highly experimental, use at own risk.
|Please tell authors what features you expect from client utility.
|
FYI: There is no 'kresd>' prompt …
I tried to modify that socket's privileges but to no avail.
4) If that socket is the way to get hold on all statistics information, how can one name that socket file? Currently, it is just the PID of kresd.
Thanks in advance and regards,
Michael
Dear Knot Resolver users,
DNS Shotgun v20240219, our high-performance realistic benchmarking tool
for DNS resolvers, has been released.
This new release, amongst a variety of other improvements, brings
support for testing DNS-over-QUIC.
Incompatible changes:
- CMake is now being used to build dnssim instead of Autotools
- GnuTLS 3.7.5+ is now required
Improvements:
- pcap/extract-clients: always reset UDP port numbers to 53 (!56)
- pcap/extract-clients: ability to write to stdout (!62)
- pcap/filter-dnsq: skip 'special' queries for *.dotnxdomain.net (!58)
- pcap/split-clients: new tool to split larger PCAPs into smaller ones (!61)
- pcap/merge-chunks: allow disabling randomization (!67)
- tools/plot-latency: ability to diversify lines with linestyles (!69)
- tools/plot-response-rate: estimate worst-case drop caused by discarded
packets (!74)
- tools/plot-packet-rate: handle incomplete last sampling period (!71)
- tools/plot-response-rate: ability to ignore RCODEs with small response
rate (!73)
- pcap/filter-dnsq: ability to log malformed queries (!72)
- pcap/generate-const-qps: new tool to generate constant QPS (!33)
- tools: allow customizing plot charts with `SHOTGUN_MPLSTYLES` (!65)
- replay: `--preload` argument, mainly for dnssim debugging with
sanitizers (!76)
- tools/plot-latency: use fractional values for humans in charts (!78)
- pcap/extract-clients: warn if some input packets were skipped (!80)
- dnssim: replace Autotools with CMake (!77, !86)
- configs: DoH configs with exclusively GET/POST methods (!82)
- tools/plot-response-rate: avoid division by zero (!89)
- tools/plot-latency: denser labels to improve logarithmic scale
readability (!90)
- pcap/extract-clients: allow query rewriting - anonymization (!91)
- Support for DNS-over-QUIC (!75)
Bugfixes:
- tools/plot-response-rate: avoid white lines on white background (!55)
- tools/plot-client-distribution: properly handle file limit (!59)
- pcap: proper PCAP write error handling (!60)
- tools/plot-connections: set axis limits properly (!66)
- tools/plot-packet-rate: trim chart whitespace (!79)
- replay: do not exit silently when dnssim returns non-zero (!87)
Full changelog:
https://gitlab.nic.cz/knot/shotgun/-/releases/v20240219
Sources:
https://gitlab.nic.cz/knot/shotgun/-/archive/v20240219/shotgun-v20240219.ta…
Documentation:
https://dns-shotgun.readthedocs.io/en/v20240219/
Oto Šťáva
Knot Resolver
CZ.NIC z.s.p.o.
Hello,
I am trying to figure out why some domain names are not resolving on my
instance of Knot resolver over DoH with some clients. I was able to
reproduce this issue with [doh](https://github.com/curl/doh) client built
on libcurl. The problem never manifests with kdig (neither with DoH, nor
DoT nor Do53).
During this, I noticed something strange. For domain name github.com (which
sometimes returns no A record), I always receive an answer with TTL set to
60. It seems like this name does not get cached at all. See the test output
below.
Interestingly, if I delete cache files and restart the resolver, the TTL
starts decreasing as expected. Is this a sign that something was wrong with
the cache before? Or is this some sort of cache optimization for low TTL
records?
Here is the test output:
$ for i in `seq 1 5`; do ./doh github.com https://nscache.mtg.ripe.net/dns-query ; echo "----"; kdig +https +noadflag +nocookie +noall +answer github.com A @nscache.mtg.ripe.net ; echo "===="; sleep 1; done
[github.com]
TTL: 60 seconds
AAAA: 0064:ff9b:0000:0000:0000:0000:8c52:7903
----
github.com. 60 IN A 140.82.121.3
====
[github.com]
TTL: 60 seconds
A: 140.82.121.3
AAAA: 0064:ff9b:0000:0000:0000:0000:8c52:7904
----
github.com. 60 IN A 140.82.121.4
====
[github.com]
TTL: 60 seconds
A: 140.82.121.4
AAAA: 0064:ff9b:0000:0000:0000:0000:8c52:7904
----
github.com. 60 IN A 140.82.121.4
====
[github.com]
TTL: 60 seconds
A: 140.82.121.4
AAAA: 0064:ff9b:0000:0000:0000:0000:8c52:7903
----
github.com. 60 IN A 140.82.121.4
====
[github.com]
TTL: 60 seconds
A: 140.82.121.3
AAAA: 0064:ff9b:0000:0000:0000:0000:8c52:7904
----
github.com. 60 IN A 140.82.121.3
====
--
Best regards,
Ondřej Caletka
Dear Knot Resolver users,
Knot Resolver versions 5.7.1 (stable) and 6.0.6 (early-access) have been
released!
These releases include important security fixes, an update is strongly
advised!
Security:
- CVE-2023-50868: NSEC3 closest encloser proof can exhaust CPU
* validator: lower the NSEC3 iteration limit (150 -> 50)
* validator: similarly also limit excessive NSEC3 salt length
* cache: limit the amount of work on SHA1 in NSEC3 aggressive cache
* validator: limit the amount of work on SHA1 in NSEC3 proofs
* validator: refuse to validate answers with more than 8 NSEC3 records
- CVE-2023-50387 "KeyTrap": DNSSEC verification complexity
could be exploited to exhaust CPU resources and stall DNS resolvers.
Solution boils down mainly to limiting crypto-validations per packet.
We would like to thank Elias Heftrig, Haya Schulmann, Niklas Vogel
and Michael Waidner
from the German National Research Center for Applied Cybersecurity ATHENE
for bringing this vulnerability to our attention.
Improvements:
- update addresses of B.root-servers.net (!1478)
Bugfixes:
- fix potential SERVFAIL deadlocks if net.ipv6 = false (#880)
The update affects how some cached records are being treated, which may
trip up some sanity checking mechanisms in Knot Resolver if you have
advanced debugging options enabled (disabled by default),
"debugging.assertion_abort" for version 5 (Lua) and
"logging/debugging/assertation-abort" for version 6 (YAML). In case you
encounter any issues, please try clearing the cache first.
Full changelog:
https://gitlab.nic.cz/knot/knot-resolver/raw/v5.7.1/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-5.7.1.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-5.7.1.tar.xz.asc
Documentation:
https://knot-resolver.readthedocs.io/en/v5.7.1/
--
Ales Mrazek
PGP: 3057 EE9A 448F 362D 7420 5A77 9AB1 20DA 0A76 F6DE
Dear Knot Resolver users,
last week a long-lasting bug in our mailing system has been discovered,
which, over the past two years, blocked quite a few e-mails from being
delivered to the list <knot-resolver-users(a)lists.nic.cz> and a few
others (namely the <knot-dns-users(a)lists.nic.cz>, whose subscriber Juha
Suhonen initially brought our attention to the issue - a thank you to
Juha is in order!).
This week the issue has been resolved and the blocked e-mails came
through. Some of these were our own, namely the Knot Resolver 5.5.0
release announcement, which is obviously outdated, as the current stable
version is Knot Resolver 5.7.0. We apologize for any confusion this
situation may have caused. Some others are still awaiting additional
approval, so after we manually identify, which are still relevant, and
which are spam, they will also come through during the following weeks.
Furthermore, later today we are planning to release new versions of the
stable Knot Resolver 5 and the early-access Knot Resolver 6. These
important updates will mitigate a few newfound DoS issues, the details
of which will soon be revealed globally. We are fully aware that this
unfortunate timing may cause further confusion, so we opted to inform
you, the subscribers, beforehand, that this next release e-mail is
indeed relevant.
We once again apologize for the confusion.
Best regards
Oto Šťáva
Knot Resolver team
CZ.NIC z.s.p.o.
On 2/12/24 01:34, Vladimír Čunát wrote:
> On 28/01/2024 02.52, Mike Wright wrote:
>> [system] error while loading config:
>> ...b/x86_64-linux-gnu/knot-resolver/kres_modules/policy.lua:378: bad
>> argument #1 to 'create' (table expected, got nil) (workdir
>> '/var/lib/knot-resolver')
>
> You don't define the `internalDomain` variable. That's correct in lua
> and evaluates as nil.
>
> (and as I already posted, please use the correct mailing-list next time)
OK, figured out my mistake.
internalDomains MUST APPEAR BEFORE any reference to it.
Thanks for your time,
Mike Wright
Dear Knot Resolver users,
we would like to introduce you to Knot Resolver 6.x!
This future version of the resolver is now in the testing phase.
An article was published on our blog as part of this introduction.
EN: https://en.blog.nic.cz/2023/12/15/knot-resolver-6-x-news
CZ: https://blog.nic.cz/2023/12/15/novinky-v-knot-resolver-6-x
We will be happy if you try the new version and give us any feedback.
--
Ales Mrazek
PGP: 3057 EE9A 448F 362D 7420 5A77 9AB1 20DA 0A76 F6DE
Hi!
I'm pretty new to Knot Resolver; previously I used Bind9 but wanted to try something else.
However, I can't really figure out one problem; the error is:
[system] error while loading config: /usr/lib/knot-resolver/kres_modules/policy.lua:43: bad argument #1 to 'kr_straddr_split' (cannot convert 'table' to 'const char *') (workdir '/var/lib/knot-resolver')
I absolutely don't know what I am doing wrong.
Can you help me, please? Also, communication in Czech is possible if that is better for someone.
Here is my kresd.conf (my actual domains are replaced by domain1.tld, domain2.tld respectively):
-- SPDX-License-Identifier: CC0-1.0
-- vim:syntax=lua:set ts=4 sw=4:
-- Refer to manual: https://knot-resolver.readthedocs.org/en/stable/
-- Network interface configuration
net.listen('127.0.0.1', 53, { kind = 'dns' })
net.listen('127.0.0.1', 853, { kind = 'tls' })
net.listen('::1', 53, { kind = 'dns', freebind = true })
net.listen('::1', 853, { kind = 'tls', freebind = true })
-- Load useful modules
modules = {
'hints > iterate', -- Allow loading /etc/hosts or custom root hints
'stats', -- Track internal statistics
'predict', -- Prefetch expiring/frequent records
}
-- Cache size
cache.size = 100 * MB
-- DNS Rebinding Configuration
policy.add(policy.todnames({'domain2.tld', 'domain1.tld'}), policy.PASS)
policy.add(policy.todnames({'domain2.tld', 'domain1.tld'}), policy.FORWARD({{'192.168.0.126'}}))
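Judging by the error message alone (so this is a guess, not a confirmed fix): kr_straddr_split parses address strings, and policy.FORWARD expects a plain list of address strings rather than a nested table; the PASS rule above would also stop processing before FORWARD is ever reached. Something along these lines:
-- one rule: forward both internal domains to the internal server
policy.add(policy.suffix(policy.FORWARD({'192.168.0.126'}),
    policy.todnames({'domain2.tld', 'domain1.tld'})))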
Hi there,
please, can anyone move me forward? I want to implement new stats counter for DoH requests with “Chrome” in "user-agent" header.
I don’t know how to iterate “query.request.qsource.headers”.
I have tried:
function count_chrome_doh()
return function (state, query)
if query.request.qsource.flags.http then
for k, v in ipairs(query.request.qsource.headers) do
if v.name == 'user-agent' and v.value == 'Chrome' then
if stats.get('request.agent.chrome') then
stats['request.agent.chrome'] = stats.get('request.agent.chrome') + 1
else
stats['request.agent.chrome'] = 1
end
return nil
end
end
end
return nil
end
end
policy.add(count_chrome_doh())
but it fails with the error "'struct 322' has no '__ipairs' metamethod".
Thanks!
Blažej
Hello,
What would be the best way to implement the following with kresd?
The device used has a 2 core cpu.
It has 3 (listening) ip addresses, for example: 10.2.3.4, 2001:0DB8:123::1 and 2001:0DB8:123::64
I want to have kresd to listen to:
– 10.2.3.4 and 2001:0DB8:123::1, doing DNS resolution over UDP (53), TLS and HTTPS (the question is not about these settings).
– 2001:0DB8:123::64, using the same settings as above but adding the dns64 module and resolution (only for requests made to 2001:0DB8:123::64).
Having 2 cores, I have 2 identical instances; should I differentiate them and have one for dns64 and one without? Or could I have 2 identical instances with a shared configuration file allowing dns64 to be used or not depending on the listening IP? Or 4 instances (2 identical for dns64, 2 identical without, to have a spare of each config)?
The view: options are good for filtering or acting on the source IP, the queried domain or even the resolved (destination) IP, but there is nothing about the IP used to reach the resolver (the listening address).
Thank you.
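A sketch of the split-instance direction under 5.x (assuming systemd template instances kresd@1 and kresd@2, with the env table exposing SYSTEMD_INSTANCE as in the kresd documentation; addresses as in the question, dns64 using its default prefix):
-- differentiate instances of a shared config by systemd instance name
if env.SYSTEMD_INSTANCE == '2' then
    -- this instance serves only the dns64 address
    net.listen('2001:0DB8:123::64', 53, { kind = 'dns' })
    modules.load('dns64')
else
    net.listen('10.2.3.4', 53, { kind = 'dns' })
    net.listen('2001:0DB8:123::1', 53, { kind = 'dns' })
end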
Dear Knot Resolver users,
Knot Resolver 5.7.0 has been released!
Security
- avoid excessive TCP reconnections in a few more cases
Like before, the remote server had to behave nonsensically in order
to inflict this upon itself, but it might be abusable for DoS.
We thank Ivan Jedek from OryxLabs for reporting this.
Improvements
- forwarding mode: tweak dealing with failures from forwarders,
in particular prefer sending CD=0 upstream (!1392)
Bugfixes
- fix unusual timestamp format in debug dumps of records (!1386)
- adjust linker options; it should help less common platforms (!1384)
- hints module: fix names inside home.arpa. (!1406)
- EDNS padding (RFC 8467) compatibility with knot-dns 3.3 libs (!1422)
Full changelog:
https://gitlab.nic.cz/knot/knot-resolver/raw/v5.7.0/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-5.7.0.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-5.7.0.tar.xz.asc
Documentation:
https://knot-resolver.readthedocs.io/en/v5.7.0/
--
Ales Mrazek
PGP: 3057 EE9A 448F 362D 7420 5A77 9AB1 20DA 0A76 F6DE
Hello,
could you please help me with the knot resolver configuration in a case where I
need to redirect each variation of a domain to some address,
e.g.
www.example.com, m.example.com, domain.example.com ...
like wildcard record
*.example.com 10.0.0.50
In my configuration it is handled by a file with static records:
-- load static records
hints.add_hosts('/etc/knot-resolver/static_records.txt')
which contains the address to redirect to and the domain:
10.0.0.50 1xbet.com
10.0.0.50 thelotter.com
10.0.0.50 webmoneycasino.com
10.0.0.50 betworld.com
10.0.0.50 bosscasino.eu
10.0.0.50 sportingbull.com
But I'm not able to work out the correct syntax for a wildcard domain
redirection.
Best regards,
--
Smil Milan Jeskyňka Kazatel
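hints.add_hosts() entries match exact names only; for wildcard-style redirection, one option might be policy.ANSWER combined with a suffix match, which covers a name and all its subdomains (a sketch; the TTL and the single example domain are arbitrary):
-- answer *.1xbet.com (and the apex) with a static A record
policy.add(policy.suffix(
    policy.ANSWER({ [kres.type.A] = { rdata = kres.str2ip('10.0.0.50'), ttl = 300 } }),
    policy.todnames({'1xbet.com'})))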
Hoping someone can help...
Built Knot Resolver v5.6.0 from source.
It works and resolves correctly for "regular" TLDs.
However, I would like to point it to OpenNIC for resolution/forwarding
so that I can resolve the expanded/alternative TLDs.
Default configuration with:
policy.add(policy.all(
policy.FORWARD(
{'2001:19f0:b001:379:5400:3ff:fe68:1cc6',
'138.197.140.189',
'2600:3c04::f03c:93ff:febd:be27',
'45.61.49.203'})))
and it fails to find "grep.geek" using the standard root zone/hints:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 22871
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;grep.geek. IN A
;; AUTHORITY SECTION:
. 86077 IN SOA a.root-servers.net.
nstld.verisign-grs.com. 2023050902 1800 900 604800 86400
So I checked the Documentation site and found "hints.root" which
theoretically will override any other root hints.
Using the OpenNIC root zone file (downloads as "db.root") I set:
hints.root ({
['ns13.opennic.glue.'] = { '2a01:4f8:192:43a5::2', '144.76.103.143' }
})
in kresd.conf.
Still no joy - "grep.geek" is NXDOMAIN from a.root-servers.net again.
Any thoughts? Things I might have missed along the way?
Hello,
it's me again :) I just want to make sure if behaviour of Knot
Resolver is correct.
I implemented DDR mechanism to discover DoH / DoT DNS servers. My
Macbook with Ventura successfully discovered DoH server and started to
use it.
But: 10 seconds after establishing the connection, Knot Resolver sends a
FIN,ACK packet and the connection is correctly closed. From this moment,
the Macbook starts to use DNS over UDP again and will retry the DoH
connection 10-30 s later. Then it uses the DoH server again for 10 seconds...
Is this behaviour correct? Should the Macbook send some keepalive
messages to prevent the connection from closing? Or should the Macbook
reopen the DoH connection more quickly?
Thanks,
Blažej
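For what it's worth, those 10 seconds match kresd's default idle timeout for incoming TCP/TLS connections, which can presumably be raised (the 30-second value below is arbitrary):
-- get/set the idle timeout for incoming TCP connections, in milliseconds
net.tcp_in_idle(30000)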
Hello,
is there any correct way to apply a query policy based on the destination
IP (the IP which received the query)? Like view:addr, but on the dst
address.
I found that the function view.addr(_, subnet, rules, dst) has a DST
parameter, but I'm not sure how to use it.
I also found function view.rule_dst(action, subnet) but still get errors:
error: /usr/lib/knot-resolver/kres_modules/view.lua:103: attempt to
index local 'req' (a number value)
Thanks
Blažej
Hi there,
I'm trying to implement SVCB record "_dns.resolver.arpa" for DDR
mechanism for our AS50242 recursive resolvers.
When I look on Cloudflare or Google implementation, they answer with
"ADDITIONAL SECTION" also.
kdig _dns.resolver.arpa @8.8.8.8 type64
;; ->>HEADER<<- opcode: QUERY; status: NOERROR; id: 61402
;; Flags: qr aa rd ra; QUERY: 1; ANSWER: 2; AUTHORITY: 0; ADDITIONAL: 4
;; QUESTION SECTION:
;; _dns.resolver.arpa. IN SVCB
;; ANSWER SECTION:
_dns.resolver.arpa. 86400 IN SVCB 1 dns.google. alpn=dot
_dns.resolver.arpa. 86400 IN SVCB 2 dns.google. alpn=h2,h3
key7="/dns-query{?dns}"
;; ADDITIONAL SECTION:
dns.google. 86400 IN A 8.8.8.8
dns.google. 86400 IN A 8.8.4.4
dns.google. 86400 IN AAAA 2001:4860:4860::8888
dns.google. 86400 IN AAAA 2001:4860:4860::8844
In Knot Resolver documentation is an example how to answer for SVCB
request but without addition section.
policy.add(
  policy.domains(
    policy.ANSWER(
      { [kres.type.SVCB] = { rdata=kres.parse_rdata({
        'SVCB 1 resolver.example. alpn=dot ipv4hint=192.0.2.1 ipv6hint=2001:db8::1',
        'SVCB 2 resolver.example. mandatory=key65380 alpn=h2 key65380=/dns-query{?dns}',
      }), ttl=5 } }
    ), { todname('_testing.domain') }))
Can anyone help me with how to add an additional section to the answer? Do
we need to use policy.custom_action(state, request)?
Thanks!
Blažej
Dear Knot Resolver users,
Knot Resolver 5.6.0 has been released!
Security
- avoid excessive TCP reconnections in some cases (!1380)
For example, a DNS server that just closes connections without answer
could cause lots of work for the resolver (and itself, too).
The number of connections could be up to around 100 per client's query.
We thank Xiang Li from NISL Lab, Tsinghua University,
and Xuesong Bai and Qifan Zhang from DSP Lab, UCI.
Improvements
- daemon: feed server selection with more kinds of bad-answer events (!1380)
- cache.max_ttl(): lower the default from six days to one day
and apply both limits to the first uncached answer already (!1323 #127)
- depend on jemalloc, preferably, to improve memory usage (!1353)
- no longer accept DNS messages with trailing data (!1365)
- policy.STUB: avoid applying aggressive DNSSEC denial proofs (!1364)
- policy.STUB: avoid copying +dnssec flag from client to upstream (!1364)
Bugfixes
- policy.DEBUG_IF: don't print client's packet unconditionally (!1366)
Full changelog:
https://gitlab.nic.cz/knot/knot-resolver/raw/v5.6.0/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-5.6.0.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-5.6.0.tar.xz.asc
Documentation:
https://knot-resolver.readthedocs.io/en/v5.6.0/
--
Ales Mrazek
PGP: 3057 EE9A 448F 362D 7420 5A77 9AB1 20DA 0A76 F6DE
Hello everyone,
at AS50242 we are experiencing a problem with resolving connectivity.samsung.com.cn
We run two servers, each with 4 instances. Both servers have working
dual-stack (v4/v6).
knot-dnsutils/unknown,now 3.1.1-cznic.1 amd64 [installed]
knot-resolver-module-http/unknown,now 5.5.0-cznic.1 all [installed,automatic]
knot-resolver-release/unknown,now 1.9-1 all [installed]
knot-resolver/unknown,now 5.5.0-cznic.1 amd64 [installed]
DNSViz shows a problem reaching a few IPv6 servers of the .cn TLD via UDP.
I cannot understand why both of our servers respond with SERVFAIL.
Any ideas on how to troubleshoot this further?
Thank you,
Blažej
Hello,
I am a user, not a developer, of knot-resolver, on ubuntu groovy.
When I look up something that has a CNAME and ask for an A record, I get
a SERVFAIL. If I ask for the CNAME I get the correct answer, but then I
have to do another lookup to get the A record for that target.
#-------------
# using knot-resolver
kdig @127.0.53.1 www.cdc.gov.
;; ->>HEADER<<- opcode: QUERY; status: SERVFAIL; id: 44868
#-------------
# using google dns
kdig @8.8.8.8 www.cdc.gov.
www.cdc.gov. 126 IN CNAME www.akam.cdc.gov.
www.akam.cdc.gov. 20 IN A 104.100.61.241
#-------------
My guess is I don't have a complete configuration. Here's my very
simple knot-resolver.conf
#------------
-- SPDX-License-Identifier: CC0-1.0
-- Network interface configuration
net.listen('127.0.53.1')
-- Load useful modules
modules = {
'hints > iterate', -- Load /etc/hosts and allow custom root hints
'stats', -- Track internal statistics
'predict', -- Prefetch expiring/frequent records
}
-- Cache size
cache.size = 100 * MB
--
-- MY STUFF
--
internalDomains = policy.todnames({
'main',
'0.1.10.in-addr.arpa',
'1.10.in-addr.arpa',
'10.in-addr.arpa'
})
policy.add(policy.suffix(policy.FLAGS({'NO_CACHE'}), internalDomains))
policy.add(policy.suffix(policy.STUB({'127.53.0.1'}), internalDomains))
#-------------
How do I fix this?
Thank you,
Mike Wright
Hi,
I installed knot-resolver on my mail server and I see an issue with a specific domain, dovecot.org.
Everything is working as expected, but this single domain doesn't always resolve.
After some time Postfix cannot check the domain mails are coming from and doesn't accept them.
If I run dig dovecot.org, I get this (SERVFAIL):
dig dovecot.org
; <<>> DiG 9.11.26-RedHat-9.11.26-6.el8 <<>> dovecot.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 27594
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;dovecot.org. IN A
;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Fr Apr 29 08:34:55 CEST 2022
;; MSG SIZE rcvd: 40
it starts working again if I do dig +cd, like this:
dig +cd dovecot.org
; <<>> DiG 9.11.26-RedHat-9.11.26-6.el8 <<>> +cd dovecot.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 56130
;; flags: qr rd ra cd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;dovecot.org. IN A
;; ANSWER SECTION:
dovecot.org. 300 IN A 94.237.12.234
;; Query time: 245 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Fr Apr 29 08:34:59 CEST 2022
;; MSG SIZE rcvd: 56
I didn't have this kind of issue using Unbound before I switched, so I think this would be the right place to ask.
I'm using the knot-resolver 5.5.0 package from EPEL on Rocky Linux 8.5, and my kresd config is very simple:
net.listen('127.0.0.1', 53, { kind = 'dns' })
net.listen('127.0.0.1', 853, { kind = 'tls' })
--net.listen('127.0.0.1', 443, { kind = 'doh2' })
net.listen('::1', 53, { kind = 'dns', freebind = true })
net.listen('::1', 853, { kind = 'tls', freebind = true })
--net.listen('::1', 443, { kind = 'doh2' })
-- Load useful modules
modules = {
'hints > iterate', -- Allow loading /etc/hosts or custom root hints
'stats', -- Track internal statistics
'predict', -- Prefetch expiring/frequent records
}
-- Cache size
cache.size = 100 * MB
-- use /etc/hosts entries
-- hints.add_hosts()
net.ipv6 = false
Anything i can do to track this down?
Thanks in advance for your help.
Juergen
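Since +cd (checking disabled) makes the name resolve, this looks like a DNSSEC validation failure; one way to capture verbose logs for just this domain might be a debug policy (a sketch using policy.DEBUG_ALWAYS, available in recent 5.x releases):
-- log full debug output for queries under dovecot.org only
policy.add(policy.suffix(policy.DEBUG_ALWAYS, policy.todnames({'dovecot.org'})))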
Hello,
I have noticed kresd segfault:
[Sun May 15 14:22:47 2022] kresd[1791403]: segfault at 407ce590 ip
00000000407ce590 sp 00007ffc2d192668 error 15
There were also about 1300 lines from the same PID with a message like this:
May 15 14:23:45 xxxx kresd[1791403]: [primin] triggered priming query,
next in 0 seconds
Maybe it is related to the crash maybe not.
OS: Debian Linux 11.3 kernel 5.10.0-13-amd64 #1 SMP Debian 5.10.106-1
(2022-03-17)
Knot-Resolver: 5.5.0-cznic.1
With regards
Ales
Hello,
We are using Knot-Resolver 5.5.0 with rebinding protection:
modules.load('rebinding < iterate')
We have some complains about an invalid domain name being returned in
the additional section of the response to the blocked request:
;; ADDITIONAL SECTION:
explanation.invalid. 10800 IN TXT "blocked by DNS
rebinding protection"
It looks like some Windows domain controllers running DNS clients do not
like it and log an error:
The DNS server encountered an invalid domain name in a packet from
<Knot-Resolver IP> The packet will be rejected. The event data contains
the DNS packet.
Is there a way to suppress this? Or, even better, to respond with SERVFAIL?
Thanks
Ales Rygl
Hello,
I would like to ask for help with the prefill module. The issue is that
when running multiple instances of kresd under systemd, usually just one
of them is able to start correctly. The others hang and fail to start.
The config is just copy/paste from the documentation:
modules.load('prefill')
prefill.config({
['.'] = {
url = 'https://www.internic.net/domain/root.zone',
interval = 86400, -- seconds
}
})
Starting the instances in sequence does not help; the 2nd one hangs, and
only if the 1st one is killed/stopped does the 2nd one go on and process
the root zone.
Did I miss something in the documentation?
With regards
Ales Rygl
Hi everyone,
The DNS Security Extensions (DNSSEC) add integrity and authenticity to the
Domain Name System (DNS). Now, more than 17 years after their
standardization, we would like to hear from DNS recursive resolver operators
about their experience with DNSSEC. For this reason, we have set up a short
survey. It’s directed mainly towards organisations that run a recursive
resolver. Filling out the survey should take roughly 5 to 10 minutes.
https://forms.gle/FxTD9FofaogdvLqcA (link directs to Google Forms)
This survey is carried out by SIDN Labs (https://sidnlabs.nl) and by the
Swedish Internet Foundation (https://internetstiftelsen.se/en/). You can
contact us via email: moritz.muller(a)sidn.nl
Please excuse us if you have received this email via multiple mailing
lists.
—
Moritz Müller Research Engineer at SIDN Labs
Running 5.4.4, adding an NTA seems very straightforward:-
>> trust_anchors.set_insecure( {"fj"} )
>
>> trust_anchors.summary()
>'fj. is negative trust anchor
>. 172800 DNSKEY 257 3 8 AwEAAaz/tAm8yTn4Mfeh5eyI96WSVexTBAvkMgJzkKTOiW1vkIbzxeF3+/4RgWOq7HrxRixHlFlExOLAJr5emLvN7SWXgnLh4+B5xQlNVz8Og8kvArMtNROxVQuCaSnIDdD5LKyWbRd2n9WGe2R8PzgCmr3EgVLrjyBxWezF0jLHwVN8efS3rCj/EWgvIWgb9tarpVUDK/b58Da+sqqls3eNbuv7pr+eoZG+SrDK6nWeL3c6H5Apxz7LjVc1uTIdsIXxuOLYA4/ilBmSVIzuDWfdRUfhHdY6+cn8HFRm+2hM8AnXGXws9555KrUB5qihylGa8subX2Nn6UwNR1AkUTV74bU= ; Valid: ; KeyTag:20326
>'
What is the precise incantation to remove it when it is no longer required?
The following do not work:-
>> trust_anchors.remove('fj')
>false
>> trust_anchors.remove('fj.')
>false
>> trust_anchors.remove( {"fj"} )
>false
Any help would be appreciated.
Also, does Knot Resolver allow an automatic timeout when setting NTAs as
Bind does?
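The only other idea I got from re-reading the docs (untested, so it may be a misreading): if trust_anchors.set_insecure() replaces the whole NTA list on each call, then clearing it might be as simple as:
-- Untested guess: set_insecure() seems to replace the previous NTA list
-- wholesale, so an empty list would drop the 'fj' NTA again.
trust_anchors.set_insecure({})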
---
Best wishes,
Matthew
Hello,
I am trying to import a custom-generated RPZ list into kresd, but the daemon
fails with the following error: "[policy] RPZ: invalid domain name character".
I am using UTF-8 encoding for the file. Any pointers on how to convince kresd
to ingest these domain names would be greatly appreciated.
Best regards,
Łukasz Jarosz
Hello,
I'd like to kindly ask for help with the following issue.
I am considering deploying knot-resolver into a DNS solution where it
should coexist with other DNS daemons, namely Unbound. I am running
dnsdist <https://dnsdist.org/> in front of a pool of resolvers, to which
I have just added the latest release (5.5.4) of knot-resolver. It
receives a portion of the requests at a rate of about 500 qps. There are
6 kresd processes on a VM, with the cache in tmpfs. The cache size is
2 GB (8 GB mounted to tmpfs). The resolver is running Debian Bullseye.
The solution serves real customers - the traffic is not artificial.
The configuration enables some modules:
modules = {
    'hints > iterate', -- Load /etc/hosts and allow custom root hints
    'stats',           -- Track internal statistics
    'predict',         -- Prefetch expiring/frequent records
    'bogus_log',       -- DNSSEC validation failure logging
    'nsid',
    'prefill',
    'rebinding < iterate'
}
cache.size = cache.fssize() - 6*GB
modules = {
    predict = {
        window = 15,        -- 15 minutes sampling window
        period = 6*(60/15)  -- track last 6 hours
    }
}
policy.add(
    policy.rpz(policy.DENY,
        '/var/cache/unbound/db.rpz.xxx.cz',
        true)
)
Carbon protocol reporting is enabled with a sample rate of 5 s.
The performance in terms of response time is equal to or even better than
Unbound's. I am measuring the performance of dnsdist's backend servers
(daemons) - latency, dropped requests and request rate - using the Carbon
protocol with a sample rate of 5 s. The backend servers, kresd included,
are receiving requests from just several clients at the moment (the
source IPs of the balancers).
What is wrong here is a kind of repeated packet drop reported for the
kresd instance by dnsdist. It appears at roughly 15-minute intervals.
When it occurs, the drop rate increases from about 0.5 req/s (the
standard behaviour) to 5-10 req/s, which causes a positive feedback loop
and an increased packet rate. There are such peaks every 15 minutes.
When I restart the kresd instances, the drops go away and everything is
stellar for a couple of hours.
I have no explanation for that. It is apparently not related to the
requests themselves - I have tried to simulate this by replaying the
traffic captured when the problem occurs, several times, without success.
kresd can even process 20k qps on this instance with no increase in drop
rate from the balancer's point of view. But after a while... The fact
that restarting kresd helps immediately suggests that something might be
wrong inside.
I have tried increasing the number of kresd processes and the cache size,
and disabling cache persistence. Nothing helps. I am pretty sure there
are no network issues on the VM or the surrounding network.
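One more test I still plan to run, since the 15-minute periodicity happens to match the predict module's sampling window (window = 15 above) - this is only a hunch on my part:
-- Untested hunch: rule out the predict module by unloading it on one
-- instance and watching whether the 15-minute drop pattern persists.
modules.unload('predict')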
I can provide graphical representations of this behaviour if needed, or
a packet capture of the real traffic.
Many thanks
With best regards
Ales Rygl
Hello,
I'd like to post DNSTAP data to remote syslog.
In kresd.conf I have:
...
dnstap.config({
    socket_path = "/tmp/dnstap.sock",
})
...
I tried running socat to create and listen on the socket, but I see no data.
What is wrong?
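For completeness, here is the fuller config I am about to try; I am guessing the option names from the 5.3.0 changelog, where log_responses was nested under client:
-- Guesswork on my side: explicitly enable query/response logging,
-- in case nothing is written to the socket by default.
modules.load('dnstap')
dnstap.config({
    socket_path = "/tmp/dnstap.sock",
    client = {
        log_queries = true,
        log_responses = true,
    },
})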
Thank you.
MZ
Hi,
I am writing looking for some help with a setup where the local LAN has
a machine with Knot Resolver, and some of the hosts connected to the LAN
are Ubuntu machines that by default use systemd-resolved as a local
caching stub resolver. For some reason this combination appears
troublesome, and I am trying to understand all the reasons why.
One issue has already been identified: systemd-resolved in the Ubuntu
Focal version gets confused by a (correct) answer from kresd (discussion
at https://gitlab.nic.cz/knot/knot-resolver/-/issues/686#note_234431).
Now I have found another issue: I do not appear to be successful in
making systemd-resolved talk to kresd over TLS. This is important because
most of the Ubuntu Focal hosts are set up with systemd-resolved using
opportunistic TLS. If systemd-resolved thinks there is a problem
contacting the current DNS server via TLS, it switches to the fallback
server and kresd ends up not being used at all.
If I use `resolvectl` to set the DNS of an Ubuntu host to point to the
machine with kresd and I activate DNSoverTLS, then I get:
resolvectl query lwn.net
lwn.net: resolve call failed: All attempts to contact name servers or
networks failed
Similarly, if I use resolvectl to select opportunistic DNSoverTLS,
things seem to work, but I see in the journal some messages about
Using degraded feature set UDP for DNS server
Thus, I'd be glad to get some pointers on how to check that DNS over TLS
works correctly with kresd and how to verify why systemd-resolved fails.
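On the kresd side, the only checks I know of would be from the control socket - this is my possibly-wrong reading of the API:
-- Unverified assumptions about the control-socket calls:
net.tls()   -- should print the currently configured certificate, if any
net.list()  -- should list listeners, including { kind = 'tls' } on 853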
Thanks!
Sergio
On 29/10/2021 16.59, Martin Dosch wrote:
> You're right. Although the certs are readable (and other services
> successfully read them already), it works after I created a script
> which copies the files into kresd's workdir and chowns them to
> knot-resolver.
Maybe those other services run as root user or something...
Dear all,
I have been using knot-resolver for DNS over TLS (DoT) for a while now.
So far I have let nginx handle the TLS part on port 853 and proxy the
requests to 127.0.0.1:53. I wanted to simplify my setup and let
knot-resolver do the whole thing, but I am facing problems on my server
(Debian Stable Bullseye).
I can enable DoT on port 853 successfully without specifying certs, but
I want to use my TLS certs created by certbot. Once I add the following
line, kresd fails to start.
> net.tls("/etc/letsencrypt/live/mdosch.de/fullchain.pem",
> "/etc/letsencrypt/live/mdosch.de/privkey.pem")
Systemd shows me the following error:
> Oct 28 19:49:41 v220191283267104968 systemd[1]: Starting Knot Resolver daemon...
> Oct 28 19:49:41 v220191283267104968 kresd[22488]: [tls] gnutls_certificate_set_x509_key_file(/etc/letsencrypt/live/md>
> Oct 28 19:49:41 v220191283267104968 kresd[22488]: [system] error while loading config: error occurred here (config fi>
> Oct 28 19:49:41 v220191283267104968 kresd[22488]: stack traceback:
> Oct 28 19:49:41 v220191283267104968 kresd[22488]: [C]: in function 'tls'
> Oct 28 19:49:41 v220191283267104968 kresd[22488]: /etc/knot-resolver/kresd.conf:3: in main chunk
> Oct 28 19:49:41 v220191283267104968 kresd[22488]: ERROR: Invalid argument (workdir '/var/lib/knot-resolver')
> Oct 28 19:49:41 v220191283267104968 systemd[1]: kresd(a)1.service: Main process exited, code=exited, status=1/FAILURE
> Oct 28 19:49:41 v220191283267104968 systemd[1]: kresd(a)1.service: Failed with result 'exit-code'.
> Oct 28 19:49:41 v220191283267104968 systemd[1]: Failed to start Knot
> Resolver daemon.
The files are world readable so I don't know what's going on:
> ll /etc/letsencrypt/live/mdosch.de/
> total 4.0K
> -rw-r--r-- 1 certbot prosody 692 Jun 11 00:30 README
> lrwxrwxrwx 1 root root 38 Oct 27 22:07 cert.pem -> ../../archive/mdosch.de-0003/cert9.pem
> lrwxrwxrwx 1 root root 39 Oct 27 22:07 chain.pem -> ../../archive/mdosch.de-0003/chain9.pem
> lrwxrwxrwx 1 root root 43 Oct 27 22:07 fullchain.pem -> ../../archive/mdosch.de-0003/fullchain9.pem
> lrwxrwxrwx 1 root root 41 Oct 27 22:07 privkey.pem -> ../../archive/mdosch.de-0003/privkey9.pem
Also, I don't understand why it complains about the workdir, as I didn't
change anything regarding the workdir but only pointed to the cert and
key file.
Do you have any idea what I am doing wrong?
Best regards,
Martin
Hi there,
is it actually possible to import a zone file for a local zone
(yyyyy.xxxx.com.lan), or does it have to be done differently?
In any case, I can't figure out how to do it correctly!
Can someone help with an example? I have had some problems with my local
domains since I switched to Knot.
Can you actually import the domains from Knot into Knot Resolver?
I would like to have a stable DNS system on my servers again with Knot &
Knot Resolver. I hope I "think" correctly, as a 25-year "bind"-damaged
person ;-)
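From the docs I guessed that the usual pattern is to keep the zone on an authoritative server (e.g. Knot) and point kresd's local suffix at it - is something like this right? (The address below is invented.)
-- My guess from the "replacing part of the DNS tree" examples; the
-- authoritative knot server carrying the zone would live at 192.168.1.2.
local lan = policy.todnames({'yyyyy.xxxx.com.lan'})
policy.add(policy.suffix(policy.FLAGS({'NO_CACHE'}), lan))
policy.add(policy.suffix(policy.STUB({'192.168.1.2'}), lan))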
--
mit freundlichen Grüßen / best regards
Günther J. Niederwimmer
Hello List,
I would like to install Knot Resolver and first test it with DNS over
TLS, but that doesn't work.
My system is Oracle Linux 8.4.
I have a Let's Encrypt certificate for this system and wanted to
integrate it into kresd, but I get a GnuTLS error:
Sep 22 18:27:30 bbs kresd[446005]: [tls ]
gnutls_certificate_set_x509_key_file(/etc/letsencrypt/live/bbs.xxxx.xxxx/
fullchain_ecdsa.pem,/etc/pki/private/xxxx.xxxx_ec.key) failed: -64
(GNUTLS_E_FILE_ERROR)
Sep 22 18:27:30 bbs kresd[446005]: [system] error while loading config: error
occurred here (config filename:lineno is at the bottom, if config is
involved):#012stack traceback:#012#011[C]: in function 'tls'#012#011/etc/knot-
resolver/kresd.conf:24: in main chunk#012ERROR: Invalid argument (workdir '/
var/lib/knot-resolver')
Sep 22 18:27:30 bbs systemd[1]: kresd(a)1.service: Main process exited,
code=exited, status=1/FAILURE
Does this not work with a Let's Encrypt certificate, or do I have another
error in my configuration?
My config
-- SPDX-License-Identifier: CC0-1.0
-- vim:syntax=lua:set ts=4 sw=4:
-- Refer to manual: https://knot-resolver.readthedocs.org/en/stable/
-- Uncomment this only if you need to debug problems
-- verbose(true)
log_level('debug')
-- Network interface configuration
net.listen('127.0.0.1', 53, { kind = 'dns' })
net.listen('127.0.0.1', 853, { kind = 'tls' })
--net.listen('127.0.0.1', 443, { kind = 'doh2' })
net.listen('::1', 53, { kind = 'dns', freebind = true })
net.listen('::1', 853, { kind = 'tls', freebind = true })
--net.listen('::1', 443, { kind = 'doh2' })
net.listen('xxx.xxx.xxx.1', 53, { kind = 'dns' })
net.listen('xxx.xxx.xxx.1', 853, { kind = 'tls' })
net.listen('192.168.100.200', 53, { kind = 'dns' })
net.listen('192.168.100.200', 853, { kind = 'tls' })
net.listen('xxx:xxxx:xxxx:xxx::200', 53, { kind = 'dns' })
net.listen('xxx:xxxx:xxxx:xxx::200', 853, { kind = 'tls' })
-- DNS over TLS
net.tls("/etc/letsencrypt/live/bbs.xxxx.xxx/fullchain_ecdsa.pem", "/etc/pki/
tls/private/xxxx.xxx_ec.key")
-- Load useful modules
modules = {
    'hints > iterate', -- Load /etc/hosts and allow custom root hints
    'stats',           -- Track internal statistics
    'predict',         -- Prefetch expiring/frequent records
}
I heard / read from a user that Knot Resolver must have its own rights to
the certificate, but that is not possible, because the key is also
intended for other computers, and duplicating it would create a security
risk. Is this a design problem or a bug?
Thanks for an answer,
--
mit freundlichen Grüßen / best regards
Günther J. Niederwimmer
On 24/09/2021 14.29, Günther J. Niederwimmer wrote:
> I mean my cert and key are are equipped with "standard" rights ?
>
> Knot-resolver can't handle it ?
It does not run under "root" user or group (by default), so in your
settings it won't be able to read them.
--Vladimir
Hi,
I'm having trouble reading the documentation for Lua modules. Is it
possible to issue multiple recursive queries and await all of the
results?
What I'm trying to achieve is a CNAME glue-zone, d.example.com, that
searches several other zones (phy.example.com, vm.example.com,
ad.example.com, etc.) and returns a CNAME record pointing into the one
with the highest priority (configurable, probably just a list) that does
not reply with NXDOMAIN.
I'd like to do all the recursions in parallel, to keep users as happy
as possible.
Configuration might look something like this:
local zones = {
    ["d.example.com"] = {
        "phy.example.com",
        "vm.example.com",
        "ad.example.com"
    },
    ["vm.example.com"] = {
        "vmware.vm.example.com",
        "hyperv.vm.example.com"
    }
}
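So far I only have a rough sketch of how I imagine the asynchronous resolve() API could be used - I could not confirm the details from the docs, so all of this is guesswork:
-- Guesswork: resolve() is asynchronous, so fire all sub-queries at once
-- and count completions in the callbacks to effectively await them all.
local priority = { 'phy.example.com', 'vm.example.com', 'ad.example.com' }
local function lookup_all(host)
    local pending, rcodes = #priority, {}
    for i, zone in ipairs(priority) do
        resolve(host .. '.' .. zone, kres.type.A, kres.class.IN, {},
            function (pkt)
                rcodes[i] = pkt and pkt:rcode() or kres.rcode.SERVFAIL
                pending = pending - 1
                if pending == 0 then
                    -- all sub-queries done: pick the first zone in the
                    -- priority list that did not answer NXDOMAIN and
                    -- synthesize the CNAME towards it here
                end
            end)
    end
end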
/Erik
() ascii ribbon - against html e-mail
/\ arc.pasp.de - against proprietary attachments
Hello,
it seems that Knot Resolver doesn't respond to DNS ANY queries - or does it?
I was unable to find how to set this up in the documentation.
Thank you for your help
Michal
Dear Knot Resolver users,
Knot Resolver 5.4.0 has been released! It comes with improved logging
facilities and new debugging options.
Improvements
------------
- fine grained logging and syslog support (!1181)
- expose HTTP headers for processing DoH requests (!1165)
- improve assertion mechanism for debugging (!1146)
- support apkg tool for packaging workflow (!1178)
- support Knot DNS 3.1 (!1192, !1194)
Bugfixes
--------
- trust_anchors.set_insecure: improve precision (#673, !1177)
- plug memory leaks related to TCP (!1182)
- policy.FLAGS: fix not applying properly in edge cases (!1179)
- fix a crash with older libuv inside timer processing (!1195)
Incompatible changes
--------------------
- see upgrading guide:
https://knot-resolver.readthedocs.io/en/stable/upgrading.html#to-5-4
- legacy DoH implementation configuration in net.listen() was renamed
from kind="doh" to kind="doh_legacy" (!1180)
Full changelog:
https://gitlab.nic.cz/knot/knot-resolver/raw/v5.4.0/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-5.4.0.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-5.4.0.tar.xz.asc
Documentation:
https://knot-resolver.readthedocs.io/en/v5.4.0/
--
Tomas Krizek
PGP: 4A8B A48C 2AED 933B D495 C509 A1FB A5F7 EF8C 4869
Hello all,
I've recently switched from the unbound resolver
to the knot resolver for all outgoing name resolution on
one of my FreeBSD servers. It all works pretty much as
expected. With one problem; initializing and terminating
kresd isn't possible w/o adding some external scripting.
Given that the BSDs don't come w/systemd. Are there any
plans to better support the sysv init system in the knot
resolver?
Thanks in advance for any hints/pointers/solutions. :-)
--Chris
Hi,
I recently stumbled upon the following issue with Postfix:
DANE TLSA lookup problem: Host or domain name not found. Name service
error for name=_25._tcp.smtp-relay-in-s1.neusta.de type=TLSA: Host not
found, try again
Postfix uses knot-resolver and I found [1] as a possible similar issue
with unbound.
I would like to test whether the issue persists with QNAME minimization
disabled, but there seems to be no configurable option for it.
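The closest thing I found while searching (untested; the flag name is taken from the policy docs, and I may be misreading them):
-- Untested: apply NO_MINIMIZE to all queries, which should disable
-- QNAME minimization if the flag does what its name suggests.
policy.add(policy.all(policy.FLAGS({'NO_MINIMIZE'})))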
Kind Regards
Bjoern
[1]
http://postfix.1071664.n5.nabble.com/Mail-deferred-TLSA-lookup-error-td1074…
Hello,
We operate recursive resolvers in our network in AWS and from within
the AWS network there are certain authoritative nameservers that block
large swaths of the AWS IP range, causing resolution to fail for us.
So I'm attempting to write a module that will handle failures reaching
external nameservers and retry the query by forwarding it to a major
resolver like Cloudflare DNS. We push a ton of DNS query traffic, so we
do not want to simply forward everything to a public resolver; we only
want to forward when recursion doesn't work for some reason.
I've pored over the documentation and source code and tried to hook a
variety of places, but I can't seem to find a good spot to hook the
request failure. The finish layer allows me to hook the SERVFAIL, but by
then it is too late to do anything. Using a simple policy, I was actually
able to get something close by calling ensure_answer(), clearing the
answer, setting the same forwarding flags as the forward policy, and then
calling ensure_answer() again, and I could see the query getting sent to
Cloudflare, so it seems this is possible. But at the policy level it's
too early to know whether a query will result in a SERVFAIL.
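For reference, this is the shape of the finish-layer hook I ended up with, simplified and from memory, so treat it as a sketch; by this point the SERVFAIL is already baked into the answer, which is exactly the problem:
-- Sketch from memory: a Lua module layer that can observe, but not
-- retry, a failed request once the finish phase is reached.
failover = { layer = {} }
function failover.layer.finish(state, req)
    if req.answer ~= nil and req.answer:rcode() == kres.rcode.SERVFAIL then
        -- too late: the answer is finalized and resolution cannot restart
    end
    return state
end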
Could anyone point me in the right direction here?
Thank you!
Paul
Hi. We're moving a couple of our servers to new hardware, and
taking that opportunity to upgrade some of the services
at the same time, the main one being Knot: moving from
a 2.6.5 context to 3.0.4. I've read the upgrade doc, and
while there isn't a 2.6.5 --> 3.0.4 path, it appears for
our environment that the changed/removed stanzas won't
have much of an impact *except* as they relate to the
database. That is, it appears that it isn't going to
be a smooth transition, as the databases appear to be
incompatible. Is that right? Do I need to rewrite all
the keys && serials, and start from scratch? If so, as
we have a huge number of domains, this would be an
enormous task. Is this avoidable?
Thank you for any and all insight into this transition.
--Chris
Hello,
is there a way to output metrics about requests sent to upstreams, and their details, in the Prometheus /metrics output?
I've been trying to find info and there seems to be no documentation about that functionality.
The sources mention a dedicated /upstreams endpoint https://gitlab.nic.cz/knot/knot-resolver/-/blob/master/modules/http/prometh… but /upstreams returns an empty list.
currently trying to run this config:
modules = {
    'hints > iterate',
    'stats',
    'predict',
    'http',
}
net.listen('0.0.0.0', 53, { kind = 'dns' })
net.listen('0.0.0.0', 9053, { kind = 'webmgmt' })
cache.size = 256 * MB
cache.storage = "lmdb:///dev/shm/knot-resolver"
policy.add(
    policy.all(
        policy.TLS_FORWARD({
            { '1.1.1.1', hostname = 'cloudflare-dns.com' },
            { '1.0.0.1', hostname = 'cloudflare-dns.com' },
        })
    )
)
Is there a way to get this upstreams info?
Running modules.load('stats') and then stats.upstreams() from the 'runtime' configuration returns upstream request details as described here: https://knot-resolver.readthedocs.io/en/stable/modules-stats.html
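As a stopgap I am considering polling it myself and shipping the numbers out-of-band - a sketch, with the iteration details invented since I don't know the exact shape of the returned table:
-- Sketch: poll stats.upstreams() periodically; 'sec' is the kresd
-- time-unit constant. The table layout below is an assumption.
modules.load('stats')
event.recurrent(60 * sec, function ()
    for addr, info in pairs(stats.upstreams()) do
        print(addr, tostring(info))
    end
end)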
thanks
Apologies if this has been asked before, but I was unable to find
informative resources about this topic except this[1].
What are the downsides of having a recursive DNS server in front of an
authoritative DNS Server? I'm wondering if all the points listed in the
linked article are relevant for small scale installations.
Is anyone running such a setup and can share some advice with regards to
rate limiting?
[1]: https://www.whalebone.io/separate-dns-servers/
--
Alex JOST
Hey list,
new here. Could someone please try to explain to me what's better about
the new algorithm for choosing nameservers? I feel like it totally broke
my use case.
I use knot-resolver as local resolver and have configured this:
acme = policy.todnames({'acme.cz', 'acme2.cz'})
policy.add(policy.suffix(policy.FLAGS({'NO_CACHE'}), acme))
policy.add(policy.suffix(policy.STUB({'172.16.21.93','172.16.21.94','8.8.8.8'}),
acme))
Until the "better" algo, it worked exactly as I wanted it to. When I was in
the network where the 172.16.21.9{3,4} DNS servers were available, they
were selected. And when they were not available, google DNS was used to
resolve those domains.
Now, even when the internal nameservers are available, they are rarely used:
$ for i in `seq 1 20`; do dig intranet.acme.cz +short; done
193.165.208.153
172.16.21.1
172.16.21.1
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
$ for i in `seq 1 20`; do dig intranet.acme.cz +short; done
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
172.16.21.1
193.165.208.153
When I remove the google DNS and leave just 172...
# systemctl restart kresd@{1..4}.service && for i in `seq 1 20`; do dig
intranet.acme.cz +short; done
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
Can I somehow switch back to the old algorithm via configuration?
Thanks
Josef
Dear Knot Resolver users,
Knot Resolver 5.3.0 has been released!
Note regarding CentOS 8 packages: Due to Red Hat's hostile decision
to exclude devel packages from their distribution, we won't be providing
upstream packages or maintaining knot-resolver package in EPEL8 until
libuv-devel has been included in official RHEL8 release [rhbz#1895872].
If you depend on these, we can find a solution for your use-case as part
of the paid support we offer [1].
[rhbz#1895872] - https://bugzilla.redhat.com/show_bug.cgi?id=1895872
[1] - https://www.knot-resolver.cz/support/pro/
Improvements
------------
- more consistency in using parent-side records for NS addresses (!1097)
- better algorithm for choosing nameservers (!1030, !1126, !1140, !1141,
!1143)
- daf module: add daf.clear() (!1114)
- dnstap module: more features and don't log internal requests (!1103)
- dnstap module: include in upstream packages and Docker image (!1110,
!1118)
- randomize record order by default, i.e. reorder_RR(true) (!1124)
- prometheus module: transform graphite tags into prometheus labels
(!1109)
- avoid excessive logging of UDP replies with sendmmsg (!1138)
Bugfixes
--------
- view: fail config if bad subnet is specified (!1112)
- doh2: fix memory leak (!1117)
- policy.ANSWER: minor fixes, mainly around NODATA answers (!1129)
- http, watchdog modules: fix stability problems (!1136)
Incompatible changes
--------------------
- dnstap module: `log_responses` option gets nested under `client`;
see new docs for config example (!1103)
- libknot >= 2.9 is required
Full changelog:
https://gitlab.nic.cz/knot/knot-resolver/raw/v5.3.0/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-5.3.0.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-5.3.0.tar.xz.asc
Documentation:
https://knot-resolver.readthedocs.io/en/v5.3.0/
--
Tomas Krizek
PGP: 4A8B A48C 2AED 933B D495 C509 A1FB A5F7 EF8C 4869