Dear Knot Resolver users,
Knot Resolver 6.0.9 (early-access) has been released!
Improvements:
- rate-limiting: add this mechanism, its configuration options, and docs (!1624)
- manager: secret for TLS session resumption via ticket (RFC5077) (!1567)
The manager creates and sets the secret for all running 'kresd' workers.
The secret is created automatically if the user does not configure
their own secret in the configuration.
This means that the workers are able to resume each other's TLS
sessions regardless of whether the secret was autogenerated or
configured by the user.
- answer NOTIMPL for meta-types and non-IN RR classes (!1589)
- views: improve interaction with old-style policies (!1576)
- stats: add stale answer counter 'answer.stale' (!1591)
- extended_errors: answer with EDE in more cases (!1585, !1588, !1590,
!1592)
- local-data: make DNAMEs work, i.e. generate CNAMEs (!1609)
- daemon: use connected UDP sockets by default (#326, !1618)
- docker: multiplatform builds (#922, !1623)
- docker: shared VOLUMEs are prepared for configuration and cache
(!1625, !1627)
The configuration path was changed to the standard
'/etc/knot-resolver/config.yaml'.
Bugfixes:
- daemon/proxyv2: fix informing the engine about TCP/TLS from the actual
client (!1578)
- forward: fix wrong pin-sha256 length; also log pins on mismatch
(!1601, #813)
Incompatible changes:
- -f/--forks is removed (#631, !1602)
- support for gnutls < 3.4 (released over 9 years ago) is dropped (!1601)
- support for libuv < 1.27 (released over 5 years ago) is dropped (!1618)
Full changelog:
https://gitlab.nic.cz/knot/knot-resolver/raw/v6.0.9/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-6.0.9.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-6.0.9.tar.xz.asc
Documentation:
https://www.knot-resolver.cz/documentation/v6.0.9/
--
Ales Mrazek
PGP: 3057 EE9A 448F 362D 7420 5A77 9AB1 20DA 0A76 F6DE
Issue Summary:
I need to forward DNS queries to a secondary DNS server if a specific value (IP address) is returned in the DNS response. Specifically, if the answer contains 192.168.1.1, I want the request to be forwarded to 10.10.10.1 for re-resolution.
Expected Behavior:
A user queries for a domain (e.g., dig alibaba.com).
If the result contains the IP address 192.168.1.1, the query should be automatically forwarded to another DNS server (e.g., 10.10.10.1) for further resolution.
Current Attempt:
```lua
policy.add(policy.all(function (state, req)
    log("info Policy function triggered")
    -- Get the DNS answer section
    local answer = req:answer()
    if answer then
        for _, record in ipairs(answer) do
            -- Check if the response is an A record containing 192.168.1.1
            if record.stype == kres.type.A and tostring(record.rdata) == '192.168.1.1' then
                log("info IP is 192.168.1.1, forwarding to 10.10.10.1")
                -- Forward the query to the specified DNS server
                return policy.FORWARD({'10.10.10.1'})
            end
        end
    else
        log("info No answer found")
    end
    return kres.DONE
end), true)
```
Issue:
The function triggers correctly, but the query is not being forwarded to the specified DNS server when the condition (record.rdata == '192.168.1.1') is met.
Steps to Reproduce:
Add the above Lua code to the Knot Resolver configuration.
Query for a domain (dig alibaba.com).
If the result contains the IP 192.168.1.1, the query should be forwarded, but it does not.
Environment:
Knot Resolver Version: [Include version]
Operating System: [Your OS]
Configuration: [Any relevant additional configuration]
Desired Solution:
I would like the query to forward correctly to 10.10.10.1 whenever the answer contains 192.168.1.1. Any guidance on why the forward might not be triggered or if additional configurations are needed would be appreciated.
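A likely cause, as far as I can tell from the 5.x Lua API: the `true` argument to `policy.add` registers the function as a postrule, and postrules run only after the answer has already been assembled, so returning `policy.FORWARD` at that point cannot restart resolution. The sketch below (field names and `kres.str2ip` per my reading of the docs, not verified against this setup) at least matches the answer records correctly and shows where the limitation bites:

```lua
-- Sketch only: detect 192.168.1.1 in the finished answer (postrule).
-- Returning policy.FORWARD here would have no effect, because the
-- resolution plan has already completed by the time postrules run.
policy.add(policy.all(function (state, req)
    local pkt = req.answer          -- the assembled answer packet
    if pkt == nil then return nil end
    for _, rec in ipairs(pkt:section(kres.section.ANSWER)) do
        -- compare binary rdata, not its printable form
        if rec.type == kres.type.A and rec.rdata == kres.str2ip('192.168.1.1') then
            log('[policy] answer contains 192.168.1.1')
        end
    end
    return nil
end), true)  -- 'true' makes this a postrule
```

To actually re-resolve such queries via 10.10.10.1, the forwarding decision would have to be made before the answer exists (e.g. by forwarding the whole relevant domain up front) or by a custom module that replans the query; a postrule alone cannot do it.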
Hello,
we are observing that Knot Resolver is refusing certain queries to
subdomains beneath apple.com because of the enabled DNS rebinding
protection. IMHO there is no reason for that - they are not pointing
to a private address range. For instance:
Unbound:
dig init.ess.apple.com @127.0.0.1 -p 53
; <<>> DiG 9.18.24-1-Debian <<>> init.ess.apple.com @127.0.0.1 -p 53
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 38044
;; flags: qr rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;init.ess.apple.com. IN A
;; ANSWER SECTION:
init.ess.apple.com. 81 IN CNAME init-cdn-lb.ess-apple.com.akadns.net.
init-cdn-lb.ess-apple.com.akadns.net. 27 IN CNAME init.ess.g.aaplimg.com.
init.ess.g.aaplimg.com. 12 IN A 17.253.73.204
init.ess.g.aaplimg.com. 12 IN A 17.253.73.205
init.ess.g.aaplimg.com. 12 IN A 17.253.73.203
init.ess.g.aaplimg.com. 12 IN A 17.253.73.201
;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1) (UDP)
;; WHEN: Wed Sep 18 10:42:46 CEST 2024
;; MSG SIZE rcvd: 194
Knot-resolver 5.7.4-cznic.1 freshly re-installed:
dig init.ess.apple.com @127.0.0.1 -p 2053
; <<>> DiG 9.18.24-1-Debian <<>> init.ess.apple.com @127.0.0.1 -p 2053
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 17074
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 2
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; EDE: 18 (Prohibited): (EIM4)
;; QUESTION SECTION:
;init.ess.apple.com. IN A
;; ADDITIONAL SECTION:
explanation.invalid. 10800 IN TXT "blocked by DNS rebinding protection"
;; Query time: 8 msec
;; SERVER: 127.0.0.1#2053(127.0.0.1) (UDP)
;; WHEN: Wed Sep 18 10:45:40 CEST 2024
;; MSG SIZE rcvd: 124
I have also tried to remove the cache under /var/cache/knot-resolver but
without any effect. There are more domain names with this behavior:
query.ess.apple.com, comm-cohort.ess.apple.com, kt-prod.ess.apple.com
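For what it's worth, in 5.x the rebinding protection comes from the optional `rebinding` module, so it must have been switched on somewhere in the configuration. Until the false positive is understood, a blunt workaround would be to stop loading the module (this of course drops the protection for all domains, not just the misclassified ones):

```lua
-- 5.x config excerpt: DNS rebinding protection is an optional module.
-- It is enabled by a line like the following; removing (or commenting
-- out) that line disables the protection entirely.
modules.load('rebinding < iterate')
```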
Thanks.
Ales Rygl
Hi,
I am running knot-resolver-5.7.4 in a FreeBSD service jail (14.1-STABLE).
Note: Because I am still pretty new to using Knot Resolver, I may be missing something important besides [1].
MWN> sum /usr/home/jails/test/var/run/kresd/data.mdb
268 10240 /usr/home/jails/test/var/run/kresd/data.mdb
MWN> ./_STATS [2]
cache('['count_entries']'): 4953
cache('['usage_percent']'): 16.953125
### stopping jail simulates shutdown server [3]:
MWN> service jail stop test
Stopping jails: test.
MWN> sum /usr/home/jails/test/var/run/kresd/data.mdb
268 10240 /usr/home/jails/test/var/run/kresd/data.mdb
### Thus, data.mdb is preserved after shutdown!
### starting jail simulates booting server:
MWN> service jail start test
Starting jails: test.
MWN> sum /usr/home/jails/test/var/run/kresd/data.mdb
15059 10240 /usr/home/jails/test/var/run/kresd/data.mdb
MWN> ./_STATS
cache('['count_entries']'): 87
cache('['usage_percent']'): 0.15625
1) After having stopped that jail, data.mdb is still available and hasn't been modified, as shown by the checksum.
2) After starting the jail (including the start of kresd), data.mdb has been modified (checksum changed).
3) cache.stats() shows significantly lower numbers.
Questions:
#) does cache.stats() get reset after a reboot?
#) what am I missing?
Thanks in advance and regards,
Michael
[1] https://knot-resolver.readthedocs.io/en/stable/daemon-bindings-cache.html#p…
[2] _STATS (based on https://knot-resolver.readthedocs.io/en/stable/daemon-bindings-cache.html#c…)
echo -n "cache('['count_entries']'): " ; echo "cache.stats()" | nc -NU /var/run/kresd/control/kresd.sock | grep count_entries
echo -n "cache('['usage_percent']'): " ; echo "cache.stats()" | nc -NU /var/run/kresd/control/kresd.sock | grep usage_percent
[3] a real server reboot shows the same issue with the cache
Dear Knot Resolver users,
due to an internal infrastructure change, released sources for Knot
Resolver have been moved from
<https://secure.nic.cz/files/knot-resolver/> to
<https://knot-resolver.nic.cz/release/>.
Apart from this move, the rest of the directory structure remains
unchanged. Proper redirects (HTTP 301 Moved Permanently) have been put
in place to make this change as painless and transparent as possible.
These redirects can be expected to stay in place for the foreseeable
future. Still, we do recommend changing any of your direct links from
the secure.nic.cz server to knot-resolver.nic.cz, to avoid the extra
indirection step and/or unforeseen issues in the future.
Should any of you run into any issues or have any questions about this
change, please do let us know, we will be happy to help you out.
Best regards
--
Oto Šťáva | Knot Resolver team lead | CZ.NIC z.s.p.o.
PGP: 6DC2 B0CB 5935 EA7A 3961 4AA7 32B2 2D20 C9B4 E680
Hello.
In case you're using our upstream repositories for Debian or Ubuntu, as
suggested on https://www.knot-resolver.cz/download/
you'll have run into their signing key expiring today. As we didn't
update it in time, you'll have to update it manually by re-running:
wget https://secure.nic.cz/files/knot-resolver/knot-resolver-release.deb
dpkg -i knot-resolver-release.deb
Ticket: https://gitlab.nic.cz/knot/knot-resolver/-/issues/747
We also forgot to add Ubuntu 22.04, so that is fixed now, too.
--Vladimir
Hello,
I am attempting to migrate some internal BIND servers to
Knot + Knot Resolver. We have Knot acting as the authoritative server for
internal views of our public zones and we're attempting to use Knot
Resolver to forward to Knot to provide recursive resolution on the internal
network.
In this configuration, when Knot Resolver resolves a CNAME record from an
internal stub zone, it doesn't chase the target of the CNAME for the client.
To demonstrate, here are two example zones served by knot:
# zone1.com
test-a 30 IN A 192.168.1.1
# zone2.com
test-cname 30 CNAME test-a.zone1.com.
With these configured as stubs in Knot Resolver, if you query
test-cname.zone2.com, Knot Resolver won't chase it across zones to get the
A record value like it would for non-stub zones.
Both Bind9 and Unbound chase this as expected. Is there any way to
configure Knot Resolver to do the same?
Thanks!
- Paul
Dear Knot Resolver users,
Knot Resolver versions 5.7.4 (stable) and 6.0.8 (early-access) have been
released!
Both releases include important security fixes; updating is strongly
advised!
Version 6.0.8 additionally brings some improvements, like faster policy
reloads using a new policy-loader process, respecting system-wide crypto
policies, JSON metrics, separate IPv6 and IPv4 metrics, and more.
---
Knot Resolver 5.7.4:
Security:
- reduce buffering of transmitted data, especially TCP-based in userspace
Also expose some of the new tweaks in lua:
(require 'ffi').C.the_worker.engine.net.tcp.user_timeout = 1000
(require 'ffi').C.the_worker.engine.net.listen_{tcp,udp}_buflens.{snd,rcv}
Improvements:
- add the fresh DNSSEC root key "KSK-2024" already, Key ID 38696 (!1556)
Incompatible changes:
- libknot 3.0.x support is dropped (!1558)
Upstream last maintained 3.0.x in spring 2022.
Full changelog:
https://gitlab.nic.cz/knot/knot-resolver/raw/v5.7.4/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-5.7.4.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-5.7.4.tar.xz.asc
Documentation:
https://www.knot-resolver.cz/documentation/v5.7.4/
---
Knot Resolver 6.0.8:
Security:
- reduce buffering of transmitted data, especially TCP-based in userspace
Also expose some of the new tweaks in lua:
(require 'ffi').C.the_worker.engine.net.tcp.user_timeout = 1000
(require 'ffi').C.the_worker.engine.net.listen_{tcp,udp}_buflens.{snd,rcv}
Packaging:
- all packages:
- remove unused dependency on `libedit` (!1553)
- deb packages:
- packages ``knot-resolver-core`` and ``knot-resolver-manager`` have
been merged into a single ``knot-resolver6`` package. Suffix packages
``knot-resolver-*`` have been renamed to ``knot-resolver6-*``. This
change _should_ be transparent, but please do let us know if you
encounter any issues while updating. (!1549)
- package ``python3-prometheus-client`` is now only an optional
dependency
- rpm packages:
- packages ``knot-resolver-core`` and ``knot-resolver-manager`` have
been merged into a single ``knot-resolver`` package. This change
_should_ be transparent, but please do let us know if you encounter
any issues while updating. (!1549)
- bugfix: do not overwrite config.yaml (!1525)
- package ``python3-prometheus_client`` is now only an optional
dependency
- arch package:
- fix after they renamed a dependency (!1536)
Improvements:
- TLS (DoT, DoH): respect crypto policy overrides in OS (!1526)
- manager: export metrics to JSON via management HTTP API (!1527)
* JSON is the new default metrics output format
* the ``prometheus-client`` Python package is now an optional dependency,
required only for Prometheus export to work
- cache: prefetching records
* predict module: prefetching expiring records moved to prefetch module
* prefetch module: new module to prefetch expiring records
- stats: add separate metrics for IPv6 and IPv4 (!1545)
- add the fresh DNSSEC root key "KSK-2024" already, Key ID 38696 (!1556)
- manager: policy-loader: new component for separate loading of policy
  rules (!1540)
  The ``policy-loader`` ensures that configured policies are loaded into
  the rules database, where they are made available to all running kresd
  workers. This loading is no longer done by all kresd workers as it was
  before, so this should significantly improve the resolver's
  startup/reload time when loading large sets of policy rules, e.g.
  large RPZs.
Incompatible changes:
- cache: the ``cache.prediction`` configuration property has been
  reorganized into ``cache.prefetch.expiring`` and
  ``cache.prefetch.prediction``, changing the default behaviour as well.
  See the `relevant documentation section
  <https://www.knot-resolver.cz/documentation/v6.0.8/config-cache-predict.html>`_
  for more.
- libknot <=3.2.x support is dropped (!1565)
Bugfixes:
- arch package: fix after they renamed a dependency (!1536)
- fix startup with `dnssec: false` (!1548)
- rpm packages: do not overwrite config.yaml (!1525)
- fix NSEC3 records missing in answer for positive wildcard expansion
with the NSEC3 having over-limit iteration count (#910, !1550)
- views: fix a bug in subnet matching (!1562)
Full changelog:
https://gitlab.nic.cz/knot/knot-resolver/raw/v6.0.8/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-6.0.8.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-6.0.8.tar.xz.asc
Documentation:
https://www.knot-resolver.cz/documentation/v6.0.8/
--
Oto Šťáva | Knot Resolver team lead | CZ.NIC z.s.p.o.
PGP: 6DC2 B0CB 5935 EA7A 3961 4AA7 32B2 2D20 C9B4 E680
On 31/05/2024 19.00, oui.mages_0w(a)icloud.com wrote:
> we have different TLS domains/certificates for dns64 and non dns64
Oh, OK. Such a thing hasn't occurred to us, so it's not possible. In
that case I expect you'll need to stay on 5.x for now, with separate
processes for dns64 and non-dns64 (but they can share the cache).
Overall I don't think the current code can support multiple certificates.
On 31/05/2024 13.04, oui.mages_0w(a)icloud.com wrote:
> Unless the policy module allows to filter by listened IP, I will still
> need to use split instances: we don’t select on the server side which
> client is to use dns64 or not, but as an ISP, we leave the choice to
> the clients to decide which dns resolver they want to use (one of ours
> with or without dns64, or a third party).
That is possible, but with 5.x I probably wouldn't recommend it, as it's
been left out of documentation (by mistake) and overall I don't have
much trust in that part of 5.x policies.
But after you migrate to >= 6.x, I would recommend just having a single
shared configuration. And in views you can select dst-subnet paired
with dns64 options. Such deployments were taken into account when
designing 6.x views.
https://www.knot-resolver.cz/documentation/latest/config-views.html#config-…
In 6.x it would probably also be harder for you to run multiple
configurations at once on a single machine, so that's another reason to
unify this when you migrate.
--Vladimir