Hello
We are using knot-resolver 5.7.4 with 32 independent instances. We also use tmpfs.
The processes are started by systemd.
The problem we have encountered is that when systemd starts all 32 processes, they hit a timeout and are restarted by systemd. As a result, we have trouble getting all the processes started. The problem does not occur when we do not use tmpfs.
How can we solve this problem?
Should we add something like this to the systemd unit?
[Unit]
StartLimitBurst=5
StartLimitIntervalSec=10

[Service]
# $RANDOM needs a shell, so wrap the sleep in bash
ExecStartPre=/bin/bash -c 'sleep $((RANDOM % 5))'
Or should we change something in the kresd configuration?
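Or would it be enough to simply raise the start timeout for each instance? A minimal drop-in sketch (the 300-second value is only a guess on our part):

# /etc/systemd/system/kresd@.service.d/timeout.conf
[Service]
# give each kresd instance more time to initialize
# before systemd declares the start failed
TimeoutStartSec=300

followed by systemctl daemon-reload.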
Hi,
I've found this warning in my journal:
... kresd[1071788]: [taupd ] you need to update package with trust
anchors in "/usr/share/dns/root.key" before it breaks
I don't know how to do that.
I think my system is current; I just ran: apt update; apt list
--upgradable, and it shows nothing regarding knot.
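If I read the Debian packaging right, the file itself is shipped by the
dns-root-data package rather than by knot-resolver, and kresd would
reference it with something like this (my understanding of the Lua API,
not something I have set explicitly):

-- /etc/knot-resolver/kresd.conf
trust_anchors.add_file('/usr/share/dns/root.key')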
Thanks for any pointers,
Mike Wright
Hello,
On my local network, I have a computer with a fixed IP for which I would
like to use a specific TLS_FORWARD.
The generic TLS_FORWARD is configured simply like this:
policy.add(policy.all(policy.TLS_FORWARD({
    {'9.9.9.9', hostname='dns9.quad9.net'},
    {'1.1.1.1', hostname='cloudflare-dns.com'},
    {'1.0.0.1', hostname='cloudflare-dns.com'},
})))
I tried to embed a different, specific TLS_FORWARD with a
view:addr('10.10.10.10', ... )
but I cannot manage to restrict this TLS_FORWARD to that client and to
keep it from poisoning the cache; my current attempt is sketched below.
Is there an example somewhere of such a setup, with an ACL ending up in
two different TLS_FORWARDs, one of them without caching?
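For concreteness, this is roughly what I have been trying (I am guessing
that the FLAGS chain action falls through to the next view rule, and
that NO_CACHE keeps these answers out of the shared cache; the upstream
192.0.2.53 / dns.example.net is a placeholder):

modules = { 'view' }

-- chain action: do not cache answers for this one client...
view:addr('10.10.10.10/32', policy.all(policy.FLAGS('NO_CACHE')))
-- ...and send its queries to a dedicated upstream over TLS
view:addr('10.10.10.10/32', policy.all(policy.TLS_FORWARD({
    {'192.0.2.53', hostname='dns.example.net'},
})))
-- everyone else keeps the generic TLS_FORWARD above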
Regards,
--
Mathieu Roy
Dear Knot Resolver users,
Knot Resolver 6.0.9 (early-access) has been released!
Improvements:
- rate-limiting: add this mechanism, its options and docs (!1624)
- manager: secret for TLS session resumption via ticket (RFC5077) (!1567)
  The manager creates and sets the secret for all running 'kresd' workers.
  The secret is created automatically if the user does not configure
  their own secret in the configuration.
  This means that the workers can resume each other's TLS sessions,
  regardless of whether the user has configured a secret explicitly.
- answer NOTIMPL for meta-types and non-IN RR classes (!1589)
- views: improve interaction with old-style policies (!1576)
- stats: add stale answer counter 'answer.stale' (!1591)
- extended_errors: answer with EDE in more cases (!1585, !1588, !1590,
  !1592)
- local-data: make DNAMEs work, i.e. generate CNAMEs (!1609)
- daemon: use connected UDP sockets by default (#326, !1618)
- docker: multiplatform builds (#922, !1623)
- docker: shared VOLUMEs are prepared for configuration and cache
  (!1625, !1627)
  The configuration path was changed to the standard
  '/etc/knot-resolver/config.yaml'.
Bugfixes:
- daemon/proxyv2: fix informing the engine about TCP/TLS from the actual
  client (!1578)
- forward: fix wrong pin-sha256 length; also log pins on mismatch
  (!1601, #813)
Incompatible changes:
- -f/--forks is removed (#631, !1602)
- support for gnutls < 3.4 (released over 9 years ago) is dropped (!1601)
- support for libuv < 1.27 (released over 5 years ago) is dropped (!1618)
Full changelog:
https://gitlab.nic.cz/knot/knot-resolver/raw/v6.0.9/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-6.0.9.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-6.0.9.tar.xz.asc
Documentation:
https://www.knot-resolver.cz/documentation/v6.0.9/
--
Ales Mrazek
PGP: 3057 EE9A 448F 362D 7420 5A77 9AB1 20DA 0A76 F6DE
Issue Summary:
I need to forward DNS queries to a secondary DNS server if a specific value (IP address) is returned in the DNS response. Specifically, if the answer contains 192.168.1.1, I want the request to be forwarded to 10.10.10.1 for re-resolution.
Expected Behavior:
A user queries for a domain (e.g., dig alibaba.com).
If the result contains the IP address 192.168.1.1, the query should be automatically forwarded to another DNS server (e.g., 10.10.10.1) for further resolution.
Current Attempt:
policy.add(policy.all(function (state, req)
    log('policy function triggered')
    -- Get the records from the answer section of the response packet
    local records = req.answer:section(kres.section.ANSWER)
    if records and #records > 0 then
        for _, record in ipairs(records) do
            -- Check for an A record whose rdata is 192.168.1.1;
            -- rdata is in wire format, so compare against the packed form
            if record.type == kres.type.A
                    and record.rdata == kres.str2ip('192.168.1.1') then
                log('IP is 192.168.1.1, forwarding to 10.10.10.1')
                -- Apply the FORWARD action to this request
                return policy.FORWARD('10.10.10.1')(state, req)
            end
        end
    else
        log('no answer found')
    end
    return kres.DONE
end), true)  -- postrule: evaluated after the answer is available
Issue:
The function triggers correctly, but the query is not being forwarded to the specified DNS server when the condition (an A record with rdata 192.168.1.1) is met.
Steps to Reproduce:
Add the above Lua code to the Knot Resolver configuration.
Query for a domain (dig alibaba.com).
If the result contains the IP 192.168.1.1, the query should be forwarded, but it is not.
Environment:
Knot Resolver Version: [Include version]
Operating System: [Your OS]
Configuration: [Any relevant additional configuration]
Desired Solution:
I would like the query to be forwarded to 10.10.10.1 whenever the answer contains 192.168.1.1. Any guidance on why the forward might not be triggered, or on whether additional configuration is needed, would be appreciated.
Hello,
we are observing that Knot Resolver is refusing certain queries for
subdomains beneath apple.com because of the enabled DNS rebinding
protection. IMHO there is no reason for that: they do not point to a
private address range. For instance:
Unbound:
dig init.ess.apple.com @127.0.0.1 -p 53
; <<>> DiG 9.18.24-1-Debian <<>> init.ess.apple.com @127.0.0.1 -p 53
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 38044
;; flags: qr rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;init.ess.apple.com. IN A
;; ANSWER SECTION:
init.ess.apple.com. 81 IN CNAME init-cdn-lb.ess-apple.com.akadns.net.
init-cdn-lb.ess-apple.com.akadns.net. 27 IN CNAME init.ess.g.aaplimg.com.
init.ess.g.aaplimg.com. 12 IN A 17.253.73.204
init.ess.g.aaplimg.com. 12 IN A 17.253.73.205
init.ess.g.aaplimg.com. 12 IN A 17.253.73.203
init.ess.g.aaplimg.com. 12 IN A 17.253.73.201
;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1) (UDP)
;; WHEN: Wed Sep 18 10:42:46 CEST 2024
;; MSG SIZE rcvd: 194
Knot-resolver 5.7.4-cznic.1 freshly re-installed:
dig init.ess.apple.com @127.0.0.1 -p 2053
; <<>> DiG 9.18.24-1-Debian <<>> init.ess.apple.com @127.0.0.1 -p 2053
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 17074
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 2
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; EDE: 18 (Prohibited): (EIM4)
;; QUESTION SECTION:
;init.ess.apple.com. IN A
;; ADDITIONAL SECTION:
explanation.invalid. 10800 IN TXT "blocked by DNS rebinding protection"
;; Query time: 8 msec
;; SERVER: 127.0.0.1#2053(127.0.0.1) (UDP)
;; WHEN: Wed Sep 18 10:45:40 CEST 2024
;; MSG SIZE rcvd: 124
I have also tried to remove the cache under /var/cache/knot-resolver, but
without any effect. There are more domain names with this behavior:
query.ess.apple.com, comm-cohort.ess.apple.com, kt-prod.ess.apple.com
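As a workaround I can unload the rebinding module entirely, but that of
course disables the protection for all names; a sketch of what I mean:

-- kresd.conf: disables DNS rebinding protection altogether
modules.unload('rebinding')

That is something I would rather avoid.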
Thanks.
Ales Rygl
Hi,
I am running knot-resolver 5.7.4 in a FreeBSD service jail (14.1-STABLE).
Note: because I am still pretty new to Knot Resolver, I may be missing something important besides [1].
MWN> sum /usr/home/jails/test/var/run/kresd/data.mdb
268 10240 /usr/home/jails/test/var/run/kresd/data.mdb
MWN> ./_STATS [2]
cache('['count_entries']'): 4953
cache('['usage_percent']'): 16.953125
### stopping the jail simulates a server shutdown [3]:
MWN> service jail stop test
Stopping jails: test.
MWN> sum /usr/home/jails/test/var/run/kresd/data.mdb
268 10240 /usr/home/jails/test/var/run/kresd/data.mdb
### Thus, data.mdb is preserved after shutdown!
### starting the jail simulates booting the server:
MWN> service jail start test
Starting jails: test.
MWN> sum /usr/home/jails/test/var/run/kresd/data.mdb
15059 10240 /usr/home/jails/test/var/run/kresd/data.mdb
MWN> ./_STATS
cache('['count_entries']'): 87
cache('['usage_percent']'): 0.15625
1) After the jail has been stopped, data.mdb is still available and has not been modified, as shown by the checksum.
2) After the jail (including kresd) has been started, data.mdb has been modified (the checksum differs).
3) cache.stats() shows significantly lower numbers.
Questions:
#) why do the cache.stats() numbers reset after a reboot?
#) what am I missing?
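For completeness, the cache is opened in kresd.conf roughly like this
(the size is illustrative; the path matches the jail layout above):

-- kresd.conf
cache.open(100 * MB, 'lmdb:///var/run/kresd')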
Thanks in advance and regards,
Michael
[1] https://knot-resolver.readthedocs.io/en/stable/daemon-bindings-cache.html#p…
[2] _STATS (based on https://knot-resolver.readthedocs.io/en/stable/daemon-bindings-cache.html#c…)
echo -n "cache('['count_entries']'): " ; echo "cache.stats()" | nc -NU /var/run/kresd/control/kresd.sock | grep count_entries
echo -n "cache('['usage_percent']'): " ; echo "cache.stats()" | nc -NU /var/run/kresd/control/kresd.sock | grep usage_percent
[3] a real server reboot shows the same issue with the cache
Dear Knot Resolver users,
due to an internal infrastructure change, released sources for Knot
Resolver have been moved from
<https://secure.nic.cz/files/knot-resolver/> to
<https://knot-resolver.nic.cz/release/>.
Apart from this move, the rest of the directory structure remains
unchanged. Proper redirects (HTTP 301 Moved Permanently) have been put
in place to make this change as painless and transparent as possible.
These redirects can be expected to stay in place for the foreseeable
future. Still, we do recommend changing any direct links from the
secure.nic.cz server to knot-resolver.nic.cz, to avoid the extra
indirection and/or unforeseen issues in the future.
Should you run into any issues or have any questions about this
change, please do let us know; we will be happy to help you out.
Best regards
--
Oto Šťáva | Knot Resolver team lead | CZ.NIC z.s.p.o.
PGP: 6DC2 B0CB 5935 EA7A 3961 4AA7 32B2 2D20 C9B4 E680