Hi!
Knot 2.6.3: When an incoming NOTIFY does not match any ACL, the NOTIFY is
answered with "notauth" even though the zone is configured. I would have
expected Knot to respond with "refused" in such a scenario. Is the
"notauth" intended? From an operational point of view, a "refused" would
ease debugging.
regards
Klaus
> key
>
> An ordered list of references to TSIG keys. The query must match one of them. Empty value means that TSIG key is not required.
>
> Default: not set
This is not 100% correct. At least with a NOTIFY ACL, the behavior is:
an empty value means that TSIG keys are not allowed.
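For illustration, a minimal knot.conf sketch of what I mean (the key name,
secret placeholder and addresses are made up): with a 'key' item present the
NOTIFY has to be TSIG-signed with that key, while an ACL without any 'key'
item only matches unsigned NOTIFYs:

key:
  - id: notify.key
    algorithm: hmac-sha256
    secret: <base64-encoded-secret>

acl:
  - id: notify_signed
    address: 192.0.2.1
    key: notify.key      # NOTIFY must be signed with this TSIG key
    action: notify

  - id: notify_unsigned
    address: 192.0.2.2   # no 'key' item: only unsigned NOTIFYs match here
    action: notify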
regards
Klaus
Hi everybody,
I have a question related to zone signing. Whenever I reload the Knot
configuration using knotc reload, it starts to re-sign all DNSSEC-enabled
zones. This sometimes makes the daemon unresponsive to the knotc utility.
root@idunn:# knotc reload
error: failed to control (connection timeout)
Is it intended by design to sign zones while reloading the config? Is it
really needed? It triggers zone transfers, consumes resources, etc.
Thanks for your answer
With regards
Ales
Hello everybody,
there is a Knot DNS master name server for my domain that I do not manage myself. I am trying to set up a BIND DNS server as an in-house slave. BIND fails to do the zone transfer and reports:
31-Dec-2017 16:19:02.503 zone whka.de/IN: Transfer started.
31-Dec-2017 16:19:02.504
transfer of 'whka.de/IN' from 2001:7c7:2000:53::#53:
connected using 2001:7c7:20e8:18e::2#53509
31-Dec-2017 16:19:02.505
transfer of 'whka.de/IN' from 2001:7c7:2000:53::#53:
failed while receiving responses: NOTAUTH
31-Dec-2017 16:19:02.505
transfer of 'whka.de/IN' from 2001:7c7:2000:53::#53:
Transfer completed: 0 messages, 0 records, 0 bytes, 0.001 secs
If I try dig (this time using the IPv4 address), I get a failure, too.
# dig axfr @141.70.45.160 whka.de.
; <<>> DiG 9.9.5-9+deb8u7-Debian <<>> axfr @141.70.45.160 whka.de.
; (1 server found)
;; global options: +cmd
; Transfer failed.
Wireshark tells me that the reply code of the name server is `1001 Server is not an authority for domain`. What is going on here?
In particular, if I query the same name server for an ordinary A record, it claims to be authoritative. Moreover, the Knot DNS manual says Knot is an authoritative-only name server, so there should be no way for it to be non-authoritative.
Has anybody already observed something like this?
Best regards, Matthias
--
Evang. Studentenwohnheim Karlsruhe e.V. – Hermann-Ehlers-Kolleg
Matthias Nagel
Willy-Andreas-Allee 1, 76131 Karlsruhe, Germany
Phone: +49-721-96869289, Mobile: +49-151-15998774
E-Mail: matthias.nagel(a)hermann-ehlers-kolleg.de
Dear Knot Resolver users,
Knot Resolver 1.5.1 is released, mainly with bugfixes and cleanups!
Incompatible changes
--------------------
- script supervisor.py was removed, please migrate to a real process manager
- module ketcd was renamed to etcd for consistency
- module kmemcached was renamed to memcached for consistency
Bugfixes
--------
- fix SIGPIPE crashes (#271)
- tests: work around out-of-space for platforms with larger memory pages
- lua: fix mistakes in bindings affecting 1.4.0 and 1.5.0 (and 1.99.1-alpha),
  potentially causing problems in dns64 and workarounds modules
- predict module: various fixes (!399)
Improvements
------------
- add priming module to implement RFC 8109, enabled by default (#220)
- add modules helping with system time problems, enabled by default;
for details see documentation of detect_time_skew and detect_time_jump
Full changelog:
https://gitlab.labs.nic.cz/knot/knot-resolver/raw/v1.5.1/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.5.1.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.5.1.tar.xz.asc
Documentation:
https://knot-resolver.readthedocs.io/en/v1.5.1/
--Vladimir
Hello guys,
there has been a request in our issue tracker [1] to enable the
IPV6_USE_MIN_MTU socket option [2] for IPv6 UDP sockets in Knot DNS.
This option makes the operating system send responses with a maximum
fragment size of 1280 bytes (the minimum MTU required by the IPv6
specification).
The reasoning is based on the draft by Mark Andrews from 2012 [3]. I
wonder whether the reasoning is still valid in 2016, and I'm afraid that
enabling this option could enlarge the window for possible DNS cache
poisoning attacks.
We would appreciate any feedback on your operational experience with DNS
on IPv6 related to packet fragmentation.
[1] https://gitlab.labs.nic.cz/labs/knot/issues/467
[2] https://tools.ietf.org/html/rfc3542#section-11.1
[3] https://tools.ietf.org/html/draft-andrews-dnsext-udp-fragmentation-01
Thanks and regards,
Jan
Hi everybody,
Is there a way to change the TTL of all zone records at once using knotc,
i.e. without editing the zone file manually? Something like what I can do
with the $TTL directive in BIND9 zone files?
If not, I would like to request this feature if possible.
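For reference, this is the BIND-style behaviour I mean (illustrative zone
file snippet): the $TTL directive at the top sets the default TTL for every
record that does not specify one explicitly:

$TTL 3600
@       IN  SOA  ns1.example.com. hostmaster.example.com. (
                 2017112001 7200 3600 1209600 3600 )
        IN  NS   ns1.example.com.
www     IN  A    192.0.2.1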
Thanks
Regards
Ales Rygl
Hello,
I have run into a problem with the mod-synthrecord module when using multiple
networks at the same time.
Configuration:
mod-synthrecord
- id: customers1
type: forward
prefix:
ttl: 300
network: [ 46.12.0.0/16, 46.13.0.0/16 ]
zone:
- domain: customers.tmcz.cz
file: db.customers.tmcz.cz
module: mod-synthrecord/customers1
With this configuration, Knot only generates records for the last listed network;
for 46.12.0.0/16 it returns NXDOMAIN. It behaves the same with this form of notation:
mod-synthrecord
- id: customers1
type: forward
prefix:
ttl: 300
network: 46.12.0.0/16
network: 46.13.0.0/16
Configuration-wise it is OK, knot does not complain, but it does not generate
the records. Knot is 2.6.1-1+0~20171112193256.11+stretch~1.gbp3eaef0.
Thanks for any help or pointers.
Best regards
Ales Rygl
On 11/20/2017 12:37 PM, Petr Kubeš wrote:
> Is there a simple "cookbook" available somewhere for getting such a DNS
> resolver up and running?
Some systems already ship a package with the service, and we also have a PPA
containing newer versions: https://www.knot-resolver.cz/download/
I would avoid versions older than 1.3.3.
We don't have a cookbook as such, but kresd works well even without any
configuration - it then listens on all local addresses on UDP+TCP port 53,
with a 100 MB cache in the current directory. The only thing needed for
DNSSEC validation is the name of a file with the root keys, e.g.
"kresd -k root.keys" - if the file does not exist, it is initialized over
HTTPS. The various options are described in the documentation
http://knot-resolver.readthedocs.io/en/stable/daemon.html
V. Čunát
Hello, I would like to ask for advice.
We run a small network and currently use an external DNS provider (UPC).
We would like to run our own DNS on the border node, specifically Knot,
in a configuration where it would mainly act as a DNS resolver and possibly
later also host our zones.
Is there a step-by-step guide available somewhere describing exactly what to
configure, so that we could get such a Knot setup working in a few steps as
a CZ DNS resolver?
It's probably just a bad period, but unfortunately I haven't managed to
configure Knot DNS from the available manuals and guides so that it answers
queries and synchronizes DNS zones.
Thank you very much for any advice
P.Kubeš
Hello,
I would like to ask for advice. I am experimenting with Knot DNS, version 2.6.0-3+0~20171019083827.9+stretch~1.gbpe9bd69, on Debian Stretch.
I have DNSSEC deployed with a KSK and ZSK using algorithm 5 and Bind9, keys without metadata. I am trying to migrate to Knot, with two test zones. I use the following procedure:
1. I import the existing keys using keymgr.
2. I set the timestamps:
keymgr t-sound.cz set 18484 created=+0 publish=+0 active=+0
keymgr t-sound.cz set 04545 created=+0 publish=+0 active=+0
3. I add the zone to Knot. The lifetime is extremely short so that I can see how it behaves.
zone:
- domain: t-sound.cz
template: signed
file: db.t-sound.cz
dnssec-signing: on
dnssec-policy: migration
- domain: mych5.cz
template: signed
file: db.mych5.cz
dnssec-signing: on
dnssec-policy: migration
acl: [allowed_transfer]
notify: idunn-freya-gts
policy:
- id: migration
algorithm: RSASHA1
ksk-size: 2048
zsk-size: 1024
zsk-lifetime: 20m
ksk-lifetime: 10d
propagation-delay: 5m
This works; Knot starts signing with the imported keys. Then I change the policy for t-sound.cz to:
policy:
- id: migration3
algorithm: ecdsap256sha256
zsk-lifetime: 20m
ksk-lifetime: 10d
propagation-delay: 5m
ksk-submission: nic.cz
Knot generates new keys:
Nov 10 16:40:09 idunn knotd[21682]: warning: [t-sound.cz.] DNSSEC, creating key with different algorithm than policy
Nov 10 16:40:09 idunn knotd[21682]: warning: [t-sound.cz.] DNSSEC, creating key with different algorithm than policy
Nov 10 16:40:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, algorithm rollover started
Nov 10 16:40:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, loaded key, tag 18484, algorithm 5, KSK yes, ZSK no, public yes, ready no, active yes
Nov 10 16:40:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, loaded key, tag 5821, algorithm 5, KSK no, ZSK yes, public yes, ready no, active yes
Nov 10 16:40:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, loaded key, tag 4256, algorithm 13, KSK yes, ZSK no, public no, ready no, active no
Nov 10 16:40:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, loaded key, tag 39697, algorithm 13, KSK no, ZSK yes, public no, ready no, active yes
Nov 10 16:40:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, signing started
Nov 10 16:40:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, successfully signed
Nov 10 16:40:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, next signing at 2017-11-10T16:45:09
The ZSK rollover mechanism starts and the CDNSKEY gets published. The submission goes through. The resulting state is that the zone works:
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, signing zone
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, loaded key, tag 4256, algorithm 13, KSK yes, ZSK no, public yes, ready no, active yes
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, loaded key, tag 22255, algorithm 13, KSK no, ZSK yes, public yes, ready no, active yes
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, signing started
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, successfully signed
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, next signing at 2017-11-12T23:03:27
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] zone file updated, serial 1510523007 -> 1510523307
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] notify, outgoing, 93.153.117.50@53: serial 1510523307
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] IXFR, outgoing, 93.153.117.50@35557: started, serial 1510523007 -> 1510523307
Nov 12 22:48:27 idunn knotd[24980]: debug: [t-sound.cz.] IXFR, outgoing, 93.153.117.50@35557: serial 1510523007 -> 1510523307
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] IXFR, outgoing, 93.153.117.50@35557: finished, 0.00 seconds, 1 messages, 780 bytes
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] IXFR, outgoing, 93.153.117.20@57641: started, serial 1510523007 -> 1510523307
Nov 12 22:48:27 idunn knotd[24980]: debug: [t-sound.cz.] IXFR, outgoing, 93.153.117.20@57641: serial 1510523007 -> 1510523307
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] IXFR, outgoing, 93.153.117.20@57641: finished, 0.00 seconds, 1 messages, 780 bytes
The ZSKs are being rotated. But then the following error occurs:
Nov 12 23:03:27 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, signing zone
Nov 12 23:03:27 idunn knotd[24980]: warning: [t-sound.cz.] DNSSEC, key rollover [1] failed (unknown error -28)
Nov 12 23:03:27 idunn knotd[24980]: error: [t-sound.cz.] DNSSEC, failed to initialize (unknown error -28)
Nov 12 23:03:27 idunn knotd[24980]: error: [t-sound.cz.] zone event 'DNSSEC resign' failed (unknown error -28)
The state of the keys at that moment:
root@idunn:/var/lib/knot# keymgr t-sound.cz list human
c87e00bd71d0f89ea540ef9c21020df1e0106c0f ksk=yes tag=04256 algorithm=13 public-only=no created=-2D16h24m21s pre-active=-2D16h24m21s publish=-2D16h19m21s ready=-2D16h14m21s active=-1D18h14m21s retire-active=0 retire=0 post-active=0 remove=0
fe9f432bfc5d527dc11520615d6e29e5d1799d8c ksk=no tag=22255 algorithm=13 public-only=no created=-10h26m3s pre-active=0 publish=-10h26m3s ready=0 active=-10h21m3s retire-active=0 retire=0 post-active=0 remove=0
root@idunn:/var/lib/knot#
However, knotc zone-sign t-sound.cz succeeds and everything gets fixed by it:
Nov 13 08:56:41 idunn knotd[24980]: info: [t-sound.cz.] control, received command 'zone-status'
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] control, received command 'zone-sign'
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, dropping previous signatures, resigning zone
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, ZSK rollover started
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, loaded key, tag 4256, algorithm 13, KSK yes, ZSK no, public yes, ready no, active yes
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, loaded key, tag 22255, algorithm 13, KSK no, ZSK yes, public yes, ready no, active yes
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, loaded key, tag 24386, algorithm 13, KSK no, ZSK yes, public yes, ready no, active no
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, signing started
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, successfully signed
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, next signing at 2017-11-13T09:11:23
A day earlier, Knot crashed completely at this point:
Nov 11 23:05:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, signing zone
Nov 11 23:05:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, loaded key, tag 39964, algorithm 13, KSK no, ZSK yes, public yes, ready no, active yes
Nov 11 23:05:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, loaded key, tag 4256, algorithm 13, KSK yes, ZSK no, public yes, ready no, active yes
Nov 11 23:05:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, signing started
Nov 11 23:05:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, successfully signed
Nov 11 23:05:09 idunn systemd[1]: knot.service: Main process exited, code=killed, status=11/SEGV
Nov 11 23:05:09 idunn systemd[1]: knot.service: Unit entered failed state.
Nov 11 23:05:09 idunn systemd[1]: knot.service: Failed with result 'signal'.
Nov 11 23:05:10 idunn systemd[1]: knot.service: Service hold-off time over, scheduling restart.
Nov 11 23:05:10 idunn systemd[1]: Stopped Knot DNS server.
Nov 11 23:05:10 idunn systemd[1]: Started Knot DNS server.
Nov 11 23:05:10 idunn knotd[23933]: info: Knot DNS 2.6.0 starting
Nov 11 23:05:10 idunn knotd[23933]: info: binding to interface 0.0.0.0@553
Nov 11 23:05:10 idunn knotd[23933]: info: binding to interface ::@553
Nov 11 23:05:10 idunn knotd[23933]: info: changing GID to 121
Nov 11 23:05:10 idunn knotd[23933]: info: changing UID to 114
Nov 11 23:05:10 idunn knotd[23933]: info: loading 2 zones
Nov 11 23:05:10 idunn knotd[23933]: info: [mych5.cz.] zone will be loaded
Nov 11 23:05:10 idunn knotd[23933]: info: [t-sound.cz.] zone will be loaded
Nov 11 23:05:10 idunn knotd[23933]: info: starting server
Nov 11 23:05:10 idunn knotd[23933]: info: [t-sound.cz.] DNSSEC, loaded key, tag 39964, algorithm 13, KSK no, ZSK yes, public yes, ready no, active yes
Nov 11 23:05:10 idunn knotd[23933]: info: [t-sound.cz.] DNSSEC, loaded key, tag 4256, algorithm 13, KSK yes, ZSK no, public yes, ready no, active yes
Nov 11 23:05:10 idunn knotd[23933]: info: [t-sound.cz.] DNSSEC, signing started
Nov 11 23:05:10 idunn knotd[23933]: warning: [mych5.cz.] DNSSEC, key rollover [1] failed (unknown error -28)
Nov 11 23:05:10 idunn knotd[23933]: error: [mych5.cz.] DNSSEC, failed to initialize (unknown error -28)
Nov 11 23:05:10 idunn knotd[23933]: error: [mych5.cz.] zone event 'load' failed (unknown error -28)
Nov 11 23:05:10 idunn knotd[23933]: info: [t-sound.cz.] DNSSEC, successfully signed
Nov 11 23:05:10 idunn systemd[1]: knot.service: Main process exited, code=killed, status=11/SEGV
Nov 11 23:05:10 idunn systemd[1]: knot.service: Unit entered failed state.
Nov 11 23:05:10 idunn systemd[1]: knot.service: Failed with result 'signal'.
Nov 11 23:05:10 idunn systemd[1]: knot.service: Service hold-off time over, scheduling restart.
Nov 11 23:05:10 idunn systemd[1]: Stopped Knot DNS server.
Nov 11 23:05:10 idunn systemd[1]: Started Knot DNS server.
Am I making a mistake somewhere? I apologize for the complicated and long description.
Thanks
Best regards
Ales Rygl
Dear Knot Resolver users,
Knot Resolver 1.99.1-alpha has been released!
This is an experimental release meant for testing aggressive caching.
It contains some regressions and might (theoretically) even be vulnerable.
The current focus is to minimize queries into the root zone.
Improvements
------------
- negative answers from validated NSEC (NXDOMAIN, NODATA)
- verbose log is very chatty around cache operations (maybe too much)
Regressions
-----------
- dropped support for alternative cache backends
and for some specific cache operations
- caching doesn't yet work for various cases:
* negative answers without NSEC (i.e. with NSEC3 or insecure)
* +cd queries (needs other internal changes)
* positive wildcard answers
- spurious SERVFAIL on specific combinations of cached records, printing:
<= bad keys, broken trust chain
- make check: a few Deckard tests are broken, probably due to some of the
  problems above, plus possibly unknown ones
Full changelog:
https://gitlab.labs.nic.cz/knot/knot-resolver/raw/v1.99.1-alpha/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.99.1-alpha.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.99.1-alpha.tar.xz…
Documentation (not updated):
http://knot-resolver.readthedocs.io/en/v1.4.0/
--Vladimir
Hello,
one more question:
What is the proper way of autostarting Knot Resolver 1.4.0 under systemd (Debian Stretch in my case) so that it can listen on interfaces other than localhost?
As per the Debian README I've set up the socket override.
# systemctl edit kresd.socket:
[Socket]
ListenStream=<my.lan.ip>:53
ListenDatagram=<my.lan.ip>:53
However, after a reboot the service doesn't autostart.
# systemctl status kresd.service
kresd.socket - Knot DNS Resolver network listeners
Loaded: loaded (/lib/systemd/system/kresd.socket; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kresd.socket.d
└─override.conf
Active: failed (Result: resources)
Docs: man:kresd(8)
Listen: [::1]:53 (Stream)
[::1]:53 (Datagram)
127.0.0.1:53 (Stream)
127.0.0.1:53 (Datagram)
<my.lan.ip>:53 (Stream)
<my.lan.ip>:53 (Datagram)
Oct 01 23:17:12 <myhostname> systemd[1]: kresd.socket: Failed to listen on sockets: Cannot assign requested address
Oct 01 23:17:12 <myhostname> systemd[1]: Failed to listen on Knot DNS Resolver network listeners.
Oct 01 23:17:12 <myhostname> systemd[1]: kresd.socket: Unit entered failed state.
To get it running I have to type in manually:
# systemctl start kresd.socket
I apologize, I am new to systemd and its socket activation, so it's not clear to me whether the service, the socket, or both have to be set up to autostart.
Could anyone clarify this?
Also, this is also in the log (again, Debian default):
Oct 01 23:18:22 <myhostname> kresd[639]: [ ta ] keyfile '/usr/share/dns/root.key': not writeable, starting in unmanaged mode
The file has permissions 644 for root:root. Should this be owned by knot, or writeable by others?
Thanks!
--
Regards,
Thomas Van Nuit
Sent with [ProtonMail](https://protonmail.com) Secure Email.
Hello all,
I have come across an incompatibility between /usr/lib/knot/get_kaspdb and knot in relation to IPv6. Knot expects unquoted IP addresses; however, the get_kaspdb tool uses the Python yaml library, which requires valid YAML and therefore needs IPv6 addresses to be quoted strings, as they contain ':'. This incompatibility causes issues when updating knot, as the Ubuntu packages from 'http://ppa.launchpad.net/cz.nic-labs/knot-dns/ubuntu' call get_kaspdb during installation. The gist below shows how to recreate this issue. Please let me know if you need any further information.
https://gist.github.com/b4ldr/bd549b4cf63a7d564299497be3ef868d
Thanks John
Dear Knot Resolver users,
Knot Resolver 1.4.0 has been released!
Incompatible changes
--------------------
- lua: query flag-sets are no longer represented as plain integers.
kres.query.* no longer works, and kr_query_t lost trivial methods
'hasflag' and 'resolved'.
You can instead write code like qry.flags.NO_0X20 = true.
Bugfixes
--------
- fix exiting one of multiple forks (#150)
- cache: change the way of using LMDB transactions. That in particular
fixes some cases of using too much space with multiple kresd forks (#240).
Improvements
------------
- policy.suffix: update the aho-corasick code (#200)
- root hints are now loaded from a zonefile; exposed as hints.root_file().
You can override the path by defining ROOTHINTS during compilation.
- policy.FORWARD: work around resolvers adding unsigned NS records (#248)
- reduce unneeded records previously put into authority in wildcarded answers
Full changelog:
https://gitlab.labs.nic.cz/knot/resolver/raw/v1.4.0/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.4.0.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.4.0.tar.xz.asc
Documentation:
http://knot-resolver.readthedocs.io/en/v1.4.0/
--Vladimir
Hi,
I'm getting an HTTP 500 error from https://gitlab.labs.nic.cz/ that says
"Whoops, something went wrong on our end". Can someone take a look at it
and fix it?
Thanks!
--
Robert Edmonds
edmonds(a)debian.org
Hi,
we have a DNSSEC-enabled zone for which knot serves RRSIGs with an expiration
date in the past (expired on Sept 13th), signed by a no longer active ZSK.
The correct RRSIGs (up to date and signed with the current ZSK) are served
as well, so the zone still works.
Is there a way to purge these outdated RRSIGs from the database?
Regards
André
Hi,
I may have missed something. I created the kasp directory and made knot its
owner.
In the kasp directory (/var/lib/knot/kasp) I ran these commands:
keymgr init
keymgr zone add domena.cz policy none
keymgr zone key generate domena.cz algorithm rsasha256 size 2048 ksk
Cannot retrieve policy from KASP (not found).
Did I miss something?
Thanks and best regards
J.Karliak
--
Bc. Josef Karliak
Správa sítě a elektronické pošty
Fakultní nemocnice Hradec Králové
Odbor výpočetních systémů
Sokolská 581, 500 05 Hradec Králové
Tel.: +420 495 833 931, Mob.: +420 724 235 654
e-mail: josef.karliak(a)fnhk.cz, http://www.fnhk.cz
Hello,
this might be a rather stupid question.
I have a fresh install of Debian Stretch with all updates and Knot Resolver 1.4.0 installed from the CZ.NIC repositories. I've set up a rather simple configuration allowing our users to use the resolver and everything works fine (a systemd socket override for listening on the LAN). I have however noticed that kresd logs every single query into /var/log/syslog, generating approx. 1 MB/min worth of logs on our server. I've looked into the documentation and haven't found any directive to control the logging behavior. Is there something I might be missing? I would prefer to see only warnings in the log.
Here's my config:
# cat /etc/knot-resolver/kresd.conf
net = { '127.0.0.1', '::1', '<my.lan.ip>' }
user('knot-resolver','knot-resolver')
cache.size = 4 * GB
Thanks for the help.
--
Regards,
Thomas Van Nuit
Sent with [ProtonMail](https://protonmail.com) Secure Email.
I've written a Knot module, which is functioning well. I've been asked
to add functionality to it that would inhibit any response from Knot,
based on the client's identity. I know the identity; I just need to
figure out how to inhibit a response.
I just noticed the rrl module, and I looked at what it does. I emulated
what I saw and set pkt->size = 0 and returned KNOTD_STATE_DONE.
When I ran host -a, it returned that no servers could be reached. When
I ran dig ANY, I ultimately got the same response, but dig complained
three times about receiving a message that was too short in length.
I really want NO message to be returned. How do I force this?
Hi Volker,
thank you for your question.
Your suggestion is almost correct, just a small correction:
knotc zone-freeze $ZONE
# wait for possibly still running events (check the logs manually or so...)
knotc zone-flush $ZONE # possibly with '-f' if zonefile synchronization is
disabled in the config
$EDITOR $ZONEFILE # you MUST increase the SOA serial if you make any changes
to the zonefile
knotc zone-reload $ZONE
knotc zone-thaw $ZONE
Reload before thaw, because after thaw some events may start
processing, which would make reloading the modified zonefile problematic.
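Just as an illustration (not an official recipe), the whole sequence could be
scripted roughly like this, assuming $ZONE and $ZONEFILE are set and that you
still bump the SOA serial by hand in the editor:

knotc zone-freeze "$ZONE"
sleep 5                       # crude stand-in for checking the logs for running events
knotc zone-flush "$ZONE"      # add -f if zonefile synchronization is disabled
"${EDITOR:-vi}" "$ZONEFILE"   # increase the SOA serial if you change anything
knotc zone-reload "$ZONE"
knotc zone-thaw "$ZONE"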
BR,
Libor
Dne 5.9.2017 v 23:17 Volker Janzen napsal(a):
> Hi,
>
> I've setup knot to handle DNSSEC signing for a couple of zones. I like to update zonefiles on disk with an editor and I want to clarify which steps need to be performed to safely edit the zonefile on disk.
>
> I currently try this:
>
> knotc zone-freeze $ZONE
> knotc zone-flush $ZONE
> $EDITOR $ZONE
> knotc zone-thaw $ZONE
> knotc zone-reload $ZONE
>
> As far as I can see knot increases the serial on reload and slaves will be notified.
>
> Is this the correct command sequence?
>
>
> Regards
> Volker
>
>
> _______________________________________________
> knot-dns-users mailing list
> knot-dns-users(a)lists.nic.cz
> https://lists.nic.cz/cgi-bin/mailman/listinfo/knot-dns-users
Hi,
I've set up knot to handle DNSSEC signing for a couple of zones. I'd like to update zonefiles on disk with an editor, and I want to clarify which steps need to be performed to safely edit a zonefile on disk.
I currently try this:
knotc zone-freeze $ZONE
knotc zone-flush $ZONE
$EDITOR $ZONE
knotc zone-thaw $ZONE
knotc zone-reload $ZONE
As far as I can see knot increases the serial on reload and slaves will be notified.
Is this the correct command sequence?
Regards
Volker
Hi,
Recently, we noticed a few of our Knot slaves repeatedly doing zone transfers. After enabling zone-related logging, these messages helped narrow down the problem:
Aug 8 17:42:14 f2 knot[31343]: warning: [our.zone.] journal: unable to make free space for insert
Aug 8 17:42:14 f2 knot[31343]: warning: [our.zone.] IXFR, incoming, 1.2.3.4@53: failed to write changes to journal (not enough space provided)
These failures apparently caused the transfers to occur over and over. Not all the zones being served showed up in these messages, but I'm pretty sure that the ones with a high rate of change were more likely to do so. I do know there was plenty of disk space. A couple of the tunables looked relevant:
max-journal-db-size: we didn't hit this limit (used ~450M of 20G limit, the default)
max-journal-usage: we might have hit this limit. The default is 100M. I increased it a couple of times, but the problem didn't go away.
Eventually, we simply removed the journal database and restarted the server and the repeated transfers stopped. At first I suspected that it somehow was losing track of how much space was being allocated, but that's a flimsy theory: I don't really have any hard evidence and these processes had run at a high load for months without trouble. On reflection, hitting the max-journal-usage limit seems more likely. Given that:
1. Are the messages above indeed evidence of hitting the max-journal-usage limit?
2. Is there a way to see the space occupancy of each zone in the journal, so we might tune the threshold for individual zones?
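In case it helps clarify question 2, this is the kind of per-zone override I
have in mind (hypothetical knot.conf fragment; 100M is just the default I
mentioned, and our.zone / 1G are made-up examples):

template:
  - id: default
    max-journal-usage: 100M     # the default threshold referenced above

zone:
  - domain: our.zone
    template: default
    max-journal-usage: 1G       # larger allowance for a zone with a high rate of change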
On the odd chance that there is a bug in this area: we are using a slightly older dev variant: a branch off 2.5.0-dev that has some non-standard, minimal EDNS0 client-subnet support we were interested in. The branch is: https://github.com/CZ-NIC/knot/tree/ecs-patch.
Thanks,
Chuck
Hello,
I apologize if I am writing to the wrong address; I was given it over the phone by your customer support line.
I would be interested in a professional installation and configuration of your DNS servers at our company.
We are a company providing internet services, and this would concern internal DNS resolvers for our customers,
running on our own (probably virtualized) hardware.
It would be a one-off paid installation and initial setup - possibly including some consulting
(unfortunately, though, I cannot afford your support).
So I would need to know whether you provide such services (or whom I could contact), and at least
a rough estimate of the price of the work described above.
Thank you in advance for your time and feedback.
Best regards
Milan Černý
Head of Information Systems
Planet A, a.s.
operator of the AIM network
company offices:
U Hellady 4
140 00 Praha 4 - Michle
www.a1m.cz
+420 246 089 203 tel
+420 246 089 210 fax
+420 603 293 561 gsm
milan.cerny(a)a1m.cz
Hello,
I am a bit new to Knot DNS 2.5.3 and to DNSSEC as well.
I have successfully migrated from a non-DNSSEC BIND to Knot DNS, and this is working like a charm.
Now I would like to migrate one zone to DNSSEC.
Everything is working *except* getting the keys (ZSK/KSK) to publish to Gandi.
I found this tutorial (in French, sorry): https://www.swordarmor.fr/gestion-automatique-de-dnssec-avec-knot.html that seems to work, BUT the way the DNSKEY is published has changed, and I didn't find a nice way to get it.
Any ideas?
Regards,
Xavier
Dear Knot Resolver users,
Knot Resolver 1.3.2 maintenance update has been released:
Security
--------
- fix possible opportunities to use insecure data from cache as keys
for validation
Bugfixes
--------
- daemon: check existence of config file even if rundir isn't specified
- policy.FORWARD and STUB: use RTT tracking to choose servers (#125, #208)
- dns64: fix CNAME problems (#203) It still won't work with policy.STUB.
- hints: better interpretation of hosts-like files (#204)
also, error out if a bad entry is encountered in the file
- dnssec: handle unknown DNSKEY/DS algorithms (#210)
- predict: fix the module, broken since 1.2.0 (#154)
Improvements
------------
- embedded LMDB fallback: update 0.9.18 -> 0.9.21
It is recommended to update from 1.3.x, and it's strongly recommended to
update from older versions, as older branches are no longer supported.
Full changelog:
https://gitlab.labs.nic.cz/knot/resolver/raw/v1.3.2/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.3.2.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.3.2.tar.xz.asc
Documentation:
http://knot-resolver.readthedocs.io/en/v1.3.2/
--Vladimir
Hello Knot developers,
Suppose I am running two Knot DNS instances. They're listening on
different interfaces, and slaving different sets of zones. If the
"storage" variable is the same for these two, then the two instances of
knotd will both try to write into storage/journal and storage/timers.
Is this safe to do? My understanding of LMDB is that a database can be
shared between different threads and processes because it uses locking.
Regards,
Anand
Hello Knot DNS developers,
I have an observation about newer versions of Knot which use the new
single LMDB-based journal.
Suppose I have 3 slave zones configured in my knot.conf. Let's call them
zone1, zone2 and zone3. Knot loads the zones from the master and writes
data into the journal. Now suppose I remove one zone (zone3) from the
config, and reload knot. The zone is no longer configured, and querying
knot for it returns a REFUSED response. So far all is according to my
expectation.
However, if I run "kjournalprint <path-to-journal> -z", I get:
zone1.
zone2.
zone3.
So the zone is no longer configured, but its data persists in the
journal. If I run "knotc -f zone-purge zone3." I get:
error: [zone3.] (no such zone found)
I'm told that I should have done the purge first, *before* removing the
zone from the configuration. However, I find this problematic for two reasons:
1. I have to remember to do this, and I'm not used to this modus
operandi; and
2. this is impossible to do on a slave that is configured automatically
from template files. On our slave servers, where we have around 5000
zones, the zones are configured by templating out a knot.conf. Adding
zones is fine, but if a zone is being deleted, it will just disappear
from knot.conf. We keep no state, and so I don't know which zone is
being removed, and cannot purge it beforehand.
Now, the same is kind of true for Knot < 2.4. But... there is one major
difference. Under older versions of Knot, zone data was written into
individual files, and journals were written into individual .db files. I
can run a job periodically that compares zones in knot.conf with files
on disk, and delete those files that have no matching zones in the
config. This keeps the /var/lib/knot directory clean.
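For the old layout that job is trivial - something along these lines
(simplified sketch; the paths and the knot.conf parsing are illustrative and
assume one 'domain:' line per zone):

configured=$(awk '/domain:/ {print $3}' /etc/knot/knot.conf | sed 's/\.$//')
for db in /var/lib/knot/*.db; do
    zone=$(basename "$db" .db)
    echo "$configured" | grep -qx "$zone" || rm -v "$db"
done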
But in newer versions of Knot, there is no way to purge the journal of
zone data once a zone is removed from the configuration. For an operator
like me, this is a problem. I would like "knotc zone-purge" to be able
to operate on zones that are no longer configured, and remove stale data
anyway.
Hi Knot folks,
I just tried to view https://www.knot-dns.cz/ and it gave me an HTTP 404
error. After trying to reload it twice, I got the front page, but the
other parts of the site (documentation, download, etc) are all still
giving me HTTP 404 errors.
Regards,
Anand
Dear Knot DNS and Knot Resolver users,
in order to unite the git namespaces and make them more logical, we
have moved the repositories of Knot DNS and Knot Resolver under the
knot namespace.
The new repositories are located at:
Knot DNS
https://gitlab.labs.nic.cz/knot/knot-dns
Knot Resolver:
https://gitlab.labs.nic.cz/knot/knot-resolver
The old git:// URLs remain the same:
git://gitlabs.labs.nic.cz/knot-dns.git
git://gitlabs.labs.nic.cz/knot-resolver.git
Sorry for any inconvenience this might have caused.
Cheers,
--
Ondřej Surý -- Technical Fellow
--------------------------------------------
CZ.NIC, z.s.p.o. -- Laboratoře CZ.NIC
Milesovska 5, 130 00 Praha 3, Czech Republic
mailto:ondrej.sury@nic.cz https://nic.cz/
--------------------------------------------
Am 09.07.2017 um 12:30 schrieb Christoph Lukas:
> Hello list,
>
> I'm running knot 2.5.2 on FreeBSD.
> In an attempt to resolve a recent semantic error in one of my zonefiles,
> the $storage/$zone.db (/var/db/knot/firc.de.db) file got lost.
> Meaning: I accidentally deleted it without a backup.
> At point of deletion, the .db file was 1.6 MB in size.
> The actual zone file was kept, the journal and DNSSEC keys untouched,
> the zone still functions without any issues.
>
> The zone is configured as such in knot.conf:
> zone:
> - domain: firc.de
> file: "/usr/local/etc/knot/zones/firc.de"
> notify: inwx
> acl: acl_inwx
> dnssec-signing: on
> dnssec-policy: rsa
>
>
> This raises the following questions:
>
> 1) What is actually in those .db files?
> 2) Are these any adverse effects to be expected now that I don't have
> the file / need to re-create it?
> 3) How can I re-create the file?
>
> Any answers will be greatly appreciated.
>
> With kind regards,
> Christoph Lukas
>
As answered in
https://lists.nic.cz/pipermail/knot-dns-users/2017-July/001160.html
those .db files are not required anymore.
I should have read the archive first ;)
With kind regards,
Christoph Lukas
Hi,
I am running Knot 2.5.2-1 on a Debian Jessie, all is good, no worries.
I am very pleased with Knot's simplicity and ease of configuration -
the config files are still nicely readable as well!
I noticed recently that I am getting
knotd[9957]: notice: [$DOMAIN.] journal, obsolete exists, file '/var/lib/knot/zones/$DOMAIN.db'
every time I restart Knot. I get these for all the domains I have configured,
and there is one in particular providing my own .dyn. service :-) - so I
am a bit reluctant to just delete it.
But all the .db files have a fairly old timestamp (Feb 2017), roughly the
same for each. At that time (Feb 2017) I was running just one authoritative
master instance, nothing fancy, and lsof doesn't report any open files.
Can I just delete those files?
Cheers
Thomas
Hello knot,
I have recently started a long overdue migration to knot 2.* and I have noticed that the server.workers config stanza is now split into three separate stanzas [server.tcp-workers, server.udp-workers & server.background-workers]. Although this is great for flexibility, it does make automation a little bit more difficult. With the 1.6 configuration I could easily say something like the following:
workers = $server_cpu_count - 2
This meant I would always have 2 CPU cores available for other processes, e.g. doc, tcpdump. With the new configuration I would need to do something like the following:
$available_workers = $server_cpu_count - 2
$udp_workers = $available_workers * 0.6
$tcp_workers = $available_workers * 0.3
$background_workers = $available_workers * 0.1
The above code lacks error detection and rounding corrections, which adds further complexity, and potentially lacks intelligence that is available in knot to better balance resources (a rough sketch follows after the proposed wording below). As you have already implemented logic in knot to ensure CPUs are correctly balanced, I wonder if you could add back a workers configuration to act as the upper bound used in the *-workers configuration, such that the *-workers defaults would read:
"Default: auto-estimated optimal value based on the number of online CPUs, or the value set by `workers`, whichever is lower"
Thanks
John
Hi,
I just upgraded my Knot DNS to the newest PPA release 2.5.1-3, after
which the server process refuses to start. Relevant syslog messages:
Jun 15 11:19:41 vertigo knotd[745]: error: module, invalid directory
'/usr/lib/x86_64-linux-gnu/knot'
Jun 15 11:19:41 vertigo knotd[745]: 2017-06-15T11:19:41 error: module,
invalid directory '/usr/lib/x86_64-linux-gnu/knot'
Jun 15 11:19:41 vertigo knotd[745]: critical: failed to open
configuration database '' (invalid parameter)
Jun 15 11:19:41 vertigo knotd[745]: 2017-06-15T11:19:41 critical: failed
to open configuration database '' (invalid parameter)
Could this have something to do with the following change:
knot (2.5.1-3) unstable; urgency=medium
.
* Enable dnstap module and set default moduledir to multiarch path
Antti
Hi there,
I'm having some issues configuring dnstap. I'm using Knot version 2.5.1,
installed via the `knot` package on Debian 3.16.43-2. As per this
documentation
<https://www.knot-dns.cz/docs/2.5/html/modules.html#dnstap-dnstap-traffic-lo…>,
I've added the following lines to my config file:
mod-dnstap:
- id: capture_all
sink: "/etc/knot/capture"
template:
- id: default
global-module: mod-dnstap/capture_all
But when starting knot (e.g. by `sudo knotc conf-begin`), I get the message:
error: config, file 'etc/knot/knot.conf', line 20, item 'mod-dnstap', value
'' (invalid item)
error: failed to load configuration file '/etc/knot/knot.conf' (invalid
item)
I also have the same setup on an Ubuntu 16.04.1 running Knot version
2.4.0-dev, and it works fine.
Any idea what might be causing the issue here? Did the syntax for
mod-dnstap change or something? Should I have installed from source? I do
remember there being some special option you needed to compile a dependency
with to use dnstap when I did this the first time, but I couldn't find
documentation for it when I looked for it.
Thanks!
-Sarah
Hi,
after upgrading to 2.5.1, the output of knotc zone-status shows strange
timestamps for refresh and expire:
[example.net.] role: slave | serial: 1497359235 | transaction: none |
freeze: no | refresh: in 415936h7m15s | update: not scheduled |
expiration: in 416101h7m15s | journal flush: not scheduled | notify: not
scheduled | DNSSEC resign: not scheduled | NSEC3 resalt: not scheduled |
parent DS query: not schedule
However, the zone is refreshed within the correct interval, so it seems it's
just a display issue. Is this something specific to our setup?
Regards
André