Hi Oliver,
> Ah, the mistake was that changing the dnssec-policy *and* dnssec-signing
> in one go does not insert the delete-CDS/CDNSKEY records since knot
> immediately stops all dnssec related actions. Thanks!
You at least want to have the special CDNSKEY record -signed- anyway ;)
> Am I right that, unlike the signing process (KSK submission attempts),
> there is no built-in functionality in knot, that takes care about the
> right time to remove the key material from the zone?
Yes. We didn't put much effort into this use case, sorry. I guess it's not
so difficult to achieve manually. We only automated those processes that
start automatically (e.g. KSK rollover).
> So, basically I should wait
> [propagation-delay] + [max TTL seen in zone/knot_soa_minimum]
> seconds until I manually remove the material.
No, you first need to check when your parent zone removed the DS record.
Afterwards wait for its TTL + propagation_delay.
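Roughly like this (just a sketch; substitute your zone, your parent's name
server and the DS TTL you saw at the parent before the removal):

kdig +short DS domain.tld. @a.ns.tld.   # a.ns.tld. is a hypothetical parent server; must return nothing
sleep 3600                              # old DS TTL + your propagation-delay
# now set dnssec-signing: off for the zone in knot.conf and reload:
knotc reload
# only after that is it safe to remove the key material by hand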
BR,
Libor
Hi,
I am experimenting with the latest Knot and its wonderful DNSSEC auto-signing
functionality. It works pretty nicely, but I am a bit lost in the unsigning
process. My zone looks basically like this:
zone:
  - domain: "domain.tld."
    storage: "/home/oliver/knot/zones"
    file: "sign.local"
    zonefile-load: "difference"
    dnssec-signing: "on"
    dnssec-policy: "dnssec-policy"
    serial-policy: "unixtime"

policy:
  - id: "dnssec-policy"
    zsk-lifetime: "2592000"
    ksk-lifetime: "31536000"
    propagation-delay: "0"
    nsec3: "off"
    ksk-submission: "local"
    cds-cdnskey-publish: "always"
What is the safe way to turn off DNSSEC once the DS has been seen by
the resolver/Knot?
I tried dnssec-signing: "off", but that did not change anything;
I also created a second policy called "unsign-policy" where I switched
cds-cdnskey-publish to "delete-dnssec".
I expected the CDNSKEY/CDS to turn into "0 3 0 AA==" and so on immediately,
since my propagation-delay is 0 (for faster test results...)
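For completeness, the second policy looks roughly like this (a sketch; if I
read the docs right, "delete-dnssec" is the value that should produce the
delete CDS/CDNSKEY records):

policy:
  - id: "unsign-policy"
    cds-cdnskey-publish: "delete-dnssec"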
Thanks for any hints!
--
Oliver PETER oliver(a)gfuzz.de 0x456D688F
Hello,
Is there a way to force AXFR for certain masters in the configuration? I
have a situation with one of the master servers where IXFR fails - the
received response is "Format error". Knot does not fall back to AXFR in this
case and the zone is going to expire. Using zone-retransfer can fix it.
Transferring this zone using kdig is also fine.
I can see this with Knot 2.7.2 and Knot 2.7.3 as well.
BR
Ales
Hi,
Is it possible to restrict the DNS64 module to specific IPv6 subnets only
in Knot Resolver? The reasoning behind this is that this would make it
possible to run DNS64 resolver on the same instance with the "normal"
resolver in a way that fake AAAA records are returned only to IPv6-only
clients whereas normal dual-stack or IPv4-only clients are served with
unmodified A records.
I found an issue [1] that seems to be related to the very same thing,
but I was left a little uncertain about what the current situation is and
how this should/could be configured.
[1] https://gitlab.labs.nic.cz/knot/knot-resolver/issues/368
Cheers,
Antti
Hi folks,
sorry for the spam.. now with the right subject..
Maybe anybody can help me..
Is there any possibility to sign with more than one core? The
"background-workers" parameter didn't help...
Knot DNS is using only one core for signing.
thanks a lot
best regards
--
Christian Petrasch
Senior System Engineer
DNS/Infrastructure
IT-Services
DENIC eG
Kaiserstraße 75-77
60329 Frankfurt am Main
GERMANY
E-Mail: petrasch(a)denic.de
Fon: +49 69 27235-429
Fax: +49 69 27235-239
http://www.denic.de
PGP-KeyID: 549BE0AE, Fingerprint: 0E0B 6CBE 5D8C B82B 0B49 DE61 870E 8841
549B E0AE
Angaben nach § 25a Absatz 1 GenG: DENIC eG (Sitz: Frankfurt am Main)
Vorstand: Helga Krüger, Martin Küchenthal, Andreas Musielak, Dr. Jörg
Schweiger
Vorsitzender des Aufsichtsrats: Thomas Keller
Eingetragen unter Nr. 770 im Genossenschaftsregister, Amtsgericht
Frankfurt am Main
Hi,
Every time we switch DNSSEC on for a single zone, it iterates over all
zones (and logs something trivial about each). This does not seem very
efficient to us. Is there a reason for it? Following the documentation,
we had not expected this behaviour:
https://www.knot-dns.cz/docs/2.6/html/configuration.html#zone-signing
We are running Knot DNS with ~3500 zones, so you can imagine this has a
bit of an impact.
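For concreteness, the kind of single-zone toggle we mean is roughly this
(a sketch using the dynamic configuration interface; the zone name is
hypothetical and this assumes knotd runs from a configuration database):

knotc conf-begin
knotc conf-set 'zone[example.com].dnssec-signing' 'on'
knotc conf-commit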
Thanks,
-Rick
Hi,
Roland and I ran into a crashing condition for knotd 2.6.[689],
presumably caused by a race condition in the threaded use of PKCS #11
sessions. We use a commercial, replicated, networked HSM and not SoftHSM2.
WORKAROUND:
We do have a workaround with "conf-set server.background-workers 1", so
this is not a blocking condition for us, but we would like to bring back
concurrency for our ~1700 zones later.
PROBLEM DESCRIPTION:
Without this workaround, we see crashes quite reliably, under a load that
runs a number of zone-set/-unset commands, fired by sequentialised knotc
processes at a knotd that continues to run zone signing concurrently.
The commands are generated with the knot-aware option -k from ldns-zonediff,
https://github.com/SURFnet/ldns-zonediff
ANALYSIS:
Our HSM reports errors that look like a session handle being reused and
then repeatedly logged into, but not always, so it looks like a race
condition on a session variable:
27.08.2018 11:48:59 | [00006AE9:00006AEE] C_Login
| E: Error CKR_USER_ALREADY_LOGGED_IN occurred.
27.08.2018 11:48:59 | [00006AE9:00006AEE] C_GetAttributeValue
| E: Error CKR_USER_NOT_LOGGED_IN occurred.
27.08.2018 11:48:59 | [00006AE9:00006AED] C_Login
| E: Error CKR_USER_ALREADY_LOGGED_IN occurred.
27.08.2018 11:48:59 | [00006AE9:00006AED] C_GetAttributeValue
| E: Error CKR_USER_NOT_LOGGED_IN occurred.
27.08.2018 11:49:01 | [00006AE9:00006AED] C_Login
| E: Error CKR_USER_ALREADY_LOGGED_IN occurred.
27.08.2018 11:49:01 | [00006AE9:00006AED] C_Login
| E: Error CKR_USER_ALREADY_LOGGED_IN occurred.
27.08.2018 11:49:01 | [00006AE9:00006AED] C_GetAttributeValue
| E: Error CKR_USER_NOT_LOGGED_IN occurred.
27.08.2018 11:49:02 | [00006AE9:00006AEE] C_Login
| E: Error CKR_USER_ALREADY_LOGGED_IN occurred.
27.08.2018 11:49:03 | [00006AE9:00006AEE] C_Login
| E: Error CKR_USER_ALREADY_LOGGED_IN occurred.
27.08.2018 11:55:50 | [0000744C:0000744E] C_Login
| E: Error CKR_USER_ALREADY_LOGGED_IN occurred.
These errors stopped being reported once the workaround was configured.
Until then, we had crashes, of which the following dumps one:
Thread 4 "knotd" received signal SIGABRT, Aborted.
[Switching to Thread 0x7fffcd1bd700 (LWP 27375)]
0x00007ffff6967428 in __GI_raise (sig=sig@entry=6) at
../sysdeps/unix/sysv/linux/raise.c:54
54 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0 0x00007ffff6967428 in __GI_raise (sig=sig@entry=6) at
../sysdeps/unix/sysv/linux/raise.c:54
#1 0x00007ffff696902a in __GI_abort () at abort.c:89
#2 0x00007ffff69a97ea in __libc_message (do_abort=do_abort@entry=2,
fmt=fmt@entry=0x7ffff6ac2ed8 "*** Error in `%s': %s: 0x%s ***\n") at
../sysdeps/posix/libc_fatal.c:175
#3 0x00007ffff69b237a in malloc_printerr (ar_ptr=,
ptr=,
str=0x7ffff6ac2fe8 "double free or corruption (out)", action=3) at
malloc.c:5006
#4 _int_free (av=, p=, have_lock=0) at
malloc.c:3867
#5 0x00007ffff69b653c in __GI___libc_free (mem=) at
malloc.c:2968
#6 0x0000555555597ed3 in ?? ()
#7 0x00005555555987c2 in ?? ()
#8 0x000055555559ba01 in ?? ()
#9 0x00007ffff7120338 in ?? () from /usr/lib/x86_64-linux-gnu/liburcu.so.4
#10 0x00007ffff6d036ba in start_thread (arg=0x7fffcd1bd700) at
pthread_create.c:333
#11 0x00007ffff6a3941d in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:109
DEBUGGING HINTS:
Our suspicion is that you may not have set the mutex callbacks when
invoking C_Initialize() on PKCS #11 (e.g. via CKF_OS_LOCKING_OK or explicit
lock callbacks in CK_C_INITIALIZE_ARGS), possibly because the intermediate
layers of abstraction hide this from view; it is a common oversight.
Then again, the double free might pose another hint.
This is on our soon-to-go-live platform, so I'm afraid it'll be very
difficult to do much more testing; I hope this suffices for your debugging!
I hope this helps Knot DNS to move forward!
-Rick
Hi admin,
I found your Knot DNS to be really amazing software, but I have an issue with
master & slave configuration. When I configure a domain zone on the
master, the zone details do not propagate to the slave unless I manually
specify the domain name in the slave's zone configuration.
Is there any way to configure this automatically?
I hope for your reply soon after you receive this email. I'm doing this for my
personal use and a demo.
Regard
Innus Ali
From: India
Hello,
I have an issue with a zone for which Knot is the slave server. I am not able
to transfer the zone: refresh, failed (no usable master). BIND is able to
transfer this zone, and AXFR with the host command works as well. There are
more domains on this master and the others are working. The thing is that I
can see in Wireshark that the zone transfer starts, but for some reason Knot,
after the first ACK to the AXFR response, terminates the TCP connection with
RST, resulting in AXFR failure. The AXFR response is spread over several TCP
segments.
I can provide traces privately.
KNOT 2.6.7-1+0~20180710153240.24+stretch~1.gbpfa6f52
Thanks for help.
BR
Ales Rygl
Dear all,
I use knot 2.7.1 with automatic DNSSEC signing and key management.
For some zones I have used "cds-cdnskey-publish: none".
As .CH/.LI is about to support CDS/CDNSKEY (RFC 8078, RFC 7344), I thought
I should enable publishing the CDS/CDNSKEY RRs for all my zones. However,
the zones which are already secure (trust anchor in the parent zone) do not
publish the CDS/CDNSKEY records when the setting is changed to
"cds-cdnskey-publish: always".
I have not been able to reproduce this error with new zones, or with new
zones signed and secured with a trust anchor in the parent zone for which I
then change the cds-cdnskey-publish setting from "none" to "always".
This indicates that there seems to be some state error for my existing
zones only.
I tried the following, without success:
knotc zone-sign <zone>
knotc -f zone-purge +journal <zone>
; publish an inactive KSK
keymgr <zone> generate ... ; knotc zone-sign <zone>
Completely removing the zone (and all keys) and restarting obviously fixes
the problem. However, I cannot do this for all my zones, as I would have to
remove the DS record from the parent zone first...
Any idea?
Daniel
Hi all,
I would like to kindly ask you to check the state of the Debian repository.
It looks like it is a bit outdated... The latest version available is
2.6.7-1+0~20180710153240.24+stretch~1.gbpfa6f52 while 2.7.0 has already
been released.
Thanks
BR
Ales Rygl
Hey,
We're scripting around Knot, and for that we pipe sequences of commands
to knotc. We're running into a few wishes for improved rigour that look
like they are generic:
1. WAITING FOR TRANSACTION LOCKS
This would make our scripts more reliable, especially when we need to do
manual operations on the command line as well. There is no hurry to detect
lock-freeing operations immediately, so retries with exponential backoff
would be quite alright for us.
Deadlocks are an issue when these are nested, so this would at best be an
option to knotc; but many applications call for a single level, and these
could benefit from the added certainty of holding the lock.
2. FAILING ON PARTIAL OPERATIONS
When we script a *-begin, act1, act2, *-commit sequence and pipe it into
knotc, it is not possible to see intermediate results. This could be solved
if any failure (including for a non-locking *-begin) triggered a *-abort and
returned a suitable exit code. Only success in *-commit would exit(0), and
that would allow us to detect overall success.
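To illustrate, this is roughly the kind of pipeline we feed to knotc today
(zone and record are hypothetical); as described above, a failing zone-set in
the middle is not reflected in the overall exit code:

printf '%s\n' \
  'zone-begin example.com' \
  'zone-set example.com www 300 A 192.0.2.1' \
  'zone-commit example.com' | knotc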
We've considered making a wrapper around knotc, but that might actually
reduce its quality and stability, so instead we now propose these features.
Just let me know if you'd like to see the above as a patch (and a repo
to use for it).
Cheers,
-Rick
Hello,
I am seeing segfault crashes from knot + libknot7 version 2.6.8-1~ubuntu
for amd64 during a zone commit cycle. The transaction is empty, by the
way, but in general we use a utility to compare the actual state (Ist) with
the desired state (Soll).
This came up while editing a zone that hasn't been configured yet, so we
are obviously doing something strange. (The reason is I'm trying to
switch DNSSEC on/off in a manner orthogonal to the zone data transport,
which is quite clearly not what Knot DNS was designed for. I will post
a feature request that could really help with orthogonality.)
I'll attach two flows, occurring at virtually the same time on our two
machines while doing the same thing locally, so the bug looks
reproducible. If you need more information, I'll try to see what I can do.
Cheers,
-Rick
Jul 24 14:22:59 menezes knotd[17733]: info: [example.com.] control,
received command 'zone-commit'
Jul 24 14:22:59 menezes kernel: [1800163.196199] knotd[17733]: segfault
at 0 ip 00007f375a659410 sp 00007ffde37d46d8 error 4 in
libknot.so.7.0.0[7f375a64b000+2d000]
Jul 24 14:22:59 menezes systemd[1]: knot.service: Main process exited,
code=killed, status=11/SEGV
Jul 24 14:22:59 menezes systemd[1]: knot.service: Unit entered failed state.
Jul 24 14:22:59 menezes systemd[1]: knot.service: Failed with result
'signal'.
Jul 24 14:22:59 vanstone knotd[6473]: info: [example.com.] control,
received command 'zone-commit'
Jul 24 14:22:59 vanstone kernel: [3451862.795573] knotd[6473]: segfault
at 0 ip 00007ffb6e817410 sp 00007ffd2b6e1d58 error 4 in
libknot.so.7.0.0[7ffb6e809000+2d000]
Jul 24 14:22:59 vanstone systemd[1]: knot.service: Main process exited,
code=killed, status=11/SEGV
Jul 24 14:22:59 vanstone systemd[1]: knot.service: Unit entered failed
state.
Jul 24 14:22:59 vanstone systemd[1]: knot.service: Failed with result
'signal'.
Hi,
after updating from 2.6.8 to 2.7.0 none of my zones gets loaded:
failed to load persistent timers (invalid parameter)
error: [nord-west.org.] zone cannot be created
How can I fix this?
Kind Regards
Bjoern
Hi all,
I would kindly ask for help. After a tiny zone record modification, I am
receiving the following error(s) when trying to access zone data (zone-read):
Aug 02 15:09:34 idunn knotd[779]: warning: [xxxxxxxx.] failed to update
zone file (not enough space provided)
Aug 02 15:09:34 idunn knotd[779]: error: [xxxxxxx.] zone event 'journal
flush' failed (not enough space provided)
There is plenty of space on the server; I suppose it is related to the
journal and its database.
Many thanks in advance, it is quite an important zone.
KNOT 2.6.7-1+0~20180710153240.24+stretch~1.gbpfa6f52
BR
Ales Rygl
Hi,
I would like to ask about the implementation of resource records within an
RRSet in Knot DNS.
I have a domain with three TXT records of the same class IN for the same
label ('@') but with different TTLs. The nsd and bind DNS servers seem fine
with this, but in Knot DNS I get the following warning and errors:
knotd[551]: warning: [xxxxxxx.xxx.] zone loader, RRSet TTLs mismatched,
node 'xxxxxxx.xxx.' (record type TXT)
knotd[551]: error: [xxxxxxx.xxx.] zone loader, failed to load zone, file
'/etc/knot/files/master.gen/xxxxxxx.xxx' (TTL mismatch)
knotd[551]: error: [xxxxxxx.xxx.] failed to parse zonefile (failed)
knotd[551]: error: [xxxxxxx.xxx.] zone event 'load' failed (failed)
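For what it's worth, the same complaint can be reproduced offline with Knot's
zone file checker (a sketch, using the masked names from the log above):

kzonecheck -o xxxxxxx.xxx /etc/knot/files/master.gen/xxxxxxx.xxx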
Is this correct behavior (and the other DNS servers just don't check it), or
is it a bug in Knot DNS?
Thank you for your reply.
Cheers,
--
Zdenek
Hi,
I am trying to make DNSSEC signing orthogonal to zone data transport in
the DNSSEC signer solution for SURFnet. This translates directly to an
intuitive user interface, where domain owners can toggle DNSSEC on and
off with a flick of a switch.
Interestingly, keymgr can work orthogonally to zone data; keys can be
added and removed, regardless of whether a zone has been set up in Knot DNS.
Where the orthogonality breaks is that I need to explicitly set
dnssec-signing: to on or off. This means that I need to create a zone
just to be able to tell Knot DNS about the keys. Of course there are
complaints when configuring Knot DNS without a zone data file present.
The most elegant approach would be to make dnssec-signing an
opportunistic option, meaning "precisely when there are keys
available in the keymgr for this zone". Such a setting could then end
up in the policy for any such zone, and that can be done when the zone
data is first sent, without regard to what we are trying to make an
orthogonal dimension.
I have no idea if this is difficult to make. I do think it may be a use
case that wasn't considered before, which is why I'm posting it here.
If this is easy and doable, please let me know; otherwise I will have to
work around Knot DNS (ignoring errors, overruling previously set content
just to be sure it is set, and so on) to achieve the desired orthogonality.
Cheers,
-Rick
Hello,
We're building a replicated Signer machine, based on Knot DNS. We have
a PKCS #11 backend for keys, and replication working for it.
On one machine we run
one# keymgr orvelte.nep generate ...
and then use the key hash on the other machine in
two# keymgr orvelte.nep share ...
This, however, leads to a report that the identified key could not be
found. Clearly, there is more to the backing store than just the key
material in PKCS #11.
What is the thing I need to share across the two machines, and how can I
do this?
Thanks,
-Rick
We're experiencing occasional failures with Knot crashing while running as a slave. The behavior is as follows: the slave will run for 2 months or so and then segfault. Our system automatically restarts the process, but after 15 minutes or less, the segfault happens again. This repeats until we remove the /var/lib/knot/journal and /var/lib/knot/timers directories. This seems to fix it up for a while: a newly started process will run fine for another couple of months.
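Concretely, the recovery amounts to roughly this (a sketch; the paths are the
ones mentioned above and the service name assumes the stock systemd unit):

systemctl stop knot
rm -rf /var/lib/knot/journal /var/lib/knot/timers
systemctl start knot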
More details on our setup: These systems serve a little less than a hundred zones, some of which change at a rapid rate. We have configured the servers to not flush the zone data to regular files. The server software is 2.5.7, but with the changes from the "ecs-patch" branch applied.
A while back, I tried a release from the newer branch (I'm pretty sure it was 2.6.4), but I had a problem there where some servers were falling behind the master, as evidenced by their SOA serial number. Diagnosing this on a more recent branch probably makes more sense, but I'd be a little leery of dealing with two problems, not just one.
I can provide various data: the (gigantic) seemingly "corrupt" journal/timer files and the segfault messages from the syslog. I don't have any coredumps, but I'll turn those on today. Given the nature of the problem, it might take a while for it to manifest.
Chuck
Hello
How can I dump a zone stored in Knot DNS to a file?
DNSSEC-signed zone files are overwritten, apparently using a zone dump functionality, noticeable by the comment ";; Zone dump (Knot DNS 2.6.3)".
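Is something like reading the zone over the control socket and redirecting the
output the intended way (a sketch; the zone name is hypothetical), or is there
a proper dump command?

knotc zone-read example.com. > example.com.dump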
Regards
Hi, just getting up to speed on Knot DNS and trying to get dynamically
added secondaries working via bootstrapping.
My understanding is that when the server receives a NOTIFY from an authorized
master, if the zone is not already in the zone list it will add it and AXFR
it, right?
In my conf:

acl:
  - id: "acl_master"
    address: "64.68.198.83"
    address: "64.68.198.91"
    action: "notify"

remote:
  - id: "master"
    address: "64.68.198.83@53"
    address: "64.68.198.91@53"
But whenever I send a NOTIFY from either of those masters, nothing happens
on the Knot DNS side. I have my logging set as:

log:
  - target: "syslog"
    any: "debug"
Thx
- mark
Hello,
I'm trying to use Knot 2.6.7 in a configuration where zone files are
preserved (including comments, ordering and formatting), yet at the same
time Knot performs DNSSEC signing – something similar to the inline-signing
feature of BIND. My config file looks like this:
policy:
  - id: ecdsa_fast
    nsec3: on
    ksk-shared: on
    zsk-lifetime: 1h
    ksk-lifetime: 5h
    propagation-delay: 10s
    rrsig-lifetime: 2h
    rrsig-refresh: 1h

template:
  - id: mastersign
    file: "/etc/knot/%s.zone"
    zonefile-sync: -1
    zonefile-load: difference
    journal-content: all
    dnssec-signing: on
    dnssec-policy: ecdsa_fast
    serial-policy: unixtime
    acl: acl_slave

zone:
  - domain: "example.com."
    template: mastersign
It seems to work well on the first run; I can see that the zone got signed
properly:
>
> # kjournalprint /var/lib/knot/journal/ example.com
> ;; Zone-in-journal, serial: 1
> ;;Added
> example.com. 60 SOA knot.example.com. hostmaster.example.com. 1 3600 900 1814400 60
> example.com. 60 NS knot.example.com.
> first.example.com. 60 TXT "first"
> ;; Changes between zone versions: 1 -> 1529578258
> ;;Removed
> example.com. 60 SOA knot.example.com. hostmaster.example.com. 1 3600 900 1814400 60
> ;;Added
> example.com. 60 SOA knot.example.com. hostmaster.example.com. 1529578258 3600 900 1814400 60
> example.com. 0 CDNSKEY 257 3 13
> …lots of DNSSEC data.
However, if I try to update the unsigned zone file, strange things
happen. If I just add something to a zone and increase the serial, I get
these errors in the log:
>
> Jun 21 13:00:08 localhost knotd[2412]: warning: [example.com.] zone file changed, but SOA serial decreased
> Jun 21 13:00:08 localhost knotd[2412]: error: [example.com.] zone event 'load' failed (value is out of range)
If I set the serial to be higher than the serial of last signed zone, I
get a slightly different error:
>
> Jun 21 13:22:36 localhost knotd[3096]: warning: [example.com.] journal, discontinuity in changes history (1529580085 -> 1529580084), dropping older changesets
> Jun 21 13:22:36 localhost knotd[3096]: error: [example.com.] zone event 'load' failed (value is out of range)
In either case, when I look into the journal after the reload of the
zone, I see just the unsigned zone:
> # kjournalprint /var/lib/knot/journal/ example.com
> ;; Zone-in-journal, serial: 2
> ;;Added
> example.com. 60 SOA knot.example.com. hostmaster.example.com. 2 3600 900 1814400 60
> example.com. 60 NS knot.example.com.
> first.example.com. 60 TXT "first"
> second.example.com. 60 TXT "second"
Yet the server keeps serving the previously signed zone no matter what I
try. The only thing that helps is a cold restart of Knot, after which the
zone gets signed again.
So this approach is obviously not working as expected. If I comment out the
option `zonefile-load: difference`, I get a somewhat working solution where
the zone is completely re-signed during each reload and I get this warning in
the log:
> Jun 21 13:27:38 localhost knotd[3156]: warning: [example.com.] with automatic DNSSEC signing and outgoing transfers enabled, 'zonefile-load: difference' should be set to avoid malformed IXFR after manual zone file update
I guess this should not bother me a lot as long as I keep the serial numbers
of unsigned zones significantly different from the signed ones. The only
problem is that this completely kills IXFR transfers as well as signing
only the differences.
So far the only solution I see is to run two instances of Knot: one reading
the zone file from disk without signing and transferring it to another
instance, which would do the signing in slave mode.
Is there anything I'm missing here?
Sorry for such a long e-mail and thank you for reading all the way here.
Best regards,
Ondřej Caletka
Hi!
One of our customers uses Knot 2.6.7 as a hidden master which sends
NOTIFYs to our slave service. He reported that Knot cannot send the
NOTIFYs, i.e.:
knotd[10808]: warning: [example.com.] notify, outgoing,
2a02:850:8::6@53: failed (connection reset)
It seems that Knot sometimes tries to send the NOTIFY over TCP (I also see
NOTIFYs via UDP). Unfortunately, our NOTIFY receiver only supports UDP.
So, this is the first time I have seen a name server sending NOTIFYs over
TCP. Is this typical behavior for Knot? Can I force Knot to always send
NOTIFYs over UDP?
Thanks
Klaus
Hello
I am using ecdsap256sha256 as the algorithm. Why does the DS derived from the KSK DNSKEY (flags = 257) use digest type SHA-1 (=1) and not SHA-256 (=2)?
For example:
> dig DNSKEY nic.cz | grep 257
nic.cz. 871 IN DNSKEY 257 3 13 LM4zvjUgZi2XZKsYooDE0HFYGfWp242fKB+O8sLsuox8S6MJTowY8lBD jZD7JKbmaNot3+1H8zU9TrDzWmmHwQ==
> dig DNSKEY nic.cz | grep 257 > dnspub.key
> jdnssec-dstool dnspub.key
nic.cz. 868 IN DS 61281 13 1 091CECC4D2AADB7AC8C4DF413DDF9C5B0B61E5B6
Regards
dp
Hello all,
my Knot DNS is now in production and I would like to set up some
backup tasks for the configuration and of course the keys. Are there any
recommendations regarding backup? And restore? I can see that it is very
easy to dump the current config, but I am not sure how to back up the keys.
What do you recommend? Saving the content of /var/lib/knot on an hourly/daily
basis? I am not using shared keys.
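Something along these lines is what I have in mind (a rough, cron-able sketch;
the paths assume the default storage directory and a Debian-style config in
/etc/knot):

tar -czf /backup/knot-$(date +%F-%H%M).tar.gz /etc/knot /var/lib/knot
# The key database under /var/lib/knot/keys is LMDB, so copying it while knotd
# is writing may give an inconsistent snapshot; pausing zone events first
# (e.g. knotc zone-freeze) or briefly stopping knotd is probably safer.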
Thanks
With regards
Ales
Hi
I have a question about this commit:
"01b00cc47efe" - Replace select() by poll().
The performance of epoll is better than select/poll when monitoring large
numbers of file descriptors.
Could you please let me know why you chose poll()?
Thanks for your reply!
Dear all,
While trying to migrate our DNS to Knot, I have noticed that a slave
server with 2 GB RAM is facing memory exhaustion. I am running
2.6.5-1+0~20180216080324.14+stretch~1.gbp257446. There are 141 zones
with around 1 MB in total. Knot is acting as a pure slave server with a
minimal configuration.
There is nearly 1.7GB of memory consumed by Knot on a freshly rebooted
server:
root@eira:/proc/397# cat status
Name: knotd
Umask: 0007
State: S (sleeping)
Tgid: 397
Ngid: 0
Pid: 397
PPid: 1
TracerPid: 0
Uid: 108 108 108 108
Gid: 112 112 112 112
FDSize: 64
Groups: 112
NStgid: 397
NSpid: 397
NSpgid: 397
NSsid: 397
VmPeak: 24817520 kB
VmSize: 24687160 kB
VmLck: 0 kB
VmPin: 0 kB
VmHWM: 1743400 kB
VmRSS: 1743272 kB
RssAnon: 1737088 kB
RssFile: 6184 kB
RssShmem: 0 kB
VmData: 1781668 kB
VmStk: 132 kB
VmExe: 516 kB
VmLib: 11488 kB
VmPTE: 3708 kB
VmPMD: 32 kB
VmSwap: 0 kB
HugetlbPages: 0 kB
Threads: 21
SigQ: 0/7929
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: fffffffe7bfbbefc
SigIgn: 0000000000000000
SigCgt: 0000000180007003
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
CapBnd: 0000003fffffffff
CapAmb: 0000000000000000
Seccomp: 0
Cpus_allowed: f
Cpus_allowed_list: 0-3
Mems_allowed: 00000000,00000001
Mems_allowed_list: 0
voluntary_ctxt_switches: 260
nonvoluntary_ctxt_switches: 316
root@eira:/proc/397#
Config:

server:
    listen: 0.0.0.0@53
    listen: ::@53
    user: knot:knot

log:
  - target: syslog
    any: info

mod-rrl:
  - id: rrl-10
    rate-limit: 10    # Allow 200 resp/s for each flow
    slip: 2           # Every other response slips

mod-stats:
  - id: custom
    edns-presence: on
    query-type: on
    request-protocol: on
    server-operation: on
    request-bytes: on
    response-bytes: on
    edns-presence: on
    flag-presence: on
    response-code: on
    reply-nodata: on
    query-type: on
    query-size: on
    reply-size: on

template:
  - id: default
    storage: "/var/lib/knot"
    module: mod-rrl/rrl-10
    module: mod-stats/custom
    acl: [allowed_transfer]
    disable-any: on
    master: idunn
I was pretty sure that a VM with 2GB RAM is enough for my setup :-)
BR
Ales
Hello all,
I noticed that Knot (2.6.5) creates an RRSIG for the CDS/CDNSKEY RRset
with the ZSK/CSK only.
I was wondering whether this is acceptable behavior, as RFC 7344, Section
4.1 (CDS and CDNSKEY Processing Rules) states:
o Signer: MUST be signed with a key that is represented in both the
current DNSKEY and DS RRsets, unless the Parent uses the CDS or
CDNSKEY RRset for initial enrollment; in that case, the Parent
validates the CDS/CDNSKEY through some other means (see
Section 6.1 and the Security Considerations).
Specifically, I read "represented in both the current DNSKEY and DS
RRsets" as meaning that the CDS/CDNSKEY RRset must be signed with a KSK/CSK
and not only with a ZSK and a trust chain to the KSK <- DS.
I tested both BIND 9.12.1 and PowerDNS Authoritative 4.0.5 as well. PowerDNS
behaves the same as Knot 2.6.5, but BIND 9.12.1 always signs the
CDS/CDNSKEY RRset with at least the KSK.
Do I read the RFC rule too strictly? To be honest, I see nothing wrong
with the CDS/CDNSKEY RRset being signed only by the ZSK, but BIND's behavior
and the not-so-clear RFC statement keep me wondering.
Thanks,
Daniel
I'm getting started with Knot Resolver and am a bit unclear as to how this
config should be structured.
The result I'm looking for is to forward queries to resolver A if the
source is subnet A, unless the query is for the local domain, in which case
the local DNS should be queried.
I've been working with the config below to accomplish this. However, I'm
finding that with this config, requests that do not match the local todname
fall back to the root hints instead of using the FORWARD server.
Ultimately, this server will resolve DNS for several subnets and will
forward queries to different servers based on the source subnet.
Would someone mind pointing me in the right direction on this, please?
for name, addr_list in pairs(net.interfaces()) do
  net.listen(addr_list)
end

-- drop root
user('knot', 'knot')

-- Auto-maintain root TA
modules = {
  'policy', -- Block queries to local zones/bad sites
  'view',   -- view filters
  'hints',  -- Load /etc/hosts and allow custom root hints
  'stats',
}

-- 4GB local cache for record storage
cache.size = 4 * GB

-- If the request is from eng subnet
if (view:addr('192.168.168.0/24')) then
  if (todname('localnet.mydomain.com')) then
    policy.add(policy.suffix(policy.FORWARD('192.168.168.1'),
      {todname('localnet.mydomain.com')}))
  else
    view:addr('192.168.168.0/24', policy.FORWARD('68.111.106.68'))
  end
end
--
855.ONTRAPORT
ontraport.com
zone-refresh [<zone>...]       Force slave zone refresh.
zone-retransfer [<zone>...]    Force slave zone retransfer (no serial check).
I would expect retransfer to do a complete AXFR. But instead it sometimes
just does a refresh:
info: [at.] control, received command 'zone-retransfer'
info: [at.] refresh, outgoing, 83.1.2.3@53: remote serial 2018011647,
zone is up-to-date
info: [at.] control, received command 'zone-retransfer'
info: [at.] refresh, outgoing, 83.1.2.3@53: remote serial 2018011647,
zone is up-to-date
info: [at.] control, received command 'zone-refresh'
info: [at.] refresh, outgoing, 2a02:111:9::5@53: remote serial
2018011647, zone is up-to-date
info: [at.] control, received command 'zone-refresh'
info: [at.] refresh, outgoing, 2a02:111:9::5@53: remote serial
2018011647, zone is up-to-date
info: [at.] control, received command 'zone-refresh'
info: [at.] refresh, outgoing, 2a02:111:9::5@53: remote serial
2018011647, zone is up-to-date
info: [at.] control, received command 'zone-retransfer'
info: [at.] AXFR, incoming, 2a02:111:9::5@53: starting
Seen with 2.6.3-1+ubuntu14.04.1+deb.sury.org+1
regards
Klaus
Hello,
Knot DNS looks awesome, thanks for that!
The benchmarks show a clear picture (for hosting) that the size of zones
doesn't matter, but DNSSEC does. I'm intrigued by the differences with NSD.
What is less clear is what form of DNSSEC was used -- online signing,
signing only on policy refreshes and updates, or signing before the zone
gets to knotd? This distinction seems important, as it might explain
the structural difference with NSD.
Also, the documentation speaks of "DNSSEC signing for static zones" but
leaves some doubt whether this includes editing of the records using zonec
transactions, or whether it relates to rosedb, or something else.
https://www.knot-dns.cz/docs/2.6/singlehtml/index.html#automatic-dnssec-sig…
https://www.knot-dns.cz/docs/2.6/singlehtml/index.html#rosedb-static-resour…
Other than this uncertainty (and some confusion over the meaning of the
master: parameter), the documentation is a real treat. Thanks for a job
well done!
Best wishes,
-Rick
Hi,
After upgrading our fleet of slave servers from 2.5.4 to 2.6.4, I
noticed that, on a few slaves, a large zone that changes rapidly is
now consistently behind the master to a larger degree than we consider
normal. By "behind", I mean that the serial number reported by the
slave in the SOA record is less than that reported by the master server.
Normally we expect small differences between the serial on the master
and the slaves because our zones change rapidly. These differences are
often transient. However, after the upgrade, a subset of the slaves (always
the same ones) have a much larger difference. Fortunately, the difference
does not increase without bound.
The hosts in question seem powerful enough: one has eight 2 GHz Xeons and
32 GB RAM, which is less powerful than some of the hosts that are
keeping up. It may be more a matter of their connectivity. Two of the
affected slaves are in the same location.
For now, I've downgraded these slaves back to 2.5.4, and they are able
to keep up again.
Is there a change that would be an obvious culprit for this, or is there
something that we could tune? One final piece of information: we
always apply the change contained in the ecs-patch branch (which
returns ECS data if the client requests it). I don't know whether the
effect of this processing is significant. We do need it as part of
some ongoing research we're conducting.
Chuck
Hello,
I plan to use Docker to deploy Knot DNS.
I am going to copy all the zone configurations into the Docker image and
then start two containers with two different IP addresses.
In this case, is it necessary to configure the acl and remote sections
related to master/slave replication?
I don't think so, because both IPs will reply with exactly the same zone
configuration, but please give me your opinion.
Regards,
Gael
--
Cordialement, Regards,
Gaël GIRAUD
ATLANTIC SYSTÈMES
Mobile : +33603677500
Dear Knot Resolver users,
please note that Knot Resolver now has its dedicated mailing list:
https://lists.nic.cz/cgi-bin/mailman/listinfo/knot-resolver-users
For further communication regarding Knot Resolver please subscribe to
this list. We will send new version announcements only to the new
mailing list.
--
Petr Špaček @ CZ.NIC
Hello all,
We had a weird issue with Knot serving an old version of a zone after a server reboot. After the reboot, our monitoring alerted that the zone was out of sync: Knot was serving a version of the zone that was older than the one it had served before the reboot (the zone itself did not update during the reboot). The zone file on disk had the correct serial, and knotc zone-status <zone> showed the current serial as well. However, dig @localhost soa <zone> on that box showed the old serial. Running knotc zone-refresh <zone> didn't help; in the logs, when it went to do the refresh, it showed 'zone is up-to-date'. Running knotc zone-retransfer also did not resolve the problem; only a restart of the knotd process resolved this issue. While we were able to resolve this ourselves, it is certainly a strange issue and we were wondering if we could get any input on it.
Command output:
[root@ns02 ~]# knotc
knotc> zone-status <zone>
[<zone>] role: slave | serial: 2017121812 | transaction: none | freeze: no | refresh: +3h59m42s | update: not scheduled | expiration: +6D23h59m42s | journal flush: not scheduled | notify: not scheduled | DNSSEC re-sign: not scheduled | NSEC3 resalt: not scheduled | parent DS query: not scheduled
knotc> exit
[root@ns02 ~]# dig @localhost soa <zone>
…
… 2017090416 …
…
Logs after retransfer and refresh:
Jan 15 16:49:22 ns02 knot[7187]: info: [<zone>] control, received command 'zone-refresh'
Jan 15 16:49:22 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:49:23 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:49:23 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:49:23 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] control, received command 'zone-retransfer'
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] AXFR, incoming, <master>@53: starting
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] AXFR, incoming, <master>@53: finished, 0.00 seconds, 1 messages, 5119 bytes
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: zone updated, serial none -> 2017121812
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:53:03 ns02 knot[7187]: info: [<zone>] control, received command 'zone-status'
And a dig after that:
[root@ns02 ~]# dig @localhost soa crnet.cr
…
… 2017090416 …
…
-Rob
Hi,
I wrote a collectd plugin which fetches the metrics from "knotc
[zone-]status" directly from the control socket.
The code is still a bit of a work in progress but should be mostly done. If
you want to try it out, the code is on GitHub; feedback welcome:
https://github.com/julianbrost/collectd/tree/knot-plugin
https://github.com/collectd/collectd/pull/2649
Also, I'd really like some feedback on how I use libknot, as I only
found very little documentation on it. If you have any questions, just ask.
Regards,
Julian
Hi!
I installed the Knot 2.6.3 packages from the PPA on Ubuntu 14.04. This
confuses the syslog logging. I am not sure, but I think the problem is
that Knot requires systemd for logging.
The problem is that I do not see any logging from Knot on my
syslog server, only in journald. Is this something special in Knot, that
the logging is not forwarded to syslog?
Is it possible to use your Ubuntu packages without systemd logging?
I think it would be better to build the packages for non-systemd distros
(i.e. Ubuntu 14.04) without systemd dependencies.
Thanks
Klaus