Hi,
I am experimenting with the latest Knot and its wonderful DNSSEC auto-signing
functionality. It works pretty nicely, but I am a bit lost about the unsigning
process. My zone looks basically like this:
zone:
  - domain: "domain.tld."
    storage: "/home/oliver/knot/zones"
    file: "sign.local"
    zonefile-load: "difference"
    dnssec-signing: "on"
    dnssec-policy: "dnssec-policy"
    serial-policy: "unixtime"

policy:
  - id: "dnssec-policy"
    zsk-lifetime: "2592000"
    ksk-lifetime: "31536000"
    propagation-delay: "0"
    nsec3: "off"
    ksk-submission: "local"
    cds-cdnskey-publish: "always"
What is the safe way to turn off DNSSEC once the DS has been seen by
the resolver/Knot?
I tried setting dnssec-signing: "off", but that did not change anything;
I also created a second policy called "unsign-policy" where I switched
cds-cdnskey-publish to "delete-dnssec".
I expected the CDNSKEY/CDS to immediately turn into "0 3 0 AA==" and so on,
since my propagation-delay is 0 (for faster test results...)
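For reference, this is roughly the unsigning policy I have in mind (assuming
the "delete-dnssec" value is available in this Knot version; the policy name
is mine):

    policy:
      - id: "unsign-policy"
        propagation-delay: "0"
        cds-cdnskey-publish: "delete-dnssec"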
Thanks for any hints!
--
Oliver PETER oliver(a)gfuzz.de 0x456D688F
Hello,
Is there a way to force AXFR for certain masters in the
configuration? I have a situation with one of the master servers where IXFR
fails - the received response is "Format error". Knot does not fall back to
AXFR in this case and the zone is going to expire. Using zone-retransfer
can fix it. Transferring this zone using kdig is also fine.
I can see this with Knot 2.7.2 and Knot 2.7.3 as well.
BR
Ales
Hi,
Is it possible to restrict the DNS64 module to specific IPv6 subnets
in Knot Resolver? The reasoning behind this is that it would make it
possible to run a DNS64 resolver on the same instance as the "normal"
resolver, in such a way that fake AAAA records are returned only to IPv6-only
clients, whereas normal dual-stack or IPv4-only clients are served
unmodified A records.
I found an issue [1] that seems to be about the very same thing,
but I was left a little uncertain about what the current situation is and
how this should/could be configured.
[1] https://gitlab.labs.nic.cz/knot/knot-resolver/issues/368
Cheers,
Antti
Hi folks,
sorry for the spam.. now with the right subject..
Maybe anybody can help me..
Is there any possibility to sign with more than one core? The
"background-workers" parameter didn't help...
Knot DNS is using only one core for signing..
thanks a lot
best regards
--
Christian Petrasch
Senior System Engineer
DNS/Infrastructure
IT-Services
DENIC eG
Kaiserstraße 75-77
60329 Frankfurt am Main
GERMANY
E-Mail: petrasch(a)denic.de
Fon: +49 69 27235-429
Fax: +49 69 27235-239
http://www.denic.de
PGP-KeyID: 549BE0AE, Fingerprint: 0E0B 6CBE 5D8C B82B 0B49 DE61 870E 8841
549B E0AE
Information pursuant to § 25a (1) GenG: DENIC eG (registered office: Frankfurt am Main)
Management Board: Helga Krüger, Martin Küchenthal, Andreas Musielak, Dr. Jörg
Schweiger
Chairman of the Supervisory Board: Thomas Keller
Registered under No. 770 in the cooperative register, Local Court of
Frankfurt am Main
Hi,
Every time we switch DNSSEC on for a single zone, it iterates over all
zones (and logs something trivial about each). This does not seem very
efficient to us. Is there a reason for it? Following the
documentation, we had not expected this behaviour:
https://www.knot-dns.cz/docs/2.6/html/configuration.html#zone-signing
We are running Knot DNS with ~3500 zones, so you can imagine this has a
bit of an impact.
Thanks,
-Rick
Hi,
Roland and I ran into a crashing condition for knotd 2.6.[689],
presumably caused by a race condition in the threaded use of PKCS #11
sessions. We use a commercial, replicated, networked HSM, not SoftHSM2.
WORKAROUND:
We do have a work-around with "conf-set server.background-workers 1", so
this is not a blocking condition for us, but we would like to add back
concurrency for handling our ~1700 zones later.
PROBLEM DESCRIPTION:
Without this work-around, we see crashes quite reliably, under a load that
runs a number of zone-set/-unset commands, fired by sequentialised knotc
processes at a knotd that keeps signing zones concurrently.
The commands are generated with the knot-aware option -k of ldns-zonediff,
https://github.com/SURFnet/ldns-zonediff
ANALYSIS:
Our HSM reports errors that look like a session handle is reused and
then repeatedly logged into, but not always, so it looks like a race
condition on a session variable:
27.08.2018 11:48:59 | [00006AE9:00006AEE] C_Login
| E: Error CKR_USER_ALREADY_LOGGED_IN occurred.
27.08.2018 11:48:59 | [00006AE9:00006AEE] C_GetAttributeValue
| E: Error CKR_USER_NOT_LOGGED_IN occurred.
27.08.2018 11:48:59 | [00006AE9:00006AED] C_Login
| E: Error CKR_USER_ALREADY_LOGGED_IN occurred.
27.08.2018 11:48:59 | [00006AE9:00006AED] C_GetAttributeValue
| E: Error CKR_USER_NOT_LOGGED_IN occurred.
27.08.2018 11:49:01 | [00006AE9:00006AED] C_Login
| E: Error CKR_USER_ALREADY_LOGGED_IN occurred.
27.08.2018 11:49:01 | [00006AE9:00006AED] C_Login
| E: Error CKR_USER_ALREADY_LOGGED_IN occurred.
27.08.2018 11:49:01 | [00006AE9:00006AED] C_GetAttributeValue
| E: Error CKR_USER_NOT_LOGGED_IN occurred.
27.08.2018 11:49:02 | [00006AE9:00006AEE] C_Login
| E: Error CKR_USER_ALREADY_LOGGED_IN occurred.
27.08.2018 11:49:03 | [00006AE9:00006AEE] C_Login
| E: Error CKR_USER_ALREADY_LOGGED_IN occurred.
27.08.2018 11:55:50 | [0000744C:0000744E] C_Login
| E: Error CKR_USER_ALREADY_LOGGED_IN occurred.
These errors stopped being reported once the work-around was configured.
Until then, we had crashes, one of which is dumped below:
Thread 4 "knotd" received signal SIGABRT, Aborted.
[Switching to Thread 0x7fffcd1bd700 (LWP 27375)]
0x00007ffff6967428 in __GI_raise (sig=sig@entry=6) at
../sysdeps/unix/sysv/linux/raise.c:54
54 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0 0x00007ffff6967428 in __GI_raise (sig=sig@entry=6) at
../sysdeps/unix/sysv/linux/raise.c:54
#1 0x00007ffff696902a in __GI_abort () at abort.c:89
#2 0x00007ffff69a97ea in __libc_message (do_abort=do_abort@entry=2,
fmt=fmt@entry=0x7ffff6ac2ed8 "*** Error in `%s': %s: 0x%s ***\n") at
../sysdeps/posix/libc_fatal.c:175
#3 0x00007ffff69b237a in malloc_printerr (ar_ptr=,
ptr=,
str=0x7ffff6ac2fe8 "double free or corruption (out)", action=3) at
malloc.c:5006
#4 _int_free (av=, p=, have_lock=0) at
malloc.c:3867
#5 0x00007ffff69b653c in __GI___libc_free (mem=) at
malloc.c:2968
#6 0x0000555555597ed3 in ?? ()
#7 0x00005555555987c2 in ?? ()
#8 0x000055555559ba01 in ?? ()
#9 0x00007ffff7120338 in ?? () from /usr/lib/x86_64-linux-gnu/liburcu.so.4
#10 0x00007ffff6d036ba in start_thread (arg=0x7fffcd1bd700) at
pthread_create.c:333
#11 0x00007ffff6a3941d in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:109
DEBUGGING HINTS:
Our suspicion is that you may not have set the mutex callbacks when
invoking C_Initialize() on PKCS #11, possibly due to the intermediate
layers of abstraction hiding this from view. This happens more often.
Then again, the double free might pose another hint.
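For reference, by "setting the mutex callbacks" we mean something along these
lines at the Cryptoki level (a minimal sketch, assuming the p11-kit header and
a directly linked module; this is not Knot's actual code):

    #include <stdio.h>
    #include <p11-kit/pkcs11.h>

    int init_pkcs11_threadsafe(void)
    {
        /* Ask the token library to use OS locking primitives so that
         * concurrent sessions from multiple threads are serialised safely.
         * Alternatively, CreateMutex/DestroyMutex/LockMutex/UnlockMutex
         * callbacks can be supplied in the same structure. */
        CK_C_INITIALIZE_ARGS args = {0};
        args.flags = CKF_OS_LOCKING_OK;

        CK_RV rv = C_Initialize(&args);
        if (rv != CKR_OK && rv != CKR_CRYPTOKI_ALREADY_INITIALIZED) {
            fprintf(stderr, "C_Initialize failed: 0x%lx\n", (unsigned long)rv);
            return -1;
        }
        return 0;
    }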
This is on our soon-to-go-live platform, so I'm afraid it will be very
difficult to do much more testing; I hope this suffices for your debugging!
I hope this helps Knot DNS to move forward!
-Rick
Hi folks,
maybe anybody can help me..
Is there any possibility to sign with more than one core? The
"background-workers" parameter didn't help...
Knot DNS is using only one core for signing..
thanks a lot
best regards
--
Christian Petrasch
Senior System Engineer
DNS/Infrastructure
IT-Services
DENIC eG
Kaiserstraße 75-77
60329 Frankfurt am Main
GERMANY
E-Mail: petrasch(a)denic.de
http://www.denic.de
PGP-KeyID: 549BE0AE, Fingerprint: 0E0B 6CBE 5D8C B82B 0B49 DE61 870E 8841
549B E0AE
Information pursuant to § 25a (1) GenG: DENIC eG (registered office: Frankfurt am Main)
Management Board: Helga Krüger, Martin Küchenthal, Andreas Musielak, Dr. Jörg
Schweiger
Chairman of the Supervisory Board: Thomas Keller
Registered under No. 770 in the cooperative register, Local Court of
Frankfurt am Main
Hi admin,
I find your Knot DNS to be really amazing software, but I have an issue with the
master/slave configuration. When I configure a domain zone on the
master, the zone does not propagate to the slave unless I manually
add the domain name to the slave's zone configuration.
Is there any way to configure this?
I hope to hear from you soon after you receive this email. I'm doing this for my
personal use and a demo.
Regards
Innus Ali
From: India
Hello,
I have an issue with a zone for which Knot is the slave server. I am not able to
transfer the zone: refresh, failed (no usable master). BIND is able to
transfer this zone, and AXFR with the host command works as well. There are
more domains on this master and the others are working. The thing is
that I can see in Wireshark that the AXFR is started, the zone transfer
begins, and for some reason Knot, after the first ACK to the AXFR response,
terminates the TCP connection with RST, resulting in AXFR failure. The AXFR
response is spread over several TCP segments.
I can provide traces privately.
KNOT 2.6.7-1+0~20180710153240.24+stretch~1.gbpfa6f52
Thanks for help.
BR
Ales Rygl
Dear all,
I use knot 2.7.1 with automatic DNSSEC signing and key management.
For some zones I have used "cds-cdnskey-publish: none".
As .CH/.LI is about to support CDS/CDNSKEY (RFC 8078, RFC 7344), I thought
I should enable publishing the CDS/CDNSKEY RRs for all my zones. However,
the zones which are already secure (trust anchor in the parent zone) do not
publish the CDS/CDNSKEY records when the setting is changed to
"cds-cdnskey-publish: always".
I have not been able to reproduce this error on new zones, or on new zones
signed and secured with a trust anchor in the parent zone for which I
then change the cds-cdnskey-publish setting from "none" to "always".
This indicates that there seems to be some state error for my existing
zones only.
I tried the following, without success:
knotc zone-sign <zone>
knotc -f zone-purge +journal <zone>
; publish an inactive KSK
keymgr <zone> generate ... ; knotc zone-sign <zone>
Completely removing the zone (and all keys) and restarting obviously fixes the
problem. However, I cannot do this for all my zones, as I would
have to remove the DS records in the parent zone beforehand...
Any idea?
Daniel
Hi all,
I would like to kindly ask you to check the state of the Debian repository. It
looks like it is a bit outdated... The latest version available is
2.6.7-1+0~20180710153240.24+stretch~1.gbpfa6f52, while 2.7.0 has
already been released.
Thanks
BR
Ales Rygl
Hey,
We're scripting around Knot, and for that we pipe sequences of commands
to knotc. We're running into a few wishes for improved rigour that look
like they are generic:
1. WAITING FOR TRANSACTION LOCKS
This would make our scripts more reliable, especially when we need to do
manual operations on the command line as well. There is no hurry
to detect lock-freeing operations immediately, so retries with
exponential backoff would be quite alright for us (see the sketch below).
Deadlocks are an issue when these are nested, so this would at best be
an option to knotc, but many applications call for a single level, and
these could benefit from the added sureness of holding the lock.
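To make that concrete, this is roughly the retry behaviour we have in mind
(illustrative shell only; example.com is a placeholder):

    # Retry *-begin with exponential backoff until the transaction lock is free.
    delay=1
    until knotc zone-begin example.com; do
        sleep "$delay"
        delay=$((delay * 2))
    done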
2. FAILING ON PARTIAL OPERATIONS
When we script a *-begin, act1, act2, *-commit and pipe it into knotc,
it is not possible to see intermediate results. This could be solved
if any failure (including a non-locking *-begin) would *-abort and
return a suitable exit code. Only success in *-commit would exit(0), and
that would allow us to detect overall success.
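For illustration, this is roughly the kind of sequence we pipe today (zone and
record data are made up); we would like any failing line to abort the open
transaction and make knotc exit non-zero:

    knotc <<'EOF'
    zone-begin example.com
    zone-set example.com www 3600 A 192.0.2.1
    zone-commit example.com
    EOF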
We've considered making a wrapper around knotc, but that might actually
reduce its quality and stability, so instead we now propose these features.
Just let me know if you'd like to see the above as a patch (and a repo
to use for it).
Cheers,
-Rick
Hello,
I am seeing segfault crashes from knot + libknot7 version 2.6.8-1~ubuntu
for amd64 during a zone-commit cycle. The transaction is empty, by the
way, but in general we use a utility to compare the actual state to the
desired one (Ist to Soll).
This came up while editing a zone that hasn't been configured yet, so we
are obviously doing something strange. (The reason is that I'm trying to
switch DNSSEC on/off in a manner orthogonal to the zone data transport,
which is quite clearly not what Knot DNS was designed for. I will post
a feature request that could really help with orthogonality.)
I'll attach two flows, occurring at virtually the same time on our two
machines while doing the same thing locally, so the bug looks
reproducible. If you need more information, I'll try to see what I can do.
Cheers,
-Rick
Jul 24 14:22:59 menezes knotd[17733]: info: [example.com.] control,
received command 'zone-commit'
Jul 24 14:22:59 menezes kernel: [1800163.196199] knotd[17733]: segfault
at 0 ip 00007f375a659410 sp 00007ffde37d46d8 error 4 in
libknot.so.7.0.0[7f375a64b000+2d000]
Jul 24 14:22:59 menezes systemd[1]: knot.service: Main process exited,
code=killed, status=11/SEGV
Jul 24 14:22:59 menezes systemd[1]: knot.service: Unit entered failed state.
Jul 24 14:22:59 menezes systemd[1]: knot.service: Failed with result
'signal'.
Jul 24 14:22:59 vanstone knotd[6473]: info: [example.com.] control,
received command 'zone-commit'
Jul 24 14:22:59 vanstone kernel: [3451862.795573] knotd[6473]: segfault
at 0 ip 00007ffb6e817410 sp 00007ffd2b6e1d58 error 4 in
libknot.so.7.0.0[7ffb6e809000+2d000]
Jul 24 14:22:59 vanstone systemd[1]: knot.service: Main process exited,
code=killed, status=11/SEGV
Jul 24 14:22:59 vanstone systemd[1]: knot.service: Unit entered failed
state.
Jul 24 14:22:59 vanstone systemd[1]: knot.service: Failed with result
'signal'.
Hi,
after updating from 2.6.8 to 2.7.0, none of my zones gets loaded:
failed to load persistent timers (invalid parameter)
error: [nord-west.org.] zone cannot be created
How can I fix this?
Kind Regards
Bjoern
Hi all,
I would like to kindly ask for help. After a tiny zone record modification, I am
receiving the following error(s) when trying to access zone data (zone-read):
Aug 02 15:09:34 idunn knotd[779]: warning: [xxxxxxxx.] failed to update
zone file (not enough space provided)
Aug 02 15:09:34 idunn knotd[779]: error: [xxxxxxx.] zone event 'journal
flush' failed (not enough space provided)
There is plenty of space on the server, so I suppose it is related to the
journal and its DB.
Many thanks in advance, it is quite an important zone.
KNOT 2.6.7-1+0~20180710153240.24+stretch~1.gbpfa6f52
BR
Ales Rygl
Hi,
I would like to ask about the implementation of resource records within an
RRSet in Knot DNS.
I have a domain with three TXT records of the same class IN, for the
same label ('@'), but with different TTLs. In the nsd and bind DNS
servers everything seems fine, but in Knot DNS I get this warning and error:
knotd[551]: warning: [xxxxxxx.xxx.] zone loader, RRSet TTLs mismatched,
node 'xxxxxxx.xxx.' (record type TXT)
knotd[551]: error: [xxxxxxx.xxx.] zone loader, failed to load zone, file
'/etc/knot/files/master.gen/xxxxxxx.xxx' (TTL mismatch)
knotd[551]: error: [xxxxxxx.xxx.] failed to parse zonefile (failed)
knotd[551]: error: [xxxxxxx.xxx.] zone event 'load' failed (failed)
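For reference, the relevant part of the zone file looks roughly like this (the
TTL values here are made up):

    @   300  IN  TXT  "first"
    @   600  IN  TXT  "second"
    @   900  IN  TXT  "third"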
Is this correct behaviour that other DNS servers just don't check, or is it a
bug in Knot DNS?
Thank you for your reply.
Cheers,
--
Zdenek
Hi,
I am trying to make DNSSEC signing orthogonal to zone data transport in
the DNSSEC signer solution for SURFnet. This translates directly to an
intuitive user interface, where domain owners can toggle DNSSEC on and
off with the flick of a switch.
Interestingly, keymgr can work orthogonally to zone data; keys can be
added and removed regardless of whether a zone has been set up in Knot DNS.
Where the orthogonality breaks is that I need to explicitly set
dnssec-signing: to on or off. This means that I need to create a zone
just to be able to tell Knot DNS about the keys. Of course there are
complaints when configuring Knot DNS without a zone data file present.
The most elegant approach would be to make dnssec-signing an
opportunistic option, meaning "sign precisely when there are keys
available in keymgr for this zone". Such a setting could then end
up in the policy for any such zone, and that can be done when the zone
data is first sent, without regard to what we are trying to make an
orthogonal dimension.
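To illustrate the idea (this value does not exist today; it is purely
hypothetical):

    template:
      - id: default
        dnssec-signing: opportunistic   # hypothetical: sign exactly when keymgr holds keys for this zone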
I have no idea how difficult this is to build. I do think it may be a use
case that wasn't considered before, which is why I'm posting it here.
If this is easy and doable, please let me know; otherwise I will have to
work around Knot DNS (ignoring errors, overruling previously set content
just to be sure it is set, and so on) to achieve the desired orthogonality.
Cheers,
-Rick
Hello,
We're building a replicated signer machine based on Knot DNS. We have
a PKCS #11 backend for keys, and replication is working for it.
On one machine we run
one# keymgr orvelte.nep generate ...
and then use the key hash on the other machine in
two# keymgr orvelte.nep share ...
This, however, leads to a report that the identified key could not be
found. Clearly, there is more to the backing store than just the key
material in PKCS #11.
What is the thing I need to share across the two machines, and how can I
do this?
Thanks,
-Rick
We're experiencing occasional failures with Knot crashing while running as a slave. The behavior is as follows: the slave will run for 2 months or so and then segfault. Our system automatically restarts the process, but after 15 minutes or less, the segfault happens again. This repeats until we remove the /var/lib/knot/journal and /var/lib/knot/timers directories. This seems to fix it up for a while: a newly started process will run fine for another couple of months.
More details on our setup: These systems serve a little less than a hundred zones, some of which change at a rapid rate. We have configured the servers to not flush the zone data to regular files. The server software is 2.5.7, but with the changes from the "ecs-patch" branch applied.
A while back, I tried a release from the newer branch (I'm pretty sure it was 2.6.4), but I had a problem there where some servers were falling behind the master, as evidenced by their SOA serial number. Diagnosing this on a more recent branch probably makes more sense, but I'd be a little leery of dealing with two problems, not just one.
I can provide various data: the (gigantic) seemingly "corrupt" journal/timer files and the segfault messages from the syslog. I don't have any coredumps, but I'll turn those on today. Given the nature of the problem, it might take a while for it to manifest.
Chuck
Hello
How can I dump a zone stored in Knot DNS to a file?
DNSSEC-signed zones are overwritten, apparently using a zone dump functionality, noticeable by the comment ";; Zone dump (Knot DNS 2.6.3)".
Regards
Hi, I'm just getting up to speed on Knot DNS and trying to get dynamically
added secondaries working via bootstrapping.
My understanding is that when the server receives a NOTIFY from an authorized
master and the zone is not already in its zone list, it will add it and AXFR
it, right?
In my conf:
acl:
  - id: "acl_master"
    address: "64.68.198.83"
    address: "64.68.198.91"
    action: "notify"

remote:
  - id: "master"
    address: "64.68.198.83@53"
    address: "64.68.198.91@53"
But whenever I send a NOTIFY from either of those masters, nothing happens
on the Knot DNS side. I have my logging set as:
log:
  - target: "syslog"
    any: "debug"
Thx
- mark
Hello,
I'm trying to use Knot 2.6.7 in a configuration where zone files are
preserved (including comments, ordering and formatting) yet at the same
time Knot performs DNSSEC signing – something similar to the inline-signing
feature of BIND. My config file looks like this:
policy:
  - id: ecdsa_fast
    nsec3: on
    ksk-shared: on
    zsk-lifetime: 1h
    ksk-lifetime: 5h
    propagation-delay: 10s
    rrsig-lifetime: 2h
    rrsig-refresh: 1h

template:
  - id: mastersign
    file: "/etc/knot/%s.zone"
    zonefile-sync: -1
    zonefile-load: difference
    journal-content: all
    dnssec-signing: on
    dnssec-policy: ecdsa_fast
    serial-policy: unixtime
    acl: acl_slave

zone:
  - domain: "example.com."
    template: mastersign
It seems to work well on the first run; I can see that the zone got signed
properly:
>
> # kjournalprint /var/lib/knot/journal/ example.com
> ;; Zone-in-journal, serial: 1
> ;;Added
> example.com. 60 SOA knot.example.com. hostmaster.example.com. 1 3600 900 1814400 60
> example.com. 60 NS knot.example.com.
> first.example.com. 60 TXT "first"
> ;; Changes between zone versions: 1 -> 1529578258
> ;;Removed
> example.com. 60 SOA knot.example.com. hostmaster.example.com. 1 3600 900 1814400 60
> ;;Added
> example.com. 60 SOA knot.example.com. hostmaster.example.com. 1529578258 3600 900 1814400 60
> example.com. 0 CDNSKEY 257 3 13
> …lots of DNSSEC data.
However, if I try to update the unsigned zone file, strange things
happen. If I just add something to the zone and increase the serial, I get
these errors in the log:
>
> Jun 21 13:00:08 localhost knotd[2412]: warning: [example.com.] zone file changed, but SOA serial decreased
> Jun 21 13:00:08 localhost knotd[2412]: error: [example.com.] zone event 'load' failed (value is out of range)
If I set the serial to be higher than the serial of the last signed zone, I
get a slightly different error:
>
> Jun 21 13:22:36 localhost knotd[3096]: warning: [example.com.] journal, discontinuity in changes history (1529580085 -> 1529580084), dropping older changesets
> Jun 21 13:22:36 localhost knotd[3096]: error: [example.com.] zone event 'load' failed (value is out of range)
In either case, when I look into the journal after reloading the
zone, I see just the unsigned zone:
> # kjournalprint /var/lib/knot/journal/ example.com
> ;; Zone-in-journal, serial: 2
> ;;Added
> example.com. 60 SOA knot.example.com. hostmaster.example.com. 2 3600 900 1814400 60
> example.com. 60 NS knot.example.com.
> first.example.com. 60 TXT "first"
> second.example.com. 60 TXT "second"
Yet the server keeps serving the previously signed zone no matter what I
try. The only thing that helps is a cold restart of Knot, after which the zone
gets signed again.
So this approach is obviously not working as expected. If I comment out
the option `zonefile-load: difference`, I get a somewhat working solution where
the zone is completely re-signed during each reload, and I get this warning in
the log:
> Jun 21 13:27:38 localhost knotd[3156]: warning: [example.com.] with automatic DNSSEC signing and outgoing transfers enabled, 'zonefile-load: difference' should be set to avoid malformed IXFR after manual zone file update
I guess this should not bother me a lot as long as I keep the serial numbers
of unsigned zones significantly different from the signed ones. The only
problem is that this completely kills IXFR transfers, as well as signing
only the differences.
So far the only solution I see is to run two instances of Knot: one
reading the zone file from disk without signing and transferring it to
another instance, which would do the signing in slave mode.
Is there anything I'm missing here?
Sorry for such a long e-mail and thank you for reading all the way here.
Best regards,
Ondřej Caletka
Hi!
One of our customers uses Knot 2.6.7 as a hidden master which sends
NOTIFYs to our slave service. He reported that Knot cannot send the
NOTIFYs, i.e.:
knotd[10808]: warning: [example.com.] notify, outgoing,
2a02:850:8::6@53: failed (connection reset)
It seems that Knot sometimes tries to send the NOTIFY over TCP (I also see
NOTIFYs via UDP). Unfortunately, our NOTIFY receiver only supports UDP.
This is the first time I have seen a name server sending NOTIFYs over
TCP. Is this typical behaviour in Knot? Can I force Knot to send
NOTIFYs always over UDP?
Thanks
Klaus
Hello
I am using ecdsap256sha256 as the algorithm. Why does the DS derived from the KSK DNSKEY (flags=257) use SHA1 (=1) as the digest type and not SHA256 (=2)?
For example:
> dig DNSKEY nic.cz | grep 257
nic.cz. 871 IN DNSKEY 257 3 13 LM4zvjUgZi2XZKsYooDE0HFYGfWp242fKB+O8sLsuox8S6MJTowY8lBD jZD7JKbmaNot3+1H8zU9TrDzWmmHwQ==
> dig DNSKEY nic.cz | grep 257 > dnspub.key
> jdnssec-dstool dnspub.key
nic.cz. 868 IN DS 61281 13 1 091CECC4D2AADB7AC8C4DF413DDF9C5B0B61E5B6
Regards
dp
Hello all,
my Knot DNS is now in production and I would like to set up some
backup tasks for the configuration and, of course, the keys. Are there any
recommendations regarding backup? And restore? I can see that it is very
easy to dump the current config, but I am not sure how to back up the keys. What do
you recommend? Saving the content of /var/lib/knot on an hourly/daily
basis? I am not using shared keys.
Thanks
With regards
Ales
Hi
I have a question about commit “01b00cc47efe”, “Replace select() by poll()”.
The performance of epoll is better than that of select/poll when monitoring
large numbers of file descriptors.
Could you please let me know why you chose poll()?
Thanks for your reply!
Dear all,
While trying to migrate our DNS to Knot, I have noticed that a slave
server with 2 GB RAM is facing memory exhaustion. I am running
2.6.5-1+0~20180216080324.14+stretch~1.gbp257446. There are 141 zones,
around 1 MB in total. Knot is acting as a pure slave server with
a minimal configuration.
There is nearly 1.7 GB of memory consumed by Knot on a freshly rebooted
server:
root@eira:/proc/397# cat status
Name: knotd
Umask: 0007
State: S (sleeping)
Tgid: 397
Ngid: 0
Pid: 397
PPid: 1
TracerPid: 0
Uid: 108 108 108 108
Gid: 112 112 112 112
FDSize: 64
Groups: 112
NStgid: 397
NSpid: 397
NSpgid: 397
NSsid: 397
VmPeak: 24817520 kB
VmSize: 24687160 kB
VmLck: 0 kB
VmPin: 0 kB
VmHWM: 1743400 kB
VmRSS: 1743272 kB
RssAnon: 1737088 kB
RssFile: 6184 kB
RssShmem: 0 kB
VmData: 1781668 kB
VmStk: 132 kB
VmExe: 516 kB
VmLib: 11488 kB
VmPTE: 3708 kB
VmPMD: 32 kB
VmSwap: 0 kB
HugetlbPages: 0 kB
Threads: 21
SigQ: 0/7929
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: fffffffe7bfbbefc
SigIgn: 0000000000000000
SigCgt: 0000000180007003
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
CapBnd: 0000003fffffffff
CapAmb: 0000000000000000
Seccomp: 0
Cpus_allowed: f
Cpus_allowed_list: 0-3
Mems_allowed: 00000000,00000001
Mems_allowed_list: 0
voluntary_ctxt_switches: 260
nonvoluntary_ctxt_switches: 316
root@eira:/proc/397#
Config:
server:
  listen: 0.0.0.0@53
  listen: ::@53
  user: knot:knot

log:
  - target: syslog
    any: info

mod-rrl:
  - id: rrl-10
    rate-limit: 10   # Allow 200 resp/s for each flow
    slip: 2          # Every other response slips

mod-stats:
  - id: custom
    edns-presence: on
    query-type: on
    request-protocol: on
    server-operation: on
    request-bytes: on
    response-bytes: on
    edns-presence: on
    flag-presence: on
    response-code: on
    reply-nodata: on
    query-type: on
    query-size: on
    reply-size: on

template:
  - id: default
    storage: "/var/lib/knot"
    module: mod-rrl/rrl-10
    module: mod-stats/custom
    acl: [allowed_transfer]
    disable-any: on
    master: idunn
I was pretty sure that a VM with 2 GB RAM would be enough for my setup :-)
BR
Ales
Hello all,
I noticed that Knot (2.6.5) creates the RRSIG for the CDS/CDNSKEY RRset
with the ZSK/CSK only.
I was wondering whether this is acceptable behaviour, as RFC 7344, section
4.1 ("CDS and CDNSKEY Processing Rules"), states:
o Signer: MUST be signed with a key that is represented in both the
current DNSKEY and DS RRsets, unless the Parent uses the CDS or
CDNSKEY RRset for initial enrollment; in that case, the Parent
validates the CDS/CDNSKEY through some other means (see
Section 6.1 and the Security Considerations).
Specifically, I read "represented in both the current DNSKEY and DS
RRsets" as meaning that the CDS/CDNSKEY RRset must be signed with a KSK/CSK
and not only with a ZSK plus a trust chain to KSK <- DS.
I tested both BIND 9.12.1 and PowerDNS Auth 4.0.5 as well. PowerDNS Auth
behaves the same as Knot 2.6.5, but BIND 9.12.1 always signs the
CDS/CDNSKEY RRset with at least the KSK.
Do I read the RFC rule too strictly? To be honest, I see nothing wrong
with the CDS/CDNSKEY RRset only being signed by the ZSK, but BIND's behaviour
and the not-so-clear RFC statement keep me wondering.
Thanks,
Daniel
I'm getting started with Knot Resolver and am a bit unclear as to how this
config should be structured.
The result I'm looking for is to forward queries to resolver A if the
source is subnet A, unless the query is for the local domain, in which case
the local DNS should be queried.
I've been working with the config below to accomplish this. However, I'm
finding that this config works when the request matches the local
todname, but otherwise it uses the root hints and never the FORWARD server.
Ultimately, this server will resolve DNS for several subnets and will
forward queries to different servers based on the source subnet.
Would someone mind pointing me in the right direction on this, please?
for name, addr_list in pairs(net.interfaces()) do
    net.listen(addr_list)
end

-- drop root
user('knot', 'knot')

-- Auto-maintain root TA
modules = {
    'policy',  -- Block queries to local zones/bad sites
    'view',    -- view filters
    'hints',   -- Load /etc/hosts and allow custom root hints
    'stats',
}

-- 4GB local cache for record storage
cache.size = 4 * GB

-- If the request is from eng subnet
if (view:addr('192.168.168.0/24')) then
    if (todname('localnet.mydomain.com')) then
        policy.add(policy.suffix(policy.FORWARD('192.168.168.1'),
            {todname('localnet.mydomain.com')}))
    else
        view:addr('192.168.168.0/24', policy.FORWARD('68.111.106.68'))
    end
end
--
855.ONTRAPORT
ontraport.com
zone-refresh [<zone>...]      Force slave zone refresh.
zone-retransfer [<zone>...]   Force slave zone retransfer (no serial check).
I would expect retransfer to do a complete AXFR. But instead it sometimes just
does a refresh:
info: [at.] control, received command 'zone-retransfer'
info: [at.] refresh, outgoing, 83.1.2.3@53: remote serial 2018011647,
zone is up-to-date
info: [at.] control, received command 'zone-retransfer'
info: [at.] refresh, outgoing, 83.1.2.3@53: remote serial 2018011647,
zone is up-to-date
info: [at.] control, received command 'zone-refresh'
info: [at.] refresh, outgoing, 2a02:111:9::5@53: remote serial
2018011647, zone is up-to-date
info: [at.] control, received command 'zone-refresh'
info: [at.] refresh, outgoing, 2a02:111:9::5@53: remote serial
2018011647, zone is up-to-date
info: [at.] control, received command 'zone-refresh'
info: [at.] refresh, outgoing, 2a02:111:9::5@53: remote serial
2018011647, zone is up-to-date
info: [at.] control, received command 'zone-retransfer'
info: [at.] AXFR, incoming, 2a02:111:9::5@53: starting
Seen with 2.6.3-1+ubuntu14.04.1+deb.sury.org+1
regards
Klaus
Hello,
Knot DNS looks awesome, thanks for that!
The benchmarks show a clear picture (for hosting) that the size of zones
doesn't matter, but DNSSEC does. I'm intrigued by the differences with NSD.
What is less clear is what form of DNSSEC was used -- online signing,
or just signing on policy refreshes and updates, or signing before the data
gets to knotd? This distinction seems important, as it might explain
the structural difference with NSD.
Also, the documentation speaks of "DNSSEC signing for static zones" but
leaves some doubt whether this includes editing of the records using zone
transactions, or whether it relates to rosedb, or something else.
https://www.knot-dns.cz/docs/2.6/singlehtml/index.html#automatic-dnssec-sig…
https://www.knot-dns.cz/docs/2.6/singlehtml/index.html#rosedb-static-resour…
Other than this uncertainty (and some confusion over the meaning of the
master: parameter), the documentation is a real treat. Thanks for a job
well done!
Best wishes,
-Rick
Hi,
After upgrading our fleet of slave servers from 2.5.4 to 2.6.4, I
noticed that, on a few slaves, a large zone that changes rapidly is
now consistently behind the master to a larger degree than we consider
normal. By "behind", I mean that the serial number reported by the
slave in the SOA record is less than that reported by the master server.
Normally we expect small differences between the serial on the master
and the slaves because our zones change rapidly. These differences are
often transient. However, after the upgrade, a subset of the slaves (always
the same ones) have a much larger difference. Fortunately, the difference
does not increase without bound.
The hosts in question seem powerful enough: one has eight 2 GHz Xeons and
32 GB RAM, which is less powerful than some of the hosts that are
keeping up. It may be more a case of their connectivity. Two of the
affected slaves are in the same location.
For now, I've downgraded these slaves back to 2.5.4, and they are able
to keep up again.
Is there a change that would be an obvious culprit for this, or is there
something that we could tune? One final piece of information: we
always apply the change contained in the ecs-patch branch (which
returns ECS data if the client requests it). I don't know if the
effect of this processing is significant. We do need it as part of
some ongoing research we're conducting.
Chuck
Hello,
I plan to use Docker to deploy Knot DNS.
I am going to copy all the zone configurations into the Docker image.
Then I will start two containers with two different IP addresses.
In this case, is it necessary to configure the acl and remote sections related
to master/slave replication?
I don't think so, because both IPs will reply with exactly the same zone
configuration, but please give me your opinion.
Regards,
Gael
--
Cordialement, Regards,
Gaël GIRAUD
ATLANTIC SYSTÈMES
Mobile : +33603677500
Dear Knot Resolver users,
please note that Knot Resolver now has its dedicated mailing list:
https://lists.nic.cz/cgi-bin/mailman/listinfo/knot-resolver-users
For further communication regarding Knot Resolver please subscribe to
this list. We will send new version announcements only to the new
mailing list.
--
Petr Špaček @ CZ.NIC
Hello all,
We had a weird issue with Knot serving an old version of a zone after a server reboot. After the reboot, our monitoring alerted that the zone was out of sync. Knot was serving an older version of the zone (the zone did not update during the reboot; Knot was serving a version of the zone older than what it had before the reboot). The zone file on disk had the correct serial, and knotc zone-status <zone> showed the current serial as well. However, dig @localhost soa <zone> on that box showed the old serial. Running knotc zone-refresh <zone> didn't help; in the logs, when it went to do the refresh, it showed 'zone is up-to-date'. Running knotc zone-retransfer also did not resolve the problem; only a restart of the knotd process did. While we were able to resolve this ourselves, it is certainly a strange issue and we were wondering if we could get any input on this.
Command output:
[root@ns02 ~]# knotc
knotc> zone-status <zone>
[<zone>] role: slave | serial: 2017121812 | transaction: none | freeze: no | refresh: +3h59m42s | update: not scheduled | expiration: +6D23h59m42s | journal flush: not scheduled | notify: not scheduled | DNSSEC re-sign: not scheduled | NSEC3 resalt: not scheduled | parent DS query: not scheduled
knotc> exit
[root@ns02 ~]# dig @localhost soa <zone>
…
… 2017090416 …
…
Logs after retransfer and refresh:
Jan 15 16:49:22 ns02 knot[7187]: info: [<zone>] control, received command 'zone-refresh'
Jan 15 16:49:22 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:49:23 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:49:23 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:49:23 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] control, received command 'zone-retransfer'
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] AXFR, incoming, <master>@53: starting
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] AXFR, incoming, <master>@53: finished, 0.00 seconds, 1 messages, 5119 bytes
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: zone updated, serial none -> 2017121812
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:53:03 ns02 knot[7187]: info: [<zone>] control, received command 'zone-status'
And a dig after that:
[root@ns02 ~]# dig @localhost soa crnet.cr
…
… 2017090416 …
…
-Rob
Hi,
I wrote a collectd plugin which fetches the metrics from "knotc
[zone-]status" directly from the control socket.
The code is still a bit of a work in progress but should be mostly done. If
you want to try it out, the code is on GitHub; feedback welcome:
https://github.com/julianbrost/collectd/tree/knot-plugin
https://github.com/collectd/collectd/pull/2649
Also, I'd really like some feedback on how I use libknot, as I only
found very little documentation on it. If you have any questions, just ask.
Regards,
Julian
Hi!
I installed the Knot 2.6.3 packages from the PPA on Ubuntu 14.04. This
confuses the syslog logging. I am not sure, but I think the problem is
that Knot requires systemd for logging.
The problem is that I do not see any logging from Knot on my
syslog server, only in journald. Is this something special in Knot, that
the logging is not forwarded to syslog?
Is it possible to use your Ubuntu packages without systemd logging?
I think it would be better to build the packages on non-systemd distros
(i.e. Ubuntu 14.04) without systemd dependencies.
Thanks
Klaus
Hi!
Knot 2.6.3: when an incoming NOTIFY does not match any ACL, the NOTIFY is
answered with "notauth", although the zone is configured. I would have
expected Knot to respond with "refused" in such a scenario. Is
the notauth intended? From an operational view, a "refused" would ease
debugging.
regards
Klaus
> key
>
> An ordered list of references to TSIG keys. The query must match one of them. Empty value means that TSIG key is not required.
>
> Default: not set
This is not 100% correct. At least with a notify ACL, the behaviour is:
an empty value means that TSIG keys are not allowed.
regards
Klaus
Hi everybody,
I have a question related to zone signing. Whenever I reload the Knot config
using knotc reload, it starts to re-sign all DNSSEC-enabled zones. This
sometimes makes the daemon unresponsive to the knotc utility.
root@idunn:# knotc reload
error: failed to control (connection timeout)
Is it a design intent to sign zones while reloading the config? Is it really
needed? It triggers zone transfers, consumes resources, etc.
Thanks for answer
With regards
Ales
Hello everybody,
there is a Knot DNS master name server for my domain that I do not manage myself. I am trying to set up a BIND DNS server as an in-house slave. BIND fails to do the zone transfer and reports
31-Dec-2017 16:19:02.503 zone whka.de/IN: Transfer started.
31-Dec-2017 16:19:02.504
transfer of 'whka.de/IN' from 2001:7c7:2000:53::#53:
connected using 2001:7c7:20e8:18e::2#53509
31-Dec-2017 16:19:02.505
transfer of 'whka.de/IN' from 2001:7c7:2000:53::#53:
failed while receiving responses: NOTAUTH
31-Dec-2017 16:19:02.505
transfer of 'whka.de/IN' from 2001:7c7:2000:53::#53:
Transfer completed: 0 messages, 0 records, 0 bytes, 0.001 secs
If I try dig (this time using the IPv4 address), I get a failure, too.
# dig axfr @141.70.45.160 whka.de.
; <<>> DiG 9.9.5-9+deb8u7-Debian <<>> axfr @141.70.45.160 whka.de.
; (1 server found)
;; global options: +cmd
; Transfer failed.
Wireshark tells me that the reply code of the name server is `1001 Server is not an authority for domain`. What is going on here?
Especially since, if I query the same name server for an ordinary A record, it claims to be authoritative. Moreover, the Knot DNS manual says Knot is an authoritative-only name server, so there should be no way for it to be non-authoritative.
Has anybody observed something like this before?
Best regards, Matthias
--
Evang. Studentenwohnheim Karlsruhe e.V. – Hermann-Ehlers-Kolleg
Matthias Nagel
Willy-Andreas-Allee 1, 76131 Karlsruhe, Germany
Phone: +49-721-96869289, Mobile: +49-151-15998774
E-Mail: matthias.nagel(a)hermann-ehlers-kolleg.de
Dear Knot Resolver users,
Knot Resolver 1.5.1 is released, mainly with bugfixes and cleanups!
Incompatible changes
--------------------
- script supervisor.py was removed, please migrate to a real process manager
- module ketcd was renamed to etcd for consistency
- module kmemcached was renamed to memcached for consistency
Bugfixes
--------
- fix SIGPIPE crashes (#271)
- tests: work around out-of-space for platforms with larger memory pages
- lua: fix mistakes in bindings affecting 1.4.0 and 1.5.0 (and 1.99.1-alpha),
potentially causing problems in dns64 and workarounds modules
- predict module: various fixes (!399)
Improvements
------------
- add priming module to implement RFC 8109, enabled by default (#220)
- add modules helping with system time problems, enabled by default;
for details see documentation of detect_time_skew and detect_time_jump
Full changelog:
https://gitlab.labs.nic.cz/knot/knot-resolver/raw/v1.5.1/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.5.1.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.5.1.tar.xz.asc
Documentation:
https://knot-resolver.readthedocs.io/en/v1.5.1/
--Vladimir
Hello guys,
there has been a request in our issue tracker [1] to enable the
IPV6_USE_MIN_MTU socket option [2] for IPv6 UDP sockets in Knot DNS.
This option makes the operating system send the responses with a
maximal fragment size of 1280 bytes (the minimal MTU size required by the
IPv6 specification).
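For reference, enabling the option on a socket would look roughly like this
(a minimal sketch, assuming a platform whose headers define IPV6_USE_MIN_MTU
per RFC 3542; not all systems expose it):

    #include <sys/socket.h>
    #include <netinet/in.h>

    /* Limit IPv6 fragmentation to the 1280-byte minimum MTU for all
     * destinations on this UDP socket (value 1, RFC 3542 section 11.1). */
    int use_min_mtu(int fd)
    {
        int on = 1;
        return setsockopt(fd, IPPROTO_IPV6, IPV6_USE_MIN_MTU, &on, sizeof(on));
    }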
The reasoning is based on a draft by Mark Andrews from 2012 [3]. I
wonder if the reasoning is still valid in 2016, and I'm afraid that
enabling this option could enlarge the window for possible DNS cache
poisoning attacks.
We would appreciate any feedback on your operational experience with DNS
over IPv6 related to packet fragmentation.
[1] https://gitlab.labs.nic.cz/labs/knot/issues/467
[2] https://tools.ietf.org/html/rfc3542#section-11.1
[3] https://tools.ietf.org/html/draft-andrews-dnsext-udp-fragmentation-01
Thanks and regards,
Jan
Hi everybody,
Is there a way to change the TTL of all zone records at once using knotc, i.e.
without editing the zone file manually? Something like what I can do with the
$TTL directive in BIND 9 zone files?
If not, I would like to ask for it to be implemented, if possible.
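To illustrate, this is the BIND behaviour I mean (a hypothetical zone file
excerpt):

    $TTL 3600   ; default TTL applied to every record without an explicit TTL
    @    IN  SOA ns1.example.com. hostmaster.example.com. (
                 2018010100 7200 3600 1209600 3600 )
    www  IN  A   192.0.2.1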
Thanks
Regards
Ales Rygl
Hello,
I have run into a problem with the mod-synthrecord module when using multiple
networks at the same time.
Configuration:
mod-synthrecord:
  - id: customers1
    type: forward
    prefix:
    ttl: 300
    network: [ 46.12.0.0/16, 46.13.0.0/16 ]

zone:
  - domain: customers.tmcz.cz
    file: db.customers.tmcz.cz
    module: mod-synthrecord/customers1
With the configuration above, Knot only generates records for the last listed
network; for 46.12.0.0/16 it returns NXDOMAIN. It behaves the same with this
form of notation:
mod-synthrecord:
  - id: customers1
    type: forward
    prefix:
    ttl: 300
    network: 46.12.0.0/16
    network: 46.13.0.0/16
The configuration is accepted and Knot does not complain, but it does not
generate the records. Knot is 2.6.1-1+0~20171112193256.11+stretch~1.gbp3eaef0.
Thanks for any help or pointers.
Best regards
Ales Rygl
On 11/20/2017 12:37 PM, Petr Kubeš wrote:
> Is there a simple "cookbook" available somewhere for getting such a DNS
> resolver up and running?
Some systems already ship a package with the service, and we also have a PPA
containing newer versions: https://www.knot-resolver.cz/download/
I would avoid versions before 1.3.3.
We don't have a cookbook as such, but kresd works well even without
configuration - it then listens on all local addresses on UDP+TCP port 53,
with a 100 MB cache in the current directory. Only for DNSSEC validation you
need to supply the name of the file with the root keys, e.g. "kresd -k
root.keys" - if the file does not exist, it is initialized over HTTPS. The
various options are described in the documentation:
http://knot-resolver.readthedocs.io/en/stable/daemon.html
V. Čunát
Hello, I would like to ask for advice.
We run a small network and currently use an external DNS provider (UPC).
We would like to run our own DNS on the border node, specifically Knot, in a
configuration where it would mainly act as a DNS resolver and in the future
possibly also host our own zones.
Is there a step-by-step guide available anywhere describing what exactly to
configure, so that we could successfully get such a Knot instance running as
a DNS resolver in a few steps?
Perhaps it is just a bad period, but unfortunately I have not managed, using
the available manuals and guides, to set up Knot DNS so that it answers
queries and synchronizes DNS zones.
Many thanks for any advice
P. Kubeš
Hello,
I would like to ask for advice. I am experimenting with Knot DNS, version 2.6.0-3+0~20171019083827.9+stretch~1.gbpe9bd69, on Debian Stretch.
I have DNSSEC deployed with a KSK and ZSK using algorithm 5 and BIND 9, keys without metadata. I am trying to migrate to Knot, with two test zones. I use the following procedure:
1. I import the existing keys using keymgr
2. I set the timestamps:
keymgr t-sound.cz set 18484 created=+0 publish=+0 active=+0
keymgr t-sound.cz set 04545 created=+0 publish=+0 active=+0
3. I add the zone to Knot. The lifetimes are extremely short so that I can see how it behaves.
zone:
  - domain: t-sound.cz
    template: signed
    file: db.t-sound.cz
    dnssec-signing: on
    dnssec-policy: migration
  - domain: mych5.cz
    template: signed
    file: db.mych5.cz
    dnssec-signing: on
    dnssec-policy: migration
    acl: [allowed_transfer]
    notify: idunn-freya-gts

policy:
  - id: migration
    algorithm: RSASHA1
    ksk-size: 2048
    zsk-size: 1024
    zsk-lifetime: 20m
    ksk-lifetime: 10d
    propagation-delay: 5m
This works; Knot starts signing with the imported keys. Then I change the policy for t-sound.cz to:
policy:
  - id: migration3
    algorithm: ecdsap256sha256
    zsk-lifetime: 20m
    ksk-lifetime: 10d
    propagation-delay: 5m
    ksk-submission: nic.cz
Knot generates new keys:
Nov 10 16:40:09 idunn knotd[21682]: warning: [t-sound.cz.] DNSSEC, creating key with different algorithm than policy
Nov 10 16:40:09 idunn knotd[21682]: warning: [t-sound.cz.] DNSSEC, creating key with different algorithm than policy
Nov 10 16:40:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, algorithm rollover started
Nov 10 16:40:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, loaded key, tag 18484, algorithm 5, KSK yes, ZSK no, public yes, ready no, active yes
Nov 10 16:40:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, loaded key, tag 5821, algorithm 5, KSK no, ZSK yes, public yes, ready no, active yes
Nov 10 16:40:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, loaded key, tag 4256, algorithm 13, KSK yes, ZSK no, public no, ready no, active no
Nov 10 16:40:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, loaded key, tag 39697, algorithm 13, KSK no, ZSK yes, public no, ready no, active yes
Nov 10 16:40:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, signing started
Nov 10 16:40:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, successfully signed
Nov 10 16:40:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, next signing at 2017-11-10T16:45:09
The ZSK rollover mechanism starts and the CDNSKEY gets published. The submission goes through. The resulting state is that the zone works:
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, signing zone
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, loaded key, tag 4256, algorithm 13, KSK yes, ZSK no, public yes, ready no, active yes
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, loaded key, tag 22255, algorithm 13, KSK no, ZSK yes, public yes, ready no, active yes
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, signing started
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, successfully signed
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, next signing at 2017-11-12T23:03:27
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] zone file updated, serial 1510523007 -> 1510523307
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] notify, outgoing, 93.153.117.50@53: serial 1510523307
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] IXFR, outgoing, 93.153.117.50@35557: started, serial 1510523007 -> 1510523307
Nov 12 22:48:27 idunn knotd[24980]: debug: [t-sound.cz.] IXFR, outgoing, 93.153.117.50@35557: serial 1510523007 -> 1510523307
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] IXFR, outgoing, 93.153.117.50@35557: finished, 0.00 seconds, 1 messages, 780 bytes
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] IXFR, outgoing, 93.153.117.20@57641: started, serial 1510523007 -> 1510523307
Nov 12 22:48:27 idunn knotd[24980]: debug: [t-sound.cz.] IXFR, outgoing, 93.153.117.20@57641: serial 1510523007 -> 1510523307
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] IXFR, outgoing, 93.153.117.20@57641: finished, 0.00 seconds, 1 messages, 780 bytes
The ZSKs are being rotated. But then the error below occurs:
Nov 12 23:03:27 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, signing zone
Nov 12 23:03:27 idunn knotd[24980]: warning: [t-sound.cz.] DNSSEC, key rollover [1] failed (unknown error -28)
Nov 12 23:03:27 idunn knotd[24980]: error: [t-sound.cz.] DNSSEC, failed to initialize (unknown error -28)
Nov 12 23:03:27 idunn knotd[24980]: error: [t-sound.cz.] zone event 'DNSSEC resign' failed (unknown error -28)
The key state at this moment:
root@idunn:/var/lib/knot# keymgr t-sound.cz list human
c87e00bd71d0f89ea540ef9c21020df1e0106c0f ksk=yes tag=04256 algorithm=13 public-only=no created=-2D16h24m21s pre-active=-2D16h24m21s publish=-2D16h19m21s ready=-2D16h14m21s active=-1D18h14m21s retire-active=0 retire=0 post-active=0 remove=0
fe9f432bfc5d527dc11520615d6e29e5d1799d8c ksk=no tag=22255 algorithm=13 public-only=no created=-10h26m3s pre-active=0 publish=-10h26m3s ready=0 active=-10h21m3s retire-active=0 retire=0 post-active=0 remove=0
root@idunn:/var/lib/knot#
However, knotc zone-sign t-sound.cz goes through and fixes everything:
Nov 13 08:56:41 idunn knotd[24980]: info: [t-sound.cz.] control, received command 'zone-status'
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] control, received command 'zone-sign'
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, dropping previous signatures, resigning zone
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, ZSK rollover started
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, loaded key, tag 4256, algorithm 13, KSK yes, ZSK no, public yes, ready no, active yes
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, loaded key, tag 22255, algorithm 13, KSK no, ZSK yes, public yes, ready no, active yes
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, loaded key, tag 24386, algorithm 13, KSK no, ZSK yes, public yes, ready no, active no
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, signing started
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, successfully signed
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, next signing at 2017-11-13T09:11:23
A day earlier, Knot crashed completely on this:
Nov 11 23:05:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, signing zone
Nov 11 23:05:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, loaded key, tag 39964, algorithm 13, KSK no, ZSK yes, public yes, ready no, active yes
Nov 11 23:05:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, loaded key, tag 4256, algorithm 13, KSK yes, ZSK no, public yes, ready no, active yes
Nov 11 23:05:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, signing started
Nov 11 23:05:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, successfully signed
Nov 11 23:05:09 idunn systemd[1]: knot.service: Main process exited, code=killed, status=11/SEGV
Nov 11 23:05:09 idunn systemd[1]: knot.service: Unit entered failed state.
Nov 11 23:05:09 idunn systemd[1]: knot.service: Failed with result 'signal'.
Nov 11 23:05:10 idunn systemd[1]: knot.service: Service hold-off time over, scheduling restart.
Nov 11 23:05:10 idunn systemd[1]: Stopped Knot DNS server.
Nov 11 23:05:10 idunn systemd[1]: Started Knot DNS server.
Nov 11 23:05:10 idunn knotd[23933]: info: Knot DNS 2.6.0 starting
Nov 11 23:05:10 idunn knotd[23933]: info: binding to interface 0.0.0.0@553
Nov 11 23:05:10 idunn knotd[23933]: info: binding to interface ::@553
Nov 11 23:05:10 idunn knotd[23933]: info: changing GID to 121
Nov 11 23:05:10 idunn knotd[23933]: info: changing UID to 114
Nov 11 23:05:10 idunn knotd[23933]: info: loading 2 zones
Nov 11 23:05:10 idunn knotd[23933]: info: [mych5.cz.] zone will be loaded
Nov 11 23:05:10 idunn knotd[23933]: info: [t-sound.cz.] zone will be loaded
Nov 11 23:05:10 idunn knotd[23933]: info: starting server
Nov 11 23:05:10 idunn knotd[23933]: info: [t-sound.cz.] DNSSEC, loaded key, tag 39964, algorithm 13, KSK no, ZSK yes, public yes, ready no, active yes
Nov 11 23:05:10 idunn knotd[23933]: info: [t-sound.cz.] DNSSEC, loaded key, tag 4256, algorithm 13, KSK yes, ZSK no, public yes, ready no, active yes
Nov 11 23:05:10 idunn knotd[23933]: info: [t-sound.cz.] DNSSEC, signing started
Nov 11 23:05:10 idunn knotd[23933]: warning: [mych5.cz.] DNSSEC, key rollover [1] failed (unknown error -28)
Nov 11 23:05:10 idunn knotd[23933]: error: [mych5.cz.] DNSSEC, failed to initialize (unknown error -28)
Nov 11 23:05:10 idunn knotd[23933]: error: [mych5.cz.] zone event 'load' failed (unknown error -28)
Nov 11 23:05:10 idunn knotd[23933]: info: [t-sound.cz.] DNSSEC, successfully signed
Nov 11 23:05:10 idunn systemd[1]: knot.service: Main process exited, code=killed, status=11/SEGV
Nov 11 23:05:10 idunn systemd[1]: knot.service: Unit entered failed state.
Nov 11 23:05:10 idunn systemd[1]: knot.service: Failed with result 'signal'.
Nov 11 23:05:10 idunn systemd[1]: knot.service: Service hold-off time over, scheduling restart.
Nov 11 23:05:10 idunn systemd[1]: Stopped Knot DNS server.
Nov 11 23:05:10 idunn systemd[1]: Started Knot DNS server.
Am I making a mistake somewhere? I apologize for the complicated and long description.
Thanks
Best regards
Ales Rygl
Dear Knot Resolver users,
Knot Resolver 1.99.1-alpha has been released!
This is an experimental release meant for testing aggressive caching.
It contains some regressions and might (theoretically) be even vulnerable.
The current focus is to minimize queries into the root zone.
Improvements
------------
- negative answers from validated NSEC (NXDOMAIN, NODATA)
- verbose log is very chatty around cache operations (maybe too much)
Regressions
-----------
- dropped support for alternative cache backends
and for some specific cache operations
- caching doesn't yet work for various cases:
* negative answers without NSEC (i.e. with NSEC3 or insecure)
* +cd queries (needs other internal changes)
* positive wildcard answers
- spurious SERVFAIL on specific combinations of cached records, printing:
<= bad keys, broken trust chain
- make check
- a few Deckard tests are broken, probably due to some problems above
+ unknown ones?
Full changelog:
https://gitlab.labs.nic.cz/knot/knot-resolver/raw/v1.99.1-alpha/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.99.1-alpha.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.99.1-alpha.tar.xz…
Documentation (not updated):
http://knot-resolver.readthedocs.io/en/v1.4.0/
--Vladimir
Hello,
one more question:
What is the proper way of autostarting Knot Resolver 1.4.0 on systemd (Debian Stretch in my case) so that it can listen on interfaces other than localhost?
As per the Debian README, I've set up the socket override.
# systemctl edit kresd.socket:
[Socket]
ListenStream=<my.lan.ip>:53
ListenDatagram=<my.lan.ip>:53
However after reboot the service doesn't autostart.
# systemctl status kresd.service
kresd.socket - Knot DNS Resolver network listeners
Loaded: loaded (/lib/systemd/system/kresd.socket; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kresd.socket.d
└─override.conf
Active: failed (Result: resources)
Docs: man:kresd(8)
Listen: [::1]:53 (Stream)
[::1]:53 (Datagram)
127.0.0.1:53 (Stream)
127.0.0.1:53 (Datagram)
<my.lan.ip>:53 (Stream)
<my.lan.ip>:53 (Datagram)
Oct 01 23:17:12 <myhostname> systemd[1]: kresd.socket: Failed to listen on sockets: Cannot assign requested address
Oct 01 23:17:12 <myhostname> systemd[1]: Failed to listen on Knot DNS Resolver network listeners.
Oct 01 23:17:12 <myhostname> systemd[1]: kresd.socket: Unit entered failed state.
To get it running I have to type in manually:
# systemctl start kresd.socket
I apologize, I am new to systemd and its socket activation, so it is not clear to me whether the service, the socket, or both have to be set up to autostart.
Could anyone clarify this?
Also, this is also in the log (again, Debian default):
Oct 01 23:18:22 <myhostname> kresd[639]: [ ta ] keyfile '/usr/share/dns/root.key': not writeable, starting in unmanaged mode
The file has permissions 644 for root:root. Should this be owned by knot, or writeable by others?
Thanks!
--
Regards,
Thomas Van Nuit
Sent with [ProtonMail](https://protonmail.com) Secure Email.
Hello all,
I have come across an incompatibility between /usr/lib/knot/get_kaspdb and knot in relation to IPv6. Knot expects unquoted IP addresses; however, the kaspdb tool uses the Python YAML library, which requires valid YAML and therefore needs IPv6 addresses to be quoted strings, as they contain ':'. This incompatibility causes issues when updating knot, as the Ubuntu packages from 'http://ppa.launchpad.net/cz.nic-labs/knot-dns/ubuntu' call get_kaspdb during installation. The gist below shows how to recreate this issue. Please let me know if you need any further information.
https://gist.github.com/b4ldr/bd549b4cf63a7d564299497be3ef868d
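To illustrate the mismatch (the key name and address here are just placeholders): Knot's own parser accepts an unquoted address such as
  address: 2001:db8::1
whereas a strict YAML parser only accepts it quoted:
  address: "2001:db8::1"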
Thanks John
Dear Knot Resolver users,
Knot Resolver 1.4.0 has been released!
Incompatible changes
--------------------
- lua: query flag-sets are no longer represented as plain integers.
kres.query.* no longer works, and kr_query_t lost trivial methods
'hasflag' and 'resolved'.
You can instead write code like qry.flags.NO_0X20 = true.
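(A rough migration sketch for Lua module/config authors; the RESOLVED flag name below is only an illustration and may differ in your code:)
-- before 1.4.0:
--   if qry:hasflag(kres.query.NO_0X20) then ... end
-- from 1.4.0 on:
qry.flags.NO_0X20 = true
if qry.flags.RESOLVED then ... end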
Bugfixes
--------
- fix exiting one of multiple forks (#150)
- cache: change the way of using LMDB transactions. That in particular
fixes some cases of using too much space with multiple kresd forks (#240).
Improvements
------------
- policy.suffix: update the aho-corasick code (#200)
- root hints are now loaded from a zonefile; exposed as hints.root_file().
You can override the path by defining ROOTHINTS during compilation.
- policy.FORWARD: work around resolvers adding unsigned NS records (#248)
- reduce unneeded records previously put into authority in wildcarded answers
Full changelog:
https://gitlab.labs.nic.cz/knot/resolver/raw/v1.4.0/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.4.0.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.4.0.tar.xz.asc
Documentation:
http://knot-resolver.readthedocs.io/en/v1.4.0/
--Vladimir
Hi,
I'm getting an HTTP 500 error from https://gitlab.labs.nic.cz/ that says
"Whoops, something went wrong on our end". Can someone take a look at it
and fix it?
Thanks!
--
Robert Edmonds
edmonds(a)debian.org
Hi,
we have a DNSSEC-enabled zone for which knot serves RRSIGs with an
expiration date in the past (expired on Sept 13th), signed by a no longer
active ZSK. The correct RRSIGs (up to date and signed with the current ZSK)
are served as well, so the zone still works.
Is there a way to purge these outdated RRSIGs from the database?
Regards
André
Hi,
I may have missed something. I created the KASP directory and made knot
its owner.
In the KASP directory (/var/lib/knot/kasp) I ran these commands:
keymgr init
keymgr zone add domena.cz policy none
keymgr zone key generate domena.cz algorithm rsasha256 size 2048 ksk
Cannot retrieve policy from KASP (not found).
Did I miss something?
Thanks and best regards
J.Karliak
--
Bc. Josef Karliak
Network and e-mail administration
Fakultní nemocnice Hradec Králové
Department of Computing Systems
Sokolská 581, 500 05 Hradec Králové
Tel.: +420 495 833 931, Mob.: +420 724 235 654
e-mail: josef.karliak(a)fnhk.cz, http://www.fnhk.cz
Hello,
this might be a rather stupid question.
I have a fresh install of Debian Stretch with all updates and Knot Resolver 1.4.0 installed from the CZ.NIC repositories. I've set up a rather simple configuration allowing our users to use the resolver, and everything works fine (systemd socket override for listening on the LAN). I have, however, noticed that kresd logs every single query to /var/log/syslog, generating approx. 1 MB/min of logs on our server. I've looked at the documentation and haven't found any directive to control the logging behaviour. Is there something I might be missing? I would prefer to see only warnings in the log.
Here's my config:
# cat /etc/knot-resolver/kresd.conf
net = { '127.0.0.1', '::1', '<my.lan.ip>' }
user('knot-resolver','knot-resolver')
cache.size = 4 * GB
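The only related knob I've found so far is the verbose() toggle, so as an experiment I've appended this line to the config, although I'm not sure it is what governs the per-query messages:
verbose(false)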
Thanks for the help.
--
Regards,
Thomas Van Nuit
Sent with [ProtonMail](https://protonmail.com) Secure Email.
I've written a knot module, which is functioning well. I've been asked
to add functionality to it that would inhibit any response from knot,
based upon the client's identity. I know the identity; I just need to
figure out how to inhibit a response.
I just noticed the rrl module, and I looked at what it does. I emulated
what I saw and set pkt->size = 0 and returned KNOTD_STATE_DONE.
When I ran host -a, it returned that no servers could be reached. When
I ran dig ANY, I ultimately got the same response, but dig complained
three times about receiving a message that was too short in length.
I really want NO message to be returned. How do I force this?
Hi Volker,
thank you for your question.
Your suggestion is almost correct, just a little correction:
knotc zone-freeze $ZONE
# wait for possibly still running events (check the logs manually or so...)
knotc zone-flush $ZONE # optionally with '-f' if zone synchronization is
disabled in the config
$EDITOR $ZONEFILE # you must increase the SOA serial if you make any
changes in the zonefile
knotc zone-reload $ZONE
knotc zone-thaw $ZONE
Reload before thaw - because after thaw, some events may start
processing, making the modified zonefile reload problematic.
BR,
Libor
On 5 Sep 2017 at 23:17, Volker Janzen wrote:
> Hi,
>
> I've setup knot to handle DNSSEC signing for a couple of zones. I like to update zonefiles on disk with an editor and I want to clarify which steps need to be performed to safely edit the zonefile on disk.
>
> I currently try this:
>
> knotc zone-freeze $ZONE
> knotc zone-flush $ZONE
> $EDITOR $ZONE
> knotc zone-thaw $ZONE
> knotc zone-reload $ZONE
>
> As far as I can see knot increases the serial on reload and slaves will be notified.
>
> Is this the correct command sequence?
>
>
> Regards
> Volker
>
>
> _______________________________________________
> knot-dns-users mailing list
> knot-dns-users(a)lists.nic.cz
> https://lists.nic.cz/cgi-bin/mailman/listinfo/knot-dns-users
Hi,
I've set up knot to handle DNSSEC signing for a couple of zones. I'd like to update zonefiles on disk with an editor, and I want to clarify which steps need to be performed to safely edit the zonefile on disk.
I currently try this:
knotc zone-freeze $ZONE
knotc zone-flush $ZONE
$EDITOR $ZONE
knotc zone-thaw $ZONE
knotc zone-reload $ZONE
As far as I can see knot increases the serial on reload and slaves will be notified.
Is this the correct command sequence?
Regards
Volker
Hi,
Recently, we noticed a few of our Knot slaves repeatedly doing zone transfers. After enabling zone-related logging, these messages helped narrow down the problem:
Aug 8 17:42:14 f2 knot[31343]: warning: [our.zone.] journal: unable to make free space for insert
Aug 8 17:42:14 f2 knot[31343]: warning: [our.zone.] IXFR, incoming, 1.2.3.4@53: failed to write changes to journal (not enough space provided)
These failures apparently caused the transfers to occur over and over. Not all the zones being served showed up in these messages, but I'm pretty sure that the ones with a high rate of change were more likely to do so. I do know there was plenty of disk space. A couple of the tunables looked relevant:
max-journal-db-size: we didn't hit this limit (used ~450M of 20G limit, the default)
max-journal-usage: we might have hit this limit. The default is 100M. I increased it a couple of times, but the problem didn't go away.
Eventually, we simply removed the journal database and restarted the server and the repeated transfers stopped. At first I suspected that it somehow was losing track of how much space was being allocated, but that's a flimsy theory: I don't really have any hard evidence and these processes had run at a high load for months without trouble. On reflection, hitting the max-journal-usage limit seems more likely. Given that:
1. Are the messages above indeed evidence of hitting the max-journal-usage limit?
2. Is there a way to see the space occupancy of each zone in the journal, so we might tune the threshold for individual zones?
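(For question 2, what I have in mind is a per-zone override along these lines; the zone name and value are only placeholders:)
zone:
  - domain: our.zone.
    max-journal-usage: 500M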
On the odd chance that there is a bug in this area: we are using a slightly older dev variant: a branch off 2.5.0-dev that has some non-standard, minimal EDNS0 client-subnet support we were interested in. The branch is: https://github.com/CZ-NIC/knot/tree/ecs-patch.
Thanks,
Chuck
Hello,
I apologise if I am writing to the wrong address; I was given it over the phone by your customer line.
I would be interested in a professional installation and configuration of your DNS servers at our company.
We are a company providing Internet services, and this would concern internal DNS resolvers for our customers
running on our own (probably virtualized) hardware.
It would be a paid, one-off installation and initial setup, possibly including some consultation
(unfortunately, I cannot afford your support plan).
So I would need to know whether you provide services of this kind (or whom I could contact), and at least
a rough estimate of the price for the work described above.
Thank you in advance for your time and feedback.
Best regards
Milan Černý
Head of Information Systems
Planet A, a.s.
operator of the AIM network
company offices:
U Hellady 4
140 00 Praha 4 - Michle
www.a1m.cz
+420 246 089 203 tel
+420 246 089 210 fax
+420 603 293 561 gsm
milan.cerny(a)a1m.cz
Hello,
I am a bit new to Knot DNS 2.5.3 and to DNSSEC as well.
I have successfully migrated from a non-DNSSEC BIND to Knot DNS, and it is working like a charm.
Now I would like to migrate one zone to DNSSEC.
Everything is working *except* getting the keys (ZSK/KSK) to publish at Gandi.
I followed this tutorial (in French, sorry): https://www.swordarmor.fr/gestion-automatique-de-dnssec-avec-knot.html which seems to work, BUT the way the DNSKEY is published has changed, and I haven't found a nice way to get it.
Any ideas?
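The closest I have got so far is printing the key material directly and pasting it into the registrar's interface; I'm not sure this is the intended way, so please treat these commands as a sketch:
keymgr domain.tld. ds
knotc zone-read domain.tld. domain.tld. DNSKEY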
Regards,
Xavier
Dear Knot Resolver users,
Knot Resolver 1.3.2 maintenance update has been released:
Security
--------
- fix possible opportunities to use insecure data from cache as keys
for validation
Bugfixes
--------
- daemon: check existence of config file even if rundir isn't specified
- policy.FORWARD and STUB: use RTT tracking to choose servers (#125, #208)
- dns64: fix CNAME problems (#203). It still won't work with policy.STUB.
- hints: better interpretation of hosts-like files (#204)
also, error out if a bad entry is encountered in the file
- dnssec: handle unknown DNSKEY/DS algorithms (#210)
- predict: fix the module, broken since 1.2.0 (#154)
Improvements
------------
- embedded LMDB fallback: update 0.9.18 -> 0.9.21
It is recommended to update from 1.3.x, and it's strongly recommended to
update from older versions, as older branches are no longer supported.
Full changelog:
https://gitlab.labs.nic.cz/knot/resolver/raw/v1.3.2/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.3.2.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.3.2.tar.xz.asc
Documentation:
http://knot-resolver.readthedocs.io/en/v1.3.2/
--Vladimir
Hello Knot developers,
Suppose I am running two Knot DNS instances. They're listening on
different interfaces, and slaving different sets of zones. If the
"storage" variable is the same for these two, then the two instances of
knotd will both try to write into storage/journal and storage/timers.
Is this safe to do? My understanding of LMDB is that a database can be
shared between different threads and processes because it uses locking.
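(If this is not safe, my fallback plan is to give each instance its own storage path via the template section, along these lines per instance; the path is just an example:)
template:
  - id: default
    storage: "/var/lib/knot/instance-a"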
Regards,
Anand
Hello Knot DNS developers,
I have an observation about newer versions of Knot which use the new
single LMDB-based journal.
Suppose I have 3 slave zones configured in my knot.conf. Let's call them
zone1, zone2 and zone3. Knot loads the zones from the master and writes
data into the journal. Now suppose I remove one zone (zone3) from the
config, and reload knot. The zone is no longer configured, and querying
knot for it returns a REFUSED response. So far all is according to my
expectation.
However, if I run "kjournalprint <path-to-journal> -z", I get:
zone1.
zone2.
zone3.
So the zone is no longer configured, but its data persists in the
journal. If I run "knotc -f zone-purge zone3." I get:
error: [zone3.] (no such zone found)
I'm told that I should have done the purge first, *before* removing the
zone from the configuration. However, I find this problematic for two reasons:
1. I have to remember to do this, and I'm not used to this modus
operandi; and
2. this is impossible to do on a slave that is configured automatically
from template files. On our slave servers, where we have around 5000
zones, the zones are configured by templating out a knot.conf. Adding
zones is fine, but if a zone is being deleted, it will just disappear
from knot.conf. We keep no state, and so I don't know which zone is
being removed, and cannot purge it before-hand.
Now, the same is kind of true for Knot < 2.4. But... there is one major
difference. Under older versions of Knot, zone data was written into
individual files, and journals were written into individual .db files. I
can run a job periodically that compares zones in knot.conf with files
on disk, and delete those files that have no matching zones in the
config. This keeps the /var/lib/knot directory clean.
But in newer versions of Knot, there is no way to purge the journal of
zone data once a zone is removed from the configuration. For an operator
like me, this is a problem. I would like "knotc zone-purge" to be able
to operate on zones that are no longer configured, and remove stale data
anyway.
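(For comparison, the kind of periodic cleanup I would like to be able to run against the shared journal looks roughly like this; it only works if zone-purge accepts unconfigured zones, and the journal path is a placeholder:)
for z in $(kjournalprint <path-to-journal> -z); do
    knotc zone-status "$z" >/dev/null 2>&1 || knotc -f zone-purge "$z"
done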
Hi Knot folks,
I just tried to view https://www.knot-dns.cz/ and it gave me an HTTP 404
error. After trying to reload it twice, I got the front page, but the
other parts of the site (documentation, download, etc) are all still
giving me HTTP 404 errors.
Regards,
Anand
Dear Knot DNS and Knot Resolver users,
in order to unite the git namespaces and make them more logical, we
have moved the repositories of Knot DNS and Knot Resolver under the
knot namespace.
The new repositories are located at:
Knot DNS
https://gitlab.labs.nic.cz/knot/knot-dns
Knot Resolver:
https://gitlab.labs.nic.cz/knot/knot-resolver
The old git:// URLs were kept the same:
git://gitlabs.labs.nic.cz/knot-dns.git
git://gitlabs.labs.nic.cz/knot-resolver.git
Sorry for any inconvenience it might have caused.
Cheers,
--
Ondřej Surý -- Technical Fellow
--------------------------------------------
CZ.NIC, z.s.p.o. -- Laboratoře CZ.NIC
Milesovska 5, 130 00 Praha 3, Czech Republic
mailto:ondrej.sury@nic.cz https://nic.cz/
--------------------------------------------
Am 09.07.2017 um 12:30 schrieb Christoph Lukas:
> Hello list,
>
> I'm running knot 2.5.2 on FreeBSD.
> In an attempt to resolve a recent semantic error in one of my zonefiles,
> the $storage/$zone.db (/var/db/knot/firc.de.db) file got lost.
> Meaning: I accidentally deleted it without a backup.
> At point of deletion, the .db file was 1.6 MB in size.
> The actual zone file was kept, the journal and DNSSEC keys untouched,
> the zone still functions without any issues.
>
> The zone is configured as such in knot.conf:
> zone:
> - domain: firc.de
> file: "/usr/local/etc/knot/zones/firc.de"
> notify: inwx
> acl: acl_inwx
> dnssec-signing: on
> dnssec-policy: rsa
>
>
> This raises the following questions:
>
> 1) What is actually in those .db files?
> 2) Are there any adverse effects to be expected now that I don't have
> the file / need to re-create it?
> 3) How can I re-create the file?
>
> Any answers will be greatly appreciated.
>
> With kind regards,
> Christoph Lukas
>
As answered in
https://lists.nic.cz/pipermail/knot-dns-users/2017-July/001160.html
those .db files are not required anymore.
I should have read the archive first ;)
With kind regards,
Christoph Lukas
Hi,
I am running Knot 2.5.2-1 on a Debian Jessie, all is good, no worries.
I am very pleased with Knot's simplicity and ease of configuration -
which are still readable as well!
I noticed recently that I am getting
knotd[9957]: notice: [$DOMAIN.] journal, obsolete exists, file '/var/lib/knot/zones/$DOMAIN.db'
every time I restart Knot. I get these for all the domains I have configured,
and there is one in particular providing my own .dyn. service :-) - so I
am a bit reluctant to just delete it.
But all the .db files have a fairly old timestamp (Feb 2017), and roughly
the same one across domains. At that time I was running just one
authoritative master instance, nothing fancy. lsof also doesn't report
any open files.
Can I just delete those files?
Cheers
Thomas
Hello knot,
I have recently started a long overdue migration to Knot 2.* and I have noticed that the server.workers config stanza is now split into three separate stanzas [server.tcp-workers, server.udp-workers & server.background-workers]. Although this is great for flexibility, it does make automation a little bit more difficult. With the 1.6 configuration I could easily say something like the following:
workers = $server_cpu_count - 2
This meant I would always have 2 CPU cores available for other processes, e.g. doc, tcpdump. With the new configuration I would need to do something like the following:
$available_workers = $server_cpu_count - 2
$udp_workers = $available_workers * 0.6
$tcp_workers = $available_workers * 0.3
$background_workers = $available_workers * 0.1
The above code lacks error detection and rounding corrections, which would add further complexity, and it potentially lacks the intelligence already available in Knot to better balance resources. As you have already implemented logic in Knot to ensure CPUs are correctly balanced, I wonder if you could add back a workers configuration to act as the upper bound used in the *-workers configuration, such that the *-workers defaults become:
"Default: auto-estimated optimal value based on the number of online CPUs, or the value set by `workers`, whichever is lower"
Thanks
John
Hi,
I just upgraded my Knot DNS to the newest PPA release 2.5.1-3, after
which the server process refuses to start. Relevant syslog messages:
Jun 15 11:19:41 vertigo knotd[745]: error: module, invalid directory
'/usr/lib/x86_64-linux-gnu/knot'
Jun 15 11:19:41 vertigo knotd[745]: 2017-06-15T11:19:41 error: module,
invalid directory '/usr/lib/x86_64-linux-gnu/knot'
Jun 15 11:19:41 vertigo knotd[745]: critical: failed to open
configuration database '' (invalid parameter)
Jun 15 11:19:41 vertigo knotd[745]: 2017-06-15T11:19:41 critical: failed
to open configuration database '' (invalid parameter)
Could this have something to do with the following change?
knot (2.5.1-3) unstable; urgency=medium
.
* Enable dnstap module and set default moduledir to multiarch path
Antti
Hi there,
I'm having some issues configuring dnstap. I'm using Knot version 2.5.1,
installed via the `knot` package on Debian 3.16.43-2. As per this
documentation
<https://www.knot-dns.cz/docs/2.5/html/modules.html#dnstap-dnstap-traffic-lo…>,
I've added the following lines to my config file:
mod-dnstap:
- id: capture_all
sink: "/etc/knot/capture"
template:
- id: default
global-module: mod-dnstap/capture_all
But when starting knot (e.g. by `sudo knotc conf-begin`), I get the message:
error: config, file 'etc/knot/knot.conf', line 20, item 'mod-dnstap', value
'' (invalid item)
error: failed to load configuration file '/etc/knot/knot.conf' (invalid
item)
I also have the same setup on an Ubuntu 16.04.1 running Knot version
2.4.0-dev, and it works fine.
Any idea what might be causing the issue here? Did the syntax for
mod-dnstap change or something? Should I have installed from source? I do
remember there being some special option you needed to compile a dependency
with to use dnstap when I did this the first time, but I couldn't find
documentation for it when I looked for it.
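(From memory, the source build went roughly like this, but please treat the flag as approximate since I couldn't find it documented again:)
./configure --enable-dnstap
make && sudo make install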
Thanks!
-Sarah
Hi,
after upgrading to 2.5.1, the output of knotc zone-status shows strange
timestamps for refresh and expire:
[example.net.] role: slave | serial: 1497359235 | transaction: none |
freeze: no | refresh: in 415936h7m15s | update: not scheduled |
expiration: in 416101h7m15s | journal flush: not scheduled | notify: not
scheduled | DNSSEC resign: not scheduled | NSEC3 resalt: not scheduled |
parent DS query: not schedule
However, the zone is refreshed within the correct interval, so it seems
it's just a display issue. Is this something specific to our setup?
Regards
André
Dear Knot Resolver users,
CZ.NIC is proud to announce the release of Knot Resolver 1.3.0.
The biggest feature of this release is support for DNSSEC validation
in forwarding mode, a feature many people were eagerly awaiting.
We have also squeezed in a refactoring of AD flag handling and several other
bugfixes. 1.3.0 is currently the recommended release to run on your
recursive nameservers.
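As a minimal illustration (the address is only a placeholder), forwarding with validation is enabled in kresd's configuration roughly like this, with the previous non-validating behaviour available by using policy.STUB instead of policy.FORWARD:
policy.add(policy.all(policy.FORWARD('192.0.2.1')))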
Here's the 1.3.0 changelog:
Security
--------
- Refactor handling of AD flag and security status of resource records.
In some cases it was possible for secure domains to get cached as
insecure, even for a TLD, leading to disabled validation.
It also fixes answering with non-authoritative data about nameservers.
Improvements
------------
- major feature: support for forwarding with validation (#112).
The old policy.FORWARD action now does that; the previous non-validating
mode is still available as policy.STUB, except that it also uses caching (#122).
- command line: specify ports via @ but still support # for compatibility
- policy: recognize 100.64.0.0/10 as local addresses
- layer/iterate: *do* retry repeatedly if REFUSED, as we can't yet easily
retry with other NSs while avoiding retrying with those who REFUSED
- modules: allow changing the directory where modules are found,
and do not search the default library path anymore.
Bugfixes
--------
- validate: fix insufficient caching for some cases (relatively rare)
- avoid putting "duplicate" record-sets into the answer (#198)
Full changelog:
https://gitlab.labs.nic.cz/knot/resolver/raw/v1.3.0/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.3.0.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.3.0.tar.xz.asc
Documentation:
http://knot-resolver.readthedocs.io/en/latest/
Cheers,
--
Ondřej Surý -- Technical Fellow
--------------------------------------------
CZ.NIC, z.s.p.o. -- Laboratoře CZ.NIC
Milesovska 5, 130 00 Praha 3, Czech Republic
mailto:ondrej.sury@nic.cz https://nic.cz/
--------------------------------------------