Hi,
for the transition of a TLD I need to import the current provider's KSK
into my zone. I use the "keymgr import-pub" command for this. I have
done that a few times in the past and it has always worked well.
I have now installed the latest version of Knot (3.0.10) and followed
the same procedure, but after importing the KSK the zone can no longer
be signed. It seems that Knot doesn't recognize the imported key as a
"public-only" key: it throws an error and complains that the private
key could not be loaded.
The zone's keys (.example) before the import of the KSK:
# keymgr example list
0b94a3f9fef3ae531fc5ee1334ddd2876db7cd9a ksk=yes zsk=no tag=12595
algorithm=7 size=2048 public-only=no pre-active=0 publish=1650495677
ready=1650495677 active=1650659051 retire-active=0 retire=0
post-active=0 revoke=0 remove=0
13cc082655ddf7160787ef945ad7edb6406bb70e ksk=no zsk=yes tag=05477
algorithm=7 size=1024 public-only=no pre-active=0 publish=1650495677
ready=0 active=1650495677 retire-active=0 retire=0 post-active=0
revoke=0 remove=0
I imported the KSK with the following command:
# keymgr example import-pub /etc/knot/public.key
2c135e77b7f48475a837ad0d28a9459f0e7ce621
OK
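For reference, /etc/knot/public.key is just the provider's DNSKEY record
in BIND .key format, along these lines (the base64 key data is shortened
here, so treat the record below only as an illustration):

example. 3600 IN DNSKEY 257 3 7 AwEAAb...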
The zone's keys (.example) after the import of the KSK:
# keymgr example list
0b94a3f9fef3ae531fc5ee1334ddd2876db7cd9a ksk=yes zsk=no tag=12595
algorithm=7 size=2048 public-only=no pre-active=0 publish=1650495677
ready=1650495677 active=1650659051 retire-active=0 retire=0
post-active=0 revoke=0 remove=0
13cc082655ddf7160787ef945ad7edb6406bb70e ksk=no zsk=yes tag=05477
algorithm=7 size=1024 public-only=no pre-active=0 publish=1650495677
ready=0 active=1650495677 retire-active=0 retire=0 post-active=0
revoke=0 remove=0
2c135e77b7f48475a837ad0d28a9459f0e7ce621 ksk=yes zsk=no tag=35421
algorithm=7 size=2048 public-only=yes pre-active=0 publish=1650660072
ready=0 active=0 retire-active=0 retire=0 post-active=0 revoke=0 remove=0
The imported key (tag 35421) has the flag "public-only=yes", as expected.
But when I now sign the zone, the log shows these errors:
Apr 22 20:43:24 lab-nic knotd[2831]: info: [example.] control, received
command 'zone-sign'
Apr 22 20:43:24 lab-nic knotd[2831]: info: [example.] DNSSEC, dropping
previous signatures, re-signing zone
Apr 22 20:43:24 lab-nic knotd[2831]: info: [example.] DNSSEC, key, tag
12595, algorithm RSASHA1_NSEC3_SHA1, KSK, public, active
Apr 22 20:43:24 lab-nic knotd[2831]: info: [example.] DNSSEC, key, tag
35421, algorithm RSASHA1_NSEC3_SHA1, KSK, public, active+
Apr 22 20:43:24 lab-nic knotd[2831]: info: [example.] DNSSEC, key, tag
5477, algorithm RSASHA1_NSEC3_SHA1, public, active
Apr 22 20:43:24 lab-nic knotd[2831]: error: [example.] DNSSEC, failed to
load private keys (not exists)
Apr 22 20:43:24 lab-nic knotd[2831]: error: [example.] DNSSEC, failed to
load keys (not exists)
Apr 22 20:43:24 lab-nic knotd[2831]: info: [example.] DNSSEC, next
signing at 2022-04-22T21:43:24+0000
Apr 22 20:43:24 lab-nic knotd[2831]: error: [example.] zone event
'DNSSEC re-sign' failed (not exists)
The imported key should not have the "active" flag:
info: [example.] DNSSEC, key, tag 35421, algorithm RSASHA1_NSEC3_SHA1,
KSK, public, active+
It seems to me that the imported key is no longer treated as a
"public-only" key, so Knot goes looking for the corresponding private
key, which of course fails.
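I have not dug into the KASP database itself, but assuming the default
layout (private key PEMs stored under <storage>/keys/keys/, named by key
ID), a check like this should show whether any private key material
exists for the imported key:

# assuming the default storage /var/lib/knot; adjust to the kasp-db path in use
ls /var/lib/knot/keys/keys/ | grep 2c135e77b7f48475a837ad0d28a9459f0e7ce621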
I attached an strace output of the signing operation, but it doesn't
seem very helpful because the signing command itself doesn't fail.
Thanks,
Thomas
Just to clarify some semantics of the config format.
Is each individual 'remote' ID considered to be a single server,
regardless of the number of addresses it has?
For the notify case, it looks like Knot sends to each address of a
remote ID sequentially and stops as soon as one replies. That suggests
the above semantics, but I wanted to make sure I'm interpreting this
behaviour correctly before I complicate my config by adding a lot more
remotes. I am currently treating each remote as an organisation,
lumping all of that organisation's servers together under a single ID.
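To make the question concrete: this is the style I'm using now (names
and addresses made up), one remote per organisation:

remote:
  - id: org-a
    address: [ 192.0.2.1, 192.0.2.2, 2001:db8::1 ]

versus what I would switch to if each remote is really meant to model a
single server:

remote:
  - id: org-a-ns1
    address: 192.0.2.1
  - id: org-a-ns2
    address: 192.0.2.2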
I'm trying to find a way to poll for any zones where knot is currently
waiting on DS submission to the parent.
I'm aware of the structured logging sent to systemd-journald but I see
this as not particularly useful for monitoring, as the event could be
missed by a dead daemon, bug in code, etc. I'd much prefer to be able
to actively monitor states by polling.
It looks like the only way I can do that right now is to run `keymgr
list` and analyze the output. If I'm reading the documentation
correctly, all I need to look for is a key that is `ksk=yes`, `ready
!= 0`, and `active = 0`.
Does that seem correct? Am I missing something simpler? :)
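For concreteness, this is the kind of check I have in mind, as a rough
shell sketch over the keymgr output (the zone name is just an example):

ZONE=example.com   # whichever zone is being checked
keymgr "$ZONE" list | awk '/ksk=yes/ && / ready=[1-9]/ && / active=0 / {
    print "DS submission pending for key", $1
}'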
I have found a situation where I think the Knot behaviour around
algorithm rolls could be better. It's one of those "prevent the user
from hurting themselves" situations, in which I would have hurt myself
if this had involved anything other than an old, unused zone. :)
The suggestion is a simple one: when doing an automated algorithm
roll, the KSK submission check should remain negative until the
parental DS set exactly matches the set requested by the published CDS
set (substitute DNSKEY/CDNSKEY as appropriate).
In a situation where CDS scanning is not being done by my parent, I
slipped up and only added the new DS record to the parent, leaving the
old algorithm's DS record also present. Knot did its submission
check, saw the new DS record, and happily continued on with the
algorithm roll. This eventually led to a situation that was in
violation of RFC 6840 § 5.11 [0]:
A signed zone MUST include a DNSKEY for each algorithm present in
the zone's DS RRset and expected trust anchors for the zone.
I ended up with both the new and the old DS in the parent, but only the
new DNSKEY [1]. This could be avoided by extending the logic of the KSK
submission check. Besides saving users from themselves, it would also
help if the parent had a bug in its CDS processing implementation and
failed to remove the old DS.
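As a rough sketch of the check I have in mind (server names are
placeholders, and keytag/digest formatting differences between tools are
ignored; the point is the set comparison, not the exact commands):

# DS set currently published by the parent
dig +short @parent-ns example.org DS  | sort > parent-ds
# DS set requested by the child via CDS
dig +short @child-ns  example.org CDS | sort > requested-ds
# only treat the submission as complete when the two sets are identical
diff -q parent-ds requested-ds && echo "parent DS set matches the CDS set"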
[0]: <https://datatracker.ietf.org/doc/html/rfc6840#section-5.11>
[1]: <https://dnsviz.net/d/dns-oarc.org/Yg7ZDw/dnssec/>
Hello,
what is wrong with my policy section? I can't find anything about this
in the docs. Am I missing a parameter?
The warning is:
Feb 13 12:33:05 dns1 knotd[184636]: warning: config, policy[rsa2k].nsec3-iterations defaults to 10, since version 3.2 the default becomes 0
Feb 13 12:33:05 dns1 knotd[184636]: warning: config, policy[ececc1].nsec3-iterations defaults to 10, since version 3.2 the default becomes 0
Feb 13 12:33:05 dns1 knotd[184636]: warning: config, policy[ececc2].nsec3-iterations defaults to 10, since version 3.2 the default becomes 0
My policy:
policy:
  - id: rsa2k
    algorithm: RSASHA256
    ksk-size: 4096
    zsk-size: 2048
    nsec3: on
  - id: ececc1
    algorithm: ECDSAP256SHA256
    nsec3: on
  - id: ececc2
    algorithm: ECDSAP384SHA384
    nsec3: on
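If I understand the warning correctly, it only points out that the
implicit default changes in 3.2. Would setting the value explicitly be
the intended way to get rid of it? E.g. for the first policy:

policy:
  - id: rsa2k
    algorithm: RSASHA256
    ksk-size: 4096
    zsk-size: 2048
    nsec3: on
    nsec3-iterations: 0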
--
mit freundlichen Grüßen / best regards
Günther J. Niederwimmer
Hi,
I've got a staging environment running, with two signers signing a
staging version of the .is zone (~115k records).
The staging servers are configured with 2 GB RAM and 4 CPU cores,
running on FreeBSD 12.2.
We've seen Knot crash because the server runs out of
memory. The zone is configured with:
zonefile-sync: -1
zonefile-load: difference-no-serial
journal-content: all
One of the signers, after an hour of running, shows 215 MB of resident
memory used.
The other signer, which has been running for a whole day, shows 1582 MB
of resident memory.
I attached a screenshot of memory usage over time; the graph shows that
the amount of memory used increases very suddenly.
I assume the servers are using a lot of memory for the journals, but I'd
like to understand why memory use jumps so suddenly, and what memory
requirements I should expect.
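If the journal really is the big consumer here, am I right that these
are the knobs that bound it (option names as I read them from the docs;
the values and the zone name below are only examples)?

database:
  journal-db-max-size: 512M

zone:
  - domain: example.is
    journal-content: all
    journal-max-usage: 256M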
.einar
Hi,
I have a Knot DNS setup with the DNS cookies module enabled, but when I
check with dnsviz.net I always get:
The server appears to support DNS cookies but did not return a COOKIE
option.
Relevant parts of my knot.conf:
template:
  - id: default
    storage: "/var/lib/knot"
    dnssec-signing: on
    dnssec-policy: rsa2048
    global-module: [ "mod-cookies", "mod-rrl/default" ]

mod-rrl:
  - id: default
    rate-limit: 200
    slip: 2

zone:
  - domain: mydomain.de
    file: "/etc/knot/zones/mydomain.de.zone"
    notify: secondary
    acl: acl_secondary
    zonefile-load: difference
I wondered whether it might be the slip: 2 setting, but changing it to 1
didn't make any difference.
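In case it is useful, this is how I've been testing from the outside
(kdig from knot-utils; a plain dig +cookie should work the same; the
address below is just a placeholder for my server):

kdig +cookie @203.0.113.1 mydomain.de SOA
# then check whether the OPT record of the response carries a COOKIE option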
Do you guys see anything obvious causing this "issue"?
Thanks for your time
Juergen