Hello!
I have a Knot 3.2.5 server running here which, for most zones, acts as a
bump-in-the-wire signer, and it's doing exactly what I expect it to do.
The same server carries a few secondary zones which are not signed, and I
notice that when Knot transfers these zones in, it doesn't NOTIFY its
secondaries, something which works fine for DNSSEC signed zones.
The following configuration is in place:
remote:
  - id: pdns
    address: 192.168.25.45@53
    key: dsupload
    block-notify-after-transfer: on   # <-------
    automatic-acl: on

template:
  - id: default
    zonefile-load: difference
    file: "%s"
    serial-policy: dateserial
    master: pdns
    catalog-role: member
    catalog-zone: katz1
    acl: [ xfr, notify_from_pdns, xfer_to_bind ]
    notify: [ s1, s2, s3 ]

policy:
  - id: manualHSM
    manual: on
    keystore: thales
    cds-cdnskey-publish: rollover
    ksk-submission: ds_checker
    ds-push: pdns

zone:
  - domain: sig.example
    dnssec-policy: manualHSM
    dnssec-signing: on
  - domain: notsig.example
    dnssec-signing: off
When sig.example is transferred in, Knot signs it, NOTIFYs its secondaries
(s1--s3), they XFR the zone and all's well.
When the unsigned notsig.example is transferred in, the logs indicate Knot is
seeing the new serial, and that's it; the secondaries are not NOTIFYd. (I can
manually `knotc notify', but that's not the point.)
Setting `block-notify-after-transfer: off' on the remote remediates this. Knot
then does NOTIFY its secondaries for the unsigned zone (and for the signed
zone).
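For reference, the workaround amounts to flipping a single flag on the remote; a sketch restating the configuration above with only that one change:

```yaml
remote:
  - id: pdns
    address: 192.168.25.45@53
    key: dsupload
    block-notify-after-transfer: off   # NOTIFY secondaries even for unsigned zones
    automatic-acl: on
```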
The documentation states:
"When incoming AXFR/IXFR from this remote (as a primary server),
suppress sending NOTIFY messages to all configured secondary servers."
However, even if I switch it off (i.e. enable notification), I do not see a NOTIFY
at the moment Knot initially transfers in the as-yet-unsigned zone; only after the
zone has subsequently been signed does the NOTIFY go out.
Is this behavior expected, and have I interpreted it correctly?
Thanks & best regards,
-JP
I have a zone for which I'd like to ensure an admin cannot mistakenly kick off
a KSK rollover, so I am considering configuring its dnssec-policy to one with
`manual: on', which prevents even a `knotc zone-key-rollover' on it. I
have experimented with switching `manual: on' to `manual: off', and the idea
seems to work. I have also apparently successfully been able to alter
`ksk-lifetime', and have not noticed anything going wrong.
Based on this, I wish to know if it is considered safe to alter many (all?) of
a policy's settings, as long as neither algorithm nor key sizes are changed, and
whether it is safe to swap out the policy itself (i.e. also to assign a
differently named policy to a zone).
ksk-lifetime, delays, rrsig-lifetimes, ksk-submission, etc.: can all these be
changed without breaking signing of a zone?
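The flip described above could be kept explicit by maintaining two otherwise-identical policies and pointing the zone at one or the other (a sketch; the names `locked', `unlocked' and the zone are hypothetical, and whether this is safe is exactly the question being asked):

```yaml
policy:
  - id: locked       # hypothetical name; rollover commands are refused
    manual: on
  - id: unlocked     # hypothetical name; assigned only while a roll is intended
    manual: off

zone:
  - domain: example.com        # hypothetical zone
    dnssec-signing: on
    dnssec-policy: locked      # switch to 'unlocked' only for the rollover
```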
Thank you & regards,
-JP
Hello!
I have here a catalog zone with roughly 230 member zones in it, and I
occasionally see the following warning/error in the log:
2023-07-12T18:01:17+0200 warning: [k-catalog.] failed to update zone file (operation not permitted)
2023-07-12T18:01:17+0200 error: [k-catalog.] zone event 'flush' failed (operation not permitted)
The catalog itself appears to work correctly; it's transferred to secondary
BIND servers and they correctly process member zones.
template:
  - id: default
    storage: "..."
    zonefile-load: difference
    file: "%s"
    serial-policy: dateserial
    master: pdns
    catalog-role: member
    catalog-zone: k-catalog
    acl: [ xfr, notify_from_pdns, xfer_to_bind ]
  - id: catzonetemplate
    catalog-role: generate
    acl: xfer_to_bind

zone:
  - domain: k-catalog
    semantic-checks: off
    template: catzonetemplate
    journal-content: none
    acl: [ xfr, xfer_to_bind ]
While pasting the configuration it occurs to me that the cause might be the
lack of a 'backing' file for the catalog. Is that the problem? Is it even
possible for a catalog-role: generate zone to have a file?
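If the missing backing file really is the cause (an assumption, not a confirmed diagnosis), one way to stop Knot from ever attempting the flush might be to disable zone file synchronization for the catalog zone:

```yaml
zone:
  - domain: k-catalog
    template: catzonetemplate
    semantic-checks: off
    journal-content: none
    zonefile-sync: -1    # never flush this zone to disk, so no 'flush' event can fail
    acl: [ xfr, xfer_to_bind ]
```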
Thanks for your help.
-JP
Hi Knotty people,
I've started upgrading my infrastructure to Debian 12, which comes with Knot
3.2.6 (instead of 3.0.5), and since the upgrade my catalog zone
configuration doesn't seem to work anymore.
I have this in knot.conf:
zone:
  - domain: dxld.catalog
    catalog-role: interpret
    catalog-template: dxld-master
    template: dxld-master
dxld.catalog gets loaded properly, `knotc zone-read dxld.catalog`:
[dxld.catalog.] dxld.catalog. 0 SOA ns0.dxld.at. hostmaster.dxld.at. 2023070601 900 300 86400 300
[dxld.catalog.] version.dxld.catalog. 0 TXT "2"
[dxld.catalog.] zones.dxld.catalog. 0 PTR dxld.at.
[dxld.catalog.] zones.dxld.catalog. 0 PTR dxld.net.
[dxld.catalog.] zones.dxld.catalog. 0 PTR darkboxed.org.
but the zones pointed to don't get instantiated (as seen by `knotc
zone-status`). Any ideas what could have changed to break this?
Thanks,
--Daniel
Hey,
after an (unattended) upgrade to 3.2.7, one of my zones (the one that does
rapid KSK rollovers) failed to load. Trying to reload emits these errors in
the log:
info: [83.204.91.in-addr.arpa.] zone file parsed, serial 1622013488
error: [83.204.91.in-addr.arpa.] failed to apply journal changes, serial 1622013488 -> 1686209286 (loop detected)
warning: [83.204.91.in-addr.arpa.] failed to load journal (loop detected)
info: [83.204.91.in-addr.arpa.] zone not found
error: [83.204.91.in-addr.arpa.] zone event 'load' failed (not exists)
Calling `kjournalprint 83.204.91.in-addr.arpa` yields 600 lines of journal
full of both additions and deletions, nothing seems particularly wrong. Is
there anything I should try before purging the journal and starting from
scratch?
There are other zones on the same server with similar config that just work
normally, so I guess this is somehow related to the size of the journal for
this zone, which rotates DNSSEC keys very often.
--
Cheers,
Ondřej Caletka
The 3.0 documentation for catalog zones says the following:
«The difference is that standard DNS queries to a catalog zone are
answered with REFUSED as though the zone doesn’t exist, unless
querying over TCP from an address with transfers enabled by ACL.»
This seems like an odd requirement, and it breaks interoperability
with other vendors' authoritative servers. BIND, for example, does
not send the SOA check for a zone transfer over TCP, and so it's
impossible to use a Knot primary and BIND secondary with catalog
zones.
Is there some way to work around this?
Hi,
Yesterday we got hit by the per-zone journal becoming full [1].
As a result we're looking into how to monitor journal status so we get a
warning when we're near the limits, but I can't find a way to report the
current journal usage (global and per-zone).
Any ideas?
[1] https://gitlab.nic.cz/knot/knot-dns/-/issues/842
Hi,
Just upgraded knot-resolver on Ubuntu 22.04. It installs and runs, but it
looks for the control socket in the wrong place: the socket is at
/run/knot-resolver/control@1, while the new version expects it at
/run/knot-resolver/control/1.
systemctl status is available here: https://pastebin.com/56g22e2a
Thanks for the great software!
Mike Wright
Hello!
Knot 3.2.0 with a Thales HSM, configured this way (btw, I am not obfuscating
addresses or zone names -- these are actual testing names):
keystore:
  - id: thales
    backend: pkcs11
    config: "pkcs11:token=XX;pin-value=XXX /opt/nfast/toolkits/pkcs11/libcknfast.so"
    key-label: on

policy:
  - id: manualHSM
    keystore: thales
    single-type-signing: off
    manual: on

zone:
  - domain: tt05
    dnssec-signing: on
    dnssec-policy: manualHSM
    master: pdns
    acl: [ xfr, notify_from_pdns ]
The zone `tt05' exists on the primary and can be transferred, as the following
logs will show.
I start off by generating a KSK and a ZSK, and verify that the keys are
actually on the HSM:
$ keymgr tt05 generate algorithm=8 size=2048 ksk=yes zsk=no
579f877d1739efb7bcf551e41c8777e965f8416f
$ keymgr tt05 generate algorithm=8 size=2048 ksk=no zsk=yes
eb8ef53ebffbd9950bfa914f7f2b0f1cd43bbe63
$ cklist -n | grep tt05
CKA_LABEL "tt05. KSK"
CKA_LABEL "tt05. ZSK"
I now reload Knot, and at this point I am actually expecting the server to
"see" the new zone, get the keys, perform the transfer (XFR) and sign the zone.
But none of that happens:
$ knotc reload
Reloaded
2023-02-09T12:10:10+0100 info: [tt05.] AXFR, incoming, remote 192.168.33.31@53, started
2023-02-09T12:10:10+0100 info: [tt05.] AXFR, incoming, remote 192.168.33.31@53, finished, 0.00 seconds, 3 messages, 377 bytes
2023-02-09T12:10:10+0100 info: [tt05.] DNSSEC, key, tag 38930, algorithm RSASHA256, KSK, public, active
2023-02-09T12:10:10+0100 info: [tt05.] DNSSEC, key, tag 511, algorithm RSASHA256, public, active
2023-02-09T12:10:10+0100 error: [tt05.] DNSSEC, failed to load private keys (not exists)
2023-02-09T12:10:10+0100 error: [tt05.] DNSSEC, failed to load keys (not exists)
2023-02-09T12:10:10+0100 info: [tt05.] DNSSEC, next signing at 2023-02-09T13:10:10+0100
2023-02-09T12:10:10+0100 error: [tt05.] refresh, failed (not exists)
2023-02-09T12:10:10+0100 error: [tt05.] zone event 'refresh' failed (not exists)
In the above, I don't understand why it's failed to load the keys. My
_assumption_ is that the server has enumerated the keys from the HSM but did
that before the two keys for this zone were created. Invoking `knotc
zone-keys-load' doesn't alter the situation.
I do understand the 'refresh' failing, as the zone tt05 does not yet exist on
this Knot secondary.
So I initiate a zone transfer:
$ knotc zone-retransfer tt05
OK
2023-02-09T12:11:04+0100 info: [tt05.] control, received command 'zone-retransfer'
2023-02-09T12:11:04+0100 info: [tt05.] AXFR, incoming, remote 192.168.33.31@53, started
2023-02-09T12:11:04+0100 info: [tt05.] AXFR, incoming, remote 192.168.33.31@53, finished, 0.00 seconds, 3 messages, 377 bytes
2023-02-09T12:11:04+0100 info: [tt05.] DNSSEC, key, tag 38930, algorithm RSASHA256, KSK, public, active
2023-02-09T12:11:04+0100 info: [tt05.] DNSSEC, key, tag 511, algorithm RSASHA256, public, active
2023-02-09T12:11:04+0100 error: [tt05.] DNSSEC, failed to load private keys (not exists)
2023-02-09T12:11:04+0100 error: [tt05.] DNSSEC, failed to load keys (not exists)
2023-02-09T12:11:04+0100 info: [tt05.] DNSSEC, next signing at 2023-02-09T13:11:04+0100
2023-02-09T12:11:04+0100 error: [tt05.] refresh, failed (not exists)
2023-02-09T12:11:04+0100 error: [tt05.] zone event 'refresh' failed (not exists)
It is clear that the transfer succeeds (the logs on the primary corroborate
this), and knot apparently knows the correct keys to use for the zone.
Why is it not signing it?
$ knotc zone-sign tt05
OK
2023-02-09T12:11:38+0100 info: [tt05.] control, received command 'zone-sign'
2023-02-09T12:11:38+0100 info: [tt05.] DNSSEC, dropping previous signatures, re-signing zone
2023-02-09T12:11:38+0100 error: [tt05.] zone event 're-sign' failed (invalid parameter)
I now restart the server:
# <restart knotd>
2023-02-09T12:12:16+0100 info: Knot DNS 3.2.0 starting
2023-02-09T12:12:16+0100 info: loaded configuration file '/etc/knot.conf', mapsize 500 MiB
2023-02-09T12:12:16+0100 info: using UDP reuseport, incoming TCP Fast Open
2023-02-09T12:12:16+0100 info: binding to interface 10.24.34.16@5353
2023-02-09T12:12:16+0100 info: changed directory to /
2023-02-09T12:12:16+0100 info: loading 7 zones
2023-02-09T12:12:16+0100 info: [tt05.] zone will be loaded
2023-02-09T12:12:16+0100 info: starting server
2023-02-09T12:12:18+0100 info: [tt05.] failed to parse zone file '/var/zones/tt05' (not exists)
Here again, I understand it cannot parse the zone, because the transfer hasn't
actually been committed to disk.
So I manually transfer:
$ knotc zone-retransfer tt05
OK
2023-02-09T12:13:23+0100 info: [tt05.] control, received command 'zone-retransfer'
2023-02-09T12:13:23+0100 info: [tt05.] AXFR, incoming, remote 192.168.33.31@53, started
2023-02-09T12:13:23+0100 info: [tt05.] AXFR, incoming, remote 192.168.33.31@53, finished, 0.00 seconds, 3 messages, 377 bytes
2023-02-09T12:13:23+0100 info: [tt05.] DNSSEC, key, tag 38930, algorithm RSASHA256, KSK, public, active
2023-02-09T12:13:23+0100 info: [tt05.] DNSSEC, key, tag 511, algorithm RSASHA256, public, active
2023-02-09T12:13:23+0100 info: [tt05.] DNSSEC, signing started
2023-02-09T12:13:23+0100 info: [tt05.] DNSSEC, successfully signed
2023-02-09T12:13:23+0100 info: [tt05.] DNSSEC, next signing at 2023-02-23T11:13:24+0100
2023-02-09T12:13:23+0100 info: [tt05.] refresh, remote 192.168.33.31@53, zone updated, 0.44 seconds, serial none -> 2023010100, remote serial 2023010100, expires in 604800 seconds
2023-02-09T12:13:23+0100 info: [tt05.] zone file updated, serial 2023010100
And now the zone is signed.
Is there some way to 'streamline' this? :-) Or am I just doing something wrong
or being too impatient?
Best regards,
-JP
I note that the key label is not set when Knot generates new keys via PKCS#11.
Invoking `p11tool --list-all' shows a key as
Object 449:
URL: pkcs11:model=;manufacturer=nCipher%20Corp.%20Ltd;serial=xxx;\
token=YYY;\
id=%04%66%D0%9C%0D%9E%24%D9%79%0A%17%D3%5D%A0%CC%5A%3F%E2%A3%26;\
type=public
Type: Public key (RSA-2048)
Label:
ID: 04:66:d0:9c:0d:9e:24:d9:79:0a:17:d3:5d:a0:cc:5a:3f:e2:a3:26
The ID is the one that `keymgr list' displays (with colons in it), but the
label is empty.
Is this by design? Would it be possible for Knot to actually set the label
(e.g. zone name - key type: example.com-ksk)?
Best regards,
-JP
Hello,
Since I upgraded to 3.2.2-cznic.1~bullseye, my scripts using knsupdate
fail on every machine they run on.
I can reproduce this with gdb; here is the trace I get:
Starting program: /usr/bin/knsupdate -k
/etc/letsencrypt.sh/hooks/tsig.key /tmp/tmp.17EBRdeI8j
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Program received signal SIGSEGV, Segmentation fault.
0x000055555556161c in net_send (net=net@entry=0x7ffffffcbb80,
buf=0x555555592a10 "\345\327(", buf_len=196)
at utils/common/netio.c:587
587 utils/common/netio.c: No such file or directory.
(gdb) where
#0 0x000055555556161c in net_send (net=net@entry=0x7ffffffcbb80,
buf=0x555555592a10 "\345\327(", buf_len=196)
at utils/common/netio.c:587
#1 0x000055555555adee in pkt_sendrecv
(params=params@entry=0x7ffffffcbdd0) at utils/knsupdate/knsupdate_exec.c:456
#2 0x000055555555ba01 in cmd_send (lp=<optimized out>,
params=0x7ffffffcbdd0)
at utils/knsupdate/knsupdate_exec.c:851
#3 0x000055555555b3e6 in knsupdate_process_line
(line=line@entry=0x5555555755e0 "send",
params=params@entry=0x7ffffffcbdd0) at
utils/knsupdate/knsupdate_exec.c:498
#4 0x000055555555b5ed in knsupdate_process_line (params=0x7ffffffcbdd0,
line=0x5555555755e0 "send")
at utils/knsupdate/knsupdate_exec.c:486
#5 process_lines (params=params@entry=0x7ffffffcbdd0,
input=input@entry=0x5555555910f0)
at utils/knsupdate/knsupdate_exec.c:527
#6 0x000055555555bd3e in knsupdate_exec
(params=params@entry=0x7ffffffcbdd0)
at utils/knsupdate/knsupdate_exec.c:575
#7 0x0000555555559dc9 in main (argc=<optimized out>,
argv=0x7fffffffe548) at utils/knsupdate/knsupdate_main.c:35
The nsupdate script (/tmp/tmp.17EBRdeI8j) is here (with the TXT data truncated):
server 10.42.42.21
zone durel.eu.
origin durel.eu.
ttl 600
add _acme-challenge.ns.durel.eu. 600 in TXT "qLH_KkbQ_IUVr[...]7rs-iUE"
send
quit
As I can reproduce this with any TSIG key, I can provide a core dump if you
need it.
--
Bastien Durel
Good morning,
In Knot 3.2.0 the rrsig-refresh default changed; excerpt from the changelog:
knotd: default value for 'policy.rrsig-refresh' is propagation delay +
zone maximum TTL
I'd like to understand the rationale behind this change and whether or
not we should tune this parameter in our deployment.
We currently have monitoring in place to ensure that we always serve
valid signatures. In my understanding, with the pre-3.2.0 defaults of
rrsig-refresh: 7d and rrsig-lifetime: 14d, we always ended up with
signatures that were still valid for at least 7 days.
the new defaults, signatures might be refreshed way closer to their
expiry date. This makes me a bit uneasy: if there are issues with
signing, it gives us hardly any time to react and fix them
before the current signatures expire.
I assume setting rrsig-refresh explicitly to 7d would restore the old
behavior, but I'm wondering if this is somehow bad practice and if we
are overly paranoid with our monitoring.
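Restoring the old margin explicitly would be a one-line policy change (a sketch; the policy name is hypothetical, and whether this is good practice is exactly what is being asked):

```yaml
policy:
  - id: mypolicy
    rrsig-lifetime: 14d
    rrsig-refresh: 7d    # pin the pre-3.2.0 fixed refresh margin
```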
How do other people handle this? Are there any downsides of setting a
higher value of rrsig-refresh that we are not aware of?
Regards
André
Hello,
I tried to upgrade to Knot 3.2 using the Debian packages
from https://deb.knot-dns.cz/knot-latest bullseye/main, but the server
does not use my HSM anymore. All zones fail with:
Aug 22 14:38:13 arrakeen knotd[1285865]: info: [durel.org.] zone file parsed, serial 2021120479
Aug 22 14:38:13 arrakeen knotd[1285865]: error: [durel.org.] DNSSEC, failed to initialize signing context (PKCS #11 token not available)
Aug 22 14:38:13 arrakeen knotd[1285865]: error: [durel.org.] zone event 'load' failed (PKCS #11 token not available)
The debug log does not seem to print more details about the error.
The keystore is defined as:
keystore:
  - id: hsmkey
    backend: pkcs11
    config: "pkcs11:pin-value=REDACTED /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so"
The HSM itself is a USB key from CardContact.de.
Downgrading to 3.1.9-cznic.1~bullseye re-enables signing.
Is there anything I can do to debug or solve this problem?
Regards,
--
Bastien
Hi,
thank you for contacting us with your issues with Knot DNS. However, you
have hit the wrong channel: the knot-resolver-users mailing list is intended
for users of Knot Resolver. I'm sending this reply to the proper channel.
You correctly pointed out that Knot did not delete the old key after the
delete-delay period.
This seems to be an effect of an intentional, but perhaps tricky, feature:
Knot postpones this (relatively unnecessary) key deletion until the next
signing event. The point is that initializing the whole "signing machinery"
just to purge a key already marked as deleted might be overkill (mostly on
configurations with very many zones).
You can see the next planned signing event by calling `knotc zone-status`
or by inspecting the log of the previous signing event. Please let me know
if the deleted key disappears once the zone is re-signed. I guess it might
take up to a week, since that is how long your configuration waits between
RRSIG re-creations.
If you need to delete the key immediately, you can use the keymgr utility,
or it might also be done with `knotc zone-keys-load` (basically
triggering the zone signing process out of schedule).
Thank you,
Libor
Hi,
my knot installation (3.0.5) gives me this notice:
notice: config, non-default 'template[default].storage' detected, please
configure also 'db.storage' to avoid compatibility issues with future
versions
I have searched the docs to find out what I have to do, but did not find
any specific information. Can you give me a hint what needs to be done here?
Thanks a lot,
Thomas
The documentation for `keymgr' says that the subcommand `del-all-old' is
related to offline KSK, but it also seems to work for online KSK.
Moments ago I had the following keys of which e381* had just been marked as
removed:
$ keymgr -c knot.conf tm list -b iso
e381198aea254a1dbceb3c5b153cbefaa98c959a 31943 KSK ECDSAP256SHA256 publish=2022-05-12T11:43:56Z ready=2022-05-12T11:43:56Z active=2022-05-12T11:43:56Z retire=2022-05-12T12:35:42Z revoke=2022-05-12T12:33:42Z remove=2022-05-12T12:37:42Z
d68e6803daa3e3ee34dd07d6966df0c402594fb2 26288 ZSK ECDSAP256SHA256 publish=2022-05-12T12:28:18Z active=2022-05-12T12:28:18Z
b0cc879e9b9f5faae647c7019a12821e62150378 62610 KSK ECDSAP256SHA256 publish=2022-05-12T12:30:49Z ready=2022-05-12T12:30:49Z active=2022-05-12T12:30:49Z
$ keymgr -c knot.conf tm del-all-old
OK
$ keymgr -c knot.conf tm list -b iso
d68e6803daa3e3ee34dd07d6966df0c402594fb2 26288 ZSK ECDSAP256SHA256 publish=2022-05-12T12:28:18Z active=2022-05-12T12:28:18Z
b0cc879e9b9f5faae647c7019a12821e62150378 62610 KSK ECDSAP256SHA256 publish=2022-05-12T12:30:49Z ready=2022-05-12T12:30:49Z active=2022-05-12T12:30:49Z
and the PEM key file has also been removed.
Is this to be expected? Would it be a good idea to add a note to the
documentation clarifying this?
Best regards,
-JP
Hello,
I'd like to be able to do automatic ZSK and manual KSK rollovers. Basically the
KSK should have endless validity, but I might want to roll it with
(manually-triggered) RFC 5011 semantics.
Is it permissible to have a policy such as the one shown below and then
explicitly use `keymgr' commands to generate new keys and set `revoke',
`retire' and `remove' timers on the old key?
Testing indicates that it works as desired, I'm just unsure whether key
manipulation is permitted.
policy:
  - id: autoHSM
    keystore: pemstore
    single-type-signing: off
    manual: off
    ksk-shared: off
    ksk-lifetime: 0
    zsk-lifetime: 30d
    cds-cdnskey-publish: rollover
Thank you,
-JP
Hello,
keymgr(8) lists keys in plain text, which is fine for processing with awk(1)
et al. Are there any plans to make it output JSON? I'm thinking along the lines
of making parsing future-proof:
[
  {
    "id": "a982d72174a48a3ef083a97e5aae02cc47f58762",
    "ksk": true,
    "zsk": false,
    "key_tag": 61676,
    "algo": 8,
    "size": 2048,
    "public-only": false,
    "pre-active": 0,
    "publish": 1652161461,
    "ready": 1652161581,
    "active": 1652161642,
    "retire-active": 1652168902,
    "retire": 0,
    "post-active": 0,
    "revoke": 0,
    "remove": 1652168962
  }
]
keymgr_list_keys() calls either print_key_full() or print_key_brief() to do
the work, and I think it would be quite easy to add support for JSON.
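To illustrate how little parsing the proposed JSON output would replace, here is a sketch that converts one line of the current plain-text `keymgr list` format (as shown elsewhere in this digest) into a JSON record. The field names simply follow keymgr's own labels rather than the exact names proposed above, and the sample line is hypothetical:

```python
import json

# One line in the plain-text format that `keymgr <zone> list` prints
# (field layout as seen in other posts in this digest); sample data only.
sample = (
    "a982d72174a48a3ef083a97e5aae02cc47f58762 ksk=yes zsk=no tag=61676 "
    "algorithm=8 size=2048 public-only=no pre-active=0 publish=1652161461 "
    "ready=1652161581 active=1652161642 retire-active=1652168902 retire=0 "
    "post-active=0 revoke=0 remove=1652168962"
)

def key_to_dict(line):
    """Turn one keymgr list line into a dict suitable for json.dumps()."""
    fields = line.split()
    rec = {"id": fields[0]}           # first token is the key id
    for token in fields[1:]:
        name, _, value = token.partition("=")
        if value in ("yes", "no"):    # boolean flags (ksk, zsk, public-only)
            rec[name] = (value == "yes")
        else:                         # tags, sizes and UNIX timestamps
            rec[name] = int(value)
    return rec

print(json.dumps([key_to_dict(sample)], indent=2))
```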
Is this something I should make happen?
-JP
Hello,
I need to migrate away from an HSM-backed OpenDNSSEC installation which uses a
Thales nCipher for key storage and am experimenting with Knot DNS 3.1.8 (on
CentOS 7, FWIW).
I've compiled Knot, and it is able to access said HSM via PKCS#11. I have
configured a zone with a manual policy.
policy:
  - id: manualHSM
    keystore: thales
    single-type-signing: on
    manual: on
After importing keys from the HSM with `keymgr import-pkcs11', knotd launches
and signs the zone with KSK/ZSK as expected.
What I would then like to have happen is to have periodic ZSK rollovers as well
as periodic KSK rollovers. In order to accomplish this I have changed the
zone's policy to
policy:
  - id: autoHSM
    keystore: thales
    single-type-signing: off
    manual: off
    algorithm: rsasha256
    ksk-size: 2048
    zsk-size: 1024
    zone-max-ttl: 60
    dnskey-ttl: 60
    propagation-delay: 60
    nsec3: on
    nsec3-iterations: 0
    nsec3-salt-length: 0
    nsec3-salt-lifetime: 0
    ksk-lifetime: 7200
    zsk-lifetime: 3600
A restart of knotd then begins by creating a new ZSK and rolling it, and the
KSK is rolled automatically after 7200 seconds. (These timers are for testing
only.)
So far no complaints whatsoever -- this is working exactly as I had hoped it
would. I am assuming that it is permissible to change a zone's policy in flight.
What I'd like is confirmation that the KSK roll will actually never occur
immediately, but only after a first period has elapsed.
Can I rely on this behavior, i.e. that the first KSK roll will occur only after
a first `ksk-lifetime' period?
Best regards,
-JP
Hi,
for the transition of a TLD I need to import the current provider's KSK
into my zone. I use the `keymgr import-pub' command for this. I have
done that a few times in the past and it worked very well.
I have now installed the most current version of Knot (3.0.10) and did
the same procedure. But after importing the KSK the zone can't be signed
anymore. It seems like Knot doesn't recognize that this imported key is
a "public-only" key. Knot throws an error and complains that the private
key could not be loaded.
The zone's keys (.example) before the import of the KSK:
# keymgr example list
0b94a3f9fef3ae531fc5ee1334ddd2876db7cd9a ksk=yes zsk=no tag=12595
algorithm=7 size=2048 public-only=no pre-active=0 publish=1650495677
ready=1650495677 active=1650659051 retire-active=0 retire=0
post-active=0 revoke=0 remove=0
13cc082655ddf7160787ef945ad7edb6406bb70e ksk=no zsk=yes tag=05477
algorithm=7 size=1024 public-only=no pre-active=0 publish=1650495677
ready=0 active=1650495677 retire-active=0 retire=0 post-active=0
revoke=0 remove=0
Imported the KSK with the following command:
# keymgr example import-pub /etc/knot/public.key
2c135e77b7f48475a837ad0d28a9459f0e7ce621
OK
The zone's keys (.example) after the import of the KSK:
# keymgr example list
0b94a3f9fef3ae531fc5ee1334ddd2876db7cd9a ksk=yes zsk=no tag=12595
algorithm=7 size=2048 public-only=no pre-active=0 publish=1650495677
ready=1650495677 active=1650659051 retire-active=0 retire=0
post-active=0 revoke=0 remove=0
13cc082655ddf7160787ef945ad7edb6406bb70e ksk=no zsk=yes tag=05477
algorithm=7 size=1024 public-only=no pre-active=0 publish=1650495677
ready=0 active=1650495677 retire-active=0 retire=0 post-active=0
revoke=0 remove=0
2c135e77b7f48475a837ad0d28a9459f0e7ce621 ksk=yes zsk=no tag=35421
algorithm=7 size=2048 public-only=yes pre-active=0 publish=1650660072
ready=0 active=0 retire-active=0 retire=0 post-active=0 revoke=0 remove=0
The imported key (tag 35421) has the flag "public-only=yes", as expected.
But when I now sign the zone, the log shows these errors:
Apr 22 20:43:24 lab-nic knotd[2831]: info: [example.] control, received
command 'zone-sign'
Apr 22 20:43:24 lab-nic knotd[2831]: info: [example.] DNSSEC, dropping
previous signatures, re-signing zone
Apr 22 20:43:24 lab-nic knotd[2831]: info: [example.] DNSSEC, key, tag
12595, algorithm RSASHA1_NSEC3_SHA1, KSK, public, active
Apr 22 20:43:24 lab-nic knotd[2831]: info: [example.] DNSSEC, key, tag
35421, algorithm RSASHA1_NSEC3_SHA1, KSK, public, active+
Apr 22 20:43:24 lab-nic knotd[2831]: info: [example.] DNSSEC, key, tag
5477, algorithm RSASHA1_NSEC3_SHA1, public, active
Apr 22 20:43:24 lab-nic knotd[2831]: error: [example.] DNSSEC, failed to
load private keys (not exists)
Apr 22 20:43:24 lab-nic knotd[2831]: error: [example.] DNSSEC, failed to
load keys (not exists)
Apr 22 20:43:24 lab-nic knotd[2831]: info: [example.] DNSSEC, next
signing at 2022-04-22T21:43:24+0000
Apr 22 20:43:24 lab-nic knotd[2831]: error: [example.] zone event
'DNSSEC re-sign' failed (not exists)
The imported key should not have the "active" flag:
info: [example.] DNSSEC, key, tag 35421, algorithm RSASHA1_NSEC3_SHA1,
KSK, public, active+
It seems to me that the imported key is not seen as a "public-only" key
anymore and therefore Knot is looking for the corresponding private key,
which of course fails.
I attached an strace output of the signing operation, but that doesn't
seem to be helpful because the signing command itself doesn't fail.
Thanks,
Thomas
Just to clarify some semantics of the config format.
Is each individual 'remote' ID considered to be a single server,
regardless of the number of addresses it has?
For the notify case, it looks like Knot will try each address in a
remote ID in turn, and stop as soon as one replies. That suggests
the above semantics, but I wanted to make sure I'm interpreting this
behaviour correctly before I complicate my config by adding a lot more
remotes. I am currently treating each remote as an organisation,
lumping all of that organisation's servers together under a single ID.
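If the one-server interpretation is correct, the two ways of modelling an organisation would look like this (a sketch; names and addresses are hypothetical):

```yaml
remote:
  # One logical server with several addresses: tried in order, and for
  # NOTIFY the first address that replies ends the attempt.
  - id: org-a
    address: [ 192.0.2.1, 192.0.2.2 ]

  # One remote per server: each one is notified independently.
  - id: org-a-ns1
    address: 192.0.2.1
  - id: org-a-ns2
    address: 192.0.2.2
```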
I'm trying to find a way to poll for any zones where knot is currently
waiting on DS submission to the parent.
I'm aware of the structured logging sent to systemd-journald but I see
this as not particularly useful for monitoring, as the event could be
missed by a dead daemon, bug in code, etc. I'd much prefer to be able
to actively monitor states by polling.
It looks like the only way I can do that right now is to run `keymgr
list` and analyze the output. If I'm reading the documentation
correctly, all I need to look for is a key that is `ksk=yes`, `ready
!= 0`, and `active = 0`.
Does that seem correct? Am I missing something simpler? :)
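The polling rule described above can be sketched as a small filter over `keymgr list` output (field layout as shown elsewhere in this digest; the sample data is hypothetical):

```python
def awaiting_ds(keymgr_output):
    """Return ids of KSKs that are 'ready' but not yet active,
    i.e. presumed to be waiting on DS submission to the parent."""
    waiting = []
    for line in keymgr_output.strip().splitlines():
        # tokens after the key id are all name=value pairs
        fields = dict(tok.split("=", 1) for tok in line.split()[1:])
        if (fields.get("ksk") == "yes"
                and fields.get("ready", "0") != "0"
                and fields.get("active", "0") == "0"):
            waiting.append(line.split()[0])
    return waiting

sample = """\
0b94a3f9fef3ae531fc5ee1334ddd2876db7cd9a ksk=yes zsk=no tag=12595 algorithm=8 size=2048 public-only=no pre-active=0 publish=1650495677 ready=1650495677 active=0 retire-active=0 retire=0 post-active=0 revoke=0 remove=0
13cc082655ddf7160787ef945ad7edb6406bb70e ksk=no zsk=yes tag=05477 algorithm=8 size=1024 public-only=no pre-active=0 publish=1650495677 ready=0 active=1650495677 retire-active=0 retire=0 post-active=0 revoke=0 remove=0
"""

print(awaiting_ds(sample))   # only the first key matches the rule
```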
I have found a situation where I think the Knot behaviour around
algorithm rolls could be better. It's one of those "prevent the user
from hurting themselves" situations, in which I would have hurt myself
if this had involved anything other than an old, unused zone. :)
The suggestion is a simple one: when doing an automated algorithm
roll, the KSK submission check should remain negative until the
parental DS set exactly matches the set requested by the published CDS
set (substitute DNSKEY/CDNSKEY as appropriate).
In a situation where CDS scanning is not being done by my parent, I
slipped up and only added the new DS record to the parent, leaving the
old algorithm's DS record also present. Knot did its submission
check, saw the new DS record, and happily continued on with the
algorithm roll. This eventually led to a situation that was in
violation of RFC 6840 § 5.11 [0]:
A signed zone MUST include a DNSKEY for each algorithm present in
the zone's DS RRset and expected trust anchors for the zone.
I ended up with a situation where I had the new and old DS, but only
the new DNSKEY [1]. This seems like a situation that could be avoided
by extending the logic of the KSK submission check. In addition to
saving users from themselves, it would also help if a situation
occurred where the parent had a bug in their CDS processing
implementation and failed to remove the old DS.
[0]: <https://datatracker.ietf.org/doc/html/rfc6840#section-5.11>
[1]: <https://dnsviz.net/d/dns-oarc.org/Yg7ZDw/dnssec/>
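The stricter rule proposed above is essentially set equality rather than mere membership; a sketch, with DS records modelled as hypothetical (key_tag, algorithm, digest_type, digest) tuples rather than derived from real DNS lookups:

```python
def submission_complete(parent_ds, child_cds):
    """Proposed check: submission is complete only when the parent's DS
    RRset exactly matches the set derived from the published CDS records,
    not merely when the new DS is present."""
    return set(parent_ds) == set(child_cds)

old_ds = (12345, 7, 2, "aa" * 32)   # DS for the old algorithm (hypothetical)
new_ds = (54321, 13, 2, "bb" * 32)  # DS for the new algorithm (hypothetical)

# The scenario described above: the new DS was added but the old one
# was left behind. The stricter check stays negative until it is removed.
print(submission_complete([old_ds, new_ds], [new_ds]))  # stale DS lingers
print(submission_complete([new_ds], [new_ds]))          # sets match exactly
```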
Hello,
what is wrong in my policy section? I can't find anything about this in the
docs. Am I missing parameters?
The warning is:
Feb 13 12:33:05 dns1 knotd[184636]: warning: config, policy[rsa2k].nsec3-iterations defaults to 10, since version 3.2 the default becomes 0
Feb 13 12:33:05 dns1 knotd[184636]: warning: config, policy[ececc1].nsec3-iterations defaults to 10, since version 3.2 the default becomes 0
Feb 13 12:33:05 dns1 knotd[184636]: warning: config, policy[ececc2].nsec3-iterations defaults to 10, since version 3.2 the default becomes 0
My policy:
policy:
  - id: rsa2k
    algorithm: RSASHA256
    ksk-size: 4096
    zsk-size: 2048
    nsec3: on
  - id: ececc1
    algorithm: ECDSAP256SHA256
    nsec3: on
  - id: ececc2
    algorithm: ecdsap384sha384
    nsec3: on
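The warning is only about the value being implicit; setting it explicitly (for example to the upcoming default of 0) should silence it. A sketch for one of the policies above:

```yaml
policy:
  - id: ececc1
    algorithm: ECDSAP256SHA256
    nsec3: on
    nsec3-iterations: 0   # explicit value, so the changed default no longer matters
```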
--
mit freundlichen Grüßen / best regards
Günther J. Niederwimmer
Hi,
I've got a staging environment running, with two signers signing a
staging version of the .is zone (~115k records).
The staging servers are configured with 2 GB RAM and 4 CPU cores,
running on FreeBSD 12.2.
We have experienced Knot crashing because the server runs out of
memory. The zone is configured with:
zonefile-sync: -1
zonefile-load: difference-no-serial
journal-content: all
One of the signers, after an hour of running, shows 215 MB of resident
memory used. The other signer, which has been running for a whole day,
shows 1582 MB resident.
I attached a screenshot showing the amount of memory used over a period
of time, and the graph shows that the amount of memory used very
suddenly increases.
I assume the servers are using a lot of memory for the journals, but I'd
like to understand why the sudden increase in used memory, and what to
expect about needed memory?
.einar
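One knob that may bound journal growth in a setup like the one above (journal-content: all keeps full zone history in the journal) is the per-zone journal-max-usage limit; a hedged sketch, where the 512M cap is purely illustrative and would need sizing against the real change rate:

```yaml
template:
  - id: default
    zonefile-sync: -1
    zonefile-load: difference-no-serial
    journal-content: all
    journal-max-usage: 512M   # illustrative per-zone cap on journal size
```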
Hi,
I have a Knot DNS setup with the DNS cookie module enabled, but when I check
with dnsviz.net I always get:
The server appears to support DNS cookies but did not return a COOKIE
option.
Relevant parts of my knot.conf:
template:
- id: default
storage: "/var/lib/knot"
dnssec-signing: on
dnssec-policy: rsa2048
global-module: [ "mod-cookies", "mod-rrl/default" ]
mod-rrl:
- id: default
rate-limit: 200
slip: 2
- domain: mydomain.de
file: "/etc/knot/zones/mydomain.de.zone"
notify: secondary
acl: acl_secondary
zonefile-load: difference
I wondered whether it might be the slip: 2 setting, but changing it to 1
didn't make any difference.
Do you guys see anything obvious causing this "issue"?
Thanks for your time
Juergen
Hi,
For many months now, we've been preparing new signers for our internal
zones and eventually .is.
We've got the first of our test zones live on the production signers,
but some things are troubling us.
This is the config we're using for zones:
template:
- id: default
semantic-checks: on
storage: "/usr/local/etc/knot"
file: "zones/unsigned/%s/%s-soa"
serial-policy: dateserial
zonefile-sync: -1
zonefile-load: difference-no-serial
journal-content: all
notify: hidden_primary
acl: hidden_primary_acl
policy:
- id: isnic
algorithm: RSASHA256
ksk-size: 4096
zsk-size: 2048
ksk-lifetime: 365d
zsk-lifetime: 30d
propagation-delay: 1h
rrsig-lifetime: 14d
rrsig-refresh: 7d
rrsig-pre-refresh: 1h
---
zones/unsigned is stored in a git repo and changes are deployed by an
ansible playbook that checks out the latest revision and reloads the zones.
Someone pointed out that zonefile-load: difference-no-serial was risky
for something as important as a TLD, but what is the alternative when
doing automatic DNSSEC signing on zone data from git? Also, we turned
off zonefile-sync, since our current deployment script overwrites the
zonefile. Is there a way to load initial zone data from one file, but do
zonefile-sync to another?
We're seeing this in our logs:
Jan 20 09:32:06 ht-signer01 knot[49715]: info: [pp.is.] zone file
parsed, serial corrected 1970010100 -> 2022012000
Jan 20 09:32:06 ht-signer01 knot[49715]: info: [pp.is.] loaded, serial
2022011900 -> 2022012000 -> 2022011900, 3830 bytes
Any idea what's happening on the second line? It's like knot wants to
increment the serial, but then changes its mind :)
.einar
OK I recently decided to change the algorithm on all our domains
from RSASHA1 to RSASHA256. Before making the change globally; I
experimented with one domain. I did so by adding a new policy:
CURRENT
policy:
- id: rsa1
algorithm: RSASHA1
ksk-size: 2048
zsk-size: 1024
dnskey-ttl: 43200
zsk-lifetime: 30d
ksk-lifetime: 365d
NEW (PROPOSED)
policy:
- id: rsa2
algorithm: RSASHA256
ksk-size: 2048
zsk-size: 2048
dnskey-ttl: 43200
zsk-lifetime: 30d
ksk-lifetime: 365d
DOMAIN TESTED ON
# a-domain
- domain: a-domain
file: "masters/a-domain"
zonefile-load: difference
dnssec-signing: on
# dnssec-policy: rsa1
dnssec-policy: rsa2
semantic-checks: on
serial-policy: dateserial
acl: [locals, remotes01, remotes03, remotes04]
To perform the intended change, I first set the current keys on the
test domain to: retire=+1hr
I then added the new policy and assigned it to the test domain, then
restarted the knot service. After the hour and some had passed, I ran
keymgr a-domain del-all-old, which removed the old-algorithm (RSASHA1) keys.
But I think this was a mistake.
How would I best make this change? Is it enough to simply change algorithm:
and knot will just do the right thing?
Thanks!
-- Chris
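For context, the path the Knot documentation describes for an algorithm change is to edit the algorithm in the policy already assigned to the zone and let the signer carry out the algorithm rollover itself, rather than swapping policies and deleting keys by hand. A sketch of such an in-place change (same policy id, only the algorithm and ZSK size edited — treat this as an illustration of the approach, not verified guidance):

```yaml
policy:
  - id: rsa1              # keep the policy id the zone already references
    algorithm: RSASHA256  # changed from RSASHA1; should trigger an algorithm rollover
    ksk-size: 2048
    zsk-size: 2048        # raised from 1024 along with the algorithm change
    dnskey-ttl: 43200
    zsk-lifetime: 30d
    ksk-lifetime: 365d
```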
Hi,
We're preparing to migrate our zones from OpenDNSSEC 1.4 to Knot DNS 3.1
(and eventually the .is zone).
We've already migrated one unsigned zone to the new signers, but next on
the list is first currently signed zone.
We're going to migrate the zone by doing a key rollover, so we'll add
DNSKEY records for the new keys to the zone on the old signer and vice
versa. While we're migrating the zone we have to stop automatic key
rollovers, and I planned to create a new policy 'dnssec_freeze' with
`manual: on` and apply it to zones during migration.
Am I correct that this will stop all automatic key rollovers, but keep
the signatures updated?
Once the migration is complete, DS records and delegations have been
updated etc., I'll change the policy to an automatic policy. Will knot
seamlessly start automatically rolling over keys according to the new
policy?
.einar
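The freeze policy described above could look like this minimal sketch (the dnssec_freeze name comes from the message; manual: on is the setting that suppresses automatic rollovers while signature maintenance continues — the algorithm, lifetimes, and zone name here are illustrative assumptions):

```yaml
policy:
  - id: dnssec_freeze
    manual: on             # no automatic key rollovers during migration
    algorithm: RSASHA256   # assumption: match the keys already in use
    rrsig-lifetime: 14d    # illustrative; signatures keep being refreshed
    rrsig-refresh: 7d

zone:
  - domain: example.is     # hypothetical zone name
    dnssec-signing: on
    dnssec-policy: dnssec_freeze
```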
I am trying to import a public key generated by BIND into Knot, when using
the SoftHSM2 key store. I have the following pieces of information:
In my knot.conf file:
policy:
- id: SoftHSMRSAPolicy
algorithm: RSASHA256
ksk-size: 2048
zsk-size: 2048
ksk-lifetime: 7h
zsk-lifetime: 6h
dnskey-ttl: 12s
zone-max-ttl: 15s
keystore: SoftHSM
zone:
- domain: 00.mydomain.com
storage: /srv/knot
file: db.mydomain00
dnssec-signing: on
dnssec-policy: SoftHSMRSAPolicy
The public key is in a file named pubkey, and has the following contents:
; This is a zone-signing key, keyid 14694, for 00.mydomain.com.
; Created: 20211109192137 (Tue Nov 9 12:21:37 2021)
; Publish: 20211109192137 (Tue Nov 9 12:21:37 2021)
; Activate: 20211109192137 (Tue Nov 9 12:21:37 2021)
00.mydomain.com. IN DNSKEY 256 3 8 AwEAAd1XmDMiF4/WWW+lneSg2hScxQl
TJHU/cIyBnDJDnW3MFkuyR7e+y3UqZScTXz5tfcGkDYGpqFqZ3+RgyN7A3ZAC3RsayivUuE9lec25IT97
jPZaTsHUjalDQjXkBhCIHBb79vVsz0SMZOeez78qzhRtpdkFYVNRcAW4EZVgdQAdiuJGeDEuxsaTkRnLwujnaqURyAzevqfQfjz319CPsYr4tN4K9nu2Fc0Sh+b5pdM6ejRieLnUUgZpuefRfgsSHJQErNe
FevdtihLpq93r
E5OARwmK0c4vyzgpmREloMJlwV+lrZdlKqZnnIZIXgkD+59Tjh0XZ72exdvonun4uG8=
(The DNSKEY record is in a single line.)
The command I am using to import this key is
# ./keymgr 00.mydomain.com. import-pub ./pubkey
This spins for a few seconds and then prints out:
Error: file error
Any ideas as to what it is that I am doing wrong?
I have been trying to get a better understanding concerning the information
Knot stores in its KASP. Knot adds new key information into the KASP by
means of the kasp_db_add_key function. One of the arguments to this
function is a pointer to a key_params_t structure, one of whose members is
called is_pub_only. This would seem to imply that the KASP may contain
information about key pairs such that only the public component of the key
pair is available to Knot.
Under what set of circumstances would such a key be stored in the KASP?
Since they are used for signing RRs, any KSKs and ZSKs in the KASP have to
be complete, in that both the private and the public components are
available to Knot (I know that the private component itself is not present
in the KASP, but that's OK). A KASP key for which the private component is
not available could be used for verifying signatures - but that's not
something that Knot does, right?
So, under what set of circumstances would Knot add a key to the KASP such
that the is_pub_only member is set to true?
Hi,
I'm trying to dig into the dns-benchmark tools, but am running into a
few issues. I realized that they require some specific settings: /home/dnsbench
seems to be hardcoded for the logs somehow, and they need to be run as root.
But now I'm flooded with syntax-error messages, interspersed with lines
saying that "hostname" is an invalid number:
standard_in) 2: syntax error
(standard_in) 2: syntax error
(standard_in) 2: syntax error
(standard_in) 2: syntax error
(standard_in) 1: syntax error
(standard_in) 1: syntax error
(standard_in) 1: syntax error
(standard_in) 1: syntax error
(standard_in) 1: syntax error
/tmp/benchmark-1636019360/modules/responses.sh: line 170: printf:
hostname: invalid number
knot2 ssh: % answered for Could q/s, not B avg (resolve a/s, 0 B avg)
(repeated output)
Anyone else using the tools and potentially can give me a prod in the
right direction? We'd like to check how knot performs in our
environment with the systems we have at hand.
Thanks in advance for any advice,
Rhonda