Just to clarify some semantics of the config format: is each individual
'remote' ID considered to be a single server, regardless of the number
of addresses it has?
For the notify case, it looks like Knot will send to each address in a
remote ID serially, and stop as soon as one replies. That suggests the
above semantics, but I wanted to make sure I'm interpreting this
behaviour correctly before I complicate my config by adding a lot more
remotes. I am currently treating each remote as an organisation,
lumping all of that organisation's servers together under a single ID
(see the sketch below).
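For context, my current setup looks roughly like the first entry below,
and the alternative I'm weighing is the split that follows it (the names
and addresses are placeholders, not my real config):

remote:
  # current approach: one ID per organisation, all of that organisation's
  # servers lumped together under the single ID
  - id: example-org
    address: [ 192.0.2.1, 192.0.2.2 ]
  # alternative: one ID per server, each treated independently
  - id: example-org-ns1
    address: 192.0.2.1
  - id: example-org-ns2
    address: 192.0.2.2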
I'm trying to find a way to poll for any zones where Knot is currently
waiting on DS submission to the parent.
I'm aware of the structured logging sent to systemd-journald, but I
don't find it particularly useful for monitoring, since the event could
be missed because of a dead daemon, a bug in my code, and so on. I'd
much prefer to actively monitor the state by polling.
It looks like the only way I can do that right now is to run `keymgr
list` and analyze the output. If I'm reading the documentation
correctly, all I need to look for is a key that is `ksk=yes`, `ready
!= 0`, and `active = 0`.
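Concretely, I was thinking of polling along these lines (just a sketch;
the zone list is a placeholder and I'm assuming the key=value fields of
the `keymgr <zone> list` output can be parsed this way, which may not
hold across versions):

import subprocess

def waiting_on_ds(zone: str) -> bool:
    # Ask keymgr for the zone's keys and look for a KSK that has reached
    # the "ready" state but is not yet active.
    out = subprocess.run(["keymgr", zone, "list"],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        fields = dict(tok.split("=", 1) for tok in line.split() if "=" in tok)
        if (fields.get("ksk") == "yes"
                and fields.get("ready", "0") != "0"
                and fields.get("active", "0") == "0"):
            return True
    return False

for zone in ["example.com", "example.net"]:  # placeholder zone list
    if waiting_on_ds(zone):
        print(f"{zone}: KSK appears to be waiting on DS submission")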
Does that seem correct? Am I missing something simpler? :)
I have found a situation where I think the Knot behaviour around
algorithm rolls could be better. It's one of those "prevent the user
from hurting themselves" situations, in which I would have hurt myself
if this had involved anything other than an old, unused zone. :)
The suggestion is a simple one: when doing an automated algorithm
roll, the KSK submission check should remain negative until the
parental DS set exactly matches the set requested by the published CDS
set (substitute DNSKEY/CDNSKEY as appropriate).
In a situation where CDS scanning is not being done by my parent, I
slipped up and only added the new DS record to the parent, leaving the
old algorithm's DS record also present. Knot did its submission
check, saw the new DS record, and happily continued on with the
algorithm roll. This eventually led to a situation that was in
violation of RFC 6840 § 5.11 [0]:
   A signed zone MUST include a DNSKEY for each algorithm present in
   the zone's DS RRset and expected trust anchors for the zone.
I ended up with both the new and the old DS at the parent, but only the
new DNSKEY in the zone [1]. This seems like something that could be
avoided by extending the logic of the KSK submission check. In addition
to saving users from themselves, it would also help if the parent had a
bug in its CDS processing implementation and failed to remove the old
DS.
[0]: <https://datatracker.ietf.org/doc/html/rfc6840#section-5.11>
[1]: <https://dnsviz.net/d/dns-oarc.org/Yg7ZDw/dnssec/>
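To make the suggestion concrete, the stricter check I have in mind is
roughly the following (a sketch using dnspython, not Knot's code; the
function name is made up and the resolver handling is simplified):

import dns.resolver

def ds_submission_complete(zone: str) -> bool:
    # Only report success when the parent-side DS set matches the child's
    # published CDS set exactly, field for field.
    try:
        ds_answer = dns.resolver.resolve(zone, "DS")
        cds_answer = dns.resolver.resolve(zone, "CDS")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return False
    ds = {(r.key_tag, r.algorithm, r.digest_type, r.digest)
          for r in ds_answer}
    cds = {(r.key_tag, r.algorithm, r.digest_type, r.digest)
           for r in cds_answer}
    # A leftover DS from the old algorithm keeps this False, so the roll
    # would not continue until the parent-side set is cleaned up.
    return ds == cds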
Hello,
What is wrong in my policy section? I can't find anything about this in
the docs. Am I missing a parameter?
The warning is:
Feb 13 12:33:05 dns1 knotd[184636]: warning: config, policy[rsa2k].nsec3-iterations defaults to 10, since version 3.2 the default becomes 0
Feb 13 12:33:05 dns1 knotd[184636]: warning: config, policy[ececc1].nsec3-iterations defaults to 10, since version 3.2 the default becomes 0
Feb 13 12:33:05 dns1 knotd[184636]: 2022-02-13T12:33:05+0100 warning: config, policy[rsa2k].nsec3-iterations defaults to 10, since version 3.2 the default becomes 0
Feb 13 12:33:05 dns1 knotd[184636]: 2022-02-13T12:33:05+0100 warning: config, policy[ececc1].nsec3-iterations defaults to 10, since version 3.2 the default becomes 0
Feb 13 12:33:05 dns1 knotd[184636]: 2022-02-13T12:33:05+0100 warning: config, policy[ececc2].nsec3-iterations defaults to 10, since version 3.2 the default becomes 0
Feb 13 12:33:05 dns1 knotd[184636]: warning: config, policy[ececc2].nsec3-iterations defaults to 10, since version 3.2 the default becomes 0
My policy:
policy:
  - id: rsa2k
    algorithm: RSASHA256
    ksk-size: 4096
    zsk-size: 2048
    nsec3: on
  - id: ececc1
    algorithm: ECDSAP256SHA256
    nsec3: on
  - id: ececc2
    algorithm: ecdsap384sha384
    nsec3: on
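If I read the warning correctly, it is only about nsec3-iterations not
being set explicitly, so I assume pinning it would look something like
this, though I'm not sure that is the intended fix:

policy:
  - id: rsa2k
    algorithm: RSASHA256
    ksk-size: 4096
    zsk-size: 2048
    nsec3: on
    nsec3-iterations: 0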
--
with kind regards / best regards
Günther J. Niederwimmer
Hi,
I've got a staging environment running, with two signers signing a
staging version of the .is zone (~115k records).
The staging servers are configured with 2 GB RAM and 4 CPU cores,
running on FreeBSD 12.2.
We've seen Knot crash because the server runs out of memory. The zone
is configured with:
zonefile-sync: -1
zonefile-load: difference-no-serial
journal-content: all
One of the signers, after an hour of running, is showing 215 MB of
resident memory. The other signer, which has been running for a whole
day, is showing 1582 MB of resident memory.
I attached a screenshot showing memory usage over time; the graph shows
that the amount of memory used increases very suddenly.
I assume the servers are using a lot of memory for the journals, but
I'd like to understand what causes the sudden increase and how much
memory I should expect to need.
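I could presumably cap the journal with something like the following
(the sizes are only examples, not recommendations), but I'd first like
to understand the behaviour:

database:
  journal-db-max-size: 1G

template:
  - id: default
    journal-max-usage: 100M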
.einar
Hi,
I have a Knot DNS setup with the DNS Cookies module enabled, but if I
check with dnsviz.net I always get:
The server appears to support DNS cookies but did not return a COOKIE
option.
Relevant parts of my knot.conf:
template:
  - id: default
    storage: "/var/lib/knot"
    dnssec-signing: on
    dnssec-policy: rsa2048
    global-module: [ "mod-cookies", "mod-rrl/default" ]

mod-rrl:
  - id: default
    rate-limit: 200
    slip: 2

zone:
  - domain: mydomain.de
    file: "/etc/knot/zones/mydomain.de.zone"
    notify: secondary
    acl: acl_secondary
    zonefile-load: difference
I wondered whether it might be the slip: 2, but setting it to 1 didn't
change anything.
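For what it's worth, this is roughly how I've been testing it locally
(a sketch with dnspython; the server address is a placeholder):

import os
import dns.edns
import dns.message
import dns.query

server = "192.0.2.53"  # placeholder for my server's address
# Send a query carrying a client COOKIE option (EDNS option code 10) and
# check whether the response carries a COOKIE option back.
query = dns.message.make_query(
    "mydomain.de", "SOA", use_edns=0,
    options=[dns.edns.GenericOption(10, os.urandom(8))])
response = dns.query.udp(query, server, timeout=3)
has_cookie = any(opt.otype == 10 for opt in response.options)
print("COOKIE option returned:", has_cookie)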
Do you guys see anything obvious causing this "issue"?
Thanks for your time
Juergen