Hi,
In our setup, we have one active signer and one backup signer. Both use
softhsm, but only the active signer does automatic key management.
There is an hourly cron job that syncs keys from active to backup signer.
It runs knotc zone-backup on the active signer, only backing up the kaspdb.
It then syncs the files over to the secondary and runs knotc zone-restore.
This has been running for a few years now without problems.
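For context, the hourly job amounts to roughly the following (a sketch with placeholder paths and host names; the backup filter names are taken from knotc(8), not copied from the actual cron job):

```shell
# Sketch only -- paths, host name and filter spelling are assumptions.
BK=/var/lib/knot/backup

# On the active signer: back up only the KASP database.
knotc zone-backup +backupdir "$BK" +kaspdb +nozonefile +nojournal +notimers

# Ship it to the backup signer and restore there.
rsync -a --delete "$BK/" backup-signer:"$BK/"
ssh backup-signer knotc zone-restore +backupdir "$BK" +kaspdb
```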
These last two weeks we’ve been performing algorithm rollovers for
some of our zones, and after we run `knotc zone-ksk-submitted nic.is`
we start seeing these errors when the zone-restore is run on the backup:
error: [nic.is.] zone event 'backup/restore' failed (already exists)
warning: [nic.is.] zone restore failed (already exists)
warning: [nic.is.] restore, key copy failed (already exists)
I searched the Knot DNS source code, but couldn't find where these
errors are emitted. Like I said, we've been running like this for a few
years, doing regular ZSK rollovers and a few KSK rollovers, without
problems. There's something about the algorithm rollover that
causes this problem with our setup.
I assume I can just delete the keys on the secondary and sync again,
but I want to understand what causes these errors so we can avoid them,
or at least document them in our process.

.einar
debian 12
knot 5
so i believe i have a server with antique data in cache. the net of a
million lies says use `kresctl clear foo` but i can not find kresctl.
clue bat please
randy
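(For context: `kresctl` ships with Knot Resolver 6's manager only; on 5.x the usual route is the Lua control socket. A sketch, assuming the default Debian socket path for instance 1:)

```shell
# No kresctl on 5.x -- send Lua to the running kresd instead.
# Adjust the instance number in the socket path to match your setup.
echo "cache.clear('foo')" | socat - UNIX-CONNECT:/run/knot-resolver/control/1
```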
Hello,
I upgraded my signing server to Debian 13, but I have a problem with my HSM:
Oct 15 21:09:18 arrakeen knotd[29552]: error: [durel.org.] zone event 'load' failed (PKCS #11 token not available)
Oct 15 21:09:18 arrakeen knotd[29552]: error: [geekwu.org.] zone event 'load' failed (PKCS #11 token not available)
keymgr gives me the same error:
# keymgr geekwu.org list
error: failed to initialize KASP (PKCS #11 token not available)
despite hsmwiz being able to access the key:
# hsmwiz identify
Using reader with a card: Nitrokey Nitrokey HSM (DENK01067960000 ) 00 00
Version : 3.4
Config options :
User PIN reset with SO-PIN enabled
SO-PIN tries left : 15
User PIN tries left : 3
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Default SO-PIN: 3537363231383830 Default PIN: 648219
Now executing: pkcs15-tool --dump
Using reader with a card: Nitrokey Nitrokey HSM (DENK01067960000 ) 00 00
PKCS#15 Card [knot]:
Version : 0
Serial number : DENK0106796
Manufacturer ID: www.CardContact.de
Flags : PRN generation[...]
Public EC Key [Private Key]
Object Flags : [0x00]
Usage : [0x140], verify, derive
Access Flags : [0x02], extract
FieldLength : 384
Key ref : 0 (0x00)
Native : no
ID : 74f59bc17317bfccc5806108d84df1abd275faef
DirectValue : <present>
Knot is using this keystore:
keystore:
- id: nitrokey
backend: pkcs11
config: "pkcs11:pin-value=*** /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so"
I verified /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so still exists, and ldd doesn't report any missing dependency
strace showed it communicating with pcscd, whose logs contain these lines:
Oct 15 21:20:14 arrakeen systemd[1]: Started pcscd.service - PC/SC Smart Card Daemon.
Oct 15 21:20:20 arrakeen pcscd[33186]: 00000000 ../src/auth.c:166:IsClientAuthorized() Process 33204 (user: 134) is NOT authorized for action: access_pcsc
Oct 15 21:20:20 arrakeen pcscd[33186]: 00000071 ../src/winscard_svc.c:357:ContextThread() Rejected unauthorized PC/SC client
After a bit of digging, I found it's controlled by polkit, and added a brutal rule:
cat /etc/polkit-1/rules.d/pcsc.rules
/* -*- mode: js; js-indent-level: 4; indent-tabs-mode: nil -*- */
polkit.addRule(function(action, subject) { if (subject.isInGroup("pcsc")) { return polkit.Result.YES; } })
with knot added to the pcsc group, it can access the HSM again.
Do you know of a better way to configure this?
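A narrower alternative (a sketch, assuming Debian's pcsc-lite polkit action ids and that knotd runs as user `knot`) would grant only the PC/SC actions to that one user, instead of everything to a whole group:

```javascript
/* /etc/polkit-1/rules.d/60-pcsc-knot.rules (hypothetical filename) */
polkit.addRule(function(action, subject) {
    // Allow only the pcscd-related actions, and only for the knot user.
    if ((action.id == "org.debian.pcsc-lite.access_pcsc" ||
         action.id == "org.debian.pcsc-lite.access_card") &&
        subject.user == "knot") {
        return polkit.Result.YES;
    }
});
```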
NB: I'm using another account, as I began to write this with no DNS server running
Regards,
--
Bastien Durel
Hi
We recently tried to upgrade to knot 3.5.0, but ran into a problem. It appears zones added via `conf-set include` do not become active until knot is reloaded.
To reduce the number of knotc calls when inserting many domains, we build a config fragment and then load it with `knotc conf-set include fragment.conf`.
With 3.4.8 this worked fine. For example:
# /opt/knot/sbin/knotc -C /local/knot_dns/conf/ -s /run/knot_dns/knot_dns.sock status version
3.4.8
# dig +short foo.com @10.37.129.215 SOA
# cat > /local/knot_dns/zones/foo.com.zone <<EOF
foo.com. 3600 IN SOA ( ns1.fastmaildev.com.
postmaster.fastmaildev.com.
2025091802 ;serial
86133 ;refresh
600 ;retry
1209600 ;expire
3600 ;minimum
)
foo.com. 3600 IN NS ns1.fastmaildev.com.
foo.com. 3600 IN NS ns2.fastmaildev.com.
EOF
# cat > /tmp/zone.conf <<EOF
zone:
- domain: foo.com
template: "default"
EOF
# /opt/knot/sbin/knotc -C /local/knot_dns/conf/ -s /run/knot_dns/knot_dns.sock conf-begin
OK
# /opt/knot/sbin/knotc -C /local/knot_dns/conf/ -s /run/knot_dns/knot_dns.sock conf-set include /tmp/zone.conf
OK
# /opt/knot/sbin/knotc -C /local/knot_dns/conf/ -s /run/knot_dns/knot_dns.sock conf-commit
OK
# dig +short foo.com @10.37.129.215 SOA
ns1.fastmaildev.com. postmaster.fastmaildev.com. 2025091802 86133 600 1209600 3600
As you can see, immediately after the `conf-commit`, the zone can be queried via dig.
However this doesn't work in 3.5.0.
# /opt/knot/sbin/knotc -C /local/knot_dns/conf/ -s /run/knot_dns/knot_dns.sock status version
3.5.0
# dig +short foo2.com @10.37.129.215 SOA
# cat > /local/knot_dns/zones/foo2.com.zone <<EOF
foo2.com. 3600 IN SOA ( ns1.fastmaildev.com.
postmaster.fastmaildev.com.
2025091802 ;serial
86133 ;refresh
600 ;retry
1209600 ;expire
3600 ;minimum
)
foo2.com. 3600 IN NS ns1.fastmaildev.com.
foo2.com. 3600 IN NS ns2.fastmaildev.com.
EOF
# cat > /tmp/zone.conf <<EOF
zone:
- domain: foo2.com
template: "default"
EOF
# /opt/knot/sbin/knotc -C /local/knot_dns/conf/ -s /run/knot_dns/knot_dns.sock conf-begin
OK
# /opt/knot/sbin/knotc -C /local/knot_dns/conf/ -s /run/knot_dns/knot_dns.sock conf-set include /tmp/zone.conf
OK
# /opt/knot/sbin/knotc -C /local/knot_dns/conf/ -s /run/knot_dns/knot_dns.sock conf-commit
OK
# dig +short foo2.com @10.37.129.215 SOA
# /opt/knot/sbin/knotc -C /local/knot_dns/conf/ -s /run/knot_dns/knot_dns.sock zone-status foo2.com
error: [foo2.com] (no such zone found)
# /opt/knot/sbin/knotc -C /local/knot_dns/conf/ -s /run/knot_dns/knot_dns.sock zone-reload foo2.com
error: [foo2.com] (no such zone found)
# /opt/knot/sbin/knotc -C /local/knot_dns/conf/ -s /run/knot_dns/knot_dns.sock zone-check foo2.com
# /opt/knot/sbin/knotc -C /local/knot_dns/conf/ -s /run/knot_dns/knot_dns.sock reload
Reloaded
# dig +short foo2.com @10.37.129.215 SOA
ns1.fastmaildev.com. postmaster.fastmaildev.com. 2025091802 86133 600 1209600 3600
# /opt/knot/sbin/knotc -C /local/knot_dns/conf/ -s /run/knot_dns/knot_dns.sock zone-status foo2.com
[foo2.com.] role: master | serial: 2025091802
As you can see, after the `conf-commit` the zone isn't visible in knot at all, either via dig or even via knotc commands `zone-status` or `zone-reload`. However immediately after a knot server `reload`, it does become visible.
This feels like a bug and a regression in 3.5.0 to me, or am I holding it wrong?
Rob Mueller
robm(a)fastmail.com
I am definitely interested in examples!
Reading up on groups, is it that 'group A' may represent 'customer A'
and have a specific set of primary/master nameservers, group B ==
customer B with a different primary, and so on?
(also fixed to be plain text - hope this is more legible in the archives)
--Chris
Hello DNS people,
I am exploring a migration from PowerDNS, where we have a hidden primary (ns0)
and two public authoritative servers (ns1/ns2) kept in sync via SQL
replication, to running Knot DNS on ns1/ns2 with catalog zones to update
them. ns0 would remain PowerDNS (frontend, zone edits for customers, etc.).
We are looking at changing due to performance issues - "dns water torture",
"random subdomain attacks", or whatever we're calling this these days.
Our test environment is more or less set up as described here:
* https://nick.bouwhuis.net/posts/2024-12-31-catalog-zones-powerdns-knot/
This is similar to the architecture described here:
*
https://indico.dns-oarc.net/event/47/contributions/1008/attachments/963/185…
(Klaus from nic.at)
For some zones, we are secondary to a customer's zone. In that case the
primary IPs are listed in PowerDNS metadata. I am trying to wrap my head
around how this could work seamlessly while keeping the same workflow -
add the zone to PowerDNS, and it gets replicated via the catalog zone to
ns1/ns2 (knot). Does anyone have this working? Secondaries are mentioned
in the PDF above, but no details are given.
The issues appear to be at least these two things:
1) How do we tell ns1/ns2 (knot) which IPs are the primaries for these zones?
The only thing I can think of is a separate script that generates a knot
config file with this info - effectively the same as "back in the day" with
BIND. That completely negates the point of catalog zones for the zones
where we are secondary. RFC 9432 does address this:
"Catalog zones on secondary name servers would have to be set up manually,
perhaps as static configuration, similar to how ordinary DNS zones are
configured when catalog zones or another automatic configuration mechanism
are not in place. "
The RFC then says you still have to keep the zone in the catalog anyhow -
it's not immediately clear to me how or why - and how it could be
configured per the last sentence (manually in the knot conf) as well as in
the catalog - wouldn't that be two declarations of the same zone?
"Additionally, the secondary needs to be configured as a catalog consumer
for the catalog zone to enable processing of the member zones in the
catalog, such as automatic synchronization of the member zones for
secondary service"
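One angle worth testing (a sketch only - names and addresses below are placeholders, nothing is taken from the actual environment): Knot can list several templates under `catalog-template` on an interpreting catalog zone, and a member's "group" property in the catalog then selects which template applies; the first listed template is the default. A per-customer group could therefore carry a per-customer `master` without per-zone static config:

```yaml
# knot.conf sketch -- remotes, template names and the catalog
# zone name are all hypothetical.
remote:
  - id: ns0
    address: 192.0.2.1
  - id: customer-a-primary
    address: 198.51.100.7

template:
  - id: members-default        # members transferred from ns0
    master: ns0
  - id: customer-a             # members pulled from the customer's primary
    master: customer-a-primary

zone:
  - domain: catalog.invalid.
    master: ns0
    catalog-role: interpret
    catalog-template: [ members-default, customer-a ]
```

The producer side (here, whatever generates the catalog zone from PowerDNS) would set the group property on the relevant member zones to "customer-a".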
2) How would NOTIFY work? Our hidden ns0 (powerdns) keeps a copy of the
zones, but ns1/ns2 would be notified by the actual primary, so ns0 would
fall out of date. Does knot have something like also-notify to always
notify a given server? This may or may not be a problem, but without it
the zone data on ns0 would become completely stale. Some customers log
into our web portal to view the records of their secondary zones and
expect them to match.
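On the NOTIFY question, Knot's per-zone (or per-template) `notify` option lists remotes that receive NOTIFY on zone changes, which plays the role of BIND's also-notify. A sketch with placeholder names:

```yaml
# knot.conf sketch -- remote ids and addresses are hypothetical.
remote:
  - id: ns0
    address: 192.0.2.1
  - id: customer-a-primary
    address: 198.51.100.7

template:
  - id: customer-secondary
    master: customer-a-primary
    notify: ns0          # always notify the hidden PowerDNS box too
```

Whether ns0 then actually transfers the zone back in depends on its own secondary configuration, which would need to be set up on the PowerDNS side.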
If anyone has operational experience with this or just a big cluebat to hit
me with - let me know.
Cheers,
Chris