Hi,
I've set up Knot to handle DNSSEC signing for a couple of zones. I like to update zone files on disk with an editor, and I want to clarify which steps need to be performed to edit a zone file on disk safely.
This is what I currently try:
knotc zone-freeze $ZONE
knotc zone-flush $ZONE
$EDITOR $ZONE
knotc zone-thaw $ZONE
knotc zone-reload $ZONE
As far as I can see, Knot increases the serial on reload and the slaves will be notified.
Is this the correct command sequence?
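For reference, this is the same sequence wrapped in a small script (the zone name and zone file path are placeholders; the path has to match the zone's "file" setting in knot.conf):
#!/bin/sh
set -e
ZONE=example.com                           # placeholder zone name
ZONEFILE=/var/lib/knot/example.com.zone    # placeholder zone file path
knotc zone-freeze "$ZONE"    # postpone automatic zone-changing events
knotc zone-flush "$ZONE"     # write the current zone contents to the zone file
"${EDITOR:-vi}" "$ZONEFILE"  # edit the flushed zone file
knotc zone-thaw "$ZONE"      # lift the freeze again
knotc zone-reload "$ZONE"    # reload the edited zone file from disk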
Regards
Volker
Hi,
Recently, we noticed a few of our Knot slaves repeatedly doing zone transfers. After enabling zone-related logging, these messages helped narrow down the problem:
Aug 8 17:42:14 f2 knot[31343]: warning: [our.zone.] journal: unable to make free space for insert
Aug 8 17:42:14 f2 knot[31343]: warning: [our.zone.] IXFR, incoming, 1.2.3.4@53: failed to write changes to journal (not enough space provided)
These failures apparently caused the transfers to occur over and over. Not all the zones being served showed up in these messages, but I'm pretty sure that the ones with a high rate of change were more likely to do so. I do know there was plenty of disk space. A couple of the tunables looked relevant:
max-journal-db-size: we didn't hit this limit (used ~450M of 20G limit, the default)
max-journal-usage: we might have hit this limit. The default is 100M. I increased it a couple of times, but the problem didn't go away.
Eventually, we simply removed the journal database and restarted the server, and the repeated transfers stopped. At first I suspected that it was somehow losing track of how much space was being allocated, but that's a flimsy theory: I don't really have any hard evidence, and these processes had run at a high load for months without trouble. On reflection, hitting the max-journal-usage limit seems more likely. Given that:
1. Are the messages above indeed evidence of hitting the max-journal-usage limit?
2. Is there a way to see the space occupancy of each zone in the journal, so we might tune the threshold for individual zones?
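(To make question 2 concrete: the per-zone tuning I have in mind would look roughly like the snippet below, assuming max-journal-usage is accepted in the template and zone sections; the names and sizes are placeholders.)
template:
  - id: default
    max-journal-usage: 200M    # default per-zone journal budget
zone:
  - domain: our.zone.
    max-journal-usage: 1G      # a busier zone gets a larger budget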
On the off chance that there is a bug in this area: we are using a slightly older dev variant, a branch off 2.5.0-dev that has some non-standard, minimal EDNS0 client-subnet support we were interested in. The branch is: https://github.com/CZ-NIC/knot/tree/ecs-patch.
Thanks,
Chuck
Hello,
I apologise if I am reaching out to the wrong address; I was given this one over the phone by your customer line.
I would be interested in a professional installation and configuration of your DNS servers at our company.
We are a company providing internet services, and this would concern internal DNS resolvers for our customers,
running on our (probably virtualized) hardware.
It would be a paid, one-off installation and initial setup, possibly including some consultation
(unfortunately, I cannot afford your support plan).
I would therefore need to know whether you provide such services (or whom I could contact instead), and at least
a rough estimate of the price of the work described above.
Thank you in advance for your time and your reply.
Best regards,
Milan Černý
Head of Information Systems
Planet A, a.s.
operator of the AIM network
company offices:
U Hellady 4
140 00 Praha 4 - Michle
www.a1m.cz
+420 246 089 203 tel
+420 246 089 210 fax
+420 603 293 561 gsm
milan.cerny(a)a1m.cz
Hello,
I am a bit new to Knot DNS 2.5.3, and to DNSSEC as well.
I have successfully migrated from a non-DNSSEC BIND setup to Knot DNS, and this is working like a charm.
Now I would like to migrate one zone to DNSSEC.
Everything is working *except* getting the keys (ZSK/KSK) to publish to Gandi.
I followed this tutorial (in French, sorry): https://www.swordarmor.fr/gestion-automatique-de-dnssec-avec-knot.html which seems to work, BUT the publication of the DNSKEY has changed, and I didn't find a nice way to get it.
Any ideas?
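What I have been trying so far is roughly this (just a sketch; the zone name and KASP directory are placeholders, and I'm assuming keymgr's "ds" subcommand is available in 2.5):
# DNSKEY records the running server actually publishes:
dig @127.0.0.1 example.com DNSKEY +noall +answer
# DS records derived from the KSK, to paste into Gandi's interface
# (assumes the KASP database lives under /var/lib/knot/keys):
keymgr -d /var/lib/knot/keys example.com. ds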
Regards,
Xavier
Dear Knot Resolver users,
Knot Resolver 1.3.2 maintenance update has been released:
Security
--------
- fix possible opportunities to use insecure data from cache as keys
for validation
Bugfixes
--------
- daemon: check existence of config file even if rundir isn't specified
- policy.FORWARD and STUB: use RTT tracking to choose servers (#125, #208)
- dns64: fix CNAME problems (#203) It still won't work with policy.STUB.
- hints: better interpretation of hosts-like files (#204)
also, error out if a bad entry is encountered in the file
- dnssec: handle unknown DNSKEY/DS algorithms (#210)
- predict: fix the module, broken since 1.2.0 (#154)
Improvements
------------
- embedded LMDB fallback: update 0.9.18 -> 0.9.21
It is recommended to update from 1.3.x, and it's strongly recommended to
update from older versions, as older branches are no longer supported.
Full changelog:
https://gitlab.labs.nic.cz/knot/resolver/raw/v1.3.2/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.3.2.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.3.2.tar.xz.asc
Documentation:
http://knot-resolver.readthedocs.io/en/v1.3.2/
--Vladimir
Hello Knot developers,
Suppose I am running two Knot DNS instances. They're listening on
different interfaces, and slaving different sets of zones. If the
"storage" variable is the same for these two, then the two instances of
knotd will both try to write into storage/journal and storage/timers.
Is this safe to do? My understanding of LMDB is that a database can be
shared between different threads and processes because it uses locking.
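To illustrate, the setup in question looks roughly like this (addresses and paths are placeholders):
# /etc/knot/instance-a.conf
server:
    listen: 192.0.2.1@53
template:
  - id: default
    storage: /var/lib/knot    # journal/ and timers/ end up in here
# /etc/knot/instance-b.conf
server:
    listen: 192.0.2.2@53
template:
  - id: default
    storage: /var/lib/knot    # the same directory as instance A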
Regards,
Anand
Hello Knot DNS developers,
I have an observation about newer versions of Knot which use the new
single LMDB-based journal.
Suppose I have 3 slave zones configured in my knot.conf. Let's call them
zone1, zone2 and zone3. Knot loads the zones from the master and writes
data into the journal. Now suppose I remove one zone (zone3) from the
config, and reload knot. The zone is no longer configured, and querying
knot for it returns a REFUSED response. So far all is according to my
expectation.
However, if I run "kjournalprint <path-to-journal> -z", I get:
zone1.
zone2.
zone3.
So the zone is no longer configured, but its data persists in the
journal. If I run "knotc -f zone-purge zone3." I get:
error: [zone3.] (no such zone found)
I'm told that I should have done the purge first, *before* removing the
zone from the configuration. However, I find this problematic for two reasons:
1. I have to remember to do this, and I'm not used to this modus
operandi; and
2. this is impossible to do on a slave that is configured automatically
from template files. On our slave servers, where we have around 5000
zones, the zones are configured by templating out a knot.conf. Adding
zones is fine, but if a zone is being deleted, it will just disappear
from knot.conf. We keep no state, so I don't know which zone is
being removed and cannot purge it beforehand.
Now, the same is kind of true for Knot < 2.4. But... there is one major
difference. Under older versions of Knot, zone data was written into
individual files, and journals were written into individual .db files. I
can run a job periodically that compares the zones in knot.conf with the
files on disk and deletes those files that have no matching zone in the
config. This keeps the /var/lib/knot directory clean.
But in newer versions of Knot, there is no way to purge the journal of
zone data once a zone is removed from the configuration. For an operator
like me, this is a problem. I would like "knotc zone-purge" to be able
to operate on zones that are no longer configured, and remove stale data
anyway.
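For what it's worth, the leftovers can still be detected with something like the sketch below (the paths and the knot.conf layout are assumptions); it only reports the stale zones, since there is nothing I can run to actually purge them:
#!/bin/sh
CONF=/etc/knot/knot.conf         # assumed config path
JOURNAL=/var/lib/knot/journal    # assumed journal location
# zones present in the configuration (one "domain:" line per zone assumed)
awk '/domain:/ { print $NF }' "$CONF" | sed 's/\.$//' | sort > /tmp/conf-zones
# zones that still have data in the shared journal
kjournalprint "$JOURNAL" -z | sed 's/\.$//' | sort > /tmp/journal-zones
# zones left in the journal that are no longer configured
comm -13 /tmp/conf-zones /tmp/journal-zones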