Hi Volker,
thank you for your question.
Your suggestion is almost correct; just one small correction:
knotc zone-freeze $ZONE
# wait for possibly still-running events to finish (check the logs manually)
knotc zone-flush $ZONE   # optionally with '-f' if zone synchronization is disabled in the config
$EDITOR $ZONEFILE        # you must increase the SOA serial if you make any changes to the zonefile
knotc zone-reload $ZONE
knotc zone-thaw $ZONE
Reload before thaw: after thaw, some events may start processing, which
would make reloading the modified zonefile problematic.
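Put together as a script, a minimal sketch (the zone name and zonefile path
are placeholders, and the sleep is just a crude stand-in for watching the logs):

#!/bin/sh
ZONE=example.com.                         # placeholder zone name
ZONEFILE=/var/lib/knot/example.com.zone   # placeholder zonefile path

knotc zone-freeze $ZONE    # stop planned zone events
sleep 5                    # crude wait for still-running events; better to check the logs
knotc zone-flush $ZONE     # write the current zone contents to the zonefile
$EDITOR $ZONEFILE          # edit; increase the SOA serial if anything changed
knotc zone-reload $ZONE    # load the edited zonefile while the zone is still frozen
knotc zone-thaw $ZONE      # resume event processing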
BR,
Libor
On 5. 9. 2017 at 23:17, Volker Janzen wrote:
> Hi,
>
> I've set up Knot to handle DNSSEC signing for a couple of zones. I'd like to update the zonefiles on disk with an editor, and I want to clarify which steps need to be performed to safely edit a zonefile on disk.
>
> I currently try this:
>
> knotc zone-freeze $ZONE
> knotc zone-flush $ZONE
> $EDITOR $ZONE
> knotc zone-thaw $ZONE
> knotc zone-reload $ZONE
>
> As far as I can see, Knot increases the serial on reload, and the slaves will be notified.
>
> Is this the correct command sequence?
>
>
> Regards
> Volker
Hi,
Recently, we noticed a few of our Knot slaves repeatedly doing zone transfers. After enabling zone-related logging, these messages helped narrow down the problem:
Aug 8 17:42:14 f2 knot[31343]: warning: [our.zone.] journal: unable to make free space for insert
Aug 8 17:42:14 f2 knot[31343]: warning: [our.zone.] IXFR, incoming, 1.2.3.4@53: failed to write changes to journal (not enough space provided)
These failures apparently caused the transfers to occur over and over. Not all of the zones being served showed up in these messages, but I'm pretty sure the ones with a high rate of change were more likely to. I do know there was plenty of disk space. A couple of the tunables looked relevant:
- max-journal-db-size: we didn't hit this limit (we used ~450M of the 20G limit, which is the default)
- max-journal-usage: we might have hit this limit. The default is 100M. I increased it a couple of times, but the problem didn't go away.
Eventually, we simply removed the journal database and restarted the server, and the repeated transfers stopped. At first I suspected that it was somehow losing track of how much space was being allocated, but that's a flimsy theory: I don't really have any hard evidence, and these processes had run at a high load for months without trouble. On reflection, hitting the max-journal-usage limit seems more likely. Given that:
1. Are the messages above indeed evidence of hitting the max-journal-usage limit?
2. Is there a way to see the space occupancy of each zone in the journal, so we might tune the threshold for individual zones?
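For context, here is roughly how I have been raising the per-zone limit so far (our.zone. stands in for one of the high-churn zones; the syntax follows our 2.5-based configuration, so please correct me if the option placement is wrong):

template:
  - id: default
    max-journal-db-size: 20G   # overall journal DB limit; we never hit this

zone:
  - domain: our.zone.
    max-journal-usage: 1G      # raised from the 100M default for this zone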
On the odd chance that there is a bug in this area: we are using a slightly older development variant, a branch off 2.5.0-dev that has some non-standard, minimal EDNS0 client-subnet support we were interested in. The branch is: https://github.com/CZ-NIC/knot/tree/ecs-patch.
Thanks,
Chuck
Hello,
I apologize if I am writing to the wrong address; I was given it when I called your customer line.
I am interested in a professional installation and configuration of your DNS servers at our company.
We are a company providing internet services, and this would concern internal DNS resolvers for our customers,
running on our own (probably virtualized) hardware.
It would be a one-time paid installation and initial setup, possibly including some consulting
(unfortunately, I cannot afford your support plan).
So I would need to know whether you provide such services (or whom I should contact), and at least
a rough estimate of the price of the work described above.
Thank you in advance for your time and feedback.
Best regards,
Milan Černý
Head of Information Systems
Planet A, a.s.
operator of the AIM network
company offices:
U Hellady 4
140 00 Praha 4 - Michle
www.a1m.cz
+420 246 089 203 tel
+420 246 089 210 fax
+420 603 293 561 gsm
milan.cerny(a)a1m.cz
Hello,
I am fairly new to Knot DNS 2.5.3, and to DNSSEC as well.
I successfully migrated from a non-DNSSEC BIND to Knot DNS, and it is working like a charm.
Now I would like to migrate one zone to DNSSEC.
Everything is working *except* getting the keys (ZSK/KSK) to publish to Gandi.
I followed this tutorial (in French, sorry): https://www.swordarmor.fr/gestion-automatique-de-dnssec-avec-knot.html. It seems to work, BUT the way the DNSKEY is published has changed, and I didn't find a nice way to retrieve it.
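For reference, one way I can at least inspect what the server currently publishes (127.0.0.1 and example.com are placeholders; dnssec-dsfromkey is from the BIND tools I still have installed):

# show the DNSKEY RRset the server currently serves
dig @127.0.0.1 example.com. DNSKEY +multiline +noall +answer

# derive the DS records to paste into Gandi's interface
dig @127.0.0.1 example.com. DNSKEY +noall +answer | dnssec-dsfromkey -f - example.com.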
Any ideas?
Regards,
Xavier
Dear Knot Resolver users,
Knot Resolver 1.3.2 maintenance update has been released:
Security
--------
- fix possible opportunities to use insecure data from cache as keys
for validation
Bugfixes
--------
- daemon: check existence of config file even if rundir isn't specified
- policy.FORWARD and STUB: use RTT tracking to choose servers (#125, #208; see the example after this list)
- dns64: fix CNAME problems (#203); it still won't work with policy.STUB.
- hints: better interpretation of hosts-like files (#204)
also, error out if a bad entry is encountered in the file
- dnssec: handle unknown DNSKEY/DS algorithms (#210)
- predict: fix the module, broken since 1.2.0 (#154)
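For those using the forwarding policies mentioned above, a minimal configuration sketch (the upstream address 192.0.2.1 is a placeholder):

-- forward all queries to a single upstream;
-- server selection now benefits from RTT tracking
policy.add(policy.all(policy.FORWARD('192.0.2.1')))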
Improvements
------------
- embedded LMDB fallback: update 0.9.18 -> 0.9.21
It is recommended to update from 1.3.x, and it's strongly recommended to
update from older versions, as older branches are no longer supported.
Full changelog:
https://gitlab.labs.nic.cz/knot/resolver/raw/v1.3.2/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.3.2.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.3.2.tar.xz.asc
Documentation:
http://knot-resolver.readthedocs.io/en/v1.3.2/
--Vladimir
Hello Knot developers,
Suppose I am running two Knot DNS instances. They're listening on
different interfaces and slaving different sets of zones. If the
"storage" variable is the same for both, then the two instances of
knotd will both try to write into storage/journal and storage/timers.
Is this safe to do? My understanding of LMDB is that a database can be
shared between different threads and processes because it uses locking.
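If it turns out not to be safe, I would fall back to a separate storage path
per instance, roughly like this (paths are placeholders):

# knot-a.conf (first instance)
template:
  - id: default
    storage: /var/lib/knot/instance-a   # journal and timers DBs live under here

# knot-b.conf (second instance)
template:
  - id: default
    storage: /var/lib/knot/instance-b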
Regards,
Anand