Hello guys,
there has been a request in our issue tracker [1] to enable the
IPV6_USE_MIN_MTU socket option [2] for IPv6 UDP sockets in Knot DNS.
This option makes the operating system send responses with a maximal
fragment size of 1280 bytes (the minimal MTU required by the IPv6
specification).
The reasoning is based on a draft by Mark Andrews from 2012 [3]. I
wonder whether that reasoning is still valid in 2016. And I'm afraid
that enabling this option could enlarge the window for possible DNS
cache poisoning attacks.
We would appreciate any feedback on your operational experience with DNS
on IPv6 related to packet fragmentation.
[1] https://gitlab.labs.nic.cz/labs/knot/issues/467
[2] https://tools.ietf.org/html/rfc3542#section-11.1
[3] https://tools.ietf.org/html/draft-andrews-dnsext-udp-fragmentation-01
Thanks and regards,
Jan
Hi,
Recently, we noticed a few of our Knot slaves repeatedly doing zone transfers. After enabling zone-related logging, these messages helped narrow down the problem:
Aug 8 17:42:14 f2 knot[31343]: warning: [our.zone.] journal: unable to make free space for insert
Aug 8 17:42:14 f2 knot[31343]: warning: [our.zone.] IXFR, incoming, 1.2.3.4@53: failed to write changes to journal (not enough space provided)
These failures apparently caused the transfers to occur over and over. Not all of the zones being served showed up in these messages, but I'm fairly sure that the ones with a high rate of change were more likely to. I do know there was plenty of disk space. A couple of the tunables looked relevant:
- max-journal-db-size: we didn't hit this limit (used ~450M of the 20G default limit)
- max-journal-usage: we might have hit this limit. The default is 100M. I increased it a couple of times, but the problem didn't go away.
Eventually, we simply removed the journal database and restarted the server, and the repeated transfers stopped. At first I suspected that the server had somehow lost track of how much space was allocated, but that's a flimsy theory: I don't have any hard evidence, and these processes had run under high load for months without trouble. On reflection, hitting the max-journal-usage limit seems more likely. Given that:
1. Are the messages above indeed evidence of hitting the max-journal-usage limit?
2. Is there a way to see the space occupancy of each zone in the journal, so we might tune the threshold for individual zones?
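In case it helps the discussion, the workaround we are considering can be
sketched as a config fragment (hypothetical zone name and value; assumes the
per-zone max-journal-usage option as documented for Knot 2.4+):

```yaml
zone:
  - domain: our.zone
    # raise the per-zone journal cap above the 100M default
    max-journal-usage: 1G
```

This only papers over the problem for one high-churn zone, though, which is why per-zone occupancy visibility would help.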
On the off chance that there is a bug in this area: we are running a slightly older development build, a branch off 2.5.0-dev that adds some non-standard, minimal EDNS0 client-subnet support we were interested in. The branch is: https://github.com/CZ-NIC/knot/tree/ecs-patch.
Thanks,
Chuck
Hello,
I apologize if I am writing to the wrong address; I was given this one
over the phone by your customer line.
I would be interested in a professional installation and configuration
of your DNS servers at our company.
We are a company providing internet services, and this would concern
internal DNS resolvers for our customers, running on our own
(probably virtualized) hardware.
It would be a paid one-time installation and initial setup, possibly
including some consultation
(unfortunately, I cannot afford your support plan).
So I would need to know whether you provide such services (or whom I
could contact instead), and at least a rough estimate of the price of
the work described above.
Thank you in advance for your time and feedback.
Best regards,
Milan Černý
Head of Information Systems
Planet A, a.s.
operator of the AIM network
company offices:
U Hellady 4
140 00 Praha 4 - Michle
www.a1m.cz
+420 246 089 203 tel
+420 246 089 210 fax
+420 603 293 561 gsm
milan.cerny(a)a1m.cz