Hello guys,
there has been a request in our issue tracker [1] to enable the
IPV6_USE_MIN_MTU socket option [2] for IPv6 UDP sockets in Knot DNS.
This option makes the operating system send responses with a
maximum fragment size of 1280 bytes (the minimum MTU required by the
IPv6 specification).
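For context, setting the option at the socket level looks roughly like
this (a minimal sketch per RFC 3542, not the actual Knot DNS code; the
helper name is made up):

    /* Open an IPv6 UDP socket and request that outgoing packets be
     * limited to the IPv6 minimum MTU, where the platform provides
     * IPV6_USE_MIN_MTU (RFC 3542, <netinet/in.h>). */
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <stdio.h>

    int open_udp6_socket(void)
    {
        int fd = socket(AF_INET6, SOCK_DGRAM, 0);
        if (fd < 0) {
            perror("socket");
            return -1;
        }
    #ifdef IPV6_USE_MIN_MTU
        int on = 1;  /* 1 = always fragment above 1280 bytes */
        if (setsockopt(fd, IPPROTO_IPV6, IPV6_USE_MIN_MTU,
                       &on, sizeof(on)) < 0) {
            perror("setsockopt(IPV6_USE_MIN_MTU)");
        }
    #endif
        return fd;
    }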
The reasoning is based on the draft by Mark Andrews from 2012 [3]. I
wonder whether that reasoning is still valid in 2016, and I'm afraid
that enabling this option could widen the window for possible DNS cache
poisoning attacks.
We would appreciate any feedback on your operational experience with DNS
on IPv6 related to packet fragmentation.
[1] https://gitlab.labs.nic.cz/labs/knot/issues/467
[2] https://tools.ietf.org/html/rfc3542#section-11.1
[3] https://tools.ietf.org/html/draft-andrews-dnsext-udp-fragmentation-01
Thanks and regards,
Jan
Hello,
I am a bit new to Knot DNS 2.5.3 and to DNSSEC as well.
I have successfully migrated from a non-DNSSEC BIND setup to Knot DNS, and it is working like a charm.
Now I would like to migrate one zone to DNSSEC.
Everything is working *except* publishing the keys (ZSK/KSK) to Gandi.
I followed this tutorial (in French, sorry): https://www.swordarmor.fr/gestion-automatique-de-dnssec-avec-knot.html. It seems to work, BUT the way the DNSKEY is published has changed, and I haven't found a nice way to retrieve it.
Do you have any ideas?
Regards,
Xavier
Dear Knot Resolver users,
Knot Resolver 1.3.2 maintenance update has been released:
Security
--------
- fix possible opportunities to use insecure data from cache as keys
for validation
Bugfixes
--------
- daemon: check existence of config file even if rundir isn't specified
- policy.FORWARD and STUB: use RTT tracking to choose servers (#125, #208)
- dns64: fix CNAME problems (#203); it still won't work with policy.STUB.
- hints: better interpretation of hosts-like files (#204)
also, error out if a bad entry is encountered in the file
- dnssec: handle unknown DNSKEY/DS algorithms (#210)
- predict: fix the module, broken since 1.2.0 (#154)
Improvements
------------
- embedded LMDB fallback: update 0.9.18 -> 0.9.21
It is recommended to update from 1.3.x, and it's strongly recommended to
update from older versions, as older branches are no longer supported.
Full changelog:
https://gitlab.labs.nic.cz/knot/resolver/raw/v1.3.2/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.3.2.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.3.2.tar.xz.asc
Documentation:
http://knot-resolver.readthedocs.io/en/v1.3.2/
--Vladimir
Hello Knot developers,
Suppose I am running two Knot DNS instances. They're listening on
different interfaces, and slaving different sets of zones. If the
"storage" variable is the same for these two, then the two instances of
knotd will both try to write into storage/journal and storage/timers.
Is this safe to do? My understanding of LMDB is that a database can be
shared between different threads and processes because it uses locking.
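For reference, the locking behaviour I have in mind looks roughly like
this at the liblmdb C API level (a sketch only, not Knot's own journal
code; the environment path is made up):

    /* Two processes opening the same environment coordinate through
     * LMDB's lock file: readers run concurrently, while a write
     * transaction takes the single writer lock, so a second writer
     * blocks instead of corrupting the database. */
    #include <lmdb.h>
    #include <stdio.h>

    int main(void)
    {
        MDB_env *env;
        MDB_txn *txn;
        MDB_dbi dbi;

        mdb_env_create(&env);
        if (mdb_env_open(env, "/var/lib/knot/journal", 0, 0600) != MDB_SUCCESS) {
            fprintf(stderr, "cannot open LMDB environment\n");
            return 1;
        }

        if (mdb_txn_begin(env, NULL, 0, &txn) == MDB_SUCCESS) {
            mdb_dbi_open(txn, NULL, 0, &dbi);  /* unnamed main database */
            mdb_txn_commit(txn);
        }

        mdb_env_close(env);
        return 0;
    }

What I can't tell from this alone is whether the knotd journal layer
makes any additional assumptions beyond LMDB's own locking.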
Regards,
Anand
Hello Knot DNS developers,
I have an observation about newer versions of Knot which use the new
single LMDB-based journal.
Suppose I have 3 slave zones configured in my knot.conf. Let's call them
zone1, zone2 and zone3. Knot loads the zones from the master and writes
data into the journal. Now suppose I remove one zone (zone3) from the
config, and reload knot. The zone is no longer configured, and querying
knot for it returns a REFUSED response. So far, everything is as I
expected.
However, if I run "kjournalprint <path-to-journal> -z", I get:
zone1.
zone2.
zone3.
So the zone is no longer configured, but its data persists in the
journal. If I run "knotc -f zone-purge zone3." I get:
error: [zone3.] (no such zone found)
I'm told that I should have done the purge first, *before* removing the
zone from the configuration. However, I find this problematic for two reasons:
1. I have to remember to do this, and I'm not used to this modus
operandi; and
2. this is impossible to do on a slave that is configured automatically
from template files. On our slave servers, where we have around 5000
zones, the zones are configured by templating out a knot.conf. Adding
zones is fine, but if a zone is being deleted, it will just disappear
from knot.conf. We keep no state, so I don't know which zone is
being removed and cannot purge it beforehand.
Now, the same is kind of true for Knot < 2.4. But... there is one major
difference. Under older versions of Knot, zone data was written into
individual files, and journals were written into individual .db files. I
can run a job periodically that compares zones in knot.conf with files
on disk, and deletes those files that have no matching zones in the
config. This keeps the /var/lib/knot directory clean.
But in newer versions of Knot, there is no way to purge the journal of
zone data once a zone is removed from the configuration. For an operator
like me, this is a problem. I would like "knotc zone-purge" to be able
to operate on zones that are no longer configured, and remove stale data
anyway.
Hi Knot folks,
I just tried to view https://www.knot-dns.cz/ and it gave me an HTTP 404
error. After trying to reload it twice, I got the front page, but the
other parts of the site (documentation, download, etc) are all still
giving me HTTP 404 errors.
Regards,
Anand
Dear Knot DNS and Knot Resolver users,
in order to unite the git namespaces and make them more logical, we
have moved the repositories of Knot DNS and Knot Resolver under the
knot namespace.
The new repositories are located at:
Knot DNS
https://gitlab.labs.nic.cz/knot/knot-dns
Knot Resolver:
https://gitlab.labs.nic.cz/knot/knot-resolver
The old git:// URLs remain the same:
git://gitlabs.labs.nic.cz/knot-dns.git
git://gitlabs.labs.nic.cz/knot-resolver.git
Sorry for any inconvenience it might have caused.
Cheers,
--
Ondřej Surý -- Technical Fellow
--------------------------------------------
CZ.NIC, z.s.p.o. -- Laboratoře CZ.NIC
Milesovska 5, 130 00 Praha 3, Czech Republic
mailto:ondrej.sury@nic.cz https://nic.cz/
--------------------------------------------
Am 09.07.2017 um 12:30 schrieb Christoph Lukas:
> Hello list,
>
> I'm running knot 2.5.2 on FreeBSD.
> In an attempt to resolve a recent semantic error in one of my zonefiles,
> the $storage/$zone.db (/var/db/knot/firc.de.db) file got lost.
> Meaning: I accidentally deleted it without a backup.
> At point of deletion, the .db file was 1.6 MB in size.
> The actual zone file was kept, the journal and DNSSEC keys untouched,
> the zone still functions without any issues.
>
> The zone is configured as such in knot.conf:
> zone:
>   - domain: firc.de
>     file: "/usr/local/etc/knot/zones/firc.de"
>     notify: inwx
>     acl: acl_inwx
>     dnssec-signing: on
>     dnssec-policy: rsa
>
>
> This raises the following questions:
>
> 1) What is actually in those .db files?
> 2) Are there any adverse effects to be expected now that I don't have
> the file / need to re-create it?
> 3) How can I re-create the file?
>
> Any answers will be greatly appreciated.
>
> With kind regards,
> Christoph Lukas
>
As answered in
https://lists.nic.cz/pipermail/knot-dns-users/2017-July/001160.html
those .db files are not required anymore.
I should have read the archive first ;)
With kind regards,
Christoph Lukas