Hello guys,
there has been a request in our issue tracker [1] to enable the
IPV6_USE_MIN_MTU socket option [2] for IPv6 UDP sockets in Knot DNS.
This option makes the operating system send responses with a maximum
fragment size of 1280 bytes (the minimum MTU required by the IPv6
specification).
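For illustration, here is a minimal C sketch of how the option is set
on a socket, following the RFC 3542 API; the IPV6_USE_MIN_MTU macro is
not defined on every platform, hence the guard:

#include <netinet/in.h>
#include <sys/socket.h>

/* Sketch only: ask the stack to fragment outgoing IPv6 UDP packets at
 * the 1280-byte minimum MTU. RFC 3542 defines the values -1 (default),
 * 0 (never) and 1 (always use the minimum MTU). */
static int use_min_mtu(int fd)
{
#ifdef IPV6_USE_MIN_MTU
        int on = 1;
        return setsockopt(fd, IPPROTO_IPV6, IPV6_USE_MIN_MTU,
                          &on, sizeof(on));
#else
        (void)fd;
        return -1; /* option not available on this platform */
#endif
}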
The reasoning is based on a draft by Mark Andrews from 2012 [3]. I
wonder whether that reasoning is still valid in 2016, and I'm afraid
that enabling this option could enlarge the window for possible DNS
cache poisoning attacks.
We would appreciate any feedback on your operational experience with
DNS over IPv6 as it relates to packet fragmentation.
[1] https://gitlab.labs.nic.cz/labs/knot/issues/467
[2] https://tools.ietf.org/html/rfc3542#section-11.1
[3] https://tools.ietf.org/html/draft-andrews-dnsext-udp-fragmentation-01
Thanks and regards,
Jan
Hello knot,
I have recently started a long overdue migration to Knot 2.x and I have noticed that the server.workers config stanza is now split into three separate stanzas (server.tcp-workers, server.udp-workers and server.background-workers). Although this is great for flexibility, it does make automation a little bit more difficult. With the 1.6 configuration I could easily say something like the following:
workers = $server_cpu_count - 2
This meant I would always have 2 CPU cores available for other processes, e.g. doc, tcpdump. With the new configuration I would need to do something like the following:
$available_workers = $server_cpu_count - 2
$udp_workers = $available_workers * 0.6
$tcp_workers = $available_workers * 0.3
$background_workers = $available_workers * 0.1
The above code lacks error detection and rounding corrections, which would add further complexity, and it also lacks the intelligence that is already available in Knot to balance resources. As you have already implemented logic in Knot to ensure CPUs are correctly balanced, I wonder if you could add back a workers configuration to act as the upper bound used by the *-workers options, such that the *-workers defaults would read:
"Default: auto-estimated optimal value based on the number of online CPUs, or the value set by `workers`, whichever is lower"
Thanks
John
Hi,
I just upgraded my Knot DNS to the newest PPA release 2.5.1-3, after
which the server process refuses to start. Relevant syslog messages:
Jun 15 11:19:41 vertigo knotd[745]: error: module, invalid directory
'/usr/lib/x86_64-linux-gnu/knot'
Jun 15 11:19:41 vertigo knotd[745]: critical: failed to open
configuration database '' (invalid parameter)
Could this have something to do with the following change:
knot (2.5.1-3) unstable; urgency=medium
.
* Enable dnstap module and set default moduledir to multiarch path
Antti
Hi there,
I'm having some issues configuring dnstap. I'm using Knot version 2.5.1,
installed via the `knot` package on Debian 3.16.43-2. As per this
documentation
<https://www.knot-dns.cz/docs/2.5/html/modules.html#dnstap-dnstap-traffic-lo…>,
I've added the following lines to my config file:
mod-dnstap:
  - id: capture_all
    sink: "/etc/knot/capture"

template:
  - id: default
    global-module: mod-dnstap/capture_all
But when starting knot (e.g. by `sudo knotc conf-begin`), I get the message:
error: config, file 'etc/knot/knot.conf', line 20, item 'mod-dnstap', value
'' (invalid item)
error: failed to load configuration file '/etc/knot/knot.conf' (invalid
item)
I also have the same setup on an Ubuntu 16.04.1 machine running Knot
version 2.4.0-dev, and it works fine.
Any idea what might be causing the issue here? Did the syntax for
mod-dnstap change? Should I have installed from source? I remember that
the first time I did this there was some special option you needed to
compile a dependency with to use dnstap, but I couldn't find
documentation for it when I looked.
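For what it's worth, my vague memory of the source build is something
like the following (unverified, from memory; the flag and library
names may be off):

# from memory, unverified: dnstap is a compile-time option in Knot
# and needs the fstrm and protobuf-c development libraries installed
./configure --enable-dnstap
make && sudo make install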
Thanks!
-Sarah
Hi,
after upgrade to 2.5.1 the output of knotc zone-status shows strange
timestamps for refresh and expire:
[example.net.] role: slave | serial: 1497359235 | transaction: none |
freeze: no | refresh: in 415936h7m15s | update: not scheduled |
expiration: in 416101h7m15s | journal flush: not scheduled | notify: not
scheduled | DNSSEC resign: not scheduled | NSEC3 resalt: not scheduled |
parent DS query: not scheduled
However, the zone is refreshed within the correct interval, so it seems
it's just a display issue; 415936 hours is roughly 47.5 years, i.e.
about the time elapsed since the Unix epoch, as if an absolute
timestamp were being printed as a relative one. Is this something
specific to our setup?
Regards
André
Dear Knot Resolver users,
CZ.NIC is proud to announce the release of Knot Resolver 1.3.0.
The biggest feature of this release is support for DNSSEC validation in
the forwarding mode, a feature many people have been eagerly awaiting.
We have also squeezed in a refactoring of AD flag handling and several
other bugfixes. 1.3.0 is now the recommended release to run on your
recursive nameservers.
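For example, with a trusted resolver at 192.0.2.1 (a placeholder
address), the following kresd configuration snippet now forwards with
full DNSSEC validation:

-- forward everything to 192.0.2.1 (placeholder) and validate
policy.add(policy.all(policy.FORWARD('192.0.2.1')))
-- the previous non-validating behaviour, now with caching:
-- policy.add(policy.all(policy.STUB('192.0.2.1')))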
Here's the 1.3.0 changelog:
Security
--------
- Refactor handling of AD flag and security status of resource records.
In some cases it was possible for secure domains to get cached as
insecure, even for a TLD, leading to disabled validation.
It also fixes answering with non-authoritative data about nameservers.
Improvements
------------
- major feature: support for forwarding with validation (#112).
The old policy.FORWARD action now does that; the previous non-validating
mode is still available as policy.STUB, except that it also uses caching (#122).
- command line: specify ports via @ but still support # for compatibility
- policy: recognize 100.64.0.0/10 as local addresses
- layer/iterate: *do* retry repeatedly if REFUSED, as we can't yet easily
retry with other NSs while avoiding retrying with those who REFUSED
- modules: allow changing the directory where modules are found,
and do not search the default library path anymore.
Bugfixes
--------
- validate: fix insufficient caching for some cases (relatively rare)
- avoid putting "duplicate" record-sets into the answer (#198)
Full changelog:
https://gitlab.labs.nic.cz/knot/resolver/raw/v1.3.0/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.3.0.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.3.0.tar.xz.asc
Documentation:
http://knot-resolver.readthedocs.io/en/latest/
Cheers,
--
Ondřej Surý -- Technical Fellow
--------------------------------------------
CZ.NIC, z.s.p.o. -- Laboratoře CZ.NIC
Milesovska 5, 130 00 Praha 3, Czech Republic
mailto:ondrej.sury@nic.cz https://nic.cz/
--------------------------------------------
Hi,
we updated Knot from 2.4.3 to 2.5.1, and the include statement does not
seem to work anymore:
error: config, file '/etc/knot/zones.conf', line 5, item 'domain', value
'example.net' (duplicate identifier)
error: config, file '/etc/knot/knot.conf', line 73, include ''
(duplicate identifier)
error: failed to load configuration file '/etc/knot/knot.conf'
(duplicate identifier)
cat > /etc/knot/knot.conf << 'EOF'
# THIS CONFIGURATION IS MANAGED BY PUPPET
# see man 5 knot.conf for all available configuration options

server:
  user: knot:knot
  listen: ["0.0.0.0@53", "::@53"]
  version:

log:
  - target: syslog
    any: info

key:
  - id: default
    algorithm: hmac-sha512
    secret: pLEG3Z6uvMtKiQsmOp4tMDyyxENLyJGx8kIbud24tfHdY0uRO82Qix8D2opoA/rndcd2fdt9Ba1LhHDefCK1VQ==

remote:
  - id: ns1
    address: ["xxxx1", "yyyy1"]
    key: default
  - id: ns2
    address: ["xxxx2", "yyyy2"]
    key: default
  - id: ns3
    address: ["xxxx3", "yyyy3"]
    key: default

acl:
  - id: notify_from_master
    action: notify
    address: ["xxxx1", "yyyy1"]
    key: default
  - id: transfer_to_slaves
    action: transfer
    address: ["xxxx2", "xxxx2", "xxxx3", "yyyy3"]
    key: default

policy:
  - id: default_rsa
    algorithm: RSASHA256
    ksk-size: 2048
    zsk-size: 1024

template:
  - id: default
    file: /var/lib/knot/zones/%s.zone
    kasp-db: /var/lib/knot/kasp
    storage: /var/lib/knot
  - id: master_default
    acl: ["transfer_to_slaves"]
    file: /var/lib/knot/zones/%s.zone
    ixfr-from-differences: on
    notify: ["ns2", "ns3"]
    serial-policy: unixtime
    storage: /var/lib/knot
  - id: master_dnssec
    acl: ["transfer_to_slaves"]
    dnssec-policy: default_rsa
    dnssec-signing: on
    file: /var/lib/knot/zones/%s.zone
    notify: ["ns2", "ns3"]
    storage: /var/lib/knot
    zonefile-sync: -1
  - id: slave
    acl: ["notify_from_master"]
    master: ns1
    serial-policy: unixtime
    storage: /var/lib/knot

include: "/etc/knot/zones.conf"
EOF
cat > /etc/knot/zones.conf << 'EOF'
# THIS CONFIGURATION IS MANAGED BY PUPPET
# see man 5 knot.conf for all available configuration options

zone:
  - domain: example.net
    template: slave
  - domain: example.com
    template: slave
  - domain: example.org
    template: slave
EOF
If I add the content from zones.conf directly into knot.conf, it works.
It seems like the included file gets parsed twice: when I add a domain
twice, it fails at the line with the duplicate zone, and if there are
no duplicate domains in the file, it always fails at the first domain
found.
Is this a bug or something with our setup?
Regards
André