Hi,
I recently started trying out Knot DNS and it has been a pleasure so
far. I like the query modules and how easy it is to construct a query plan.
I am thinking of deploying Knot as the public-facing server and enabling RRL
on it. However, I noticed that rate limiting is applied *before* the
unsatisfied query is forwarded to the remote backend. This effectively means
that all queries are rate limited according to an intermediate error
classification.
Wouldn't it be better to apply rate limits after all stages of the query
plan have been processed? In other words, rate limit based on the final
response, rather than an intermediate state. This way you can truly use
knot as a rate-limiting, public-facing server protecting your backend
name server.
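For reference, the RRL setup I have in mind is roughly this knot.conf
fragment (the values are illustrative, not a recommendation):

```yaml
server:
    rate-limit: 200       # limited responses per second, per source netblock
    rate-limit-slip: 2    # every 2nd limited response is sent truncated
                          # instead of dropped, so legitimate clients
                          # can retry over TCP
```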
Thoughts?
Best regards,
Matthijs
Hello everyone.
CZ.NIC Labs has just released a patched version of Knot DNS. The 2.2.1
version contains some important bug fixes and a few small improvements.
Let's jump directly into it:
- The previous version was inconsistent in setting the TC flag for
delegations with glue. We have decided to modify the behavior
slightly: the TC flag is now always set if the complete glue doesn't
fit in the response.
- Logging of individual categories (server or zone) didn't work in the
previous release. This problem is now resolved. Logging of all categories
(the 'any' configuration option) was not affected.
- We have eliminated a data race in the code responsible for zone file
flushing. When multiple zone files were flushed concurrently, flushing of
some zones could fail. This problem is fixed in the new release.
- If you are running Knot DNS on OpenWRT, we advise you to update to the
new release. There was a critical bug that could cause the server
to crash.
- There were several issues with the control utility: The timeout
for reconfiguration is now parsed correctly. The "maxreaders limit
reached" error is gone. And the interactive mode now completes
multiple zone names if allowed by the command. The server will also
refuse to perform a reload if there is an active configuration
transaction.
- We have removed the switch for Link Time Optimization from the configure
script, because LTO is still rather problematic. If you want to use
LTO, set CFLAGS and LDFLAGS appropriately.
- We have improved the logging and status messages to differentiate
between an unavailable zone and a zone with zero serial.
- We have added a list of PKCS #11 devices that have been tested with
automatic DNSSEC signing.
We are looking forward to your feedback.
The sources are available as usual:
Full changelog:
https://gitlab.labs.nic.cz/labs/knot/raw/v2.2.1/NEWS
Sources:
https://secure.nic.cz/files/knot-dns/knot-2.2.1.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-dns/knot-2.2.1.tar.xz.asc
Cheers,
Jan
--
Jan Včelák, Knot DNS
CZ.NIC Labs https://www.knot-dns.cz
--------------------------------------------
Milešovská 5, 130 00 Praha 3, Czech Republic
WWW: https://labs.nic.cz https://www.nic.cz
Hey Jake,
yes it does, RPZ is supported for views as is any other policy. There's an
example of setting RPZ for a source-address subnet view in the
documentation:
http://knot-resolver.readthedocs.io/en/latest/modules.html#id3
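Roughly along the lines of the documented example, a kresd config sketch for
two groups (the subnets and RPZ file names are placeholders for your setup):

```lua
-- Load the view and policy modules.
modules = { 'view', 'policy' }

-- Each client subnet gets its own RPZ; names are placeholders.
view:addr('192.0.2.0/24',    policy.rpz(policy.DENY, 'group-a.rpz'))
view:addr('198.51.100.0/24', policy.rpz(policy.DENY, 'group-b.rpz'))
```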
Cheers,
Marek
> Does KnotDNS Resolver support the use of different RPZ's per-view?
>
> Looking to create a DNS setup that allows recursion to two separate
> groups, one using one RPZ setup, the other using another.
>
> Is this possible with Knot?
>
> Thanks,
> -jake
> _______________________________________________
> knot-dns-users mailing list
> knot-dns-users(a)lists.nic.cz
> https://lists.nic.cz/cgi-bin/mailman/listinfo/knot-dns-users
>
Hi,
Linux 4.6 was just released, and it includes a new feature called
"Kernel Connection Multiplexor":
http://kernelnewbies.org/Linux_4.6#head-d86a7a8affd7cefef85fff400e39403718b…
1.5. Kernel Connection Multiplexor, a facility for accelerating
application layer protocols
This release adds Kernel Connection Multiplexor (KCM), a facility
that provides a message-based interface over TCP for accelerating
application layer protocols. The motivation for this is based on the
observation that although TCP is a byte stream transport protocol with
no concept of message boundaries, a common use case is to implement
a framed application layer protocol running over TCP. Most TCP
stacks offer a byte stream API for applications, which places the
burden of message delineation, message I/O operation atomicity, and
load balancing in the application.
With KCM an application can efficiently send and receive application
protocol messages over TCP using a datagram interface. The kernel
provides necessary assurances that messages are sent and received
atomically. This relieves much of the burden applications have in
mapping a message based protocol onto the TCP stream. KCM also makes
application layer messages a unit of work in the kernel for the
purposes of steering and scheduling, which in turn allows a simpler
networking model in multithreaded applications. In order to
delineate messages in a TCP stream for receive in KCM, the kernel
implements a message parser based on BPF, which parses application
layer messages and returns a message length. Nearly all binary
application protocols are parseable in this manner, so KCM should be
applicable across a wide range of applications.
DNS-over-TCP is definitely amenable to this scheme, since messages are
framed with a 2-byte message length value. It also sounds like it can be
combined with recvmmsg():
Q: What about the problem of connections with a very slow rate of
incoming data? As a result your application can get storms of very
short reads. And it actually happens a lot with connections from
mobile devices, and it is a problem for servers handling a lot of
connections.
A: The storm of short reads will occur regardless of whether KCM is used
or not. KCM does have one advantage in this scenario though, it will
only wake up the application when a full message has been received,
not for each packet that makes up part of a bigger message. If a
bunch of small messages are received, the application can receive
messages in batches using recvmmsg.
Maybe this could help speed up a DNS server, or even improve resistance
against slowloris style TCP attacks.
--
Robert Edmonds
edmonds(a)debian.org
Does KnotDNS Resolver support the use of different RPZ's per-view?
Looking to create a DNS setup that allows recursion to two separate
groups, one using one RPZ setup, the other using another.
Is this possible with Knot?
Thanks,
-jake
I'm building my own RPM for Knot 2.2.0 on CentOS 7 because the
system version is still 1.x. I'm building it with systemd integration, and
I've borrowed the systemd service file from the Fedora RPM. I'm having
issues at startup, though.
I'm wondering if it's related to the use of 'Type=notify', since knot seems
to be running, but then after the start timeout expires systemd decides it
has failed and kills it.
I'm attaching useful bits below. Any thoughts on the cause?
My service file is:
[Unit]
Description=Knot DNS server daemon
[Service]
Type=notify
ExecStart=/usr/sbin/knotd
ExecReload=/usr/sbin/knotc reload
Restart=on-abort
ExecStartPre=/usr/sbin/knotc conf-check
# Breaks daemon reload
#CapabilityBoundingSet=cap_net_bind_service cap_setuid cap_setgid
[Install]
WantedBy=multi-user.target
And this is what knot is reporting at startup:
● knot.service - Knot DNS Server
   Loaded: loaded (/etc/systemd/system/knot.service; disabled; vendor preset: disabled)
   Active: failed (Result: timeout) since Thu 2016-05-12 19:52:34 UTC; 3min 19s ago
  Process: 5875 ExecStart=/usr/sbin/knotd (code=exited, status=0/SUCCESS)
May 12 19:51:04 master01.test.conundrum.com knotd[5875]: 2016-05-12T19:51:04 info: starting server
May 12 19:51:04 master01.test.conundrum.com knotd[5875]: 2016-05-12T19:51:04 info: server started in the foreground, PID 5875
May 12 19:51:04 master01.test.conundrum.com knotd[5875]: 2016-05-12T19:51:04 info: control, binding to '/var/run/knot/knot.sock'
May 12 19:52:34 master01.test.conundrum.com systemd[1]: knot.service start operation timed out. Terminating.
May 12 19:52:34 master01.test.conundrum.com knotd[5875]: 2016-05-12T19:52:34 info: stopping server
May 12 19:52:34 master01.test.conundrum.com knotd[5875]: 2016-05-12T19:52:34 info: updating zone timers database
May 12 19:52:34 master01.test.conundrum.com systemd[1]: Failed to start Knot DNS Server.
May 12 19:52:34 master01.test.conundrum.com systemd[1]: Unit knot.service entered failed state.
May 12 19:52:34 master01.test.conundrum.com systemd[1]: knot.service failed.
May 12 19:52:34 master01.test.conundrum.com knotd[5875]: 2016-05-12T19:52:34 info: shutting down
And finally, the config report from building the package:
knot 2.2.0
Target: linux-gnu x86_64
Compiler: gcc -std=gnu99
CFLAGS: -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions
-fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches
-m64 -mtune=generic -DNDEBUG -Wno-unused -Wall -Werror=format-security
-Werror=implicit -fpredictive-commoning
LIBS: -lcap-ng -ldl -lpthread -lm -Wl,-z,relro
LibURCU: -lurcu
GnuTLS: -lgnutls -lnettle -I/usr/include/p11-kit-1
Jansson: -ljansson
Libedit: -ledit -ltinfo -I/usr/include/editline
LMDB: shared -llmdb
Sanitizer:
LibFuzzer: no
Prefix: /usr
Run dir: /var/run/knot
Storage dir: /var/lib/knot
Config dir: /etc/knot
Configuration DB mapsize: 500 MiB
Timers DB mapsize: 100 MiB
Knot DNS libraries: yes
Knot DNS daemon: yes
Knot DNS utils: yes
Knot DNS documentation: yes
Use SO_REUSEPORT: yes
Fast zone parser: yes
Utilities with IDN: yes
Systemd integration: yes
Dnstap support: yes
Code coverage: no
Bash completions: no
PKCS 11 support: no
I've just upgraded to 2.2.0 and noticed that knot failed to start. Upon
inspection, I found that:
/usr/lib/systemd/system/knot.service had the following line:
ExecStartPre=/usr/sbin/knotc checkconf
I think this is an error. I had to modify it to:
ExecStartPre=/usr/sbin/knotc conf-check
and then reimport the knot.conf file to get knot to start successfully.