Hello Knot developers,
I'm trying out Knot 1.3.0 final, and testing the new options for
system.identity, system.version and system.nsid.
At first, I did this:
system {
identity yes;
version yes;
nsid yes;
}
Alert readers will note that I accidentally used "yes" instead of "on", so Knot parsed all three values as strings and gave me unexpected, but technically correct, results.
; <<>> DiG 9.9.3-P2 <<>> +norec +nsid @193.0.0.198 ch txt id.server
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15951
;; flags: qr; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; NSID: 79 65 73 (y) (e) (s)
;; QUESTION SECTION:
;id.server. CH TXT
;; ANSWER SECTION:
id.server. 0 CH TXT "yes"
; <<>> DiG 9.9.3-P2 <<>> +norec +nsid @193.0.0.198 ch txt version.server
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 56914
;; flags: qr; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; NSID: 79 65 73 (y) (e) (s)
;; QUESTION SECTION:
;version.server. CH TXT
;; ANSWER SECTION:
version.server. 0 CH TXT "yes"
Note that the NSID value is also "yes".
So I realised my mistake, changed the values from "yes" to "on", and HUPped the server.
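That is, the system section became:

system {
identity on;
version on;
nsid on;
}

Now I get: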
;; Warning: Message parser reports malformed message packet.
; <<>> DiG 9.9.3-P2 <<>> +norec +nsid @193.0.0.198 ch txt id.server
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 27835
;; flags: qr; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: Message has 7 extra bytes at end
;; QUESTION SECTION:
;id.server. CH TXT
;; ANSWER SECTION:
id.server. 0 CH TXT "admin.authdns.ripe.net"
;; Warning: Message parser reports malformed message packet.
; <<>> DiG 9.9.3-P2 <<>> +norec +nsid @193.0.0.198 ch txt version.server
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 60856
;; flags: qr; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: Message has 7 extra bytes at end
;; QUESTION SECTION:
;version.server. CH TXT
;; ANSWER SECTION:
version.server. 0 CH TXT "Knot DNS 1.3.0"
Note the warnings from dig about the extra bytes at the end. It seems
that if you change the value of NSID and reconfigure the server, it does
not pick up the new value correctly. Stopping Knot completely and
starting it again fixes the problem, so there appears to be a bug in the
reconfiguration path.
Hi Everyone,
as promised last week, I am proud to announce that the 1.3.0 final is out!
It has been a long release cycle since the last final release, but it
brought lots and lots of bugfixes and a slew of new features.
Let me briefly reiterate what's new since 1.2.0. One of the most
visible features is the new zone file parser, which eliminated the
whole zone compilation process and sped up both startup and zone
preparation. There is also a magical configure option,
--enable-fastparser, which makes it even faster (about 2x), very close
to loading a binary zone.
We also ship our own alternatives to DNS utilities like dig, host
and nsupdate, which aim to be compatible with their ISC counterparts
but also bring some nimble enhancements, like prettier comments and output for dig.
The changes to the configuration are no smaller: groups of remotes,
includes in the config file, UNIX sockets for remote control, new knotc
commands, and a general overhaul of the build scripts that makes life
nicer for package maintainers and users.
There was also a major refactoring effort under the bonnet (with more
to come), which shows in lower memory consumption, better maintainability
and a trimmer code base. For many, many more changes, check our web pages
or have a look at the NEWS file for an exhaustive list of changes and bugfixes.
Back on the ground, we have fixed several bugs since rc5 last week:
answering from names at or below insecure delegation points,
new defaults for the CH TXT special zones, randomly disconnected transfers,
and secondary groups not being initialized when dropping privileges.
The bootstrap retry timer is now progressive as well.
Many thanks to Anand Buddhdev, Jonathan Hoppe, Johan Ihren, Erwin
Lansing and many others who have sent constructive reports, ideas,
encouragement and actual code (how cool is that?).
As always, you can find the full changelog at:
https://gitlab.labs.nic.cz/knot/blob/v1.3.0/NEWS
Sources:
https://secure.nic.cz/files/knot-dns/knot-1.3.0.tar.gz
https://secure.nic.cz/files/knot-dns/knot-1.3.0.tar.bz2
https://secure.nic.cz/files/knot-dns/knot-1.3.0.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-dns/knot-1.3.0.tar.gz.asc
https://secure.nic.cz/files/knot-dns/knot-1.3.0.tar.bz2.asc
https://secure.nic.cz/files/knot-dns/knot-1.3.0.tar.xz.asc
Packages available at www.knot-dns.cz will be updated soon as well.
Cheers,
Marek
--
Marek Vavruša
Knot DNS, CZ.NIC Labs
http://www.knot-dns.cz
-------------------------------------------
Americká 23, 120 00 Praha 2, Czech Republic
WWW: http://labs.nic.cz http://www.nic.cz
Hello Knot developers,
I'm testing 1.3.0-rc4, and have found something that looks like a bug.
I'm running knot using the CentOS upstart supervisor, and in the upstart
script, I have:
pre-stop exec knotc -c $CONF -w stop
This means that when I run "initctl stop knot", upstart will run "knotc
-c /etc/knot/knot.conf -w stop". The "-w" is supposed to make knotc wait
until the server has stopped.
However, in reality this is not happening. When the stop command is
given, Knot logs this:
2013-07-17T22:48:23 Stopping server...
2013-07-17T22:48:23 Server finished.
2013-07-17T22:48:23 Shut down.
And knotc returns *immediately*. However, if I examine the process
table, I see the knotd process still running; it takes knotd about 10
more seconds to actually exit, at 22:48:33. This is problematic for
upstart: since knotc has returned but the knotd process hasn't yet
died, upstart thinks that knotd has not responded to the stop request,
and so it uses the sledgehammer (kill -9) on the knotd process.
My assumption is that the knotd process is still doing housekeeping
work, so the KILL signal is not a good idea. By the looks of it, the
"-w" flag to knotc isn't doing what it's supposed to, i.e. waiting for
the server to exit. Could you please investigate this and fix it?
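For what it's worth, one common way to implement such a wait is to poll
the daemon's PID until the process is really gone. A rough sketch in C
(hypothetical, assuming the PID is read from the PID file; this is not
knotc's actual code):

#include <errno.h>
#include <signal.h>
#include <sys/types.h>
#include <unistd.h>

/* Hypothetical sketch: block until the process with the given PID has
 * fully exited. Assumes we are permitted to signal the process.
 * kill() with signal 0 performs existence checking only. */
int wait_for_exit(pid_t pid)
{
    while (kill(pid, 0) == 0)
        usleep(100 * 1000); /* poll every 100 ms */
    return (errno == ESRCH) ? 0 : -1; /* ESRCH: process is gone */
}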
(As an aside, I can work around this in upstart by using the option
"kill timeout 60", which will make upstart wait at least 60 seconds
before trying a KILL signal, by which time knotd should have exited.
But this is just a work-around, not a solution.)
Regards,
Anand Buddhdev
RIPE NCC
Hello,
it seems that knotd suffers from the same issue as described here:
http://lists.scusting.com/index.php?t=msg&th=244420
I have Debian 7.0 with
http://deb.knot-dns.cz/debian/dists/wheezy/main/binary-i386/net/knot_1.2.0-…
and this is in /var/log/syslog after reboot:
Jun 3 22:37:43 ns knot[2091]: Binding to interface 2xxx:xxxx:xxxx:xxxx::1 port 53.
Jun 3 22:37:43 ns knot[2091]: [error] Cannot bind to socket (errno 99).
Jun 3 22:37:43 ns knot[2091]: [error] Could not bind to UDP interface 2xxx:xxxx:xxxx:xxxx::1 port 53.
I have a static IPv6 address configured in /etc/network/interfaces.
Restarting knot later binds to this IPv6 address without any problem;
it is only the very first start, during OS boot, which fails. (errno 99
on Linux is EADDRNOTAVAIL, so the address is apparently not yet usable
when knotd starts, presumably because IPv6 duplicate address detection
has not completed.) What do you think is the proper way of making knotd
reliably listen on a static IPv6 address? I would prefer to avoid
restarting knotd.
Leos Bitto
Hello Knot folks:
"The 'rundir' obsoletes the 'pidfile' config option, as the PID file
will always be placed in 'rundir'."
This is cool, unless you want to run multiple instances of Knot on a single
machine (see the sketch below). Can you reconsider this?
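For example, under this scheme each instance needs its own config with a
distinct rundir, something like (hypothetical paths):

# knot-a.conf
system {
rundir "/var/run/knot-a";
}

# knot-b.conf
system {
rundir "/var/run/knot-b";
}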
Jonathan
Hello KNOT folks,
We've found an issue in 1.3 with zone bootstrapping. We're using FreeBSD
9.x, but we quickly verified that it also exists on Ubuntu 12.x, so it is
not isolated to FreeBSD. We're testing with about 3000 to 4000 zones, so
our environment is not even very large at this point, and the bootstrapping
failures are very problematic. There are three causes that we've seen thus
far:
1. If the AXFR TCP connect() is interrupted by a signal, the whole AXFR is
aborted and the bootstrap is rescheduled, instead of selecting on the socket
until the connection either succeeds or times out/fails. This can result in
a flood of connects, with little to no progress in the bootstrapping.
2. When connected, if a recv() is interrupted by a signal, it isn't retried.
This results in connections being dropped that don't need to be dropped.
3. If a successful connect is made, but the remote end subsequently drops it
(e.g., resets the connection), then the bootstrap fails without being
rescheduled. This was found when slaving from a non-KNOT DNS server that may
have TCP rate limiting enabled, or something of that nature. Either way, the
fact that it is not rescheduled is very undesirable.
I suspect that there are other cases of interrupted system calls not being
handled correctly.
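For illustration, the usual EINTR-safe patterns look something like the
sketch below (my own sketch, not Knot's code): retry recv() while errno
is EINTR, and after an interrupted connect() keep waiting for the socket
to become writable instead of aborting, since POSIX continues
establishing the connection asynchronously in that case.

#include <errno.h>
#include <poll.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Case 2: retry recv() when interrupted by a signal. */
ssize_t recv_retry(int fd, void *buf, size_t len, int flags)
{
    ssize_t n;
    do {
        n = recv(fd, buf, len, flags);
    } while (n < 0 && errno == EINTR);
    return n;
}

/* Case 1: if connect() fails with EINTR, the connection attempt
 * continues asynchronously (per POSIX); wait for writability and
 * check SO_ERROR instead of tearing the transfer down.
 * timeout_ms is a hypothetical timeout parameter. */
int connect_finish(int fd, int timeout_ms)
{
    struct pollfd pfd = { .fd = fd, .events = POLLOUT };
    int r;
    do {
        r = poll(&pfd, 1, timeout_ms);
    } while (r < 0 && errno == EINTR);
    if (r <= 0)
        return -1; /* poll error or timeout */

    int err = 0;
    socklen_t errlen = sizeof(err);
    if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &errlen) != 0 || err != 0)
        return -1; /* connection failed */
    return 0; /* connected */
}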
Here is some additional info that may help find the root cause:
- The greater the latency between the master and slave, the worse the
problem is. We tested with a slave 80 ms RTT away and it was very bad.
- The more worker threads you have, the worse the problem is. So even
locally (slave 0 ms away from master) we could reproduce the issue fairly
easily.
Hopefully this can be remedied!
Cheers,
Jonathan