Hello Knot developers,
I'm testing 1.3.0-rc4, and have found something that looks like a bug.
I'm running knot using the CentOS upstart supervisor, and in the upstart
script, I have:
pre-stop exec knotc -c $CONF -w stop
This means that when I run "initctl stop knot", upstart will run "knotc
-c /etc/knot/knot.conf -w stop". The "-w" is supposed to make knotc wait
until the server has stopped.
However, in reality this is not happening. When the stop command is
given, Knot logs this:
2013-07-17T22:48:23 Stopping server...
2013-07-17T22:48:23 Server finished.
2013-07-17T22:48:23 Shut down.
And knotc returns *immediately*. However, if I examine the process
table, I see the knotd process still running. It takes knotd about 10
more seconds to actually exit, at 22:48:33. This is problematic for
upstart. Since knotc has returned, but the knotd process hasn't yet
died, upstart thinks that it has not responded to the stop request, and
so upstart uses the sledgehammer (kill -9) to stop the knotd process.
My assumption is that the knotd process is still doing housekeeping work, so
the KILL signal is not a good idea. By the looks of it, the "-w" flag to knotc
isn't doing what it's supposed to, i.e. waiting for the server to exit. Could
you please investigate this and fix it?
(As an aside, I can work around this in upstart by using the option
"kill timeout 60" which will make upstart wait at least 60 seconds
before trying a KILL signal, by which time knotd should have exited. But
this is just a work-around, not a solution).
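For reference, the work-around in the upstart job would look roughly like
this (the config path is just an example):
# ask knotd to stop and (supposedly) wait for it to exit
pre-stop exec knotc -c /etc/knot/knot.conf -w stop
# work-around: give knotd time to finish its housekeeping before
# upstart falls back to SIGKILL
kill timeout 60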
Regards,
Anand Buddhdev
RIPE NCC
Hello,
it seems that knotd suffers from the same issue as described here:
http://lists.scusting.com/index.php?t=msg&th=244420
I have Debian 7.0 with
http://deb.knot-dns.cz/debian/dists/wheezy/main/binary-i386/net/knot_1.2.0-…
and this is in /var/log/syslog after reboot:
Jun 3 22:37:43 ns knot[2091]: Binding to interface 2xxx:xxxx:xxxx:xxxx::1
port 53.
Jun 3 22:37:43 ns knot[2091]: [error] Cannot bind to socket (errno 99).
Jun 3 22:37:43 ns knot[2091]: [error] Could not bind to UDP
interface 2xxx:xxxx:xxxx:xxxx::1 port 53.
I have a static IPv6 address configured in /etc/network/interfaces.
Restarting knot later binds to this IPv6 address without any problem - it
is only the first start that fails (during OS boot). What do you think is
the proper way of making knotd reliably listen on a static IPv6 address?
I would prefer to avoid restarting knotd.
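One work-around I'm considering (just a sketch; it assumes the address lives
on eth0 and that errno 99, EADDRNOTAVAIL, is caused by the address still
being tentative while IPv6 duplicate address detection runs during boot) is
to let ifup wait until the address is actually usable:
# hypothetical fragment of /etc/network/interfaces
iface eth0 inet6 static
    address 2xxx:xxxx:xxxx:xxxx::1
    netmask 64
    # wait up to ~10 s until DAD finishes and the address stops being tentative
    post-up for i in 1 2 3 4 5 6 7 8 9 10; do ip -6 addr show dev eth0 | grep -q tentative || break; sleep 1; done
I would still prefer a fix or a recommendation on the knotd side, though.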
Leos Bitto
Hello Knot folks:
"The 'rundir' obsoletes 'pidfile' config option, as the PID file will
always be placed in 'rundir'."
This is cool, unless you want to run multiple instances of KNOT on a single
machine. Can you reconsider this?
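(A possible per-instance work-around, assuming 'rundir' can be set in each
instance's config file rather than only at configure time, would be
something like:
# instance A, a.conf (hypothetical paths)
system { rundir "/var/run/knot-a"; }
# instance B, b.conf
system { rundir "/var/run/knot-b"; }
but being able to name the PID file explicitly per instance would still be
cleaner.)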
Jonathan
Hello KNOT folks,
We've found an issue in 1.3 with bootstrapping. We're using FreeBSD 9.x, but we
quickly verified that it also exists on Ubuntu 12.x, so it is not
isolated to FreeBSD. We're testing with about 3000 to 4000 zones, so our
environment is not even very large at this point and the bootstrapping
failures are very problematic. There are three causes that we've seen thus
far:
1. If the AXFR TCP connect is interrupted by a signal, the whole AXFR is
aborted and the bootstrap is rescheduled, instead of selecting on the socket
until the connection either succeeds or times out/fails. This can result in
a flood of connects, with little to no progress in the bootstrapping.
2. When connected, if a recv() is interrupted by a signal, it isn't retried.
This results in connections being dropped that don't need to be dropped.
3. If a successful connect is made, but the remote end subsequently drops it
(e.g., resets the connection), then the bootstrap fails without being
rescheduled. This was found when slaving from a non-KNOT DNS server that may
have TCP rate limiting enabled, or something of that nature. Either way, the
fact that it is not rescheduled is very undesirable.
I suspect that there are other cases of interrupted system calls not being
handled correctly.
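In case it helps, the usual fix for cases 1 and 2 is to retry the call when
errno is EINTR (or to keep select()/poll()-ing until the socket is ready)
rather than treating the interruption as a failure. A minimal sketch in C of
the recv() case (my illustration, not taken from the Knot sources):
#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Sketch only: retry recv() when it is interrupted by a signal,
 * instead of dropping the connection. */
static ssize_t recv_retry(int fd, void *buf, size_t len, int flags)
{
        ssize_t n;
        do {
                n = recv(fd, buf, len, flags);
        } while (n < 0 && errno == EINTR);
        return n;
}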
Here is some additional info that may help find the root cause:
- The greater the latency between the master and slave, the worse the
problem is. We tested with a slave 80 ms RTT away and it was very bad.
- The more worker threads you have, the worse the problem is. So even
locally (slave 0 ms away from master) we could reproduce the issue fairly
easily.
Hopefully this can be remedied!
Cheers,
Jonathan
Hi Knot people,
I've been trying out rc3, and I've found a few issues, as documented below:
1. In the knotc man page, the "-s" option shows the default as
@rundir@/knot.sock. I guess that should have been substituted with the
compile-time setting, but wasn't.
2. knotc start doesn't do anything now. It should be removed.
3. knotc restart only stops the server, but does not start it. It should
be removed.
4. When configured as a slave, and Knot receives a zone via AXFR
containing the following record:
<owner-obscured>. 86400 IN TXT "Alliance Fran\195\167aise de Kotte"
it serves this record correctly when queried over DNS:
dig @a.b.c.d +norec +short txt <owner-obscured>.
"Alliance Fran\195\167aise de Kotte"
But when saving the zone to disk, this record gets written out as:
<owner-obscured>. 86400 IN TXT "Alliance Fran\4294967235\4294967207aise
de Kotte"
So when Knot restarts and tries to load this zone, it gives an error:
Error in zone file /var/lib/knot/XX:4875: Number is bigger than 8 bits.
5. I have configured Knot to log to the file /var/log/knot/knot.log.
Most logging goes into it. However, some error messages are still
leaking into syslog, for example, the following appears in both Knot's
log file, as well as in /var/log/messages (via syslog):
logfile:
2013-06-28T20:17:25 [error] Incoming AXFR of 'XX.' with 'a.b.c.d@53':
General error.
/var/log/messages:
Jun 28 20:17:25 admin knot[1630]: [error] Incoming AXFR of 'XX.' with
'a.b.c.d@53': General error.
I would expect Knot to send logging to syslog only while it is starting
and hasn't yet set up its logging to file. Once logging to a file has
started, there should be no more logs to syslog, so this looks like a
bug, albeit a harmless one.
That's about it for Friday evening :)
Regards,
Anand Buddhdev
RIPE NCC
Hi Everyone,
I'm happy to announce that the third and probably last release candidate for
Knot DNS 1.3.0 is out!
This release candidate is a bit special to us, featuring not only a slew of
bugfixes but also a new feature.
Quite a lot of you asked for a tool to estimate the memory consumption of a
given zone; this is now available as a new 'memstats' action in the knotc
utility. The usage is quite similar to the 'checkzone' command, but it prints
out the estimated memory consumption instead. You can also pick specific
zones if you like.
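For example, something like the following should print the estimate for a
single zone (see the knotc help output for the exact usage):
$ knotc -c /etc/knot/knot.conf memstats example.com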
The second quite substantial change is the deprecation of 'knotc start'. The
preferred method of startup is now to run knotd directly, optionally with the
-d flag to run in the background. Also, knotd doesn't create a PID file when
running in the foreground.
This is to play nicely with managed init scripts and to be able to write out
the PID atomically.
On top of that, you can also alter the default 'storage' and 'rundir'; see
./configure --help for that.
The 'rundir' option obsoletes the 'pidfile' config option, as the PID file
will always be placed in 'rundir'.
The third and last substantial change is that knotc now uses UNIX sockets by
default, and this is now the preferred mode of operation, although IP sockets
are still supported. We believe this is safer for now, as the control
protocol is not encrypted.
The control interface is also enabled by default; to disable it completely,
add an empty control section to the configuration.
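That is, something like this in the configuration:
control {
}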
But of course, this being a release candidate, we didn't forget to pack in
bugfixes, namely in the processing of IXFR with a lot of serial changes, the
processing of the TSIG key file in knotc, and a couple of performance
regressions related to zone transfers.
Big thanks for numerous hours of testing and witty feedback on this release
go to Anand Buddhdev, thank you!
You can find the full changelog at:
https://gitlab.labs.nic.cz/knot/blob/v1.3.0-rc3/NEWS
Sources:
https://secure.nic.cz/files/knot-dns/knot-1.3.0-rc3.tar.gz
https://secure.nic.cz/files/knot-dns/knot-1.3.0-rc3.tar.bz2
https://secure.nic.cz/files/knot-dns/knot-1.3.0-rc3.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-dns/knot-1.3.0-rc3.tar.gz.asc
https://secure.nic.cz/files/knot-dns/knot-1.3.0-rc3.tar.bz2.asc
https://secure.nic.cz/files/knot-dns/knot-1.3.0-rc3.tar.xz.asc
Packages available at www.knot-dns.cz will be updated soon as well.
Kind regards,
Marek
--
Marek Vavruša Knot DNS
CZ.NIC Labs http://www.knot-dns.cz
-------------------------------------------
Americká 23, 120 00 Praha 2, Czech Republic
WWW: http://labs.nic.cz http://www.nic.cz
[sorry for cross-post, please don't cross-post when replying]
Hi,
we are currently implementing DNSSEC in Knot DNS and we would like to gather opinions on this topic. I think we all have more experience with DNSSEC deployment now, and it would be very valuable for us to gather that experience and forge an implementation that the users will like.
Could you please spare a few minutes and fill in a little survey of ours on this topic?
https://docs.google.com/forms/d/1gS-Z1NAL78oFOU51HR6jmg-ZkAevU2CrvCHQgrNnXL…
If you don't want to give your answers to NSA :), you can reply to this email (the Reply-To is set to knot-dns-users(a)lists.nic.cz), or just send the feedback back to me.
Thank you very much,
Ondrej
--
Ondřej Surý -- Chief Science Officer
-------------------------------------------
CZ.NIC, z.s.p.o. -- Laboratoře CZ.NIC
Americka 23, 120 00 Praha 2, Czech Republic
mailto:ondrej.sury@nic.cz http://nic.cz/
tel:+420.222745110 fax:+420.222745112
-------------------------------------------
Dear knot users/developers,
I'm testing Knot 1.3.0-rc2. One issue I've run into is when knot exits
uncleanly. In this case, it leaves its PID file behind in
/var/lib/knot/knot.pid.
On the following attempt to start knot, it sees the PID file, and
refuses to start, so I have to remove the stale PID file by hand. If I
run knot under a supervisor, such as upstart, then it tries to run knot,
fails, tries again rapidly in succession, and eventually gives up, at
which point knot isn't running.
My first observation is that in cases where knot is running under a
supervisor, the PID file is not necessary. The supervisor knows knot's
PID, and can signal it directly (to reload, or stop). Could you consider
making the PID file optional? As in, if in knot.conf, I have:
pidfile "";
then knot should not attempt to write (or read or delete) a PID file at all.
Alternatively, could you make knot a bit smarter, so that if it detects a
PID file at startup whose PID is either not in use, or in use by a process
that is not knot, it removes the stale PID file itself?
Without either of these possibilities, I have to add logic in my startup
scripts or supervisor scripts to remove any stale PID files. If I only
had one instance of knot, I could do that simply with "rm -f
/var/lib/knot/knot.pid", but I'd like to run multiple instances of knot,
and so I would have multiple PID files, whose locations I would have to
extract from the config file first in order to delete them reliably.
This starts to get hairy...
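Just to illustrate the kind of per-instance logic I would have to bolt on
(a rough sketch; it assumes the PID file path appears on a pidfile "..."
line in that instance's config, and it only checks that some process with
that PID still exists):
CONF=/etc/knot/a.conf   # example path of one instance's config
PIDFILE=$(awk -F'"' '/pidfile/ { print $2 }' "$CONF")
if [ -n "$PIDFILE" ] && [ -f "$PIDFILE" ]; then
    # drop the PID file if no process with that PID is running any more
    kill -0 "$(cat "$PIDFILE")" 2>/dev/null || rm -f "$PIDFILE"
fi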
I hope I have made a reasonable case for providing an option not to record
a PID file, or to clean up a stale one before starting. My preference is for
the first option, i.e. not writing a PID file.
Regards,
Anand Buddhdev
RIPE NCC
Dear Knot developers,
I'm testing Knot 1.3.0-rc1 now. One of the things I'd like to be able to
do is to estimate Knot's memory usage without actually loading all my
zones into it (because if it runs out of memory it will be killed).
One very crude way was to load a single zone from a plain text zone
file, and compare the memory used by knot with the size of the zone on
disk. For a zone of size 229 MB, knot is using about 1.3 GB of RAM. So I
could say that knot wants about 6x as much RAM as the zone file size for
each zone. Is this a reasonable assumption?
Alternatively, since you know how knot uses memory for a zone, are you
able to provide a small program which can read zones one at a time,
calculate the RAM required for each zone, and then print a summary of
the total RAM required to load all those zones into knot?
We frequently get questions from our users along the lines of "I'd like
you to slave this new zone" or "we are going to sign this zone; will
your server be able to load it?" It would help an operator immensely to be
able to get an estimate of the additional RAM required.
Regards,
Anand Buddhdev
RIPE NCC
Our BIND server slaves a zone which has the following record (I have
replaced the actual TLD with placeholders):
*.XX. 60 IN CNAME SERVER.SLD.XX.
BIND loads this zone without complaining.
However, knot 1.3.0-rc1 refuses to load this zone with the following error:
2013-06-07T13:53:50 [warning] Semantic warning in node: *.XX.: CNAME:
CNAME cycle!
2013-06-07T13:53:50 [error] Semantic check found fatal error on line=10
2013-06-07T13:53:54 [error] Zone could not be loaded
(-9992).2013-06-07T13:53:54 [error] Zone XX. could not be loaded.
As a side note, it looks like there's a newline missing somewhere,
because two log lines have appeared on a single line. And what's the
-9992 in brackets?
Regards,
Anand
Testing knot 1.3.0-rc1, I'm seeing a strange message in the log if I
start it without defining any zones:
2013-06-07T10:44:13 Binding to interface 127.0.0.1 port 5353.
2013-06-07T10:44:13 Configured 1 interfaces and 0 zones.
2013-06-07T10:44:13
2013-06-07T10:44:13 Changing group id to '10073'.
2013-06-07T10:44:13 Changing user id to '10073'.
2013-06-07T10:44:13 Loading 0 zones...
2013-06-07T10:44:13 Loaded -12 out of 0 zones.
2013-06-07T10:44:13 [warning] Not all the zones were loaded.
2013-06-07T10:44:13 Starting server...
2013-06-07T10:44:13 Server started in foreground, PID = 10514
2013-06-07T10:44:13 PID stored in /var/lib/knot/knot.pid
2013-06-07T10:44:13 [warning] Server started, but no zones served.
Notice the "-12 out of 0 zones.". Bug?
Anand
Hi everyone,
we've been very busy for the last couple of months, and I'm glad to let you
know that the first release candidate of Knot DNS v1.3.0 is finally out!
This is a feature-full release, packing a lot of new stuff and improvements.
The major change since 1.2.0 is that zone compilation is now deprecated and
zone files are parsed directly on startup. This was made possible by the new
zone file parser, which is neck and neck in terms of speed with loading a
precompiled zone.
So no more 'compile' steps.
The configuration file is also improved with the handy 'group' keyword, which
allows you to make groups of remotes, and with support for including another
part of the config file.
This is useful, for example, if you want to store keys elsewhere.
We also now support queries to the pseudo CH zone (RFC 4892 style), and that
is configurable as well (we support identity, version and hostname).
On the backend side, we have overhauled several internal structures to lower
memory consumption, and improved the speed and scheduling of zone transfers.
Log messages for those are improved as well, giving a more verbose overview
of what was transferred, at which serial and how long it took. Apart from
that, there are a lot of smaller performance improvements, a revamped build
system and a lot of small details.
For example, zone files written out on a slave contain the source address,
time and other useful information. See NEWS or the git log for more details.
The last (sort of) new feature is that we included our own DNS tools,
namely kdig, khost and knsupdate. They are quite compatible with the BIND 9
tools, but they also bring several improvements in logging, prettified
output and checks.
Be sure to check them out.
We also had terrific user feedback; thanks to Erwin Lansing, Anand Buddhdev,
Jan-Piet Mens and everyone else for the reports.
So that's mostly it! For a full overview of changes see:
https://gitlab.labs.nic.cz/knot/blob/v1.3.0-rc1/NEWS
Sources:
https://secure.nic.cz/files/knot-dns/knot-1.3.0-rc1.tar.gz
https://secure.nic.cz/files/knot-dns/knot-1.3.0-rc1.tar.bz2
https://secure.nic.cz/files/knot-dns/knot-1.3.0-rc1.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-dns/knot-1.3.0-rc1.tar.gz.asc
https://secure.nic.cz/files/knot-dns/knot-1.3.0-rc1.tar.bz2.asc
https://secure.nic.cz/files/knot-dns/knot-1.3.0-rc1.tar.xz.asc
Packages available at www.knot-dns.cz will be updated soon as well.
Kind regards,
Marek
--
Marek Vavruša Knot DNS
CZ.NIC Labs http://www.knot-dns.cz
-------------------------------------------
Americká 23, 120 00 Praha 2, Czech Republic
WWW: http://labs.nic.cz http://www.nic.cz
Hey guys,
I am currently playing around with some DNS solutions; currently it's Knot :-)
Is there any solution for recursion? I would like to add a PowerDNS server to each node that could do lookups for new zones while the config file isn't written yet. As you may know, registries like DENIC use real-time checks while a domain is being ordered.
Regards
Joerg
Thanks for the 1.2.0, some really nice features in there. I especially like the zonestatus command.
I have one problem though. It seems that knot drops its root privileges too early, before trying to bind to the interface.
Configured with:
system { user bind.bind };
Results in:
Apr 23 12:26:26 l knot[25585]: [error] Could not bind to UDP interface 127.0.0.1 port 53.
Apr 23 12:26:26 l knot[25585]: [error] Could not bind to UDP interface ::1 port 53.
Changing to root.bind makes it work, hence my guess that it's related to dropping privileges. This is on FreeBSD 9.0.
Any hints appreciated.
Best,
Erwin
--
Med venlig hilsen/Best Regards
Erwin Lansing
Network and System Administrator
DK Hostmaster A/S
Kalvebod Brygge 45, 3. sal
1560 København V
Tlf. 33 64 60 60
Fax.: 33 64 60 66
Email: erwin(a)dk-hostmaster.dk
Homepage: http://www.dk-hostmaster.dk
.dk Danmarks plads på Internettet
-------------------------------------------------------------------------
This is an email from DK Hostmaster A/S. This message may contain
confidential information and is intended solely for the use of the
intended addressee. If you are not the intended addressee please notify
the sender immediately and delete this e-mail from your system. You are
not permitted to disclose, distribute or copy the information in this
e-mail.
--------------------------------------------------------------------------
Hi,
We're using the latest version, 1.2.0, after updating from 1.1.0. It seems
that when we run dnsperf against it, we now get many query timeouts. It
isn't that we're overloading the server, because we can run a second server
with dnsperf and get similar throughput (22k qps), but it too has query
timeouts of just under 1%.
This seemed like maybe it was the response rate limiting, but it says it is
off by default. To be sure, I set the parameter in the config to be off.
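For reference, this is the setting I used, following the syntax from the
1.2.0-rc3 announcement (my understanding is that 0 means disabled):
system {
    rate-limit 0;
}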
Am I missing something? Is there something else I need to turn off?
Thanks for any guidance anyone can provide!
Jonathan
Hello everyone,
we're happy to announce that the Knot DNS 1.2.0 final is out, after the
fourth release candidate.
Just to reiterate what's new and fixed: we brought three new features in
1.2.0.
The first is support for dynamic updates (DDNS), including forwarding to the
primary master, which received a couple of bugfixes in the early release
candidates.
Since the third release candidate there is Response Rate Limiting, a new way
to combat the increasing amplification/reflection attacks.
It has been slightly reworked since that release candidate and is disabled
by default. You can enable it by setting the 'rate-limit' config option to a
sensible value.
The last feature is a reworked control utility, which is now able to control
the daemon remotely and even introduces a few new commands, namely
'zonestatus' to fetch the status of served zones.
Aside from the new features, this release also fixes a few bugs: missing
RRSIGs in responses to the ANY type, the processing of some malicious domain
names, and the detection of a broken implementation of recvmmsg() on some
Linux distributions.
As usual, you can find a full list of changes at
https://redmine.labs.nic.cz/projects/knot-dns/repository/revisions/v1.2.0/e…
Sources: https://secure.nic.cz/files/knot-dns/knot-1.2.0.tar.gz
GPG signature: https://secure.nic.cz/files/knot-dns/knot-1.2.0.tar.gz.asc
Packages available at www.knot-dns.cz will be updated soon as well.
Cheers,
Marek
--
Marek Vavruša Knot DNS
CZ.NIC Labs http://www.knot-dns.cz
-------------------------------------------
Americká 23, 120 00 Praha 2, Czech Republic
WWW: http://labs.nic.cz http://www.nic.cz
Hi everyone,
as an outcome of the discussions on the RRL mailing lists and the
stellar feedback in recent weeks,
we have decided to slip in yet another release candidate before 1.2.0
finally goes out.
The release candidate features reworked classification in RRL with respect
to the RRL technical memo, and also includes code to resolve hash collisions
in the former implementation.
Also, a new 'zonestatus' command was introduced to knotc and several bugs
were fixed: log file ownership problems, the slow rate of SOA queries on
refresh, and knotc not respecting the 'control' section in the configuration.
As usual, you can find a full list of changes at
https://redmine.labs.nic.cz/projects/knot-dns/repository/revisions/v1.2.0-r…
Sources: https://secure.nic.cz/files/knot-dns/knot-1.2.0-rc4.tar.gz
GPG signature: https://secure.nic.cz/files/knot-dns/knot-1.2.0-rc4.tar.gz.asc
Packages available at www.knot-dns.cz will be updated soon as well.
Have a nice weekend,
Marek
--
Marek Vavruša Knot DNS
CZ.NIC Labs http://www.knot-dns.cz
-------------------------------------------
Americká 23, 120 00 Praha 2, Czech Republic
WWW: http://labs.nic.cz http://www.nic.cz
On a server with 16 GB of RAM, my instance of BIND can load my 5174
zones into memory and use around 13 GB.
Knot didn't do so well. At some point while trying to XFR-in these
zones, it hit the memory limit and the Linux out-of-memory killer came
along and killed it.
When I started it up again, it began loading zones in from the disk, but
then appeared to go into some kind of loop, with the CPU usage at 100%.
This is usually a sign that it is stuck in a loop or a deadlock. The only
way to stop it is with a KILL signal (TERM doesn't work). The log didn't
output anything.
How can I help debug this?
Do you have any numbers on how much RAM Knot will require given a bunch
of zones? This would allow me to estimate how much RAM I will need in a
server for the zones I have.
Regards,
Anand
Hello,
Is there any existing functionality to log queries? I've enabled all
existing logging for the "answer" category and do not see any queries
logged. But maybe it is unsupported in version 1.1.0?
log { syslog { answering all; } }
Thanks,
Jonathan
Hello,
I'd like to mention a few nits about Knot's documentation, if you don't
mind. :)
1. Docs linked to from https://www.knot-dns.cz/documentation.html have
URLs which don't look permanent; this makes it difficult to link to
individual pages. It would be better, IMO, to have permanent URIs.
2. The usage message for [knotc flush] says "Flush journal and update
zone files.". I understand this to mean zones that have received
updates (RFC2136) will be written out, but this doesn't occur. I note
zones are written out only at the end of a `zonefile-sync' period.
3. ixfr-from-differences, while documented in the manual, points to
'Controlling running daemon', but it doesn't say there that the
syntax is 'on/off'.
Enabling this in zones {} doesn't seem to do anything here: I was
expecting to see a "*.ixfr" or some such file containing diffs, but I
get none; neither for incoming xfr, nor for DNS updates.
4. Docs specify in 'Controlling running daemon' there is a knotc option
-a, but the code doesn't have that: knotc: invalid option -- 'a'
Same for 'Running Knot DNS' chapter.
Regards,
-JP
Hello,
Yesterday I had Knot 1.1.0 installed and with 4000 test zones on a slave
server configuration, a knotc refresh would hit my master to check SOA on
all zones very quickly (I would see over 600 queries per second). It could
generally complete the task in well under 60 seconds.
Today I have upgraded to 1.2.0-RC3 and with the same 4000 test zones on a
slave server configuration a knotc refresh is extremely slow. It issues SOA
queries to the master at about 10 queries per second.
Why this change? Is there any setting I can implement to speed this up? In a
production environment with 250k+ zones, this obviously won't work.
Thanks!
Jonathan
Hello,
I'm pretty sure I have tomatoes (or other vegetables) on my eyes, but
Knot won't answer UDP queries here, running on Centos 6.3 in an OpenVZ
container. This is the minimal configuration and the results I'm seeing:
$ knotc -V
Knot DNS, version 1.2.0-rc3
$ cat knot.sample.conf
system {
identity "knot 1.2-rc2";
storage "/etc/knot";
}
interfaces {
my-iface { address 127.0.0.1@53; }
}
zones {
example.com {
file "example.com.zone";
}
}
log {
syslog { any info, debug, notice, warning, error; }
}
$ knotc -c knot.sample.conf start
2013-03-11T18:07:44.244202+01:00 Starting server...
2013-03-11T18:07:44.244337+01:00 Zone 'example.com.' is up-to-date.
$ netstat -anp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:53 0.0.0.0:* LISTEN 11408/knotd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 486/sshd
tcp 0 0 172.16.153.104:22 172.16.153.1:64881 ESTABLISHED 554/sshd
tcp 0 0 :::22 :::* LISTEN 486/sshd
udp 0 0 127.0.0.1:53 0.0.0.0:* 11408/knotd
unix 2 [ ] DGRAM 8840414 11408/knotd
$ dig @127.0.0.1 example.com any
; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.6 <<>> @127.0.0.1 example.com a
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached
$ dig +tcp @127.0.0.1 example.com a
; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.6 <<>> +tcp @127.0.0.1 example.com a
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 20248
;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0
;; WARNING: recursion requested but not available
;; QUESTION SECTION:
;example.com. IN A
;; AUTHORITY SECTION:
example.com. 60 IN SOA localhost. root.localhost. 1997022700 28800 14400 3600000 86400
;; Query time: 2 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Mon Mar 11 18:08:32 2013
;; MSG SIZE rcvd: 79
Mar 11 18:07:44 knot knot[11408]: Binding to interface 127.0.0.1 port 53.
Mar 11 18:07:44 knot knot[11408]: Configured 1 interfaces and 1 zones.
Mar 11 18:07:44 knot knot[11408]:
Mar 11 18:07:44 knot knot[11408]: Loading 1 compiled zones...
Mar 11 18:07:44 knot knot[11408]: Loaded zone 'example.com.' serial 1997022700
Mar 11 18:07:44 knot knot[11408]: Loaded 1 out of 1 zones.
Mar 11 18:07:44 knot knot[11408]: Starting server...
Mar 11 18:07:44 knot knot[11408]: Server started as a daemon, PID = 11408
Mar 11 18:07:44 knot knot[11408]: PID stored in /etc/knot/knot.pid
Can you please help me open my eyes?
Thanks,
-JP
I thought I'd enable debugging code to try and figure out why the server
goes into a loop. So I added the following to configure:
--enable-debug=server,zones,xfr,packet,dname,rr,ns,hash,compiler,stash
However, Knot doesn't compile now. The compile fails with:
...
...
libknot/packet/packet.c: In function 'knot_packet_parse_question':
libknot/packet/packet.c:311: error: 'bp' undeclared (first use in this
function)
libknot/packet/packet.c:311: error: (Each undeclared identifier is
reported only once
libknot/packet/packet.c:311: error: for each function it appears in.)
If I remove the packet debug option, I can compile Knot again.
Regards,
Anand
Hello Knot developers,
I configured an instance of Knot 1.2.0rc3 with 5174 slave zones, and
started it. I wanted to see how long it would take to transfer all the
zones in. It has been running for about an hour, and it still hasn't
managed to load all the zones from its master. I'll pick just one zone
as an example to show how long it took:
# grep apnic.net /var/log/knot/knot.log
2013-03-09T00:50:19.361489-00:00 Will attempt to bootstrap zone
apnic.net. from AXFR master in 37s.
2013-03-09T01:06:31.330875-00:00 Will attempt to bootstrap zone
apnic.net. from AXFR master in 35s.
2013-03-09T01:16:19.741644-00:00 Incoming AXFR transfer of 'apnic.net.'
with '193.0.0.198@53' key 'ripencc-20110222.': Started.
2013-03-09T01:16:35.282203-00:00 Will attempt to bootstrap zone
apnic.net. from AXFR master in 52s.
2013-03-09T01:52:16.477212-00:00 Will attempt to bootstrap zone
apnic.net. from AXFR master in 37s.
2013-03-09T01:56:02.806828-00:00 Will attempt to bootstrap zone
apnic.net. from AXFR master in 13s.
2013-03-09T01:57:00.901518-00:00 Incoming AXFR transfer of 'apnic.net.'
with '193.0.0.198@53' key 'ripencc-20110222.': Started.
2013-03-09T01:57:01.096493-00:00 Incoming AXFR transfer of 'apnic.net.'
with '193.0.0.198@53' key 'ripencc-20110222.': Finished.
From the time Knot decided it needed to bootstrap apnic.net, to the time
it actually transferred the zone in, it took 67 minutes! All that while,
it was responding with SERVFAIL for the zone.
While the zone "apnic.net" has now been loaded, Knot is still busy
loading many of the other zones. I can't tell how far along it is. Perhaps
there could be a knotc command, called "zonestatus", which would
print out a list of all configured zones and their statuses.
I understand that transferring in lots of zones takes time. However, if
I start an instance of BIND with the same slave zones, it can transfer
them all in within about 15-20 minutes. Knot appears to be much slower
in comparison.
Do you have any suggestions for speeding things up?
Regards,
Anand
Hello Knot developers,
I'm testing the use of multiple instances of Knot on the same server.
For this simple test, I have two configurations, A.conf and B.conf, as
follows. Note that I'm binding the control interface to two different ports.
A.conf:
...
...
interfaces {
a { address A; }
}
remotes {
ctl { address 127.0.0.1; }
}
control {
listen-on { address 127.0.0.1@5553; }
allow ctl;
}
B.conf:
...
...
interfaces {
b { address B; }
}
remotes {
ctl { address 127.0.0.1; }
}
control {
listen-on { address 127.0.0.1@5554; }
allow ctl;
}
I then start both instances:
knotd -c A.conf
knotd -c B.conf
Now, if I run "knotc -c A.conf reload" then the A instance reloads, as I
expect.
However, if I run "knotc -c B.conf reload", then it is still the A
instance that reloads. The only way I can get the B instance to reload
is to run "knotc -p 5554 reload".
This is not what I expected. I expected that if I give knotc a
configuration file, then it will use the address and port numbers in
there, so that it can signal the correct instance of Knot. In
comparison, the upcoming NSD4 *does* do this. It has an "nsd-control"
command, and if I give it a configuration file as a parameter, then it
uses the control port settings from that file to signal the appropriate
instance of NSD, and not just the default one.
Would you consider getting knotc to also behave the way nsd-control
does? After all, knotc *does* read some configuration from the file, so
ideally it should also read the control section and use the appropriate
address and port numbers from there.
Regards,
Anand Buddhdev
RIPE NCC
Dear Knot developers,
I've been playing with Knot version 1.2.0-rc3, and run into a small
issue. Apologies if it has already been reported.
I'm using a configuration like this:
system {
...
...
user knot.knot;
}
log {
file "/var/log/knot/knot.log" { ... }
}
When I start Knot, it starts running as root, and creates the file
/var/log/knot/knot.log, as root. It then switches to the non-privileged
user "knot". Now it can no longer continue writing to the log file.
Could you please add some code (to go before it changes the UID) to
change the ownership of the log file to the user it is about to switch to?
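A possible interim work-around in my init script, before knotd starts and
drops privileges, would be something like (paths and user as in the config
above):
# make sure the log file exists and is owned by the unprivileged user
touch /var/log/knot/knot.log
chown knot:knot /var/log/knot/knot.log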
Regards,
Anand
I've configured Knot 1.2.0-rc3 to log to a file with this:
log {
file "/var/log/knot/knot.log" { any debug, info, notice, warning, error; }
}
However, some log messages are appearing in /var/log/messages via
syslog, for example:
Mar 9 11:51:37 ams3 knot[30177]: [error] Incoming AXFR transfer of
'0.0.0.0.a.2.ip6.arpa.' with '193.0.0.198@53' key 'ripencc-20110222.':
Zone transfer refused by the server.
Looks like some parts of Knot are not honouring the file log option.
Regards,
Anand
Hello Knot developers,
I've added some zones to my Knot configuration like this:
zones {
example.com {
file example.com;
xfr-in master;
notify-in master;
}
}
After Knot has started up, I see the following files in the storage
directory:
example.com
example.com.db
example.com.db.crc
example.com.diff.db
I see that Knot keeps a copy of the zone in a compiled form. Why does it
also dump the plain text zone?
This would be okay for a small number of zones, but if I want to load a
few thousand zones, then all those plain text copies of the zone files
will take up unnecessary space on the disk. I tried to turn off the
plain text copy by removing the "file example.com" config line, but then
Knot created a file called "example.com..zone" (yes, with two dots) to
keep a plain text copy of the zone anyway.
Is it possible to turn off this plain text dump of the zone?
Regards,
Anand
Hi everyone,
I'm happy to announce that the 3rd and hopefully final release
candidate of Knot DNS 1.2.0 is out!
This one not only brings a few bugfixes, but also a new feature.
The hot topic of the day - Response Rate Limiting, based on the work
of Vernon Schryver and Paul Vixie
(http://www.redbarn.org/dns/ratelimits).
If you're not familiar with the topic, it is a way to combat DNS
amplification and reflection attacks.
The general idea is to identify flows in outgoing responses and block
responses exceeding the rate limit.
To provide at least some degree of service for victims, a mechanism called
SLIP is used, causing every Nth blocked response to be sent truncated, thus
enabling legitimate clients to retry over TCP.
You can enable rate limiting by setting option "rate-limit" to a value
greater than 0, for example:
system {
rate-limit 100;
}
For more about rate limiting knobs, refer to the documentation or a
sample configuration.
Any feedback is more than welcome before it lands in the final version!
As usual, you can find a full list of changes at
https://redmine.labs.nic.cz/projects/knot-dns/repository/revisions/v1.2.0-r…
Sources: https://secure.nic.cz/files/knot-dns/knot-1.2.0-rc3.tar.gz
GPG signature: https://secure.nic.cz/files/knot-dns/knot-1.2.0-rc3.tar.gz.asc
Packages available at www.knot-dns.cz will be updated soon as well.
Have a nice weekend,
Marek
--
Marek Vavruša Knot DNS
CZ.NIC Labs http://www.knot-dns.cz
-------------------------------------------
Americká 23, 120 00 Praha 2, Czech Republic
WWW: http://labs.nic.cz http://www.nic.cz
I just successfully migrated several zones from NSD to Knot DNS. The
migration was straightforward and quick, but limiting the number of
addresses per interface and remote server to one doesn't make sense to
me, especially with dual-stack hosts. This is more intuitive in
NSD. What is the rationale behind the current syntax?
Regards,
Matthias-Christian
Hi Everyone,
we're proud to announce that the release candidate for the next major
release, 1.2, is out!
Apart from a couple of bugfixes, it features full DDNS support (including
update forwarding).
There are a few limitations related to DNSSEC-signed zones; please
refer to the user manual for more information.
Access control for dynamic updates can be configured in a similar
fashion to transfers, using the 'update-in' keyword.
The next major thing is an updated remote control tool.
The basic commands for start/stop/restart/compile and checks are the same
as they were, but the commands for refresh/flush/status can be executed
remotely given the right configuration.
You can enable the remote control interface with a new config section,
'control', e.g.:
control {
listen-on { address 127.0.0.1@5553; }
allow remote0;
}
You can also specify a remote with an associated TSIG key for security reasons.
The Knot control tool then accepts the host, port and TSIG key as parameters, e.g.:
$ knotc -s 127.0.0.1 -p 5553 status
The key can also be specified in a file instead of as a command line parameter.
But that is just the tip of the iceberg. For more of the smaller features,
like configurable TCP timeouts or LOC support, refer to the RELNOTES and
the user documentation.
RELNOTES: https://git.nic.cz/redmine/projects/knot-dns/repository/revisions/v1.2-rc1/…
Sources: https://secure.nic.cz/files/knot-dns/knot-1.2-rc1.tar.gz
GPG signature: https://secure.nic.cz/files/knot-dns/knot-1.2-rc1.tar.gz.asc
Packages available at www.knot-dns.cz will be updated soon as well.
Cheers,
Marek
--
Marek Vavruša Knot DNS
CZ.NIC Labs http://www.knot-dns.cz
-------------------------------------------
Americká 23, 120 00 Praha 2, Czech Republic
WWW: http://labs.nic.cz http://www.nic.cz
Hi,
we've recently corrected two issues: the server crashed when trying to
reload a configuration with duplicate zones, and zone transfer scheduling
was broken, sometimes (but very rarely) causing some zones not to be
synced with the master properly, so we are releasing the fixed version
right away.
Sources of this version are here:
http://public.nic.cz/files/knot-dns/knot-1.1.2-rc1.tar.gz
GPG signature:
http://public.nic.cz/files/knot-dns/knot-1.1.2-rc1.tar.gz.asc
Final v1.1.2 will be released in a week.
Kind regards,
Lubos
--
Ľuboš Slovák Knot DNS
CZ.NIC Labs http://www.knot-dns.cz
-------------------------------------------
Americká 23, 120 00 Praha 2, Czech Republic
Email: lubos.slovak(a)nic.cz
WWW: http://labs.nic.cz http://www.nic.cz
-------------------------------------------
Please consider the environment before printing this email.
Join the campaign at http://thinkBeforePrinting.org
Hello,
the final release 1.1.1 of Knot DNS has just been released. We fixed just
one small bug since last week's release candidate, so it should now be stable.
Enjoy, and stay tuned for version 1.2, which should be out in late
November and will bring the long-desired support for Dynamic Updates and
some other improvements!
Kind regards,
Lubos
--
Ľuboš Slovák Knot DNS
CZ.NIC Labs http://www.knot-dns.cz
-------------------------------------------
Americká 23, 120 00 Praha 2, Czech Republic
Email: lubos.slovak(a)nic.cz
WWW: http://labs.nic.cz http://www.nic.cz
-------------------------------------------
Please consider the environment before printing this email.
Join the campaign at http://thinkBeforePrinting.org
Hello,
I have set up a Knot DNS server but I'm having trouble with UDP
queries. The server is not answering UDP queries, but it is
answering queries over TCP.
The server is running on CentOS release 6.3 (Final) and the
configuration file is the following.
*************knot.conf***********
system {
identity "Yet.another.server";
nsid "Yet.another.server";
storage "/opt/knot_run/knot-minimal";
pidfile "/opt/knot_run/knot.pid";
user root;
}
interfaces {
ipv4 { address 127.0.0.1@53; }
ipv4 { address 193.137.197.25@53; }
}
remotes {
ns-test01 { address 193.136.192.86@53; }
ns-test02 { address 193.136.192.87@53; }
ns-test03 { address 193.137.196.30@53; }
ns-test04 { address 193.137.196.31@53; }
}
zones {
zonetest-01.dns.pt {
file "/opt/knot_run/zones/zonetest01";
xfr-in ns-test01;
notify-in ns-test01;
}
zonetest-06.dns.pt {
file "/opt/knot_run/zones/zonetest06";
}
}
log {
file "/opt/knot_run/log/knot.log" { any all; }
}
**********************************
The output of the log file is
********knot.log******************
2012-09-17T10:25:40.208574+01:00 Stopping server...
2012-09-17T10:25:40.210677+01:00 Server finished.
2012-09-17T10:25:40.211260+01:00 Shut down.
2012-09-17T10:25:40.230967+01:00 Binding to interface 127.0.0.1 port 53.
2012-09-17T10:25:40.231283+01:00 Binding to interface 193.137.197.25
port 53.
2012-09-17T10:25:40.232162+01:00 Loading 2 compiled zones...
2012-09-17T10:25:40.233783+01:00 Loaded zone 'zonetest-01.dns.pt.'
2012-09-17T10:25:40.237553+01:00 Loaded zone 'zonetest-06.dns.pt.'
2012-09-17T10:25:40.238983+01:00 Loaded 2 out of 2 zones.
2012-09-17T10:25:40.239044+01:00 Configured 2 interfaces and 2 zones.
2012-09-17T10:25:40.239078+01:00
2012-09-17T10:25:40.239111+01:00 Starting server...
2012-09-17T10:25:40.240688+01:00 Server started as a daemon, PID = 8599
2012-09-17T10:25:40.240772+01:00 PID stored in /opt/knot_run/knot.pid
*********************************
And an example of the queries:
*********************************
[root@ns-test06 ~]# dig @127.0.0.1 zonetest-06.dns.pt +tcp
; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.1 <<>> @127.0.0.1
zonetest-06.dns.pt +tcp
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 30969
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 5, ADDITIONAL: 0
;; WARNING: recursion requested but not available
;; QUESTION SECTION:
;zonetest-06.dns.pt. IN A
;; ANSWER SECTION:
zonetest-06.dns.pt. 3600 IN A 193.137.196.42
;; AUTHORITY SECTION:
zonetest-06.dns.pt. 3600 IN NS ns-test01.dns.pt.
zonetest-06.dns.pt. 3600 IN NS ns-test02.dns.pt.
zonetest-06.dns.pt. 3600 IN NS ns-test03.dns.pt.
zonetest-06.dns.pt. 3600 IN NS ns-test04.dns.pt.
zonetest-06.dns.pt. 3600 IN NS ns-test06.dns.pt.
;; Query time: 2 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Mon Sep 17 10:25:49 2012
;; MSG SIZE rcvd: 202
[root@ns-test06 ~]# dig @127.0.0.1 zonetest-06.dns.pt
; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.1 <<>> @127.0.0.1
zonetest-06.dns.pt
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached
*********************************
Can anyone help me with this problem?
Best regards,
--
Eduardo Duarte
SIT-DNS
DNS.PT - https://www.dns.pt/
FCCN - http://www.fccn.pt/
Greetings. This seems a bit odd. I have a zone file with:
=====
$TTL 6
test79.example.com. IN SOA foo.example.com. dns.test79.example.com. ( 201209150 5m 1m 2w 5m )
test79.example.com. IN NS foo.example.com.
test79.example.com. IN TYPE79 \# 4 deadface
=====
Trying to compile it gives:
=====
Parsing file '/home/dns/Files/foo.conf', origin 'test79.example.com.' ...
/home/dns/Files/foo.conf:4: error: bad unknown RDATA
Parser finished with error, not dumping the zone!
error: Compilation of 'test79.example.com.' failed, knot-zcompile return code was '1'
=====
This zone file works fine in BIND, NSD, and PowerDNS. It seems that either Knot cannot use the standard mechanism for defining unknown RRtypes (RFC 3597), or maybe it has a different syntax. Clues are appreciated.
--Paul Hoffman
Hi!
I run Knot with option
apn@knot-test:/home/apn>grep user /usr/local/etc/knot/knot.conf
user bind.dns;
apn@knot-test:/home/apn>ps uaxww | grep knot
bind 9925 0.0 0.8 33760 8736 ?? Ss 4:03PM 0:00.07
/usr/local/sbin/knotd -d -c /usr/local/etc/knot/knot.conf
apn@knot-test:/home/apn>knotc -V
Knot DNS, version 1.1.0-rc2
apn@knot-test:/home/apn>uname -a
FreeBSD knot-test.local 9.0-RELEASE FreeBSD 9.0-RELEASE #0: Tue Jan 3
07:46:30 UTC 2012
root@farrell.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC amd64
Everything is fine except for one thing: I can't control Knot via knotc
under my account and have to raise my privileges.
apn@knot-test:/home/apn>knotc running
2012-09-03T17:33:20.801730+04:00 Using '/usr/local/etc/knot/knot.conf'
as default configuration.
2012-09-03T17:33:20.802876+04:00 Server PID not found, probably not running.
2012-09-03T17:33:20.803099+04:00 [warning] PID file is stale.
apn@knot-test:/home/apn>knotc reload
2012-09-03T17:57:01.706820+04:00 Using '/usr/local/etc/knot/knot.conf'
as default configuration.
2012-09-03T17:57:01.707934+04:00 [warning] Server PID not found,
probably not running.
apn@knot-test:/home/apn>knotc refresh
2012-09-03T17:57:11.314605+04:00 Using '/usr/local/etc/knot/knot.conf'
as default configuration.
2012-09-03T17:57:11.315736+04:00 [warning] Server PID not found,
probably not running.
I believe that is because of the use of kill(2) in pid_running(). So I'm
wondering how an unprivileged user can send commands to Knot?
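If I read the kill(2) semantics right, a check along these lines (just my
sketch, not the actual Knot code) would treat EPERM as "running", which
would at least let the status detection work for an unprivileged user, even
though actually sending signals would still require privileges:
#include <errno.h>
#include <signal.h>
#include <stdbool.h>
#include <sys/types.h>

/* Sketch only: kill(pid, 0) sends no signal, it just checks whether one
 * could be sent. For an unprivileged caller it fails with EPERM even
 * though the process exists, so EPERM should count as "running". */
static bool pid_running(pid_t pid)
{
        if (kill(pid, 0) == 0) {
                return true;
        }
        return errno == EPERM;
}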
Thanks in advance.
--
AP
Hi,
the second Release Candidate of Knot DNS 1.1 is out now. We slightly
improved and fixed the user manual, fixed two minor bugs:
- generating the journal for IXFR when the zone contains IPSECKEY and APL
records in binary format,
- a possible leak on server shutdown with a pending transfer,
and fixed the behaviour of a slave server using TSIG. It did not sign SOA
queries to the master, causing it to fail the zone version check when
talking to BIND with allow-query configured to use a TSIG key.
Source files are available here:
http://public.nic.cz/files/knot-dns/knot-1.1.0-rc2.tar.gz
GPG signature:
http://public.nic.cz/files/knot-dns/knot-1.1.0-rc2.tar.gz.asc
Packages will be updated soon at the usual place on http://www.knot-dns.cz.
Please provide us with any feedback before the final 1.1 release next week.
Regards,
Lubos
--
Ľuboš Slovák Knot DNS
CZ.NIC Labs http://www.knot-dns.cz
-------------------------------------------
Americká 23, 120 00 Praha 2, Czech Republic
Email: lubos.slovak(a)nic.cz
WWW: http://labs.nic.cz http://www.nic.cz
-------------------------------------------
Please consider the environment before printing this email.
Join the campaign at http://thinkBeforePrinting.org