Hello,
it seems that knotd suffers from the same issue as described here:
http://lists.scusting.com/index.php?t=msg&th=244420
I have Debian 7.0 with
http://deb.knot-dns.cz/debian/dists/wheezy/main/binary-i386/net/knot_1.2.0-…
and this is in /var/log/syslog after reboot:
Jun 3 22:37:43 ns knot[2091]: Binding to interface 2xxx:xxxx:xxxx:xxxx::1
port 53.
Jun 3 22:37:43 ns knot[2091]: [error] Cannot bind to socket (errno 99).
Jun 3 22:37:43 ns knot[2091]: [error] Could not bind to UDP
interface 2xxx:xxxx:xxxx:xxxx::1 port 53.
I have a static IPv6 address configured in /etc/network/interfaces.
Restarting knot later binds to this IPv6 address without any problem - it
is only the first start, during OS boot, which fails. What do you think is
the proper way to make knotd reliably listen on a static IPv6 address? I
would prefer to avoid restarting knotd.
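(For reference, errno 99 on Linux is EADDRNOTAVAIL, so the address was
apparently not assigned to the interface yet at that point during boot. I
know that on Linux a socket can be allowed to bind to a not-yet-configured
address with the IP_FREEBIND socket option set before bind() - a minimal
sketch of that idea follows, with a placeholder address; it is not knotd's
code and I don't know whether this is something knotd could or should do.)

    /* Sketch: bind an IPv6 UDP socket to an address that may not be
     * configured yet. IP_FREEBIND (Linux) lets bind() succeed for a
     * non-local address, so a daemon started before the interface is
     * fully up does not fail with EADDRNOTAVAIL. Illustration only. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET6, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        int on = 1;
        /* IP_FREEBIND is honoured for AF_INET6 sockets on Linux. */
        if (setsockopt(fd, IPPROTO_IP, IP_FREEBIND, &on, sizeof(on)) < 0)
            perror("setsockopt(IP_FREEBIND)");

        struct sockaddr_in6 sa;
        memset(&sa, 0, sizeof(sa));
        sa.sin6_family = AF_INET6;
        sa.sin6_port = htons(53);            /* needs root for port 53 */
        inet_pton(AF_INET6, "2001:db8::1", &sa.sin6_addr); /* placeholder */

        if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            perror("bind");  /* without IP_FREEBIND: EADDRNOTAVAIL (99) */
            return 1;
        }
        printf("bound\n");
        return 0;
    }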
Leos Bitto
Hello Knot folks:
"The 'rundir' obsoletes 'pidfile' config option, as the PID file will
always be placed in 'rundir'."
This is cool, unless you want to run multiple instances of KNOT on a single
machine. Can you reconsider this?
Jonathan
Hello KNOT folks,
We've found an issue in 1.3 with bootstrapping. We're using FreeBSD 9.x, but
we also quickly confirmed that it exists on Ubuntu 12.x, so it is not
isolated to FreeBSD. We're testing with about 3000 to 4000 zones, so our
environment is not even very large at this point, yet the bootstrapping
failures are very problematic. There are three causes that we've seen thus
far:
1. If the AXFR TCP connect is interrupted by a signal, the whole AXFR is
aborted and the bootstrap is rescheduled, instead of selecting on the socket
until the connection either succeeds or times out/fails. This can result in
a flood of connects, with little to no progress in the bootstrapping.
2. Once connected, if a recv() is interrupted by a signal, it isn't retried
(see the sketch below). This results in connections being dropped that don't
need to be dropped.
3. If a successful connect is made, but the remote end subsequently drops it
(e.g., resets the connection), then the bootstrap fails without being
rescheduled. This was found when slaving from a non-KNOT DNS server that may
have TCP rate limiting enabled, or something of that nature. Either way, the
fact that it is not rescheduled is very undesirable.
I suspect that there are other cases of interrupted system calls not being
handled correctly.
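For illustration, the usual way to handle these interruptions is to retry
on EINTR and, for an interrupted or in-progress connect(), to poll the
socket until it becomes writable (or a timeout expires) and then check
SO_ERROR. A minimal sketch of that pattern follows; it is not Knot's code,
just the behaviour we would expect:

    /* Sketch: EINTR-tolerant recv() and connect-with-timeout helpers.
     * Not Knot's code; it shows the pattern referred to above. */
    #include <errno.h>
    #include <poll.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Retry recv() when it is interrupted by a signal. */
    ssize_t recv_retry(int fd, void *buf, size_t len)
    {
        ssize_t n;
        do {
            n = recv(fd, buf, len, 0);
        } while (n < 0 && errno == EINTR);
        return n;
    }

    /* Wait for an interrupted or non-blocking connect() to finish
     * instead of aborting and rescheduling the whole transfer. */
    int connect_wait(int fd, const struct sockaddr *sa, socklen_t salen,
                     int timeout_ms)
    {
        if (connect(fd, sa, salen) == 0)
            return 0;                        /* connected immediately */
        if (errno != EINTR && errno != EINPROGRESS)
            return -1;                       /* real failure */

        struct pollfd pfd = { .fd = fd, .events = POLLOUT };
        int rc;
        do {
            rc = poll(&pfd, 1, timeout_ms);  /* writable or timeout */
        } while (rc < 0 && errno == EINTR);
        if (rc <= 0)
            return -1;                       /* timeout or poll error */

        int err = 0;
        socklen_t elen = sizeof(err);
        if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &elen) < 0 || err != 0)
            return -1;                       /* connect ultimately failed */
        return 0;
    }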
Here is some additional info that may help find the root cause:
- The greater the latency between the master and slave, the worse the
problem is. We tested with a slave 80 ms RTT away and it was very bad.
- The more worker threads you have, the worse the problem is. So even
locally (slave 0 ms away from master) we could reproduce the issue fairly
easily.
Hopefully this can be remedied!
Cheers,
Jonathan
Hi Knot people,
I've been trying out rc3, and I've found a few issues, as documented below:
1. In the knotc man page, the "-s" option shows the default as
@rundir@/knot.sock. I guess that should have been substituted with the
compile-time setting, but wasn't.
2. knotc start doesn't do anything now. It should be removed.
3. knotc restart only stops the server, but does not start it. It should
be removed.
4. When Knot is configured as a slave and receives a zone via AXFR
containing the following record:
<owner-obscured>. 86400 IN TXT "Alliance Fran\195\167aise de Kotte"
it serves this record correctly when queried over DNS:
dig @a.b.c.d +norec +short txt <owner-obscured>.
"Alliance Fran\195\167aise de Kotte"
But when saving the zone to disk, this record gets written out as:
<owner-obscured>. 86400 IN TXT "Alliance Fran\4294967235\4294967207aise
de Kotte"
So when Knot restarts and tries to load this zone, it gives an error (a
probable cause is sketched after this list):
Error in zone file /var/lib/knot/XX:4875: Number is bigger than 8 bits.
5. I have configured Knot to log to the file /var/log/knot/knot.log.
Most logging goes into it. However, some error messages are still
leaking into syslog; for example, the following appears both in Knot's
log file and in /var/log/messages (via syslog):
logfile:
2013-06-28T20:17:25 [error] Incoming AXFR of 'XX.' with 'a.b.c.d@53':
General error.
/var/log/messages:
Jun 28 20:17:25 admin knot[1630]: [error] Incoming AXFR of 'XX.' with
'a.b.c.d@53': General error.
I would expect Knot to send logging to syslog only while it is starting
and hasn't yet set up its logging to file. Once logging to a file has
started, there should be no more logs to syslog, so this looks like a
bug, albeit a harmless one.
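Incidentally, regarding item 4: 4294967235 and 4294967207 are exactly 195
and 167 pushed through a signed char and then printed as a 32-bit unsigned
value (2^32 - 61 and 2^32 - 89), so my guess is a sign-extension bug in the
escape formatting. A small C sketch that reproduces the symptom (this is
not Knot's code, just an illustration of that class of bug):

    #include <stdio.h>

    /* Prints non-ASCII bytes as decimal escapes, the way a zone file
     * writer would. The "buggy" line reproduces the mangled output. */
    static void print_escaped(const char *s)
    {
        for (; *s != '\0'; s++) {
            unsigned char u = (unsigned char)*s;
            if (u < 0x20 || u > 0x7e) {
                /* Buggy: where char is signed, 0xC3 becomes -61,
                 * which prints as 4294967235 via %u. */
                printf("\\%u", (unsigned int)*s);
                /* Correct (prints \195):
                 * printf("\\%u", (unsigned int)u); */
            } else {
                putchar(*s);
            }
        }
        putchar('\n');
    }

    int main(void)
    {
        print_escaped("Alliance Fran\xc3\xa7" "aise de Kotte");
        return 0;
    }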
That's about it for Friday evening :)
Regards,
Anand Buddhdev
RIPE NCC
Hi Everyone,
I'm happy to announce that the third and probably last release candidate
for Knot DNS 1.3.0 is out!
This release candidate is a bit special to us, as it features not only a
slew of bugfixes but also a new feature.
Quite a lot of you asked for a tool to estimate the memory consumption of
a given zone; this is now available as a new 'memstats' action in the knotc
utility. Its usage is quite similar to the 'checkzone' command, but it
prints out the estimated memory consumption instead. You can also pick
specific zones if you like.
The second quite substantial change is the deprecation of 'knotc start'.
The preferred method of startup is now running knotd directly, optionally
with the -d flag to run it in the background. Also, knotd no longer creates
a PID file when running in the foreground. This is to play nicely with
managed init scripts and to be able to write out the PID atomically.
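For the curious: one common way to write a PID file atomically is to write
it to a temporary file in the same directory and then rename() it into
place, so readers never see a half-written file. A simplified sketch of
that pattern (not necessarily the exact code in knotd):

    /* Sketch of the write-temp-then-rename pattern for PID files.
     * rename() within one filesystem is atomic on POSIX systems. */
    #include <stdio.h>
    #include <unistd.h>

    static int write_pidfile_atomic(const char *path)
    {
        char tmp[4096];
        snprintf(tmp, sizeof(tmp), "%s.tmp", path);

        FILE *f = fopen(tmp, "w");
        if (f == NULL)
            return -1;
        fprintf(f, "%d\n", (int)getpid());
        if (fclose(f) != 0)
            return -1;

        /* Readers see either the old PID file or the complete new one. */
        return rename(tmp, path);
    }

    int main(void)
    {
        /* Example path only. */
        return write_pidfile_atomic("/tmp/example-knot.pid") == 0 ? 0 : 1;
    }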
On top of that, you can also change the default 'storage' and 'rundir';
see ./configure --help for that.
'rundir' obsoletes the 'pidfile' config option, as the PID file will always
be placed in 'rundir'.
The third and last substantial change is that knotc now uses UNIX sockets
by default, and this is now the preferred mode of operation, although IP
sockets are still supported. We believe this is safer for now, as the
control protocol is not encrypted.
The control interface is also enabled by default; to disable it completely,
add an empty 'control' section to the configuration.
But of course, being a release candidate, we didn't forget to pack in
bugfixes: namely, processing of IXFR with a lot of serial changes,
processing of the TSIG keyfile in knotc, and a couple of performance
regressions related to zone transfers.
Big thanks for numerous hours of testing and witty feedback on this
release go to Anand Buddhdev. Thank you!
You can find the full changelog at:
https://gitlab.labs.nic.cz/knot/blob/v1.3.0-rc3/NEWS
Sources:
https://secure.nic.cz/files/knot-dns/knot-1.3.0-rc3.tar.gz
https://secure.nic.cz/files/knot-dns/knot-1.3.0-rc3.tar.bz2
https://secure.nic.cz/files/knot-dns/knot-1.3.0-rc3.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-dns/knot-1.3.0-rc3.tar.gz.asc
https://secure.nic.cz/files/knot-dns/knot-1.3.0-rc3.tar.bz2.asc
https://secure.nic.cz/files/knot-dns/knot-1.3.0-rc3.tar.xz.asc
Packages available at www.knot-dns.cz will be updated soon as well.
Kind regards,
Marek
--
Marek Vavruša Knot DNS
CZ.NIC Labs http://www.knot-dns.cz
-------------------------------------------
Americká 23, 120 00 Praha 2, Czech Republic
WWW: http://labs.nic.cz http://www.nic.cz
[sorry for cross-post, please don't cross-post when replying]
Hi,
we are currently implementing DNSSEC in Knot DNS and we would like to gather opinions on this topic. I think we all have more experience with DNSSEC deployment now, and it would be very valuable for us to gather that experience and forge an implementation that users will like.
Could you please spare a few minutes and fill in a little survey of ours on this topic?
https://docs.google.com/forms/d/1gS-Z1NAL78oFOU51HR6jmg-ZkAevU2CrvCHQgrNnXL…
If you don't want to give your answers to the NSA :), you can reply to this email (the Reply-To is set to knot-dns-users(a)lists.nic.cz), or just send the feedback directly to me.
Thank you very much,
Ondrej
--
Ondřej Surý -- Chief Science Officer
-------------------------------------------
CZ.NIC, z.s.p.o. -- Laboratoře CZ.NIC
Americka 23, 120 00 Praha 2, Czech Republic
mailto:ondrej.sury@nic.cz http://nic.cz/
tel:+420.222745110 fax:+420.222745112
-------------------------------------------
Dear knot users/developers,
I'm testing Knot 1.3.0-rc2. One issue I've run into is that when knot
exits uncleanly, it leaves its PID file behind in /var/lib/knot/knot.pid.
On the following attempt to start knot, it sees the PID file, and
refuses to start, so I have to remove the stale PID file by hand. If I
run knot under a supervisor, such as upstart, then the supervisor tries to
start knot, fails, retries in rapid succession, and eventually gives up, at
which point knot isn't running.
My first observation is that in cases where knot is running under a
supervisor, the PID file is not necessary. The supervisor knows knot's
PID, and can signal it directly (to reload, or stop). Could you consider
making the PID file optional? As in, if in knot.conf, I have:
pidfile "";
then knot should not attempt to write (or read or delete) a PID file at all.
Alternatively, could you make knot a bit smarter, so that if it detects a
PID file at startup whose PID is either not in use, or is in use by a
process that is not knot, it removes the stale PID file itself?
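For what it's worth, the liveness half of that check is cheap: read the
PID and probe it with kill(pid, 0). A rough sketch of the idea (just an
illustration, not a patch against Knot; checking that the process is
actually knot would need an extra /proc or similar lookup, which this
omits):

    /* Sketch: decide whether a PID file is stale. kill(pid, 0) sends no
     * signal; it only checks whether a process with that PID exists. */
    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/types.h>

    /* Returns 1 if the PID file looks stale and removable, 0 otherwise. */
    static int pidfile_is_stale(const char *path)
    {
        FILE *f = fopen(path, "r");
        if (f == NULL)
            return 0;             /* no PID file, nothing to clean up */

        long pid = 0;
        int ok = (fscanf(f, "%ld", &pid) == 1 && pid > 0);
        fclose(f);
        if (!ok)
            return 1;             /* unparsable PID file is stale */

        if (kill((pid_t)pid, 0) == 0)
            return 0;             /* some process has this PID */
        return errno == ESRCH;    /* no such process -> stale */
    }

    int main(int argc, char **argv)
    {
        const char *path = argc > 1 ? argv[1] : "/var/lib/knot/knot.pid";
        printf("%s: %s\n", path,
               pidfile_is_stale(path) ? "stale" : "not stale");
        return 0;
    }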
Without either of these possibilities, I have to add logic in my startup
scripts or supervisor scripts to remove any stale PID files. If I only
had one instance of knot, I could do that simply with "rm -f
/var/lib/knot/knot.pid", but I'd like to run multiple instances of knot,
and so I would have multiple PID files, whose locations I would have to
extract from the config file first in order to delete them reliably.
This starts to get hairy...
I hope I have made a reasonable case for providing an option not to
record a PID file, or to clean up a stale one before starting. My
preference is for the first option, i.e., not writing a PID file at all.
Regards,
Anand Buddhdev
RIPE NCC
Dear Knot developers,
I'm testing Knot 1.3.0-rc1 now. One of the things I'd like to be able to
do is to estimate Knot's memory usage without actually loading all my
zones into it (because if it runs out of memory it will be killed).
One very crude way was to load a single zone from a plain-text zone
file and compare the memory used by knot with the size of the zone on
disk. For a zone of 229 MB, knot uses about 1.3 GB of RAM, so I could say
that knot wants about 6x as much RAM as the zone file size. Is this a
reasonable assumption?
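If that rough 6x factor holds, even something as simple as the following
would do as a first approximation: sum the on-disk sizes of the zone files
and multiply. This is only a sketch, and the multiplier is just my
single-zone measurement, not an official figure:

    /* Sketch: rough RAM estimate for a set of zone files, assuming the
     * crude "about 6x the zone file size" factor measured above. */
    #include <stdio.h>
    #include <sys/stat.h>

    #define RAM_FACTOR 6.0   /* assumption from one measured zone */

    int main(int argc, char **argv)
    {
        double total = 0.0;
        for (int i = 1; i < argc; i++) {
            struct stat st;
            if (stat(argv[i], &st) != 0) {
                perror(argv[i]);
                continue;
            }
            total += (double)st.st_size * RAM_FACTOR;
        }
        printf("Estimated RAM for %d zone file(s): %.1f MB\n",
               argc - 1, total / (1024.0 * 1024.0));
        return 0;
    }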
Alternatively, since you know how knot uses memory for a zone, are you
able to provide a small program which can read zones one at a time,
calculate the RAM required for each zone, and then print a summary of
the total RAM required to load all those zones into knot?
We frequently get questions from our users along the lines of "I'd like
you to slave this new zone" or "we are going to sign this zone; will
your server be able to load it?" It would help an operator immensely if
they could get an estimate of the additional RAM required.
Regards,
Anand Buddhdev
RIPE NCC