zone-refresh [<zone>...]       Force slave zone refresh.
zone-retransfer [<zone>...]    Force slave zone retransfer (no serial check).
I would expect zone-retransfer to perform a complete AXFR. Instead, it
sometimes just does a refresh:
info: [at.] control, received command 'zone-retransfer'
info: [at.] refresh, outgoing, 83.1.2.3@53: remote serial 2018011647, zone is up-to-date
info: [at.] control, received command 'zone-retransfer'
info: [at.] refresh, outgoing, 83.1.2.3@53: remote serial 2018011647, zone is up-to-date
info: [at.] control, received command 'zone-refresh'
info: [at.] refresh, outgoing, 2a02:111:9::5@53: remote serial 2018011647, zone is up-to-date
info: [at.] control, received command 'zone-refresh'
info: [at.] refresh, outgoing, 2a02:111:9::5@53: remote serial 2018011647, zone is up-to-date
info: [at.] control, received command 'zone-refresh'
info: [at.] refresh, outgoing, 2a02:111:9::5@53: remote serial 2018011647, zone is up-to-date
info: [at.] control, received command 'zone-retransfer'
info: [at.] AXFR, incoming, 2a02:111:9::5@53: starting
Seen with 2.6.3-1+ubuntu14.04.1+deb.sury.org+1
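The behavior is easy to reproduce by issuing the command and watching the log; the zone name below is just a placeholder:

    knotc zone-retransfer example.com
    # expected in the log:       AXFR, incoming, <master>@53: starting
    # observed most of the time: refresh, outgoing, <master>@53: remote serial ..., zone is up-to-date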
regards
Klaus
Hi,
After upgrading our fleet of slave servers from 2.5.4 to 2.6.4, I
noticed that, on a few slaves, a large zone that changes rapidly is
now consistently behind the master to a larger degree than we consider
normal. By "behind", I mean that the serial number reported by the
slave in the SOA record is lower than the one reported by the master server.
Normally we expect small differences between the serial on the master
and the slaves because our zones change rapidly, and these differences are
often transient. However, after the upgrade, a subset of the slaves (always
the same ones) shows a much larger difference. Fortunately, the difference
does not increase without bound.
The hosts in question seem powerful enough: one has eight 2 GHz Xeons and
32 GB of RAM, which is less powerful than some of the hosts that are
keeping up. It may be more a matter of their connectivity. Two of the
affected slaves are in the same location.
For now, I've downgraded these slaves back to 2.5.4, and they are able
to keep up again.
Is there a change that would be an obvious culprit for this, or is
there something we could tune? One final piece of information: we
always apply the change contained in the ecs-patch branch (which
returns ECS data if the client requests it). I don't know whether the
effect of this processing is significant. We do need it as part of
some ongoing research we're conducting.
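For example, would any of the server-level worker settings matter for how quickly incoming transfers are applied? Something along these lines (the values are only placeholders, not what we actually run):

    server:
      udp-workers: 8
      tcp-workers: 8
      background-workers: 4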
Chuck
Hello,
I plan to use Docker to deploy Knot DNS.
I am going to copy all the zone configurations into the Docker image.
Then I will start two containers with two different IP addresses.
In this case, is it necessary to configure the acl and remote sections
related to master/slave replication?
I don't think so, because both IPs will reply with exactly the same zone
configuration, but please give me your opinion.
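For reference, the idea is that each container simply serves the zones from local files baked into the image, with no remote or acl sections at all, roughly like this (zone name and path are placeholders):

    zone:
      - domain: example.com
        file: /var/lib/knot/zones/example.com.zone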
Regards,
Gael
--
Cordialement, Regards,
Gaël GIRAUD
ATLANTIC SYSTÈMES
Mobile : +33603677500
Dear Knot Resolver users,
please note that Knot Resolver now has its own dedicated mailing list:
https://lists.nic.cz/cgi-bin/mailman/listinfo/knot-resolver-users
For further communication regarding Knot Resolver, please subscribe to
this list. We will send new version announcements only to the new
mailing list.
--
Petr Špaček @ CZ.NIC
Hello all,
We had a weird issue with Knot serving an old version of a zone after a server reboot. After the reboot, our monitoring alerted that the zone was out of sync: Knot was serving an older version of the zone. The zone did not update during the reboot; Knot was serving a version of the zone that was older than what it had been serving before the reboot.

The zone file on disk had the correct serial, and knotc zone-status <zone> showed the current serial as well. However, dig @localhost soa <zone> on that box showed the old serial. Running knotc zone-refresh <zone> didn't help; in the logs, when it went to do the refresh, it reported 'zone is up-to-date'. Running knotc zone-retransfer also did not resolve the problem; only a restart of the knotd process resolved it. While we were able to fix this ourselves, it is certainly a strange issue, and we were wondering if we could get any input on it.
Command output:
[root@ns02 ~]# knotc
knotc> zone-status <zone>
[<zone>] role: slave | serial: 2017121812 | transaction: none | freeze: no | refresh: +3h59m42s | update: not scheduled | expiration: +6D23h59m42s | journal flush: not scheduled | notify: not scheduled | DNSSEC re-sign: not scheduled | NSEC3 resalt: not scheduled | parent DS query: not scheduled
knotc> exit
[root@ns02 ~]# dig @localhost soa <zone>
…
… 2017090416 …
…
Logs after the refresh and retransfer:
Jan 15 16:49:22 ns02 knot[7187]: info: [<zone>] control, received command 'zone-refresh'
Jan 15 16:49:22 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:49:23 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:49:23 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:49:23 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] control, received command 'zone-retransfer'
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] AXFR, incoming, <master>@53: starting
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] AXFR, incoming, <master>@53: finished, 0.00 seconds, 1 messages, 5119 bytes
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: zone updated, serial none -> 2017121812
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:53:03 ns02 knot[7187]: info: [<zone>] control, received command 'zone-status'
And a dig after that:
[root@ns02 ~]# dig @localhost soa crnet.cr
…
… 2017090416 …
…
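For reference, this is roughly the sequence we went through before falling back to a restart (zone name is a placeholder, and the restart command depends on how knotd is managed on your system):

    # serial actually being answered
    dig +short @localhost SOA <zone> | awk '{print $3}'

    # serial knotd believes it has loaded
    knotc zone-status <zone>

    # neither of these helped
    knotc zone-refresh <zone>
    knotc zone-retransfer <zone>

    # only a full restart of knotd fixed it
    systemctl restart knot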
-Rob
Hi,
I wrote a collectd plugin which fetches the metrics that "knotc
[zone-]status" reports, directly from the control socket.
The code is still a bit of a work in progress but should be mostly done. If
you want to try it out, the code is on GitHub; feedback welcome:
https://github.com/julianbrost/collectd/tree/knot-plugin
https://github.com/collectd/collectd/pull/2649
Also, I'd really like some feedback on how I use libknot, as I found
only very little documentation on it. If you have any questions, just ask.
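For comparison, the plugin is meant to expose roughly the same information you can fetch manually over the control socket with knotc (the socket path here is just the default location on my system):

    knotc -s /run/knot/knot.sock status
    knotc -s /run/knot/knot.sock zone-status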
Regards,
Julian
Hi!
I installed the Knot 2.6.3 packages from the PPA on Ubuntu 14.04. This
confuses the syslog logging. I am not sure, but I think the problem is
that Knot requires systemd for logging.
The problem is that I do not see any Knot log messages on my syslog
server, only in journald. Is there something special in Knot that
prevents the logging from being forwarded to syslog?
Is it possible to use your Ubuntu packages without systemd logging?
I think it would be better to build the packages for non-systemd distros
(i.e. Ubuntu 14.04) without systemd dependencies.
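For reference, I would expect a plain syslog target along these lines to end up on the syslog server:

    log:
      - target: syslog
        any: info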
Thanks
Klaus
Hi!
Knot 2.6.3: When an incoming NOTIFY does not match any ACL, it is
answered with NOTAUTH, although the zone is configured. I would have
expected Knot to respond with REFUSED in such a scenario. Is the
NOTAUTH intended? From an operational point of view, a REFUSED would ease
debugging.
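For context, the kind of setup I mean is a plain notify ACL along these lines (names and addresses are placeholders); a NOTIFY arriving from any other address is then answered with NOTAUTH rather than REFUSED:

    remote:
      - id: master1
        address: 192.0.2.1

    acl:
      - id: notify_from_master
        address: 192.0.2.1
        action: notify

    zone:
      - domain: example.com
        master: master1
        acl: notify_from_master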
regards
Klaus
> key
>
> An ordered list of references to TSIG keys. The query must match one of them. Empty value means that TSIG key is not required.
>
> Default: not set
This is not 100% correct. At least with a notify ACL, the behavior is:
an empty value means that TSIG keys are not allowed.
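In other words, with an ACL like the following (address is a placeholder), my observation is that a NOTIFY signed with a TSIG key does not match the ACL at all:

    acl:
      - id: notify_from_master
        address: 192.0.2.1
        action: notify
        # no 'key' item: a TSIG-signed NOTIFY from 192.0.2.1 is not accepted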
regards
Klaus