Hello,
We're building a replicated Signer machine, based on Knot DNS. We have
a PKCS #11 backend for keys, and replication working for it.
On one machine we run
one# keymgr orvelte.nep generate ...
and then use the key hash on the other machine in
two# keymgr orvelte.nep share ...
This, however, leads to a report that the identified key could not be
found. Clearly, there is more to the backing store than just the key
material in PKCS #11.
What is the thing I need to share across the two machines, and how can I
do this?
Thanks,
-Rick
We're experiencing occasional failures with Knot crashing while running as a slave. The behavior is as follows: the slave will run for 2 months or so and then segfault. Our system automatically restarts the process, but after 15 minutes or less, the segfault happens again. This repeats until we remove the /var/lib/knot/journal and /var/lib/knot/timers directories. This seems to fix it up for a while: a newly started process will run fine for another couple of months.
More details on our setup: These systems serve a little less than a hundred zones, some of which change at a rapid rate. We have configured the servers to not flush the zone data to regular files. The server software is 2.5.7, but with the changes from the "ecs-patch" branch applied.
A while back, I tried a release from the newer branch (I'm pretty sure it was 2.6.4), but I had a problem there where some servers were falling behind the master, as evidenced by their SOA serial number. Diagnosing this on a more recent branch probably makes more sense, but I'd be a little leery of dealing with two problems, not just one.
I can provide various data: the (gigantic) seemingly "corrupt" journal/timer files and the segfault messages from the syslog. I don't have any coredumps, but I'll turn those on today. Given the nature of the problem, it might take a while for it to manifest.
Chuck
Hello
How can I dump a zone stored in Knot DNS to a file?
DNSSEC-signed zones are overwritten, apparently using some zone-dump functionality, as is noticeable from the comment ";; Zone dump (Knot DNS 2.6.3)".
Regards
Hi, just getting up to speed on Knot DNS and trying to get dynamically
added secondaries working via bootstrapping.
My understanding is that when the server receives a NOTIFY from an
authorized master, it will add the zone and AXFR it if the zone is not
already present, right?
In my conf:
acl:
  - id: "acl_master"
    address: "64.68.198.83"
    address: "64.68.198.91"
    action: "notify"

remote:
  - id: "master"
    address: "64.68.198.83@53"
    address: "64.68.198.91@53"
But whenever I send a NOTIFY from either of those masters, nothing happens
on the Knot DNS side. I have my logging set as:
log:
  - target: "syslog"
    any: "debug"
Thx
- mark
Hello,
I'm trying to use Knot 2.6.7 in a configuration where zone files are
preserved (including comments, ordering and formatting) yet at the same
time Knot performs DNSSEC signing – something similar to the
inline-signing feature of BIND. My config file looks like this:
policy:
  - id: ecdsa_fast
    nsec3: on
    ksk-shared: on
    zsk-lifetime: 1h
    ksk-lifetime: 5h
    propagation-delay: 10s
    rrsig-lifetime: 2h
    rrsig-refresh: 1h

template:
  - id: mastersign
    file: "/etc/knot/%s.zone"
    zonefile-sync: -1
    zonefile-load: difference
    journal-content: all
    dnssec-signing: on
    dnssec-policy: ecdsa_fast
    serial-policy: unixtime
    acl: acl_slave

zone:
  - domain: "example.com."
    template: mastersign
It seems to work well on the first run; I can see that the zone got signed
properly:
>
> # kjournalprint /var/lib/knot/journal/ example.com
> ;; Zone-in-journal, serial: 1
> ;;Added
> example.com. 60 SOA knot.example.com. hostmaster.example.com. 1 3600 900 1814400 60
> example.com. 60 NS knot.example.com.
> first.example.com. 60 TXT "first"
> ;; Changes between zone versions: 1 -> 1529578258
> ;;Removed
> example.com. 60 SOA knot.example.com. hostmaster.example.com. 1 3600 900 1814400 60
> ;;Added
> example.com. 60 SOA knot.example.com. hostmaster.example.com. 1529578258 3600 900 1814400 60
> example.com. 0 CDNSKEY 257 3 13
> …lots of DNSSEC data.
However, if I try to update the unsigned zone file, strange things
happen. If I just add something to a zone and increase the serial, I get
these errors in the log:
>
> Jun 21 13:00:08 localhost knotd[2412]: warning: [example.com.] zone file changed, but SOA serial decreased
> Jun 21 13:00:08 localhost knotd[2412]: error: [example.com.] zone event 'load' failed (value is out of range)
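The "serial decreased" check is presumably based on DNS serial number arithmetic (RFC 1982), under which the on-disk serial compares as older than the unixtime-derived serial Knot last signed with. A minimal Python sketch of that comparison (my own illustration, not Knot's code):

```python
def serial_lt(a, b, bits=32):
    """RFC 1982 serial arithmetic: does serial a compare as less than b?"""
    half = 1 << (bits - 1)
    return (a < b and b - a < half) or (a > b and a - b > half)

# The zone file's serial (1) compares as older than the unixtime-based
# serial of the last signed zone (1529578258), hence the warning.
print(serial_lt(1, 1529578258))           # file serial looks like a decrease
print(serial_lt(1529578258, 1529578259))  # a normal increment
```

This also shows why bumping the file serial by one still compares as a decrease against a serial derived from the current Unix time.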
If I set the serial to be higher than the serial of last signed zone, I
get a slightly different error:
>
> Jun 21 13:22:36 localhost knotd[3096]: warning: [example.com.] journal, discontinuity in changes history (1529580085 -> 1529580084), dropping older changesets
> Jun 21 13:22:36 localhost knotd[3096]: error: [example.com.] zone event 'load' failed (value is out of range)
In either case, when I look into the journal after the reload of the
zone, I see just the unsigned zone:
> # kjournalprint /var/lib/knot/journal/ example.com
> ;; Zone-in-journal, serial: 2
> ;;Added
> example.com. 60 SOA knot.example.com. hostmaster.example.com. 2 3600 900 1814400 60
> example.com. 60 NS knot.example.com.
> first.example.com. 60 TXT "first"
> second.example.com. 60 TXT "second"
Yet the server keeps serving the previous signed zone no matter what I
try. The only thing that helps is a cold restart of Knot, after which the
zone gets signed again.
So this approach is obviously not working as expected. If I comment out
the option `zonefile-load: difference`, I get a somewhat working solution
where the zone is completely re-signed during each reload, and I get this
warning in the log:
> Jun 21 13:27:38 localhost knotd[3156]: warning: [example.com.] with automatic DNSSEC signing and outgoing transfers enabled, 'zonefile-load: difference' should be set to avoid malformed IXFR after manual zone file update
I guess this should not bother me a lot as long as I keep the serial
numbers of the unsigned zones significantly different from the signed
ones. The only problem is that this completely kills IXFR transfers, as
well as signing only the differences.
So far the only solution I see is to run two instances of Knot: one
reading the zone file from disk without signing and transferring it to
another instance, which would do the signing in slave mode.
Is there anything I'm missing here?
Sorry for such a long e-mail and thank you for reading all the way here.
Best regards,
Ondřej Caletka
Hi!
One of our customers uses Knot 2.6.7 as a hidden master which sends
NOTIFYs to our slave service. He reported that Knot cannot send the
NOTIFYs, i.e.:
knotd[10808]: warning: [example.com.] notify, outgoing,
2a02:850:8::6@53: failed (connection reset)
It seems that Knot sometimes tries to send the NOTIFY over TCP (I also
see NOTIFYs via UDP). Unfortunately, our NOTIFY receiver only supports UDP.
This is the first time I have seen a name server send NOTIFYs over TCP.
Is this typical behavior in Knot? Can I force Knot to always send
NOTIFYs over UDP?
Thanks
Klaus
Hello
I am using ecdsap256sha256 as the algorithm. Why does the DS record generated for the KSK DNSKEY (flags = 257) use digest type SHA-1 (= 1) and not SHA-256 (= 2)?
For example:
> dig DNSKEY nic.cz | grep 257
nic.cz. 871 IN DNSKEY 257 3 13 LM4zvjUgZi2XZKsYooDE0HFYGfWp242fKB+O8sLsuox8S6MJTowY8lBD jZD7JKbmaNot3+1H8zU9TrDzWmmHwQ==
> dig DNSKEY nic.cz | grep 257 > dnspub.key
> jdnssec-dstool dnspub.key
nic.cz. 868 IN DS 61281 13 1 091CECC4D2AADB7AC8C4DF413DDF9C5B0B61E5B6
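As far as I understand, the digest type is a property of the DS record and is chosen by whatever tool generates it, not by the DNSKEY itself (jdnssec-dstool apparently defaults to SHA-1). A sketch of how a DS digest is computed per RFC 4034 (hash over the canonical owner name in wire form concatenated with the DNSKEY RDATA), using the key from the dig output above, so the same key can yield either digest type:

```python
import base64, hashlib, struct

def ds_digest(owner, flags, proto, alg, key_b64, digest="sha256"):
    """DS digest = hash(canonical owner name in wire form || DNSKEY RDATA)."""
    wire = b"".join(bytes([len(label)]) + label.lower().encode()
                    for label in owner.rstrip(".").split(".")) + b"\x00"
    rdata = struct.pack("!HBB", flags, proto, alg) + base64.b64decode(key_b64)
    return hashlib.new(digest, wire + rdata).hexdigest().upper()

key = ("LM4zvjUgZi2XZKsYooDE0HFYGfWp242fKB+O8sLsuox8S6MJTowY8lBD"
       "jZD7JKbmaNot3+1H8zU9TrDzWmmHwQ==")
sha1 = ds_digest("nic.cz.", 257, 3, 13, key, "sha1")      # digest type 1
sha256 = ds_digest("nic.cz.", 257, 3, 13, key, "sha256")  # digest type 2
```

Passing a different digest name is all it takes to get a SHA-256 DS for the same ECDSA key.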
Regards
dp
Hello all,
my Knot DNS is now in production and I would like to set up some
backup tasks for the configuration and, of course, the keys. Are there any
recommendations regarding backup? And restore? I can see that it is very
easy to dump the current config, but I am not so sure how to back up the
keys. What do you recommend? Saving the content of /var/lib/knot on an
hourly/daily basis? I am not using shared keys.
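For what it's worth, a minimal sketch of the kind of backup described above (archiving the storage directory containing the journal, timers, and the KASP key database; the destination path and naming scheme are my own assumptions):

```python
import pathlib, tarfile, time

def backup_knot_storage(storage="/var/lib/knot", dest="/var/backups"):
    """Create a timestamped tar.gz snapshot of Knot's storage directory."""
    src = pathlib.Path(storage)
    out = pathlib.Path(dest) / time.strftime("knot-%Y%m%d-%H%M%S.tar.gz")
    with tarfile.open(out, "w:gz") as tar:
        # keys/ (KASP DB), journal, timers, zone files if stored here
        tar.add(str(src), arcname=src.name)
    return out
```

Since the journal and key database are LMDB files that may be written concurrently, it is probably wise to freeze the zones (knotc zone-freeze) or stop knotd while the archive is taken.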
Thanks
With regards
Ales
Hi
I have a question about this commit
“01b00cc47efe”
Replace select() by poll()?
The performance of epoll is better than select/poll when monitoring large
numbers of file descriptors.
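For context, the usual tradeoff here is portability versus scalability: epoll is Linux-only, while poll is POSIX. Python's selectors module illustrates this by picking the best mechanism per platform (epoll on Linux, kqueue on *BSD, falling back to poll/select). A small self-contained example, not Knot code:

```python
import selectors, socket

sel = selectors.DefaultSelector()  # epoll on Linux, kqueue on *BSD, else poll/select
a, b = socket.socketpair()
sel.register(b, selectors.EVENT_READ)
a.send(b"ping")
events = sel.select(timeout=1)     # list of (SelectorKey, mask) once b is readable
for key, mask in events:
    msg = key.fileobj.recv(4)
sel.close()
a.close()
b.close()
```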
Could you please let me know why you chose poll()?
Thanks for your reply!
Dear all,
While trying to migrate our DNS to Knot, I have noticed that a slave
server with 2GB RAM is facing memory exhaustion. I am running
2.6.5-1+0~20180216080324.14+stretch~1.gbp257446. There are 141 zones,
around 1MB in total. Knot is acting as a pure slave server with
minimal configuration.
There is nearly 1.7GB of memory consumed by Knot on a freshly rebooted
server:
root@eira:/proc/397# cat status
Name: knotd
Umask: 0007
State: S (sleeping)
Tgid: 397
Ngid: 0
Pid: 397
PPid: 1
TracerPid: 0
Uid: 108 108 108 108
Gid: 112 112 112 112
FDSize: 64
Groups: 112
NStgid: 397
NSpid: 397
NSpgid: 397
NSsid: 397
VmPeak: 24817520 kB
VmSize: 24687160 kB
VmLck: 0 kB
VmPin: 0 kB
VmHWM: 1743400 kB
VmRSS: 1743272 kB
RssAnon: 1737088 kB
RssFile: 6184 kB
RssShmem: 0 kB
VmData: 1781668 kB
VmStk: 132 kB
VmExe: 516 kB
VmLib: 11488 kB
VmPTE: 3708 kB
VmPMD: 32 kB
VmSwap: 0 kB
HugetlbPages: 0 kB
Threads: 21
SigQ: 0/7929
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: fffffffe7bfbbefc
SigIgn: 0000000000000000
SigCgt: 0000000180007003
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
CapBnd: 0000003fffffffff
CapAmb: 0000000000000000
Seccomp: 0
Cpus_allowed: f
Cpus_allowed_list: 0-3
Mems_allowed: 00000000,00000001
Mems_allowed_list: 0
voluntary_ctxt_switches: 260
nonvoluntary_ctxt_switches: 316
root@eira:/proc/397#
Config:
server:
    listen: 0.0.0.0@53
    listen: ::@53
    user: knot:knot

log:
  - target: syslog
    any: info

mod-rrl:
  - id: rrl-10
    rate-limit: 10   # Allow 200 resp/s for each flow
    slip: 2          # Every other response slips

mod-stats:
  - id: custom
    edns-presence: on
    query-type: on
    request-protocol: on
    server-operation: on
    request-bytes: on
    response-bytes: on
    edns-presence: on
    flag-presence: on
    response-code: on
    reply-nodata: on
    query-type: on
    query-size: on
    reply-size: on

template:
  - id: default
    storage: "/var/lib/knot"
    module: mod-rrl/rrl-10
    module: mod-stats/custom
    acl: [allowed_transfer]
    disable-any: on
    master: idunn
I was pretty sure that a VM with 2GB RAM is enough for my setup :-)
BR
Ales
Hello all,
I noticed that Knot (2.6.5) creates an RRSIG for the CDS/CDNSKEY RRset
with the ZSK/CSK only.
I was wondering if this is an acceptable behavior as RFC 7344, section
4.1. CDS and CDNSKEY Processing Rules states:
o Signer: MUST be signed with a key that is represented in both the
current DNSKEY and DS RRsets, unless the Parent uses the CDS or
CDNSKEY RRset for initial enrollment; in that case, the Parent
validates the CDS/CDNSKEY through some other means (see
Section 6.1 and the Security Considerations).
Specifically, I read "represented in both the current DNSKEY and DS
RRsets" to mean that the CDS/CDNSKEY RRset must be signed with a KSK/CSK,
and not only with a ZSK and a trust chain to the KSK <- DS.
I tested both BIND 9.12.1 and PowerDNS Auth 4.0.5 as well. PowerDNS Auth
behaves the same as Knot 2.6.5 but BIND 9.12.1 always signs the
CDS/CDNSKEY RRset with at least the KSK.
Do I read the RFC rule too strictly? To be honest, I see nothing wrong
with the CDS/CDNSKEY RRset being signed only by the ZSK, but BIND's
behavior and the not-so-clear RFC statement keep me wondering.
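One way to state the strict reading as a check (my own sketch of the RFC 7344 rule, comparing key tags only for simplicity; a complete check would compare the key material, since tags can collide):

```python
def cds_rrsig_ok(signer_tags, dnskey_tags, ds_tags):
    """Strict RFC 7344 §4.1 reading: the CDS/CDNSKEY RRset must carry at
    least one RRSIG made by a key present in BOTH the current DNSKEY RRset
    and the DS RRset (i.e., a KSK/CSK)."""
    return bool(set(signer_tags) & set(dnskey_tags) & set(ds_tags))

# Signed only by the ZSK (tag 1001): fails under the strict reading, even
# though the ZSK chains to the KSK (tag 2002) via the DNSKEY RRSIG.
print(cds_rrsig_ok({1001}, {1001, 2002}, {2002}))        # False
# Additionally signed by the KSK, as BIND does: passes.
print(cds_rrsig_ok({1001, 2002}, {1001, 2002}, {2002}))  # True
```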
Thanks,
Daniel
I'm getting started with knot resolver and am a bit unclear as to how this
config should be structured.
The result I'm looking for is to forward queries to resolver A if the
source is subnet A, unless the query is for the local domain, in which
case it should query the local DNS.
I've been working with the config below to accomplish this. However, I'm
finding that if the request does not match the local todname, this config
will use the root hints but will not use the FORWARD server.
Ultimately, this server will resolve DNS for several subnets and will
forward queries to different servers based on the source subnet.
Would someone mind pointing me in the right direction on this, please?
for name, addr_list in pairs(net.interfaces()) do
    net.listen(addr_list)
end

-- drop root
user('knot', 'knot')

-- Auto-maintain root TA
modules = {
    'policy', -- Block queries to local zones/bad sites
    'view',   -- View filters
    'hints',  -- Load /etc/hosts and allow custom root hints
    'stats',
}

-- 4GB local cache for record storage
cache.size = 4 * GB

-- If the request is from the eng subnet
if (view:addr('192.168.168.0/24')) then
    if (todname('localnet.mydomain.com')) then
        policy.add(policy.suffix(policy.FORWARD('192.168.168.1'),
                                 {todname('localnet.mydomain.com')}))
    else
        view:addr('192.168.168.0/24', policy.FORWARD('68.111.106.68'))
    end
end
zone-refresh [<zone>...]     Force slave zone refresh.
zone-retransfer [<zone>...]  Force slave zone retransfer (no serial check).
I would expect that retransfer does a complete AXFR. But instead it
sometimes just does a refresh:
info: [at.] control, received command 'zone-retransfer'
info: [at.] refresh, outgoing, 83.1.2.3@53: remote serial 2018011647,
zone is up-to-date
info: [at.] control, received command 'zone-retransfer'
info: [at.] refresh, outgoing, 83.1.2.3@53: remote serial 2018011647,
zone is up-to-date
info: [at.] control, received command 'zone-refresh'
info: [at.] refresh, outgoing, 2a02:111:9::5@53: remote serial
2018011647, zone is up-to-date
info: [at.] control, received command 'zone-refresh'
info: [at.] refresh, outgoing, 2a02:111:9::5@53: remote serial
2018011647, zone is up-to-date
info: [at.] control, received command 'zone-refresh'
info: [at.] refresh, outgoing, 2a02:111:9::5@53: remote serial
2018011647, zone is up-to-date
info: [at.] control, received command 'zone-retransfer'
info: [at.] AXFR, incoming, 2a02:111:9::5@53: starting
Seen with 2.6.3-1+ubuntu14.04.1+deb.sury.org+1
regards
Klaus
Hello,
Knot DNS looks awesome, thanks for that!
The benchmarks show a clear picture (for hosting) that the size of zones
doesn't matter, but DNSSEC does. I'm intrigued by the differences with NSD.
What is less clear is what form of DNSSEC was used -- online signing,
signing only at policy refreshes and updates, or signing before the zone
gets to knotd? This distinction seems important, as it might explain
the structural difference with NSD.
Also, the documentation speaks of "DNSSEC signing for static zones" but
leaves some doubt as to whether this includes editing of the records using
zone transactions, or whether it relates to rosedb, or something else.
https://www.knot-dns.cz/docs/2.6/singlehtml/index.html#automatic-dnssec-sig…
https://www.knot-dns.cz/docs/2.6/singlehtml/index.html#rosedb-static-resour…
Other than this uncertainty (and confusion over the meaning of the
master: parameter), the documentation is a real treat. Thanks for a job
well done!
Best wishes,
-Rick
Hi,
After upgrading our fleet of slave servers from 2.5.4 to 2.6.4, I
noticed that, on a few slaves, a large zone that changes rapidly is
now consistently behind the master to a larger degree than we consider
normal. By "behind", I mean that the serial number reported by the
slave in the SOA record is less than that reported by the master server.
Normally we expect small differences between the serial on the master
and the slaves because our zones change rapidly. These differences are
often transient. However, after the upgrade, a subset of the slaves (always
the same ones) have a much larger difference. Fortunately, the difference
does not increase without bound.
The hosts in question seem powerful enough: one has eight 2 GHz Xeons and
32 GB RAM, which is less powerful than some of the hosts that are
keeping up. It may be more a matter of their connectivity. Two of the
affected slaves are in the same location.
For now, I've downgraded these slaves back to 2.5.4, and they are able
to keep up again.
Is there a change that would be an obvious culprit for this, or is there
something that we could tune? One final piece of information: we
always apply the change contained in the ecs-patch branch (which
returns ECS data if the client requests it). I don't know if the
effect of this processing is significant. We do need it as part of
some ongoing research we're conducting.
Chuck
Hello,
I plan to use Docker to deploy Knot-DNS.
I am going to copy all the zone configurations into the Docker image.
Then I will start two containers with two different IP addresses.
In this case, is it necessary to configure the acl and remote sections
related to master/slave replication?
I don't think so, because both IPs will reply with exactly the same zone
configuration, but please give me your opinion.
Regards,
Gael
--
Cordialement, Regards,
Gaël GIRAUD
ATLANTIC SYSTÈMES
Mobile : +33603677500
Dear Knot Resolver users,
please note that Knot Resolver now has its dedicated mailing list:
https://lists.nic.cz/cgi-bin/mailman/listinfo/knot-resolver-users
For further communication regarding Knot Resolver please subscribe to
this list. We will send new version announcements only to the new
mailing list.
--
Petr Špaček @ CZ.NIC
Hello all,
We had a weird issue with Knot serving an old version of a zone after a server reboot. After the reboot, our monitoring alerted that the zone was out of sync: Knot was serving a version of the zone that was older than what it had before the reboot (the zone itself did not update during the reboot). The zone file on the disk had the correct serial, and knotc zone-status <zone> showed the current serial as well. However, dig @localhost soa <zone> on that box showed the old serial.
Running knotc zone-refresh <zone> didn't help; in the logs, when it went to do the refresh, it showed 'zone is up-to-date'. Running knotc zone-retransfer also did not resolve the problem; only a restart of the knotd process resolved this issue. While we were able to resolve this ourselves, it is certainly a strange issue and we were wondering if we could get any input on this.
Command output:
[root@ns02 ~]# knotc
knotc> zone-status <zone>
[<zone>] role: slave | serial: 2017121812 | transaction: none | freeze: no | refresh: +3h59m42s | update: not scheduled | expiration: +6D23h59m42s | journal flush: not scheduled | notify: not scheduled | DNSSEC re-sign: not scheduled | NSEC3 resalt: not scheduled | parent DS query: not scheduled
knotc> exit
[root@ns02 ~]# dig @localhost soa <zone>
…
… 2017090416 …
…
Logs after retransfer and refresh:
Jan 15 16:49:22 ns02 knot[7187]: info: [<zone>] control, received command 'zone-refresh'
Jan 15 16:49:22 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:49:23 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:49:23 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:49:23 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] control, received command 'zone-retransfer'
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] AXFR, incoming, <master>@53: starting
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] AXFR, incoming, <master>@53: finished, 0.00 seconds, 1 messages, 5119 bytes
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: zone updated, serial none -> 2017121812
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:52:45 ns02 knot[7187]: info: [<zone>] refresh, outgoing, <master>@53: remote serial 2017121812, zone is up-to-date
Jan 15 16:53:03 ns02 knot[7187]: info: [<zone>] control, received command 'zone-status'
And a dig after that:
[root@ns02 ~]# dig @localhost soa crnet.cr
…
… 2017090416 …
…
-Rob
Hi,
I wrote a collectd plugin which fetches the metrics from "knotc
[zone-]status" directly from the control socket.
The code is still a bit of a work in progress but should be mostly done.
If you want to try it out, the code is on GitHub; feedback welcome:
https://github.com/julianbrost/collectd/tree/knot-plugin
https://github.com/collectd/collectd/pull/2649
Also, I'd really like some feedback on how I use libknot, as I only
found very little documentation on it. If you have any questions, just ask.
Regards,
Julian
Hi!
I installed the Knot 2.6.3 packages from the PPA on Ubuntu 14.04. This
confuses the syslog logging. I am not sure, but I think the problem is
that Knot requires systemd for logging.
The problem is that I do not see any logging from Knot on my
syslog server, only in journald. Is this something special in Knot, that
the logging is not forwarded to syslog?
Is it possible to use your Ubuntu packages without systemd logging?
I think it would be better to build the packages on non-systemd distros
(i.e. Ubuntu 14.04) without systemd dependencies.
Thanks
Klaus
Hi!
Knot 2.6.3: when an incoming NOTIFY does not match any ACL, the NOTIFY is
answered with "notauth", although the zone is configured. I would have
expected Knot to respond with "refused" in such a scenario. Is the
notauth intended? From an operational view, a "refused" would ease
debugging.
regards
Klaus
> key
>
> An ordered list of references to TSIG keys. The query must match one of them. Empty value means that TSIG key is not required.
>
> Default: not set
This is not 100% correct. At least with a NOTIFY ACL, the behavior is:
an empty value means that TSIG keys are not allowed.
regards
Klaus
Hi everybody,
I have a question related to zone signing. Whenever I reload the Knot
config using knotc reload, it starts to re-sign all DNSSEC-enabled zones.
This sometimes makes the daemon unresponsive to the knotc utility.
root@idunn:# knotc reload
error: failed to control (connection timeout)
Is it a design intent to sign zones while reloading the config? Is it
really needed? It invokes zone transfers, consumes resources, etc.
Thanks for answer
With regards
Ales
Hello everybody,
there is a Knot DNS master name server, which I do not manage myself, for my domain. I am trying to set up a BIND DNS server as an in-house slave. BIND fails to do the zone transfer and reports:
31-Dec-2017 16:19:02.503 zone whka.de/IN: Transfer started.
31-Dec-2017 16:19:02.504
transfer of 'whka.de/IN' from 2001:7c7:2000:53::#53:
connected using 2001:7c7:20e8:18e::2#53509
31-Dec-2017 16:19:02.505
transfer of 'whka.de/IN' from 2001:7c7:2000:53::#53:
failed while receiving responses: NOTAUTH
31-Dec-2017 16:19:02.505
transfer of 'whka.de/IN' from 2001:7c7:2000:53::#53:
Transfer completed: 0 messages, 0 records, 0 bytes, 0.001 secs
If I try dig (this time using the IPv4 address), I get a failure, too.
# dig axfr @141.70.45.160 whka.de.
; <<>> DiG 9.9.5-9+deb8u7-Debian <<>> axfr @141.70.45.160 whka.de.
; (1 server found)
;; global options: +cmd
; Transfer failed.
Wireshark tells me that the reply code of the name server is `1001 Server is not an authority for domain`. What is going on here?
Especially since, if I query the same name server for a usual A record, it claims to be authoritative. Moreover, the Knot DNS manual says Knot is an authoritative-only name server, so there should be no way for it to be non-authoritative.
Has anybody already observed something like this?
Best regards, Matthias
--
Evang. Studentenwohnheim Karlsruhe e.V. – Hermann-Ehlers-Kolleg
Matthias Nagel
Willy-Andreas-Allee 1, 76131 Karlsruhe, Germany
Phone: +49-721-96869289, Mobile: +49-151-15998774
E-Mail: matthias.nagel(a)hermann-ehlers-kolleg.de
Dear Knot Resolver users,
Knot Resolver 1.5.1 is released, mainly with bugfixes and cleanups!
Incompatible changes
--------------------
- script supervisor.py was removed, please migrate to a real process manager
- module ketcd was renamed to etcd for consistency
- module kmemcached was renamed to memcached for consistency
Bugfixes
--------
- fix SIGPIPE crashes (#271)
- tests: work around out-of-space for platforms with larger memory pages
- lua: fix mistakes in bindings affecting 1.4.0 and 1.5.0 (and 1.99.1-alpha),
  potentially causing problems in dns64 and workarounds modules
- predict module: various fixes (!399)
Improvements
------------
- add priming module to implement RFC 8109, enabled by default (#220)
- add modules helping with system time problems, enabled by default;
for details see documentation of detect_time_skew and detect_time_jump
Full changelog:
https://gitlab.labs.nic.cz/knot/knot-resolver/raw/v1.5.1/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.5.1.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.5.1.tar.xz.asc
Documentation:
https://knot-resolver.readthedocs.io/en/v1.5.1/
--Vladimir
Hello guys,
there has been a request in our issue tracker [1], to enable
IPV6_USE_MIN_MTU socket option [2] for IPv6 UDP sockets in Knot DNS.
This option makes the operating system send responses with a maximal
fragment size of 1280 bytes (the minimal MTU size required by the IPv6
specification).
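As a back-of-the-envelope illustration of what that means for DNS payload sizes (standard IPv6 header sizes; my own arithmetic, not from the request):

```python
IPV6_MIN_MTU = 1280  # minimum MTU every IPv6 link must support
IPV6_HEADER = 40
UDP_HEADER = 8
FRAG_HEADER = 8      # extra per-fragment overhead once fragmentation kicks in

# Largest DNS-over-UDP payload that fits unfragmented at the minimum MTU:
max_unfragmented = IPV6_MIN_MTU - IPV6_HEADER - UDP_HEADER
print(max_unfragmented)  # 1232
```

Responses above that size would be split into multiple 1280-byte fragments with this option enabled, which is what raises the cache-poisoning concern below.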
The reasoning is based on the draft by Mark Andrews from 2012 [3]. I
wonder if the reasoning is still valid in 2016. And I'm afraid that
enabling this option could enlarge the window for possible DNS cache
poisoning attacks.
We would appreciate any feedback on your operational experience with DNS
on IPv6 related to packet fragmentation.
[1] https://gitlab.labs.nic.cz/labs/knot/issues/467
[2] https://tools.ietf.org/html/rfc3542#section-11.1
[3] https://tools.ietf.org/html/draft-andrews-dnsext-udp-fragmentation-01
Thanks and regards,
Jan
Hi everybody,
Is there a way to change the TTL of all zone records at once using knotc,
i.e. without editing the zone file manually? Something like what the $TTL
directive does in BIND 9 zone files?
If not, I would like to ask for this to be implemented, if possible.
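I'm not aware of a knotc command for this; as an illustration of what the requested bulk change amounts to, here is a hypothetical helper that rewrites the TTL field in simple "name TTL IN type rdata" zone-file lines (it ignores $-directives and does not handle multi-line or TTL-less records):

```python
import re

def rewrite_ttls(zone_text, new_ttl):
    """Replace the TTL field of simple one-line records with new_ttl."""
    out = []
    for line in zone_text.splitlines():
        # match "<name> <ttl> IN <type> <rdata>"; leave other lines untouched
        m = re.match(r"^(\S+\s+)(\d+)(\s+IN\s+\S+\s+.*)$", line)
        out.append(f"{m.group(1)}{new_ttl}{m.group(3)}" if m else line)
    return "\n".join(out)
```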
Thanks
Regards
Ales Rygl
Hello,
I have run into a problem with the mod-synthrecord module not working when
several networks are used at once.
Configuration:
mod-synthrecord:
  - id: customers1
    type: forward
    prefix:
    ttl: 300
    network: [ 46.12.0.0/16, 46.13.0.0/16 ]

zone:
  - domain: customers.tmcz.cz
    file: db.customers.tmcz.cz
    module: mod-synthrecord/customers1

With this configuration, Knot only generates records for the last listed
network; for 46.12.0.0/16 it returns NXDOMAIN. It behaves the same with
this form of the configuration:
mod-synthrecord:
  - id: customers1
    type: forward
    prefix:
    ttl: 300
    network: 46.12.0.0/16
    network: 46.13.0.0/16

Configuration-wise it is accepted, Knot does not complain, but the records
are not generated. Knot is 2.6.1-1+0~20171112193256.11+stretch~1.gbp3eaef0.
Thanks for any help or pointers.
Best regards,
Ales Rygl
On 11/20/2017 12:37 PM, Petr Kubeš wrote:
> Is there a simple "cookbook" available somewhere for getting such a DNS
> resolver up and running?
Some systems already ship a packaged service; we also maintain a PPA with
newer versions: https://www.knot-resolver.cz/download/
I would avoid versions before 1.3.3.
We don't have a cookbook as such, but kresd works well even without any
configuration - it then listens on all local addresses on UDP+TCP port 53,
with a 100 MB cache in the current directory. Only for DNSSEC validation
do you need to supply the file with the root keys, e.g. "kresd -k
root.keys" - if the file does not exist, it is initialized over HTTPS.
The various options are described in the documentation:
http://knot-resolver.readthedocs.io/en/stable/daemon.html
V. Čunát
Hello, I would like to ask for advice.
We run a small network and currently use an external DNS provider (UPC).
We would like to run our own DNS on the border node, specifically Knot,
in a configuration where it would primarily act as a DNS resolver and
could later also take over our zones.
Is there a step-by-step guide available somewhere describing exactly what
to configure, so that we could get such a Knot running as a resolver in
a few steps?
Perhaps it is just a bad spell, but unfortunately I have not managed,
using the available manuals and guides, to set up Knot DNS so that it
answers queries and synchronizes DNS zones.
Thank you very much for any advice
P.Kubeš
Hello,
I would like to ask for advice. I am experimenting with Knot DNS, version 2.6.0-3+0~20171019083827.9+stretch~1.gbpe9bd69, on Debian Stretch.
I have DNSSEC deployed with a KSK and a ZSK using algorithm 5 and BIND 9, keys without metadata. I am trying to migrate to Knot, with two test zones. I am using the following procedure:
1. Import the existing keys using keymgr
2. Set the timestamps:
keymgr t-sound.cz set 18484 created=+0 publish=+0 active=+0
keymgr t-sound.cz set 04545 created=+0 publish=+0 active=+0
3. Load the zone into Knot. The lifetimes are extremely short, so that I can see how it behaves.
zone:
  - domain: t-sound.cz
    template: signed
    file: db.t-sound.cz
    dnssec-signing: on
    dnssec-policy: migration
  - domain: mych5.cz
    template: signed
    file: db.mych5.cz
    dnssec-signing: on
    dnssec-policy: migration
    acl: [allowed_transfer]
    notify: idunn-freya-gts

policy:
  - id: migration
    algorithm: RSASHA1
    ksk-size: 2048
    zsk-size: 1024
    zsk-lifetime: 20m
    ksk-lifetime: 10d
    propagation-delay: 5m
This works. Knot starts signing with the imported keys. Then I change the policy for t-sound.cz to:
policy:
  - id: migration3
    algorithm: ecdsap256sha256
    zsk-lifetime: 20m
    ksk-lifetime: 10d
    propagation-delay: 5m
    ksk-submission: nic.cz
Knot generates new keys:
Nov 10 16:40:09 idunn knotd[21682]: warning: [t-sound.cz.] DNSSEC, creating key with different algorithm than policy
Nov 10 16:40:09 idunn knotd[21682]: warning: [t-sound.cz.] DNSSEC, creating key with different algorithm than policy
Nov 10 16:40:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, algorithm rollover started
Nov 10 16:40:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, loaded key, tag 18484, algorithm 5, KSK yes, ZSK no, public yes, ready no, active yes
Nov 10 16:40:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, loaded key, tag 5821, algorithm 5, KSK no, ZSK yes, public yes, ready no, active yes
Nov 10 16:40:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, loaded key, tag 4256, algorithm 13, KSK yes, ZSK no, public no, ready no, active no
Nov 10 16:40:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, loaded key, tag 39697, algorithm 13, KSK no, ZSK yes, public no, ready no, active yes
Nov 10 16:40:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, signing started
Nov 10 16:40:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, successfully signed
Nov 10 16:40:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, next signing at 2017-11-10T16:45:09
The ZSK rollover mechanism starts, the CDNSKEY is published, and the submission succeeds. The resulting state is that the zone works:
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, signing zone
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, loaded key, tag 4256, algorithm 13, KSK yes, ZSK no, public yes, ready no, active yes
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, loaded key, tag 22255, algorithm 13, KSK no, ZSK yes, public yes, ready no, active yes
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, signing started
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, successfully signed
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, next signing at 2017-11-12T23:03:27
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] zone file updated, serial 1510523007 -> 1510523307
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] notify, outgoing, 93.153.117.50@53: serial 1510523307
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] IXFR, outgoing, 93.153.117.50@35557: started, serial 1510523007 -> 1510523307
Nov 12 22:48:27 idunn knotd[24980]: debug: [t-sound.cz.] IXFR, outgoing, 93.153.117.50@35557: serial 1510523007 -> 1510523307
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] IXFR, outgoing, 93.153.117.50@35557: finished, 0.00 seconds, 1 messages, 780 bytes
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] IXFR, outgoing, 93.153.117.20@57641: started, serial 1510523007 -> 1510523307
Nov 12 22:48:27 idunn knotd[24980]: debug: [t-sound.cz.] IXFR, outgoing, 93.153.117.20@57641: serial 1510523007 -> 1510523307
Nov 12 22:48:27 idunn knotd[24980]: info: [t-sound.cz.] IXFR, outgoing, 93.153.117.20@57641: finished, 0.00 seconds, 1 messages, 780 bytes
The ZSKs get rotated. But then the error below occurs:
Nov 12 23:03:27 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, signing zone
Nov 12 23:03:27 idunn knotd[24980]: warning: [t-sound.cz.] DNSSEC, key rollover [1] failed (unknown error -28)
Nov 12 23:03:27 idunn knotd[24980]: error: [t-sound.cz.] DNSSEC, failed to initialize (unknown error -28)
Nov 12 23:03:27 idunn knotd[24980]: error: [t-sound.cz.] zone event 'DNSSEC resign' failed (unknown error -28)
The state of the keys at this moment:
root@idunn:/var/lib/knot# keymgr t-sound.cz list human
c87e00bd71d0f89ea540ef9c21020df1e0106c0f ksk=yes tag=04256 algorithm=13 public-only=no created=-2D16h24m21s pre-active=-2D16h24m21s publish=-2D16h19m21s ready=-2D16h14m21s active=-1D18h14m21s retire-active=0 retire=0 post-active=0 remove=0
fe9f432bfc5d527dc11520615d6e29e5d1799d8c ksk=no tag=22255 algorithm=13 public-only=no created=-10h26m3s pre-active=0 publish=-10h26m3s ready=0 active=-10h21m3s retire-active=0 retire=0 post-active=0 remove=0
root@idunn:/var/lib/knot#
However, knotc zone-sign t-sound.cz goes through and fixes everything:
Nov 13 08:56:41 idunn knotd[24980]: info: [t-sound.cz.] control, received command 'zone-status'
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] control, received command 'zone-sign'
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, dropping previous signatures, resigning zone
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, ZSK rollover started
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, loaded key, tag 4256, algorithm 13, KSK yes, ZSK no, public yes, ready no, active yes
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, loaded key, tag 22255, algorithm 13, KSK no, ZSK yes, public yes, ready no, active yes
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, loaded key, tag 24386, algorithm 13, KSK no, ZSK yes, public yes, ready no, active no
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, signing started
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, successfully signed
Nov 13 09:06:23 idunn knotd[24980]: info: [t-sound.cz.] DNSSEC, next signing at 2017-11-13T09:11:23
A day earlier, Knot crashed completely on the same thing:
Nov 11 23:05:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, signing zone
Nov 11 23:05:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, loaded key, tag 39964, algorithm 13, KSK no, ZSK yes, public yes, ready no, active yes
Nov 11 23:05:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, loaded key, tag 4256, algorithm 13, KSK yes, ZSK no, public yes, ready no, active yes
Nov 11 23:05:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, signing started
Nov 11 23:05:09 idunn knotd[21682]: info: [t-sound.cz.] DNSSEC, successfully signed
Nov 11 23:05:09 idunn systemd[1]: knot.service: Main process exited, code=killed, status=11/SEGV
Nov 11 23:05:09 idunn systemd[1]: knot.service: Unit entered failed state.
Nov 11 23:05:09 idunn systemd[1]: knot.service: Failed with result 'signal'.
Nov 11 23:05:10 idunn systemd[1]: knot.service: Service hold-off time over, scheduling restart.
Nov 11 23:05:10 idunn systemd[1]: Stopped Knot DNS server.
Nov 11 23:05:10 idunn systemd[1]: Started Knot DNS server.
Nov 11 23:05:10 idunn knotd[23933]: info: Knot DNS 2.6.0 starting
Nov 11 23:05:10 idunn knotd[23933]: info: binding to interface 0.0.0.0@553
Nov 11 23:05:10 idunn knotd[23933]: info: binding to interface ::@553
Nov 11 23:05:10 idunn knotd[23933]: info: changing GID to 121
Nov 11 23:05:10 idunn knotd[23933]: info: changing UID to 114
Nov 11 23:05:10 idunn knotd[23933]: info: loading 2 zones
Nov 11 23:05:10 idunn knotd[23933]: info: [mych5.cz.] zone will be loaded
Nov 11 23:05:10 idunn knotd[23933]: info: [t-sound.cz.] zone will be loaded
Nov 11 23:05:10 idunn knotd[23933]: info: starting server
Nov 11 23:05:10 idunn knotd[23933]: info: [t-sound.cz.] DNSSEC, loaded key, tag 39964, algorithm 13, KSK no, ZSK yes, public yes, ready no, active yes
Nov 11 23:05:10 idunn knotd[23933]: info: [t-sound.cz.] DNSSEC, loaded key, tag 4256, algorithm 13, KSK yes, ZSK no, public yes, ready no, active yes
Nov 11 23:05:10 idunn knotd[23933]: info: [t-sound.cz.] DNSSEC, signing started
Nov 11 23:05:10 idunn knotd[23933]: warning: [mych5.cz.] DNSSEC, key rollover [1] failed (unknown error -28)
Nov 11 23:05:10 idunn knotd[23933]: error: [mych5.cz.] DNSSEC, failed to initialize (unknown error -28)
Nov 11 23:05:10 idunn knotd[23933]: error: [mych5.cz.] zone event 'load' failed (unknown error -28)
Nov 11 23:05:10 idunn knotd[23933]: info: [t-sound.cz.] DNSSEC, successfully signed
Nov 11 23:05:10 idunn systemd[1]: knot.service: Main process exited, code=killed, status=11/SEGV
Nov 11 23:05:10 idunn systemd[1]: knot.service: Unit entered failed state.
Nov 11 23:05:10 idunn systemd[1]: knot.service: Failed with result 'signal'.
Nov 11 23:05:10 idunn systemd[1]: knot.service: Service hold-off time over, scheduling restart.
Nov 11 23:05:10 idunn systemd[1]: Stopped Knot DNS server.
Nov 11 23:05:10 idunn systemd[1]: Started Knot DNS server.
Am I making a mistake somewhere? I apologize for the long and complicated description.
Thanks
Best regards
Ales Rygl
Dear Knot Resolver users,
Knot Resolver 1.99.1-alpha has been released!
This is an experimental release meant for testing aggressive caching.
It contains some regressions and might (theoretically) even be vulnerable.
The current focus is to minimize queries into the root zone.
Improvements
------------
- negative answers from validated NSEC (NXDOMAIN, NODATA)
- verbose log is very chatty around cache operations (maybe too much)
Regressions
-----------
- dropped support for alternative cache backends
and for some specific cache operations
- caching doesn't yet work for various cases:
* negative answers without NSEC (i.e. with NSEC3 or insecure)
* +cd queries (needs other internal changes)
* positive wildcard answers
- spurious SERVFAIL on specific combinations of cached records, printing:
<= bad keys, broken trust chain
- make check: a few Deckard tests are broken, probably due to some of the
problems above, plus possibly unknown ones
Full changelog:
https://gitlab.labs.nic.cz/knot/knot-resolver/raw/v1.99.1-alpha/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.99.1-alpha.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.99.1-alpha.tar.xz…
Documentation (not updated):
http://knot-resolver.readthedocs.io/en/v1.4.0/
--Vladimir
Hello,
one more question:
What is the proper way of autostarting Knot Resolver 1.4.0 under systemd (Debian Stretch in my case) so that it can listen on interfaces other than localhost?
As per the Debian README I've set up the socket override.
# systemctl edit kresd.socket:
[Socket]
ListenStream=<my.lan.ip>:53
ListenDatagram=<my.lan.ip>:53
However after reboot the service doesn't autostart.
# systemctl status kresd.service
kresd.socket - Knot DNS Resolver network listeners
Loaded: loaded (/lib/systemd/system/kresd.socket; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kresd.socket.d
└─override.conf
Active: failed (Result: resources)
Docs: man:kresd(8)
Listen: [::1]:53 (Stream)
[::1]:53 (Datagram)
127.0.0.1:53 (Stream)
127.0.0.1:53 (Datagram)
<my.lan.ip>:53 (Stream)
<my.lan.ip>:53 (Datagram)
Oct 01 23:17:12 <myhostname> systemd[1]: kresd.socket: Failed to listen on sockets: Cannot assign requested address
Oct 01 23:17:12 <myhostname> systemd[1]: Failed to listen on Knot DNS Resolver network listeners.
Oct 01 23:17:12 <myhostname> systemd[1]: kresd.socket: Unit entered failed state.
To get it running I have to type in manually:
# systemctl start kresd.socket
I apologize, I am new to systemd and its socket activation, so it's not clear to me whether the service, the socket, or both need to be enabled to autostart.
Could anyone clarify this?
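For context, a socket unit is only started at boot if it has been enabled, and the "Cannot assign requested address" failure suggests the LAN address was not yet configured when systemd tried to bind. A sketch of a drop-in that may help (this extends the override from the Debian README; FreeBind=true sets IP_FREEBIND so systemd can bind to an address that is not configured yet):

```ini
# /etc/systemd/system/kresd.socket.d/override.conf (sketch)
[Socket]
ListenStream=<my.lan.ip>:53
ListenDatagram=<my.lan.ip>:53
FreeBind=true
```

After editing, run systemctl daemon-reload and make sure the socket is enabled with systemctl enable kresd.socket.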
This is also in the log (again, Debian default):
Oct 01 23:18:22 <myhostname> kresd[639]: [ ta ] keyfile '/usr/share/dns/root.key': not writeable, starting in unmanaged mode
The file has permissions 644 for root:root. Should it be owned by knot, or writable by others?
Thanks!
--
Regards,
Thomas Van Nuit
Sent with [ProtonMail](https://protonmail.com) Secure Email.
Hello all,
I have come across an incompatibility between /usr/lib/knot/get_kaspdb and Knot in relation to IPv6. Knot expects unquoted IP addresses; however, the get_kaspdb tool uses the Python YAML library, which requires valid YAML and therefore needs IPv6 addresses to be quoted strings, as they contain ':'. This incompatibility causes issues when updating Knot, as the Ubuntu packages from 'http://ppa.launchpad.net/cz.nic-labs/knot-dns/ubuntu' call get_kaspdb during installation. The gist below shows how to reproduce the issue. Please let me know if you need any further information.
https://gist.github.com/b4ldr/bd549b4cf63a7d564299497be3ef868d
Thanks John
Dear Knot Resolver users,
Knot Resolver 1.4.0 has been released!
Incompatible changes
--------------------
- lua: query flag-sets are no longer represented as plain integers.
kres.query.* no longer works, and kr_query_t lost trivial methods
'hasflag' and 'resolved'.
You can instead write code like qry.flags.NO_0X20 = true.
Bugfixes
--------
- fix exiting one of multiple forks (#150)
- cache: change the way of using LMDB transactions. That in particular
fixes some cases of using too much space with multiple kresd forks (#240).
Improvements
------------
- policy.suffix: update the aho-corasick code (#200)
- root hints are now loaded from a zonefile; exposed as hints.root_file().
You can override the path by defining ROOTHINTS during compilation.
- policy.FORWARD: work around resolvers adding unsigned NS records (#248)
- reduce unneeded records previously put into authority in wildcarded answers
Full changelog:
https://gitlab.labs.nic.cz/knot/resolver/raw/v1.4.0/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.4.0.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-1.4.0.tar.xz.asc
Documentation:
http://knot-resolver.readthedocs.io/en/v1.4.0/
--Vladimir
Hi,
I'm getting an HTTP 500 error from https://gitlab.labs.nic.cz/ that says
"Whoops, something went wrong on our end". Can someone take a look at it
and fix it?
Thanks!
--
Robert Edmonds
edmonds(a)debian.org
Hi,
we have a DNSSEC-enabled zone for which Knot serves RRSIGs with an
expiration date in the past (expired on Sept 13th), signed by a no-longer-active
ZSK. The correct RRSIGs (up to date and signed with the current ZSK) are
served as well, so the zone still works.
Is there a way to purge these outdated RRSIGs from the database?
Regards
André
Hi,
I may have missed something. I created a kasp directory and made knot its
owner.
In the kasp directory (/var/lib/knot/kasp) I ran the commands:
keymgr init
keymgr zone add domena.cz policy none
keymgr zone key generate domena.cz algorithm rsasha256 size 2048 ksk
Cannot retrieve policy from KASP (not found).
Did I miss something?
Thanks and best regards
J.Karliak
--
Bc. Josef Karliak
Správa sítě a elektronické pošty
Fakultní nemocnice Hradec Králové
Odbor výpočetních systémů
Sokolská 581, 500 05 Hradec Králové
Tel.: +420 495 833 931, Mob.: +420 724 235 654
e-mail: josef.karliak(a)fnhk.cz, http://www.fnhk.cz
Hello,
this might be a rather stupid question.
I have a fresh install of Debian Stretch with all updates and Knot Resolver 1.4.0 installed from CZ.NIC repositories. I've set up a rather simple configuration allowing our users to use the resolver and everything works fine (systemd socket override for listening on LAN). I have however noticed that kresd logs every single query into /var/log/syslog, generating approx. 1 MB/min worth of logs on our server. I've looked into the documentation and haven't found any directive to control the logging behavior. Is there something I might be missing? I would preferably like to see only warnings in the log.
Here's my config:
# cat /etc/knot-resolver/kresd.conf
net = { '127.0.0.1', '::1', '<my.lan.ip>' }
user('knot-resolver','knot-resolver')
cache.size = 4 * GB
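In case it helps narrow things down: in kresd 1.x the per-query chatter is typically tied to verbose mode, which can be toggled from the Lua configuration. This is a guess at the cause, not a confirmed diagnosis; a sketch:

```lua
-- sketch: ensure verbose logging is off so routine per-query
-- messages are not written to syslog
verbose(false)
```

If verbose mode was never enabled, the per-query lines must come from somewhere else (e.g. socket activation or a module), and this won't change anything.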
Thanks for the help.
--
Regards,
Thomas Van Nuit
Sent with [ProtonMail](https://protonmail.com) Secure Email.