Dear all,
While trying to migrate our DNS to Knot I have noticed that a slave
server with 2 GB RAM is facing memory exhaustion. I am running
2.6.5-1+0~20180216080324.14+stretch~1.gbp257446. There are 141 zones
amounting to around 1 MB in total. Knot is acting as a pure slave
server with minimal configuration.
There is nearly 1.7GB of memory consumed by Knot on a freshly rebooted
server:
root@eira:/proc/397# cat status
Name: knotd
Umask: 0007
State: S (sleeping)
Tgid: 397
Ngid: 0
Pid: 397
PPid: 1
TracerPid: 0
Uid: 108 108 108 108
Gid: 112 112 112 112
FDSize: 64
Groups: 112
NStgid: 397
NSpid: 397
NSpgid: 397
NSsid: 397
VmPeak: 24817520 kB
VmSize: 24687160 kB
VmLck: 0 kB
VmPin: 0 kB
VmHWM: 1743400 kB
VmRSS: 1743272 kB
RssAnon: 1737088 kB
RssFile: 6184 kB
RssShmem: 0 kB
VmData: 1781668 kB
VmStk: 132 kB
VmExe: 516 kB
VmLib: 11488 kB
VmPTE: 3708 kB
VmPMD: 32 kB
VmSwap: 0 kB
HugetlbPages: 0 kB
Threads: 21
SigQ: 0/7929
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: fffffffe7bfbbefc
SigIgn: 0000000000000000
SigCgt: 0000000180007003
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
CapBnd: 0000003fffffffff
CapAmb: 0000000000000000
Seccomp: 0
Cpus_allowed: f
Cpus_allowed_list: 0-3
Mems_allowed: 00000000,00000001
Mems_allowed_list: 0
voluntary_ctxt_switches: 260
nonvoluntary_ctxt_switches: 316
root@eira:/proc/397#
Config:
server:
    listen: 0.0.0.0@53
    listen: ::@53
    user: knot:knot

log:
  - target: syslog
    any: info

mod-rrl:
  - id: rrl-10
    rate-limit: 10   # Allow 10 resp/s for each flow
    slip: 2          # Every other response slips

mod-stats:
  - id: custom
    request-protocol: on
    server-operation: on
    request-bytes: on
    response-bytes: on
    edns-presence: on
    flag-presence: on
    response-code: on
    reply-nodata: on
    query-type: on
    query-size: on
    reply-size: on

template:
  - id: default
    storage: "/var/lib/knot"
    module: mod-rrl/rrl-10
    module: mod-stats/custom
    acl: [allowed_transfer]
    disable-any: on
    master: idunn
I was pretty sure that a VM with 2GB RAM is enough for my setup :-)
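If it helps the discussion, this is the direction I was planning to try
to trim the footprint. It is only a sketch, assuming the 2.6 server
options udp-workers/tcp-workers/background-workers and the template
option max-journal-db-size work the way I read them in the
documentation:

server:
    udp-workers: 2        # default is one worker per CPU core
    tcp-workers: 2
    background-workers: 1

template:
  - id: default
    max-journal-db-size: 512M   # the default journal mapping is much
                                # larger and may explain the huge VmSize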
BR
Ales
Hello all,
I noticed that Knot (2.6.5) creates an RRSIG for the CDS/CDNSKEY RRset
with the ZSK/CSK only.
I was wondering if this is acceptable behavior, as RFC 7344, Section
4.1 (CDS and CDNSKEY Processing Rules) states:
o Signer: MUST be signed with a key that is represented in both the
current DNSKEY and DS RRsets, unless the Parent uses the CDS or
CDNSKEY RRset for initial enrollment; in that case, the Parent
validates the CDS/CDNSKEY through some other means (see
Section 6.1 and the Security Considerations).
Specifically, I read "represented in both the current DNSKEY and DS
RRsets" as meaning that the CDS/CDNSKEY RRset must be signed with a
KSK/CSK, and not only with a ZSK plus the trust chain from the DS to
the KSK.
I tested both BIND 9.12.1 and PowerDNS Auth 4.0.5 as well. PowerDNS Auth
behaves the same as Knot 2.6.5 but BIND 9.12.1 always signs the
CDS/CDNSKEY RRset with at least the KSK.
Do I read the RFC rule too strictly? To be honest, I see nothing wrong
with the CDS/CDNSKEY RRset being signed only by the ZSK, but BIND's
behavior and the not-so-clear RFC statement keep me wondering.
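For reference, this is how I have been checking it (kdig here, but dig
behaves the same; the zone and server names are placeholders), comparing
the key tag in the RRSIG covering CDS with the flags of the keys in the
DNSKEY RRset:

# which key tag signs the CDS RRset?
kdig +dnssec example.com CDS @ns1.example.com

# which keys are KSK/CSK (flags 257) and which are ZSK (flags 256)?
kdig +dnssec example.com DNSKEY @ns1.example.com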
Thanks,
Daniel
I'm getting started with knot resolver and am a bit unclear as to how this
config should be structured.
The result I'm looking for is to forward queries to resolver A if the
source is subnet A, unless the query is for the local domain, in which
case it should query the local DNS.
I've been working with the config below to accomplish this. However,
I'm finding that this config handles the request if it matches the
local todname, and uses the root hints if not, but never uses the
FORWARD server.
Ultimately, this server will resolve DNS for several subnets and will
forward queries to different servers based on the source subnet.
Would someone mind pointing me in the right direction on this, please?
for name, addr_list in pairs(net.interfaces()) do
    net.listen(addr_list)
end

-- drop root
user('knot', 'knot')

-- Auto-maintain root TA
modules = {
    'policy', -- Block queries to local zones/bad sites
    'view',   -- View filters
    'hints',  -- Load /etc/hosts and allow custom root hints
    'stats',
}

-- 4GB local cache for record storage
cache.size = 4 * GB

-- If the request is from the eng subnet
if (view:addr('192.168.168.0/24')) then
    if (todname('localnet.mydomain.com')) then
        policy.add(policy.suffix(policy.FORWARD('192.168.168.1'),
                                 {todname('localnet.mydomain.com')}))
    else
        view:addr('192.168.168.0/24', policy.FORWARD('68.111.106.68'))
    end
end
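For what it's worth, the shape I suspect I'm after is something like
the sketch below (untested; I'm assuming view rules are evaluated in
order and that a non-matching policy.suffix rule lets the query fall
through to the next rule for the same subnet):

-- eng subnet: the local domain goes to the local DNS
view:addr('192.168.168.0/24',
          policy.suffix(policy.FORWARD('192.168.168.1'),
                        {todname('localnet.mydomain.com')}))
-- eng subnet: everything else goes to that subnet's resolver
view:addr('192.168.168.0/24', policy.all(policy.FORWARD('68.111.106.68')))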
--
855.ONTRAPORT
ontraport.com
zone-refresh [<zone>...]      Force slave zone refresh.
zone-retransfer [<zone>...]   Force slave zone retransfer (no serial check).
I would expect retransfer to do a complete AXFR, but instead it
sometimes just does a refresh:
info: [at.] control, received command 'zone-retransfer'
info: [at.] refresh, outgoing, 83.1.2.3@53: remote serial 2018011647,
zone is up-to-date
info: [at.] control, received command 'zone-retransfer'
info: [at.] refresh, outgoing, 83.1.2.3@53: remote serial 2018011647,
zone is up-to-date
info: [at.] control, received command 'zone-refresh'
info: [at.] refresh, outgoing, 2a02:111:9::5@53: remote serial
2018011647, zone is up-to-date
info: [at.] control, received command 'zone-refresh'
info: [at.] refresh, outgoing, 2a02:111:9::5@53: remote serial
2018011647, zone is up-to-date
info: [at.] control, received command 'zone-refresh'
info: [at.] refresh, outgoing, 2a02:111:9::5@53: remote serial
2018011647, zone is up-to-date
info: [at.] control, received command 'zone-retransfer'
info: [at.] AXFR, incoming, 2a02:111:9::5@53: starting
Seen with 2.6.3-1+ubuntu14.04.1+deb.sury.org+1
regards
Klaus
Hello,
Knot DNS looks awesome, thanks for that!
The benchmarks show a clear picture (for hosting) that the size of zones
doesn't matter, but DNSSEC does. I'm intrigued by the differences with NSD.
What is less clear is what form of DNSSEC was used -- online signing,
or just signed for policy refreshes and updates, or signed before it
gets to knotd? This distinction seems important, as it might explain
the structural difference with NSD.
Also, the documentation speaks of "DNSSEC signing for static zones" but
leaves some doubt whether this includes editing of the records using
zonec transactions, or whether it relates to rosedb, or something else.
https://www.knot-dns.cz/docs/2.6/singlehtml/index.html#automatic-dnssec-sig…
https://www.knot-dns.cz/docs/2.6/singlehtml/index.html#rosedb-static-resour…
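To make the question concrete, the case I have in mind is plain
automatic signing of zones loaded from zone files, roughly like the
sketch below (based on my reading of the 2.6 documentation; the policy
name and zone are placeholders):

policy:
  - id: auto
    algorithm: ecdsap256sha256

template:
  - id: default
    storage: "/var/lib/knot"
    dnssec-signing: on
    dnssec-policy: auto

zone:
  - domain: example.com
    file: "example.com.zone"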
Other than this uncertainty (and confusion over the meaning of the
master: parameter) the documentation is a real treat. Thanks for a job
done well!
Best wishes,
-Rick
Hi,
After upgrading our fleet of slave servers from 2.5.4 to 2.6.4, I
noticed that, on a few slaves, a large zone that changes rapidly is
now consistently behind the master to a larger degree than we consider
normal. By "behind", I mean that the serial number reported by the
slave in the SOA record is less than that reported by the master server.
Normally we expect small differences between the serial on the master
and the slaves because our zones change rapidly. These differences are
often transient. However, after the upgrade, a subset of the slaves (always
the same ones) have a much larger difference. Fortunately, the difference
does not increase without bound.
The hosts in question seem powerful enough: one has eight 2 GHz Xeons
and 32 GB of RAM, which is less powerful than some of the hosts that are
keeping up. It may be more a case of their connectivity. Two of the
affected slaves are in the same location.
For now, I've downgraded these slaves back to 2.5.4, and they are able
to keep up again.
Is there a change that would be an obvious culprit for this, or is
there something that we could tune? One final piece of information: We
always apply the change contained in the ecs-patch branch (which
returns ECS data if the client requests it). I don't know if the
effect of this processing is significant. We do need it as part of
some ongoing research we're conducting.
Chuck
Hello,
I plan to use Docker to deploy Knot-DNS.
I am going to copy all the zone configurations into the Docker image.
Then I will start two containers with two different ip addresses.
In this case, is it necessary to configure the acl and remote sections
related to master/slave replication?
I don't think so, because both IPs will reply with exactly the same
zone configuration, but please give me your opinion.
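In other words, each container would just carry something like this
minimal sketch (the zone name and file are placeholders), with no acl
or remote sections at all:

template:
  - id: default
    storage: "/var/lib/knot"

zone:
  - domain: example.com
    file: "example.com.zone"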
Regards,
Gael
--
Cordialement, Regards,
Gaël GIRAUD
ATLANTIC SYSTÈMES
Mobile : +33603677500
Dear Knot Resolver users,
please note that Knot Resolver now has its dedicated mailing list:
https://lists.nic.cz/cgi-bin/mailman/listinfo/knot-resolver-users
For further communication regarding Knot Resolver please subscribe to
this list. We will send new version announcements only to the new
mailing list.
--
Petr Špaček @ CZ.NIC