Hello,
I was using Knot Resolver 5 with a configuration that replaced some parts of the DNS tree (https://knot-resolver.readthedocs.io/en/latest/modules-policy.html#replacin…).
Since I updated to Knot Resolver 6 and rewrote the configuration in YAML using the forward directive (https://www.knot-resolver.cz/documentation/latest/config-forward.html#forwa…), this configuration does not seem to work as intended. After a short time, Knot Resolver started responding with NXDOMAIN for the replaced part of the tree.
Is this behavior intended, or is it a bug?
My configuration for v6 was:
forward:
  # Internal Domains
  - subtree:
      - internal.corp
      - 10.in-addr.arpa
    servers:
      - 10.3.0.102
      - 10.3.0.103
    options:
      authoritative: false
      dnssec: false
I've tried to find out whether I have something misconfigured, but without any luck. I only have a theory: the docs state that "Forwarding configuration instructs resolver to forward cache-miss queries from clients to manually specified DNS resolvers". Could the real cause of the malfunction be that Knot Resolver 6 uses information from the cache BEFORE it even tries to forward the query?
Since I am not sure how Knot Resolver handles these rules internally, this is only a theory.
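If it helps with debugging: my assumption is that repeating the same query and watching the TTL of the SOA record in the authority section would show whether the NXDOMAIN is served from the cache (the host name below is just an example):
kdig @127.0.0.1 host.internal.corp A
kdig @127.0.0.1 host.internal.corp A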
Is there a correct way to replicate the behavior from Knot Resolver 5? My v5 config was:
internalDomains = policy.todnames({
    '10.in-addr.arpa',
    'mydomain.corp'
})
policy.add(policy.suffix(policy.FLAGS({'NO_CACHE'}), internalDomains))
policy.add(policy.suffix(policy.STUB({
    '10.3.0.102',
    '10.3.0.103'
}), internalDomains))
Regards,
Jiri Masek
Hi,
I run my knot-resolver on a Raspberry Pi with PXE boot and the root file
system on NFS.
I realize that this is not a recommended setup.
Therefore I am trying to run my cache on a tmpfs file system.
> tmpfs on /var/cache/knot-resolver type tmpfs (rw,relatime,size=102400k)
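(I assume the corresponding fstab entry would look roughly like this; the
options are taken from the mount output above:)
tmpfs /var/cache/knot-resolver tmpfs rw,relatime,size=102400k 0 0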
But now my knot-resolver refuses to start:
Oct 2 20:14:39 pi-hole1 systemd[1]: Starting Knot Resolver daemon...
Oct 2 20:14:40 pi-hole1 kresd[1052]: [system] error while loading config: /etc/knot-resolver/kresd.conf:69: can't open cache path '/var/cache/knot-resolver'; working directory '/var/lib/knot-resolver'; No space left on device (workdir '/var/lib/knot-resolver')
Oct 2 20:14:40 pi-hole1 systemd[1]: kresd@1.service: Main process exited, code=exited, status=1/FAILURE
Oct 2 20:14:40 pi-hole1 systemd[1]: kresd@1.service: Failed with result 'exit-code'.
Oct 2 20:14:40 pi-hole1 systemd[1]: Failed to start Knot Resolver daemon.
Besides the fact that this message makes no sense (why wouldn't the cache be
opened in /var/cache when /var/lib is full?), the cache actually gets opened
and a database is created. The kres-cache-gc process runs without problems.
The /var/lib/knot-resolver directory exists and is writeable.
I have tried to put it on tmpfs too, with no luck.
What solved the problem was:
root@pi-hole1:~# umount /var/cache/knot-resolver
root@pi-hole1:~# systemctl start kresd@1
But now the cache is on NFS. That seems risky.
I think this is the relevant part of the configuration:
-- Cache size
workdir = '/var/cache/knot-resolver'
cache.open(100 * MB, 'lmdb:///var/cache/knot-resolver')
But it comes with a distro pre-config of:
-- Set cache location
rawset(cache, 'current_storage', 'lmdb:///var/cache/knot-resolver')
Which, by the way, makes it very hard to change the cache location. Not to
mention the hard-coded location in the systemd unit for kres-cache-gc.
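I assume changing that would need a drop-in override (e.g. via systemctl edit
kres-cache-gc.service) roughly like this; the ExecStart options are my guess
based on the shipped unit file:
[Service]
ExecStart=
ExecStart=/usr/sbin/kres-cache-gc -c /var/cache/knot-resolver -d 1000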
What am I doing wrong?
/Ulrich