Hello Tomas,
You are right, it was just those silly configuration mistakes. Everything works now :)
Thank you so much!
Best regards,
--Manuel
________________________________
From: Tomas Krizek
Sent: Friday, 11 December 2020 13:25
To: Knot Resolver Users List; Urueña-Pascual Manuel
Subject: Re: [knot-resolver-users] Is policy.rpz a non-chain action?
Hi, the use-case you're trying to achieve is possible, but there are
some issues with your configuration.
On 10/12/2020 17.29, Urueña-Pascual Manuel wrote:
> policy.add(policy.rpz(policy.DENY_MSG('domain blocked'),
> '/etc/knot-resolver/blocklist.rpz', true))
> policy.add(policy.rpz(policy.PASS(), '/etc/knot-resolver/allowlist.rpz', true))
You want to specify "policy.PASS" without the parentheses, i.e. pass the action itself rather than the result of calling it.
> and these are the RPZ zones:
>
> $ cat '/etc/knot-resolver/allowlist.rpz':
> www.google.com 600 IN CNAME rpz-passthrough.
> www.bing.com 600 IN CNAME rpz-passthrough.
When you provide kresd these RPZ zones, it will complain:
[poli] RPZ /tmp/kr_dev/etc/knot-resolver/allowlist.rpz:1: CNAME with
custom target in RPZ is not supported yet (ignored)
That's because you're trying to use a CNAME with an unsupported custom target. See the table in
our docs [1]. What you're probably looking for is "rpz-passthru."
instead. However, since you're using a separate allowlist with the policy.PASS
action (which is your case), "." would also work here.
You should also be able to combine the blocklist and allowlist into a
single RPZ file, using policy.DENY_MSG("...") and controlling whether a
domain is blocked ("CNAME .") or allowed ("CNAME rpz-passthru.") with
the RPZ rules themselves, as sketched below.
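For instance, a rough sketch (the combined file name "filter.rpz" is only a placeholder, not something from your setup) would be a single rule:

policy.add(policy.rpz(policy.DENY_MSG('domain blocked'), '/etc/knot-resolver/filter.rpz', true))

with an RPZ file along these lines:

; blocked domains map to "CNAME ." and trigger the DENY_MSG action
examplemalwaredomain.com 600 IN CNAME .
*.examplemalwaredomain.com 600 IN CNAME .
; allowlisted domains bypass it via "CNAME rpz-passthru."
www.google.com 600 IN CNAME rpz-passthru.
www.bing.com 600 IN CNAME rpz-passthru.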
[1] -
https://knot-resolver.readthedocs.io/en/stable/modules-policy.html#response…
--
Tomas Krizek
PGP: 4A8B A48C 2AED 933B D495 C509 A1FB A5F7 EF8C 4869
Hello,
I'm trying to set up a knot-resolver 5.2.0-1 instance where all DNS queries return a fixed IPv4 address, except that domains on an allow list should return their real IPv4 address (via a policy.PASS action) and domains on a block list should return a SERVFAIL response (via a policy.DROP action).
I'm using Response Policy Zones (RPZ), and with an explicit list of domains to redirect (plus the explicit allow and block lists) everything works fine. However, when I try to redirect all queries by default using a policy.all rule, the allow list no longer works and all queries (except blocked ones) are answered with the fixed IPv4 address.
This is my '/etc/knot-resolver/kresd.conf':
-- turns off DNSSEC validation
trust_anchors.remove('.')
-- Network interface configuration
net.listen('127.0.0.1', 53, { kind = 'dns' })
net.listen('10.127.0.20', 53, { kind = 'dns' })
net.ipv6 = false
net.listen('/tmp/kres.control', nil, { kind = 'control'})
-- Load useful modules
modules = {
        'hints > iterate', -- Load /etc/hosts and allow custom root hints
        'stats',           -- Track internal statistics
        'predict',         -- Prefetch expiring/frequent records
}
-- Cache size
cache.size = 100 * MB
policy.add(policy.rpz(policy.DENY_MSG('domain blocked'), '/etc/knot-resolver/blocklist.rpz', true))
--policy.add(policy.rpz(policy.ANSWER(), '/etc/knot-resolver/redirectlist.rpz', true))
policy.add(policy.rpz(policy.PASS(), '/etc/knot-resolver/allowlist.rpz', true))
policy.add(policy.all(policy.ANSWER({ [kres.type.A] = { rdata=kres.str2ip('10.127.0.10'), ttl=300 } })))
and these are the RPZ zones:
$ cat '/etc/knot-resolver/allowlist.rpz':
www.google.com 600 IN CNAME rpz-passthrough.
www.bing.com 600 IN CNAME rpz-passthrough.
$ cat /etc/knot-resolver/blocklist.rpz
examplemalwaredomain.com 600 IN CNAME .
*.examplemalwaredomain.com 600 IN CNAME .
Thus, www.examplemalwaredomain.com is blocked:
$ dig www.examplemalwaredomain.com @127.0.0.1
; <<>> DiG 9.16.1-Ubuntu <<>> www.examplemalwaredomain.com @127.0.0.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 26657
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; QUESTION SECTION:
;www.examplemalwaredomain.com. IN A
;; AUTHORITY SECTION:
www.examplemalwaredomain.com. 10800 IN SOA www.examplemalwaredomain.com. nobody.invalid. 1 3600 1200 604800 10800
;; ADDITIONAL SECTION:
explanation.invalid. 10800 IN TXT "domain blocked"
And any other domain is redirected:
$ dig nic.cz @127.0.0.1
; <<>> DiG 9.16.1-Ubuntu <<>> nic.cz @127.0.0.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 60891
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;nic.cz. IN A
;; ANSWER SECTION:
nic.cz. 300 IN A 10.127.0.10
But the domains in the allow list are also redirected :(
$ dig www.google.com @127.0.0.1
; <<>> DiG 9.16.1-Ubuntu <<>> www.google.com @127.0.0.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 49284
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;www.google.com. IN A
;; ANSWER SECTION:
www.google.com. 300 IN A 10.127.0.10
Interestingly, if I add a policy.PASS rule with a list of explicit domains before the policy.all one, it does work properly:
policy.add(policy.suffix(policy.PASS, policy.todnames({'example.com', 'example.net'})))
$ dig www.example.com @127.0.0.1
; <<>> DiG 9.16.1-Ubuntu <<>> www.example.com @127.0.0.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 59322
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;www.example.com. IN A
;; ANSWER SECTION:
www.example.com. 120 IN A 93.184.216.34
Thus, it looks as if the allowed domains first hit the allow RPZ list, but then they hit the policy.ANSWER policy anyway. Is this the expected behaviour? Is there any way to implement such a default redirect except for a list of allowed or blocked domains?
Regards,
--Manuel
The knot-resolver documentation only appears to support Linux. Are there any plans to make the resolver available, or useful, on systems other than Linux? While I could put Linux in a VM or a BE and use the resolver from there, it wouldn't really be very effective. Any insight into using this on any of the BSDs would be greatly appreciated (I'm currently using unbound alongside knot authoritative).
Thanks!
--Chris
Hi,
I’ve stumbled across knot-resolver because I have an issue with my current DNS solution.
What is the best way to block a large number of domains?
I've been trying to work with the configuration below, but it's not functioning.
Part of /etc/knot-resolver/kresd.conf
-- Domain Blocking
policy.add(
        policy.rpz(policy.DENY_MSG('domain blocked by your IT department'), '/etc/knot-resolver/blacklist.rpz', true))
policy.add(
        policy.rpz(policy.DENY, '/etc/knot-resolver/blacklist.rpz'))
/etc/knot-resolver/backlist.rpz
007bets.com,
Regards,
Mike
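As a rough sketch (the names below are only examples, and this assumes the plain RPZ zone-file format shown elsewhere on this list), a blocklist RPZ file needs one resource record per domain rather than a comma-separated list:

$ cat /etc/knot-resolver/blacklist.rpz
; each blocked name maps to "CNAME .", which triggers the action given to policy.rpz
007bets.com 600 IN CNAME .
*.007bets.com 600 IN CNAME .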
Hello,
recently we upgraded a few servers (CentOS 7 and Raspbian) from 5.1 to 5.2 and
all seems to be working fine, but on Raspbian (yes I know, not
recommended or supported) I see an issue with metrics, and in the log I can
see this error:
Nov 13 10:21:33 dns-cache-2 kresd[15232]: map() error while connecting to
control socket /run/knot-resolver/control/H#003: socket:connect: No such
file or directory (ignoring this socket)
Nov 13 10:21:33 dns-cache-2 kresd[15232]: map() error while connecting to
control socket /run/knot-resolver/control/H: socket:connect: No such file
or directory (ignoring this socket)
Nov 13 10:21:33 dns-cache-2 kresd[15232]: map() error while connecting to
control socket /run/knot-resolver/control/: socket:connect: Connection
refused (ignoring this socket)
The error is triggered by opening "ip:8453/metrics", and it shows an almost
empty response:
# TYPE resolver_latency histogram
resolver_latency_count 0.000000
resolver_latency_sum 0.000000
root@dns-cache-2:/# apt -qq list knot* --installed
knot-resolver-module-http/unknown,now 5.2.0-1 all [installed]
knot-resolver-release/unknown,now 1.7-1 all [installed]
knot-resolver/unknown,now 5.2.0-1 armhf [installed]
root@dns-cache-2:/#
root@dns-cache-2:/# uname -a
Linux dns-cache-2 4.19.97-v7l+ #1294 SMP Thu Jan 30 13:21:14 GMT 2020
armv7l GNU/Linux
Could it be a bug, or something related to mixing armv7l and armhf? I know of
the limitation, as we discussed this setup on Raspberry Pi some time ago. I
just want to share my experience in case it is useful, and if you have
any tips I will appreciate them.
Dear Knot Resolver users,
Knot Resolver 5.2.0 has been released!
One of the notable features is a new DNS-over-HTTPS implementation
which is more scalable and stable than the old one. It also has fewer
dependencies and simpler configuration.
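As a minimal sketch (the listen address and the certificate/key paths are placeholders, not defaults), enabling it in kresd.conf might look roughly like this:

-- serve DNS-over-HTTPS with the new native implementation
net.listen('0.0.0.0', 443, { kind = 'doh2' })
-- supply your own TLS certificate and key
net.tls('/etc/knot-resolver/server.crt', '/etc/knot-resolver/server.key')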
Another new feature is experimental eXpress Data Path (XDP) support for
UDP. With support from both the network card and the kernel, it can
provide superior performance and lower latency for UDP answers.
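As a rough sketch (assuming a NIC and kernel with XDP support; the interface name is a placeholder), an XDP listener might be configured like other listeners:

-- experimental: answer UDP queries via XDP on the given interface
net.listen('eth0', 53, { kind = 'xdp' })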
Some of the improvements and bugfixes required a few backward
incompatible changes, mainly regarding control sockets or module API.
Please refer to our upgrading guide for details:
https://knot-resolver.readthedocs.io/en/v5.2.0/upgrading.html#to-5-2
Improvements
------------
- doh2: add native C module for DNS-over-HTTPS (#600, !997)
- xdp: add server-side XDP support for higher UDP performance (#533,
!1083)
- lower default EDNS buffer size to 1232 bytes (#538, #300, !920);
see https://dnsflagday.net/2020/
- net: split the EDNS buffer size into upstream and downstream (!1026)
- lua-http doh: answer to /dns-query endpoint as well as /doh (!1069)
- improve resiliency against UDP fragmentation attacks (disable PMTUD)
(!1061)
- ta_update: warn if there are differences between statically configured
keys and upstream (#251, !1051)
- human readable output in interactive mode was improved (!1020)
- doc: generate info page (!1079)
- packaging: improve sysusers and tmpfiles support (!1080)
Bugfixes
--------
- avoid an assert() error in stash_rrset() (!1072)
- fix emergency cache locking bug introduced in 5.1.3 (!1078)
- migrate map() command to control sockets; fix systemd integration
(!1000)
- fix crash when sending back errors over control socket (!1000)
- fix SERVFAIL while processing forwarded CNAME to a sibling zone (#614,
!1070)
Incompatible changes
--------------------
- see upgrading guide:
https://knot-resolver.readthedocs.io/en/v5.2.0/upgrading.html#to-5-2
- minor changes in module API
- control socket API commands have to be terminated by "\n"
- graphite: default prefix now contains instance identifier (!1000)
- build: meson >= 0.49 is required (!1082)
- planned changes in future versions:
https://knot-resolver.readthedocs.io/en/v5.2.0/upgrading.html#upcoming-chan…
Full changelog:
https://gitlab.nic.cz/knot/knot-resolver/raw/v5.2.0/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-5.2.0.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-5.2.0.tar.xz.asc
Documentation:
https://knot-resolver.readthedocs.io/en/v5.2.0/
--
Tomas Krizek
PGP: 4A8B A48C 2AED 933B D495 C509 A1FB A5F7 EF8C 4869
Hello team,
most probably we have hit issue !1070 "fix SERVFAIL in *FORWARD modes with
certain CNAME setup", and I would like to ask when we can expect the release of
knot-resolver 5.2.0 (with the fix), or whether there will be a hotfix in 5.1.x?
example of affected domain: www.action.foundation.total
Thank you.