Hello,
There is a website I need to use on a daily basis that uses DNSSEC;
however, their keys have expired, which causes validation to fail. I have
contacted their support, but they have failed to resolve the issue so far.
Since I can resolve the name when using `dig +cd`, I was hoping I could
configure `kresd` to skip validation when resolving that specific
domain. It seems that I should be able to do so by using the `policies`
module and the `FLAGS` action:
https://knot-resolver.readthedocs.io/en/stable/modules.html#actions
I am not sure which flag or flags to use. I inspected the source and tried
the following:
policy.add(policy.suffix(policy.FLAGS('DNSSEC_CD'),{todname('example.org.')}))
But this apparently had no effect. I also tried without the trailing dot
and played with other flags, but without success.
Does anybody know which flag I could set to bypass DNSSEC validation for
the specified domain? Or, if the policy module is not the way to achieve
that goal, is there any other way?
# kresd --version
Knot DNS Resolver, version 1.5.1
Any help will be greatly appreciated,
// Leonardo.
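[Editor's note: one hedged possibility for the question above. Recent kresd versions offer negative trust anchors via `trust_anchors.set_insecure()`, which disables validation below the listed names; whether it is available in 1.5.1 should be checked against that version's documentation. The domain below is a placeholder:]

```
-- Sketch: declare example.org insecure (negative trust anchor),
-- so answers at and below it are not DNSSEC-validated.
-- Verify trust_anchors.set_insecure() exists in your kresd version.
trust_anchors.set_insecure({'example.org'})
```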
Greetings. I want to build Knot 2.1 on Ubuntu 16.04.4 LTS. However, the
libknot from the packages is too old:
Makefile:87: *** libknot >= 2.6.4 required. Stop.
As compared to:
libknot-dev/xenial,now 2.1.1-1build1 amd64 [installed]
Is it possible to get the libknot package on Ubuntu 16.04.4 LTS updated
soon? If not, how do I install the latest libknot by hand?
--Paul Hoffman
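[Editor's note: for the manual route asked about above, a rough sketch of building a newer libknot from the Knot DNS sources. The version number, URL pattern, and prefix are illustrative; adjust them to the current release:]

```shell
# Illustrative build of a newer libknot from Knot DNS sources.
# Version and install prefix below are placeholders.
wget https://secure.nic.cz/files/knot-dns/knot-2.6.5.tar.xz
tar xf knot-2.6.5.tar.xz
cd knot-2.6.5
./configure --prefix=/usr/local
make && sudo make install && sudo ldconfig
# Point pkg-config at the new library when building Knot Resolver:
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
```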
Greetings. My kresd config file is:
net.listen('192.241.207.161', 5364)
trust_anchors.file = 'root.keys'
modules.load('ta_sentinel')
I wanted to test this with a zone I set up at this-is-signed.com. However,
I'm getting a positive result back for both the is-ta and the not-ta
records (it is properly giving me the SERVFAIL for the bogus record).
Have I configured Knot Resolver incorrectly? For Knot 2.2.1, do I need a
different form for the names in order to get the kskroll-sentinel effect to
kick in? From the DNSOP WG mailing list traffic, I thought I needed the
4f66 tag, but could have misinterpreted that.
--Paul Hoffman
# dig @192.241.207.161 -p 5364 kskroll-sentinel-is-ta-4f66.this-is-signed.com a
; <<>> DiG 9.10.3-P4-Ubuntu <<>> @192.241.207.161 -p 5364 kskroll-sentinel-is-ta-4f66.this-is-signed.com a
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 36153
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;kskroll-sentinel-is-ta-4f66.this-is-signed.com. IN A
;; ANSWER SECTION:
kskroll-sentinel-is-ta-4f66.this-is-signed.com. 60 IN CNAME this-is-signed.com.
this-is-signed.com. 60 IN A 192.241.207.161
;; Query time: 283 msec
;; SERVER: 192.241.207.161#5364(192.241.207.161)
;; WHEN: Sun Feb 25 00:09:54 UTC 2018
;; MSG SIZE rcvd: 105
# dig @192.241.207.161 -p 5364 kskroll-sentinel-not-ta-4f66.this-is-signed.com a
; <<>> DiG 9.10.3-P4-Ubuntu <<>> @192.241.207.161 -p 5364 kskroll-sentinel-not-ta-4f66.this-is-signed.com a
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 14466
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;kskroll-sentinel-not-ta-4f66.this-is-signed.com. IN A
;; ANSWER SECTION:
kskroll-sentinel-not-ta-4f66.this-is-signed.com. 60 IN CNAME this-is-signed.com.
this-is-signed.com. 54 IN A 192.241.207.161
;; Query time: 5 msec
;; SERVER: 192.241.207.161#5364(192.241.207.161)
;; WHEN: Sun Feb 25 00:10:00 UTC 2018
;; MSG SIZE rcvd: 106
# dig @192.241.207.161 -p 5364 bogus.this-is-signed.com a
; <<>> DiG 9.10.3-P4-Ubuntu <<>> @192.241.207.161 -p 5364 bogus.this-is-signed.com a
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 20810
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;bogus.this-is-signed.com. IN A
;; Query time: 9 msec
;; SERVER: 192.241.207.161#5364(192.241.207.161)
;; WHEN: Sun Feb 25 00:10:08 UTC 2018
;; MSG SIZE rcvd: 42
Dear Knot Resolver users,
Knot Resolver 2.1.0 is released.
Incompatible changes
--------------------
- stats: remove tracking of expiring records (predict uses another way)
- systemd: re-use a single kresd.socket and kresd-tls.socket
- ta_sentinel: implement protocol draft-ietf-dnsop-kskroll-sentinel-01
(our draft-ietf-dnsop-kskroll-sentinel-00 implementation had inverted
logic)
- libknot: require version 2.6.4 or newer to get bugfixes for DNS-over-TLS
Bugfixes
--------
- detect_time_jump module: don't clear cache on suspend-resume (#284)
- stats module: fix stats.list() returning nothing, regressed in 2.0.0
- policy.TLS_FORWARD: refusal when configuring with multiple IPs (#306)
- cache: fix broken refresh of insecure records that were about to expire
- fix the hints module on some systems, e.g. Fedora (came back on 2.0.0)
- build with older gnutls (conditionally disable features)
- fix the predict module to work with insecure records & cleanup code
Full changelog:
https://gitlab.labs.nic.cz/knot/knot-resolver/raw/v2.1.0/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-2.1.0.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-2.1.0.tar.xz.asc
Documentation:
https://knot-resolver.readthedocs.io/en/v2.1.0/
--
Tomas Krizek
PGP: 4A8B A48C 2AED 933B D495 C509 A1FB A5F7 EF8C 4869
Hi,
I am running knot-resolver version 2.0.0 and I would like to redirect
(STUB) queries from a subnet to a specific nameserver. For that I
created a view configuration for that subnet (see below). A query
from outside that subnet caches the answer, but unfortunately the cached
answer then overrides the STUB answer when the same query is made from
inside the view.
My configuration file looks like that:
---
modules = {
    'hints > iterate',
    'policy > hints',
    'view < cache'
}
view:addr('${SUBNET}', policy.suffix(policy.STUB('${IP}'),
    {todname('${DOMAIN_SUFFIX}')}))
---
Testing procedure:
1) Try to resolve A record being in the special subnet.
$kdig @${KNOT-RESOLVER} A www.${DOMAIN_SUFFIX}
-> STUB works and answer is correct
2) Try to resolve A record without being in the special subnet.
$kdig @${KNOT-RESOLVER} ${KNOT-RESOLVER}
-> view is not triggered and answer is correct
3) Try to resolve A record again within the special subnet.
$kdig @${KNOT-RESOLVER} A www.${DOMAIN_SUFFIX}
-> The answer is served from the cache, which is not the right answer.
Is it even possible to do that?
Best wishes,
Sakirnth
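[Editor's note: one possible workaround to sketch for the question above, untested; it assumes the NO_CACHE query flag, which exists in libkres, is honored by policy.FLAGS in 2.0.0:]

```
-- Sketch: mark queries from the subnet for that suffix as NO_CACHE,
-- so they are resolved via the stub rather than the shared cache.
-- Note: if only the first matching view rule applies in your version,
-- this may need combining with the STUB rule; check the view docs.
view:addr('${SUBNET}', policy.suffix(policy.FLAGS('NO_CACHE'),
    {todname('${DOMAIN_SUFFIX}')}))
```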
Hello,
I'm going to reply here instead of knot-dns-users.
On 7.2.2018 03:24, Yoshi Horigome wrote:
> Hello Jay,
>
> Is it ok to understand that it forwards to "192.168.168.1", which is the
> local DNS, when asking for localnet.mydomain.com?
>
> If it is, perhaps, I think that setting should be done as follows.
>
> --If the request is from eng subnet
>
> if (view:addr('192.168.168.0/24')) then
>     if (todname('localnet.mydomain.com')) then
> -        policy.add(policy.suffix(policy.FORWARD('192.168.168.1'),
>              {todname('localnet.mydomain.com')}))
> +        policy.add(policy.suffix(policy.STUB('192.168.168.1'),
>              {{'\8localnet\8mydomain\3com'}}))
>     else
>         view:addr('192.168.168.0/24',
>             policy.FORWARD('68.111.106.68'))
>
> end
> end
First of all, the use of `if` conditions above is incorrect.
Examples of view configuration can be found here:
http://knot-resolver.readthedocs.io/en/latest/modules.html#example-configur…
Second part of view definition is basically a rule from policy module,
which has examples here:
http://knot-resolver.readthedocs.io/en/latest/modules.html#policy-examples
I'm not sure I understood your request correctly, so I will provide
snippets and let you combine them together.
-- forward all queries for subdomain localnet.mydomain.com to 192.168.168.1
policy.add(policy.suffix(policy.FORWARD('192.168.168.1'),
    {todname('localnet.mydomain.com')}))
-- forward all queries from 192.168.168.0/24 to 68.111.106.68
view:addr('192.168.168.0/24', policy.all(policy.FORWARD('68.111.106.68')))
This needs to be tested, as the result will depend on the order of
modules. I can see that you have policy before view, so it might just
work. Give it a try.
Petr Špaček @ CZ.NIC
> I understand that it is policy.STUB if it is version1, and policy.PASS
> if it is version2.
>
> I am sorry if I made a mistake.
>
>
> Best regards.
>
> Postscript:
> It seems that knot resolver's mailing list has been created, so this may
> be better.
> https://lists.nic.cz/cgi-bin/mailman/listinfo/knot-resolver-users
>
>
> 2018-02-07 4:46 GMT+09:00 Jay Remotti <jremotti(a)ontraport.com>:
>
> I'm getting started with knot resolver and am a bit unclear as to
> how this config should be structured.
> The result I'm looking for is to forward queries to resolver A if
> the source is subnet A; unless the query is for the local domain if
> so then query the local DNS.
>
> I've been working with the config below to accomplish this. However,
> I'm finding that this config falls through to root hints when the
> request does not match the local todname, and never uses the FORWARD
> server.
>
> Ultimately, this server will resolve DNS for several subnets and
> will forward queries to different servers based on the source subnet.
>
> Would someone mind pointing me in the right direction on this, please?
>
> for name, addr_list in pairs(net.interfaces()) do
> net.listen(addr_list)
> end
> -- drop root
> user('knot', 'knot')
> -- Auto-maintain root TA
> modules = {
> 'policy', -- Block queries to local zones/bad sites
> 'view', --view filters
> 'hints', -- Load /etc/hosts and allow custom root hints
> 'stats',
> }
>
>
> -- 4GB local cache for record storage
> cache.size = 4 * GB
>
> --If the request is from eng subnet
>
>     if (view:addr('192.168.168.0/24')) then
>         if (todname('localnet.mydomain.com')) then
>             policy.add(policy.suffix(policy.FORWARD('192.168.168.1'),
>                 {todname('localnet.mydomain.com')}))
>         else
>             view:addr('192.168.168.0/24',
>                 policy.FORWARD('68.111.106.68'))
>
> end
> end
>
>
> 855.ONTRAPORT
> ontraport.com
>
>
>
> --
> https://lists.nic.cz/cgi-bin/mailman/listinfo/knot-dns-users
Dear Knot Resolver users,
Knot Resolver 2.0.0 brings lots of changed code, including two
bigger new features: aggressive cache and pipelined TLS forwarding.
Incompatible changes
--------------------
- systemd: change unit files to allow running multiple instances,
deployments with single instance now must use `kresd@1.service`
instead of `kresd.service`; see kresd.systemd(8) for details
- systemd: the directory for cache is now /var/cache/knot-resolver
- unify default directory and user to `knot-resolver`
- directory with trust anchor file specified by -k option must be writeable
- policy module is now loaded by default to enforce RFC 6761;
see documentation for policy.PASS if you use locally-served DNS zones
- drop support for alternative cache backends memcached, redis,
and for Lua bindings for some specific cache operations
- REORDER_RR option is not implemented (temporarily)
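[Editor's note: as an illustration of the policy.PASS note above, a minimal sketch; the zone name is a placeholder for your locally-served zone:]

```
-- Sketch: let queries for a locally-served zone resolve normally,
-- instead of being answered by the default RFC 6761 policies.
policy.add(policy.suffix(policy.PASS, {todname('local.example.com')}))
```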
New features
------------
- aggressive caching of validated records (RFC 8198) for NSEC zones;
thanks to ICANN for sponsoring this work.
- forwarding over TLS, authenticated by SPKI pin or certificate.
policy.TLS_FORWARD pipelines queries out-of-order over shared TLS connection
Beware: Some resolvers do not support out-of-order query processing.
TLS forwarding to such resolvers will lead to slower resolution or failures.
- trust anchors: you may specify a read-only file via -K or --keyfile-ro
- trust anchors: at build-time you may set KEYFILE_DEFAULT (read-only)
- ta_sentinel module implements draft ietf-dnsop-kskroll-sentinel-00,
enabled by default
- serve_stale module is prototype, subject to change
- extended API for Lua modules
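[Editor's note: a minimal sketch of the TLS forwarding feature above. The address and hostname are placeholders, and the exact authentication options (hostname/ca_file vs. an SPKI pin) vary by version; check the 2.0.0 docs:]

```
-- Sketch: forward all queries over TLS to one upstream resolver,
-- authenticated by certificate hostname.
policy.add(policy.all(policy.TLS_FORWARD({
    {'192.0.2.1', hostname='resolver.example.com'}
})))
```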
Bugfixes
--------
- fix build on osx - regressed in 1.5.3 (different linker option name)
Full changelog:
https://gitlab.labs.nic.cz/knot/knot-resolver/raw/v2.0.0/NEWS
Sources:
https://secure.nic.cz/files/knot-resolver/knot-resolver-2.0.0.tar.xz
GPG signature:
https://secure.nic.cz/files/knot-resolver/knot-resolver-2.0.0.tar.xz.asc
Documentation:
https://knot-resolver.readthedocs.io/en/v2.0.0/
--Vladimir
On Mon 2018-01-22 12:42:47 +0100, Vladimír Čunát wrote:
> Knot Resolver 1.5.2 is a security release!
>
> Security
> --------
> - fix CVE-2018-1000002: insufficient DNSSEC validation, allowing
> attackers to deny existence of some data by forging packets.
> Some combinations pointed out in RFC 6840 sections 4.1 and 4.3
> were not taken into account.
Thanks for this report, Vladimír!
Out of curiosity, are there any test suites available that exercise this
particular attack? I'm trying to sort out a backported fix for the
version of knot-resolver in debian stable (1.2.0) and enough of the
codebase has changed that it's not as simple as just cherry-picking
patches f90d27de49c9d3be0424d5d5457fb18df7d5c3f3 and
d296e36eb554148f3d6f1f86e8f86ddec81de962, so I want to be sure that any
attempted change actually fixes the problem.
--dkg