Hi,
to see whether kresd 4.0.0 would do a comparable or better job
than doh-httpproxy, we analyzed failure rates by looking at the HTTP
response codes for each setup and found some significant differences,
most notably for HTTP response code 400:
sample size: 1 000 000 HTTP requests for each run
HTTP Code      [1]      [2]      [3]
------------------------------------
200          93.96    97.04    96.33
499           3.28     2.45     3.11
400           2.07     0.18     0.22   <<<
415           0.68     0.32     0.34
408           0.002    0.002    0.002
413           0.001    0        0
All numbers are percentages.
setups:
[1] nginx -> (http, no tls) kresd
[2] nginx -> (http, no tls) doh-httpproxy -> (udp/53) unbound
To reduce the likelihood of measuring unrelated issues (like issues
caused by qname minimization differences between unbound and kresd), we
also used kresd as the DNS resolver without touching its DoH code path:
[3] nginx -> (http, no tls) doh-httpproxy -> (udp/53) kresd
The HTTP request rate for [1] was slightly lower than for [2] and [3].
To be more precise, the doh-httpproxy service was configured with two
running instances as described here:
https://facebookexperimental.github.io/doh-proxy/tutorials/nginx-dohhttppro…
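For readers unfamiliar with setup [1]: there kresd serves DoH itself via
its http module. A minimal sketch of such a backend, as I understand the
kresd 4.x docs (the loopback address and port are placeholders, nginx
terminates TLS in front of it):
<code>
-- sketch of a kresd 4.x DoH backend behind an nginx reverse proxy;
-- listen address and port are placeholders
modules.load('http')                              -- DoH is provided by the http module
net.listen('127.0.0.1', 44353, { kind = 'doh' })  -- DoH listener the reverse proxy talks to
</code>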
version information:
kresd 4.0.0
nginx 1.14.2
unbound 1.9.0
https://github.com/facebookexperimental/doh-proxy @
9f943a4c232bd018ae155b7839a6b4e13181a5fd
This information on its own is not very useful, but it might help
motivate further tests in a test environment with no real end-user
traffic, where more verbose logging is possible.
We would also like to hear from other kresd DoH adopters if they
observe similar failure rates on their setups.
Some open questions for further tests:
- How reproducible are these results?
- Does the HTTP method (GET vs. POST) change the error rate?
- Can the error rate be reduced by running multiple kresd backends?
(nginx distributing requests across two kresd backends)
- Is the fact that the DNS transaction ID is always 0 (as mandated by
RFC 8484) an issue when kresd is not also terminating the initial HTTPS
connection from the DoH client? (From a backend's perspective, two
distinct queries might look identical when a reverse proxy sits between
the DoH client and kresd.)
regards,
Christoph
I'm new to Knot Resolver (kresd).
I'm running kresd under runit and logging via svlogd.
I'm trying to run kresd on a lan with internal ip addresses and
internal domains.
I can currently do this with dnsmasq and unbound, but I wanted to see
how kresd would do on the client facing edge.
I have an Active Directory domain which I've inherited (domain.local)
and I've made a building.domain dns infrastructure for the different
buildings. (building-red.domain, building-orange.domain,
building-green.domain, etc..)
There were two AD dns servers doing all the DNS.. There is now a
pihole server and dnsmasq helping to offset the queries.
I'm looking to put up a kresd on the :53 and move the current dnsmasq
installs to :57 and have kresd forward to them.
When I do this, my building.domain and domain.local names are not
resolvable. What am I missing?
Unbound has private-address and private-domain settings which handle
this. Does Knot Resolver have something similar? (My current config is
below, followed by a sketch of what I'm considering trying.)
egrep -v "\-\-" /etc/knot-resolver/config
<code>
net.listen('10.20.0.43', 53)
trust_anchors.remove('.')
modules = {
'policy',
'stats',
'predict'
}
cache.size = 100*MB
cache.storage = 'lmdb:///var/cache/knot-resolver/'
user( 'knot-resolver', 'knot-resolver' )
predict.config({ window = 20, period = 72 })
policy.add( policy.all( policy.FORWARD(
{ '10.20.0.43@57', '10.20.0.53@57' }
)))
</code>
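Based on the policy module documentation, I'm wondering whether
per-suffix forwarding along these lines is the intended approach (an
untested sketch; the zone list and the dnsmasq addresses are from my
setup above):
<code>
-- untested sketch: forward only the internal zones to the dnsmasq
-- instances listening on port 57, and don't cache their answers
internal = policy.todnames({ 'domain.local.', 'building-red.domain.',
                             'building-orange.domain.', 'building-green.domain.' })
policy.add(policy.suffix(policy.FLAGS({ 'NO_CACHE' }), internal))
policy.add(policy.suffix(policy.STUB({ '10.20.0.43@57', '10.20.0.53@57' }), internal))
</code>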
Below is an excerpt from the kresd logs captured via svlogd showing
the NXDOMAIN response:
2019-05-22_18:19:53.06750 [00000.00][plan] plan 'squid.tech.pcsd.' type 'A' uid [35568.00]
2019-05-22_18:19:53.06756 [35568.00][iter] 'squid.tech.pcsd.' type 'A' new uid was assigned .01, parent uid .00
2019-05-22_18:19:53.06758 [35568.01][cach] => trying zone: ., NSEC, hash 0
2019-05-22_18:19:53.06759 [35568.01][cach] => NSEC sname: covered by: pccw. -> pe., new TTL 83379
2019-05-22_18:19:53.06760 [35568.01][cach] => NSEC wildcard: covered by: . -> aaa., new TTL 84454
2019-05-22_18:19:53.06761 [35568.01][cach] => writing RRsets: +++
2019-05-22_18:19:53.06762 [35568.01][iter] <= rcode: NXDOMAIN
2019-05-22_18:19:53.06768 [35568.01][resl] AD: request NOT classified as SECURE
2019-05-22_18:19:53.06773 [35568.01][resl] finished: 0, queries: 1, mempool: 16400 B
And this is another request that was resolved successfully:
2019-05-22_18:19:53.07382 [00000.00][plan] plan 'r6---sn-8xgp1vo-2iae.googlevideo.com.' type 'A' uid [24882.00]
2019-05-22_18:19:53.07384 [24882.00][iter] 'r6---sn-8xgp1vo-2iae.googlevideo.com.' type 'A' new uid was assigned .01, parent uid .00
2019-05-22_18:19:53.07388 [24882.01][cach] => skipping unfit CNAME RR: rank 020, new TTL -144
2019-05-22_18:19:53.07389 [24882.01][cach] => no NSEC* cached for zone: googlevideo.com.
2019-05-22_18:19:53.07389 [24882.01][cach] => skipping zone: googlevideo.com., NSEC, hash 0;new TTL -123456789, ret -2
2019-05-22_18:19:53.07389 [24882.01][cach] => skipping zone: googlevideo.com., NSEC, hash 0;new TTL -123456789, ret -2
2019-05-22_18:19:53.07390 [ ][nsre] score 21 for 10.20.0.43#00057; cached RTT: 19
2019-05-22_18:19:53.07391 [ ][nsre] score 40001 for 10.20.0.53#00057; cached RTT: 12666
2019-05-22_18:19:53.07391 [24882.01][resl] => id: '07414' querying: '10.20.0.43#00057' score: 21 zone cut: '.' qname: 'R6---sN-8Xgp1VO-2iae.goOgLeVIDeO.CoM.' qtype: 'A' proto: 'udp'
Here is dig showing that the entry does exist:
; <<>> DiG 9.14.2 <<>> @10.20.0.43 -p 53 -t any squid.tech.pcsd
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17967
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;squid.tech.pcsd. IN ANY
;; ANSWER SECTION:
squid.tech.pcsd. 300 IN A 10.20.0.69
;; Query time: 3 msec
;; SERVER: 10.20.0.43#53(10.20.0.43)
;; MSG SIZE rcvd: 60
As I said, I'm still new to kresd and its logging format, so please
excuse my ignorance if the answer is obvious. I've been reading all
that I can, and I couldn't find a use case like mine in the
documentation. (Reading the docs, I did find two things that were
causing me problems with upstream providers in unbound, which is why
I'm looking to try kresd on the LAN and see what happens.)
Thank you for taking the time to read this.
I'm looking to make the network more resilient so that I can focus on
finishing other projects, and from what I can see, Knot Resolver will
totally help me with that.
Again, thanks for this piece of software - greatly appreciated.
After squidguard and pihole, this is what I'm sending to the outside
world (thanks to kresd):
https://imgur.com/a/tlcC6Jx
(I'm including the mailing list so others can benefit from this
discussion as well since we had the proxy/no-proxy topic there before)
Vaclav Steiner wrote:
> Our KRESd daemons are configured with the lua-http module for DoH, not any reverse proxy.
Is this a temporary setup, or what is the motivation for not using an
HTTP frontend, given that the knot-resolver developers actively
recommend against exposing a "naked" kresd DoH endpoint without a
reverse proxy?
> And about list [3] we know. We want to be there :-)
Great to hear that.
kind regards,
Christoph
> Issue [2] I’ll pass on to the guys from the knot-resolver team.
> Probably Firefox doesn’t have a problem with it.
A _fresh_ Firefox 66.0.5 actually has a problem with it:
"
Warning: Potential Security Risk Ahead
Firefox detected a potential security threat and did not continue to
odvr.nic.cz. If you visit this site, attackers could try to steal
information like your passwords, emails, or credit card details.
[...]
"
>> 20. 5. 2019 v 19:36, Christoph wrote:
>>
>> Hi Vaclav,
>>
>> thanks for running a public DoH service [1].
>> Would be great if you could add your DoH server to the
>> public DNS resolver lists [3].
>>
>> There is a TLS misconfiguration that results in a TLS error
>> because the certificate chain is incomplete [2].
>>
>> Is this a kresd without any HTTP reverse proxy like nginx in front of it?
>>
>>
>> kind regards,
>> Christoph
>>
>>
>> [1] https://blog.nic.cz/2019/05/20/na-odvr-podporujeme-take-dns-over-https/
>> [2]
>> https://www.ssllabs.com/ssltest/analyze.html?d=odvr.nic.cz&hideResults=on
>> [3] https://github.com/curl/curl/wiki/DNS-over-HTTPS
>> https://github.com/DNSCrypt/dnscrypt-resolvers
>>
>
>
Hi,
how big should the cache filesystem (tmpfs) be relative to the cache.size?
If cache.size is N MB, is N + 50 MB a reasonable size for the
filesystem?
I'm asking because apparently 50 MB
additional space is not enough:
> SIGBUS received; this is most likely due to filling up the filesystem
where cache resides.
https://knot-resolver.readthedocs.io/en/stable/daemon.html#envvar-cache.size
writes:
> Note that this is only a hint to the backend, which may or may not
respect it. See cache.open().
If cache.size does not set a maximum size, maybe remove "Set the cache
maximum size in bytes." from its documentation? :)
Does that mean I have to use cache.open() as well to enforce a limit,
or is its "max_size" parameter also just a hint that is not
respected/enforced?
If so: is there a way to enforce a maximum size for the cache?
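To make the question concrete, this is roughly what I mean (a sketch;
100*MB stands in for N and the path is just an example):
<code>
-- sketch: the setting under discussion vs. the explicit call
cache.size = 100 * MB                              -- documented as the maximum size, but apparently only a hint?
cache.storage = 'lmdb:///var/cache/knot-resolver'  -- tmpfs mount
-- explicit form -- is max_size enforced here, or also just a hint?
cache.open(100 * MB, 'lmdb:///var/cache/knot-resolver')
</code>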
thanks,
Christoph
Hi,
I have a quick update about the upstream package repositories in
OpenBuild Service (OBS).
New repositories
----------------
- Fedora_30 - x86_64, armv7l
- xUbuntu_19.04 - x86_64
- Debian_Next - x86_64 (Debian unstable rolling release)
New repositories for Debian 10 and CentOS 8 should be available shortly
after these distros are released, depending on their buildroot
availability in OBS.
Deprecated repositories
-----------------------
- Arch - x86_64
Due to many issues with Arch packaging in OBS (invalid package size,
incorrect signatures) and the fast pace of Arch updates, please consider
this repository deprecated in favor of the AUR package [1] that I keep
up-to-date. The Arch OBS repository will most likely be removed in the
future.
Also, please note I'll be periodically deleting repositories for distros
that reach their official end of life. In the coming months, this
concerns Ubuntu 18.10 and Fedora 28.
[1] - https://aur.archlinux.org/packages/knot-resolver/
--
Tomas Krizek
PGP: 4A8B A48C 2AED 933B D495 C509 A1FB A5F7 EF8C 4869
Hello!
This occurs if for some reason the prefill file happens to be empty:
kresd[11812]: [prefill] root zone file valid for 17 hours 01 minutes, reusing data from disk
kresd[11812]: segfault at 0 ip 00007f9b06017436 sp 00007ffc3142bb58 error 4 in libc-2.28.so[7f9b05fa1000+148000]
Apr 30 20:26:13 scruffy kernel: Code: 0f 1f 40 00 66 0f ef c0 66 0f ef c9 66 0f ef d2 66 0f ef db 48 89 f8 48 89 f9 48 81 e1 ff 0f 00 00 48 81 f9 cf 0f 00 00 77 6a <f3> 0f 6f 20 66 0f 74 e0 66 0f d7 d4 85 d2 74 04 0f bc c2 c3 48 83
This happens in a loop until systemd gives up trying to start kresd.
Solved by removing /var/cache/knot-resolver/root.zone (0 bytes).
kresd version: 4.0.0
We use your example config from:
https://knot-resolver.readthedocs.io/en/stable/modules.html#cache-prefilling
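which, as far as I recall, is essentially this (the ca_file path is
distro-specific):
<code>
-- roughly the documented cache-prefilling example we use
modules.load('prefill')
prefill.config({
        ['.'] = {
                url = 'https://www.internic.net/domain/root.zone',
                interval = 86400,  -- seconds, i.e. re-download the root zone daily
                ca_file = '/etc/pki/tls/certs/ca-bundle.crt',
        }
})
</code>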
kind regards,
Christoph