Hello,
could you please help me with a Knot Resolver configuration for the case where I
need to redirect every variation of a domain to some address,
e.g.
www.example.com, m.example.com, domain.example.com ...
i.e. like a wildcard record:
*.example.com 10.0.0.50
In my configuration this is handled by a file with static records
-- load static records
hints.add_hosts('/etc/knot-resolver/static_records.txt')
which contains the address to redirect to and the domain.
10.0.0.50 1xbet.com
10.0.0.50 thelotter.com
10.0.0.50 webmoneycasino.com
10.0.0.50 betworld.com
10.0.0.50 bosscasino.eu
10.0.0.50 sportingbull.com
But I'm not able to figure out the correct syntax for a wildcard domain
redirection.
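Conceptually, I'm trying to express something like the rough sketch below using the
policy module; I'm only guessing that policy.ANSWER is the right tool and have not
managed to make it work, hence the question:

-- guess/sketch: answer anything under example.com with a fixed A record
policy.add(policy.suffix(
    policy.ANSWER({ [kres.type.A] = { rdata = kres.str2ip('10.0.0.50'), ttl = 300 } }),
    policy.todnames({'example.com'})
))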
Best regards,
--
Smil Milan Jeskyňka Kazatel
hello,
is there a way to output metrics about requests sent to upstreams, and their details, in the Prometheus /metrics output?
I've been trying to find info and there seems to be no documentation about that functionality.
the sources mention a dedicated /upstreams endpoint (https://gitlab.nic.cz/knot/knot-resolver/-/blob/master/modules/http/prometh…), but /upstreams returns an empty list.
I'm currently running this config:
modules = {
    'hints > iterate',
    'stats',
    'predict',
    'http',
}

net.listen('0.0.0.0', 53, { kind = 'dns' })
net.listen('0.0.0.0', 9053, { kind = 'webmgmt' })

cache.size = 256 * MB
cache.storage = "lmdb:///dev/shm/knot-resolver"

policy.add(
    policy.all(
        policy.TLS_FORWARD({
            {'1.1.1.1', hostname='cloudflare-dns.com' },
            {'1.0.0.1', hostname='cloudflare-dns.com' },
        })
    )
)
is there a way to get the upstreams info?
running modules.load('stats') and then stats.upstreams() from the 'runtime' configuration returns upstream request details, as described here: https://knot-resolver.readthedocs.io/en/stable/modules-stats.html
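for reference, this is all I currently run against the live daemon (nothing beyond the two calls mentioned above):

modules.load('stats')   -- make sure the stats module is loaded
stats.upstreams()       -- returns a table of per-upstream request details in the runtime console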
thanks
Apologies if this has been asked before, but I was unable to find
informative resources about this topic except this[1].
What are the downsides of having a recursive DNS server in front of an
authoritative DNS server? I'm wondering whether all the points listed in the
linked article are relevant for small-scale installations.
Is anyone running such a setup who can share some advice with regard to
rate limiting?
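To make the question concrete, by "such a setup" I mean roughly the following kresd
configuration, where the resolver faces the clients and queries for our own zone go
straight to a hidden authoritative server (only a sketch; the zone name and address
are placeholders):

-- placeholder zone and authoritative server address
policy.add(policy.suffix(
    policy.STUB({'192.0.2.1'}),
    policy.todnames({'example.org'})
))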
[1]: https://www.whalebone.io/separate-dns-servers/
--
Alex JOST
Hey list,
new here. Could someone please try to explain to me what's better about the
new algorithm for choosing nameservers? I feel like it has totally broken my use
case.
I use knot-resolver as a local resolver and have configured this:
acme = policy.todnames({'acme.cz', 'acme2.cz'})
policy.add(policy.suffix(policy.FLAGS({'NO_CACHE'}), acme))
policy.add(policy.suffix(policy.STUB({'172.16.21.93', '172.16.21.94', '8.8.8.8'}), acme))
Until the "better" algorithm, it worked exactly as I wanted it to. When I was in
the network where the 172.16.21.9{3,4} DNS servers were available, they
were selected, and when they were not available, Google DNS was used to
resolve those domains.
Now, even when the internal nameservers are available, they are rarely used:
$ for i in `seq 1 20`; do dig intranet.acme.cz +short; done
193.165.208.153
172.16.21.1
172.16.21.1
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
$ for i in `seq 1 20`; do dig intranet.acme.cz +short; done
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
193.165.208.153
172.16.21.1
193.165.208.153
When I remove the Google DNS server and leave just the 172... ones:
# systemctl restart kresd@{1..4}.service && for i in `seq 1 20`; do dig intranet.acme.cz +short; done
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
172.16.21.1
Can I somehow switch back to the old algorithm via configuration?
Thanks
Josef