Hi there,
While trying to understand the algorithm employed in the
`find_best_view` function in the geoip module, I started wondering
whether this line is there intentionally:
https://gitlab.labs.nic.cz/knot/knot-dns/blob/4015475b0d3e11c0bd6fcac8aceb6…
I am still trying to understand how this works with the actual geo data,
but here is a test case using the subnet mode that yields slightly
surprising results:
Using a geoip config like this for zone example.com:
bar.example.com:
  - net: 127.0.0.0/8
    A: 9.9.9.9
  - net: 192.0.0.0/8
    A: 1.1.1.1
  - net: 192.168.0.0/16
    A: 4.4.4.4
  - net: 192.168.1.0/24
    A: 8.8.8.8
If I query bar.example.com from 192.168.1.X, I get 4.4.4.4, which is
surprising because it is neither the most nor the least specific match.
The binary search returns the most specific one (8.8.8.8), which is
roughly what I would expect. However, the above line then immediately
takes the `prev` item without checking `view_strictly_in_view`. Without
that line, the whole function returns the most specific item, as I would
expect.
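To make my reading concrete, here is a minimal Python sketch of what I
think happens in subnet mode. This is my own reconstruction, not the
actual C code, and the sort order is my assumption:

import ipaddress
from bisect import bisect_right

# Views sorted by network address, then prefix length (my assumption
# about the module's ordering in subnet mode).
views = sorted(
    [(ipaddress.ip_network("127.0.0.0/8"),    "9.9.9.9"),
     (ipaddress.ip_network("192.0.0.0/8"),    "1.1.1.1"),
     (ipaddress.ip_network("192.168.0.0/16"), "4.4.4.4"),
     (ipaddress.ip_network("192.168.1.0/24"), "8.8.8.8")],
    key=lambda v: (int(v[0].network_address), v[0].prefixlen),
)

addr = int(ipaddress.ip_address("192.168.1.5"))
starts = [int(net.network_address) for net, _ in views]

# The binary search lands just past the most specific matching view:
i = bisect_right(starts, addr)
print(views[i - 1][1])  # -> 8.8.8.8, the most specific match

# Taking `prev` unconditionally, without the view_strictly_in_view
# check, yields the enclosing /16 instead:
print(views[i - 2][1])  # -> 4.4.4.4, which is what I actually observe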
Please note that this is mostly an intuition at the moment, as I have
not yet had time to set up a similar test case with real geo data (which
uses the same algorithm). But I figured that someone more familiar with
the code might have enough context to tell whether this is correct, and
what a suitable fix might look like.
Thanks a bunch,
Conrad
Hello Knot team,

I have just noticed that there are no more Debian packages for Debian 8 (Jessie) at https://deb.knot-dns.cz/knot/dists/. Debian 8 is in LTS until June 30, 2020 (see https://wiki.debian.org/LTS). Is dropping the Jessie packages intentional? I plan to migrate this summer, but for now it would be good to still receive security fixes for Knot.

Might this be related to the PGP key revocation?
Danilo
I have to integrate Knot into a framework that will test for the
presence of a running instance of the Knot daemon by means of 'knotc
status'. This works fine once knotd has loaded all the zones. However,
when launching knotd there is a window of opportunity during which
'knotc status' fails, despite the fact that knotd is already running but
still loading its zones.
Two questions:
1. Is this a correct description, or am I missing something here?
2. If it is, is there a way, with the help of the Knot tools, to
find out whether knotd is fully up or still loading zones?
I guess that, if it were just a matter of verifying that knotd is
running, using the 'ps' command would be simpler. I wonder, however,
whether it can be done with the Knot tools alone.
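For concreteness, here is a rough Python sketch of the probe I have in
mind. It assumes that `knotc zone-status <zone>` exits non-zero while a
zone is not loaded yet, which is exactly the behaviour I am unsure
about:

import subprocess
import time

def knot_ready(zones):
    # A knotc invocation succeeds (exit code 0) only if knotd answers
    # on the control socket.
    def ok(*cmd):
        return subprocess.run(["knotc", *cmd],
                              stdout=subprocess.DEVNULL,
                              stderr=subprocess.DEVNULL).returncode == 0
    # Assumption: zone-status fails for a zone that is still loading.
    return ok("status") and all(ok("zone-status", z) for z in zones)

def wait_for_knotd(zones, timeout=60.0):
    deadline = time.time() + timeout
    while time.time() < deadline:
        if knot_ready(zones):
            return True
        time.sleep(0.5)
    return False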
Hi there!
At work, we make use of something often referred to as "weighted
records", a feature offered by many managed DNS vendors. We use it to
implement, e.g., a canary environment, where we test changes on a small
portion of production traffic.
I played around with knot-dns for a bit (quite impressed), and figured
it might be nice to implement such a feature for it. Turns out, the code
is so amazing that if you abuse the infrastructure of the geoip module,
it only takes an hour (very impressed).
If you are interested in trying it, read on below, but just to state my
intention up front: I was wondering whether such a feature would be of
interest at all.
I realize the geoip module may not be the appropriate place for such an
implementation; consider this merely a demo of sorts.
So here goes nothing:
1. apply attached patch
2. config file snippet:
mod-geoip:
  - id: test
    config-file: /etc/knot/test.conf
    ttl: 600
    mode: weighted

zone:
  - domain: example.com.
    file: "/var/lib/knot/example.com.zone"
    module: mod-geoip/test
3. /etc/knot/test.conf:
lb.example.com:
  - weight: 10
    CNAME: www1.example.com.
  - weight: 5
    CNAME: www2.example.com.
Results in this:
conrad@deltree ~/hack/knot-dns $ for i in $(seq 1 100); do dig @192.168.1.242 A lb.example.com +short; done | sort | uniq -c
     68 www1.example.com.
     32 www2.example.com.
conrad@deltree ~/hack/knot-dns $ for i in $(seq 1 100); do dig @192.168.1.242 A lb.example.com +short; done | sort | uniq -c
     72 www1.example.com.
     28 www2.example.com.
conrad@deltree ~/hack/knot-dns $ for i in $(seq 1 100); do dig @192.168.1.242 A lb.example.com +short; done | sort | uniq -c
     66 www1.example.com.
     34 www2.example.com.
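For what it's worth, with weights 10 and 5 the expected split is 2:1,
i.e. roughly 67/33, which matches the numbers above. The selection
itself is just a weighted random choice; here is a minimal Python sketch
of the idea (not the actual patch):

import random
from collections import Counter

# Answer each query with one record, chosen with probability
# proportional to its weight.
records = [("www1.example.com.", 10), ("www2.example.com.", 5)]

def pick(records):
    names, weights = zip(*records)
    return random.choices(names, weights=weights, k=1)[0]

print(Counter(pick(records) for _ in range(100)))
# e.g. Counter({'www1.example.com.': 66, 'www2.example.com.': 34})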
You get the idea. Anyway, any thoughts and feedback would be greatly
appreciated.
Thanks a lot,
Conrad
Hi @all
I was trying to migrate Knot to a new server with Ubuntu 18.04 and Knot
2.8.0. I managed this by just copying the contents of /var/lib/knot/*
from the old server to the new one.
One of my domains threw an error when starting knot:
knotd[5186]: error: [<zone>] zone event 'load' failed (malformed data)
keymgr throws the same error. I dumped the data out of the 2.7.6
installation with mdb_dump and imported it into the 2.8.0 installation,
which did not help.
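For reference, the dump/import I did looked roughly like this (paths are
from my setup, where the KASP database lives under /var/lib/knot/keys;
the dump filename is just illustrative):

mdb_dump -a /var/lib/knot/keys > keys.dump    # on the old 2.7.6 server
mdb_load -f keys.dump /var/lib/knot/keys      # on the new 2.8.0 server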
I then uninstalled 2.8.0 on the new server and installed 2.7.6, migrated
the data from the old server with the same procedure, and it worked: no
errors.
Just out of curiosity, I then upgraded the new server from 2.7.6 to
2.8.0 to see what would happen: the behaviour was exactly the same, the
zone couldn't load.

How can I proceed here? Something is obviously wrong when the update
breaks loading of the zone. Is this on my end or a bug in Knot?
Thanks
Simon
I am performing benchmarks with Knot DNS as an authoritative server and
dnsperf as the workload client. The Knot server has 32 cores. Interrupts
from the 10Gb network card are spread across all 32 cores. Knot is
configured with 64 udp-workers, each thread assigned to one core, so
there are at least two Knot threads assigned to each core. Then I start
dnsperf with the command:
./dnsperf -s 10.0.0.4 -d out -n 20 -c 103 -T 64 -t 500 -S 1 -q 1000 -D
htop on the Knot server shows 3-4 cores completely unused. When I
restart dnsperf, the set of unused cores changes.

What is the reason for the unused cores?
When we load the root zone file to test the UDP and TCP performance of
Knot, the measured TCP performance is greater than the UDP performance.
--
Best Regards!!
champion_xie