Hi,
I have a quick update about the upstream package repositories in
Open Build Service (OBS) for Knot DNS.
New repositories
----------------
- CentOS_7_EPEL - aarch64
- Fedora_30 - x86_64, armv7l, aarch64
- xUbuntu_19.04 - x86_64
- Debian_Next - x86_64 (Debian unstable rolling release)
New repositories for Debian 10 and CentOS 8 should be available shortly
after these distros are released, depending on their buildroot
availability in OBS.
Deprecated repositories
-----------------------
- Arch - x86_64
Due to many issues with Arch packaging in OBS (invalid package size,
incorrect signatures) and the fast pace of Arch updates, please consider
this repository deprecated in favor of the knot package in the Arch
Community repo [1]. The Arch OBS repository will most likely be removed
in the future.
Also, please note I'll be periodically deleting repositories for distros
that reach their official end of life. In the coming months, this
concerns Ubuntu 18.10 and Fedora 28.
[1] - https://www.archlinux.org/packages/community/x86_64/knot/
--
Tomas Krizek
PGP: 4A8B A48C 2AED 933B D495 C509 A1FB A5F7 EF8C 4869
Hello Knot team
I have just noticed that there are no more Debian packages for Debian 8 (Jessie) at https://deb.knot-dns.cz/knot/dists/. Debian 8 is in LTS until June 30, 2020 (see https://wiki.debian.org/LTS). Is the dropping of packages for Jessie intended or not? (I plan to migrate this summer, but for now it would be good to still get security fixes for Knot.)
Might this be related to the PGP key revocation?
Danilo
I have to integrate Knot into a framework which will test for the
presence of a running instance of the Knot daemon by means of 'knotc
status'. This works fine once knotd has loaded all the zones. However,
when launching knotd there is a window of opportunity during which
'knotc status' will fail, even though knotd is already running but is
still loading its zones.
Two questions:
1. Is this a correct description, or am I missing something here?
2. If it is correct, is there a way, using the Knot tools, to
find out whether knotd is already running or still loading zones?
I guess that, if it is just a matter of verifying that knotd is running,
using the 'ps' command would be simpler. I wonder, however, whether it can
be done with Knot tools alone.
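One Knot-tools-only approach that seems to fit this is to simply keep
retrying 'knotc status' until it answers. Below is only a rough shell
sketch of that idea; the ~30-second budget and the use of the default
control socket are my own assumptions, not anything Knot documents as
the canonical method:

#!/bin/sh
# Retry 'knotc status' until knotd answers on its control socket,
# or give up after roughly 30 seconds (arbitrary budget).
tries=30
while [ "$tries" -gt 0 ]; do
    if knotc status >/dev/null 2>&1; then
        echo "knotd is up and answering control commands"
        exit 0
    fi
    tries=$((tries - 1))
    sleep 1
done
echo "knotd did not become ready in time" >&2
exit 1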
Hi there!
At work, we make use of something often referred to as "weighted records",
a feature offered by many managed DNS vendors. We use it to implement
e.g. a canary environment, where we test changes on a small portion of
production traffic.
I played around with knot-dns for a bit (quite impressed), and figured
it might be nice to implement such a feature for it. Turns out, the code
is so amazing that if you abuse the infrastructure of the geoip module,
it only takes an hour (very impressed).
If you are interested in trying it, read on further below, but just to
state my intention: I was wondering if such a feature would be of
interest at all?
I realize the geoip module may not be the appropriate place for such an
implementation; consider this merely a demo of sorts.
So here goes nothing:
1. apply attached patch
2. config file snippet:
mod-geoip:
  - id: test
    config-file: /etc/knot/test.conf
    ttl: 600
    mode: weighted

zone:
  - domain: example.com.
    file: "/var/lib/knot/example.com.zone"
    module: mod-geoip/test
3. /etc/knot/test.conf:
lb.example.com:
  - weight: 10
    CNAME: www1.example.com.
  - weight: 5
    CNAME: www2.example.com.
Results in this:
conrad@deltree ~/hack/knot-dns $ for i in $(seq 1 100); do dig @192.168.1.242 A lb.example.com +short; done | sort | uniq -c
     68 www1.example.com.
     32 www2.example.com.
conrad@deltree ~/hack/knot-dns $ for i in $(seq 1 100); do dig @192.168.1.242 A lb.example.com +short; done | sort | uniq -c
     72 www1.example.com.
     28 www2.example.com.
conrad@deltree ~/hack/knot-dns $ for i in $(seq 1 100); do dig @192.168.1.242 A lb.example.com +short; done | sort | uniq -c
     66 www1.example.com.
     34 www2.example.com.
You get the idea. Anyway, any thoughts and feedback would be greatly
appreciated.
Thanks a lot,
Conrad
Hi @all
I was trying to migrate knot to a new server with Ubuntu 18.04 and knot
2.8.0. I managed this by just copying the content of /var/lib/knot/*
from the old to the new server.
One of my domains threw an error when starting knot:
knotd[5186]: error: [<zone>] zone event 'load' failed (malformed data)
keymgr throws the same error. I dumped the data out of the 2.7.6
installation with mdb_dump and imported it into the 2.8.0 installation,
which did not help.
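A minimal sketch of that dump/reload step, assuming the default KASP DB
location under /var/lib/knot/keys and mdb_load as the import tool (both
of which may differ on other installs):

mdb_dump -a /var/lib/knot/keys > kasp.dump    # on the old 2.7.6 server
mdb_load -f kasp.dump /var/lib/knot/keys      # on the new 2.8.0 server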
I then uninstalled 2.8.0 on the new server and installed 2.7.6, migrated
data from the old server with the same procedure and it worked, no errors.
Just out of curiosity, I then upgraded the new server from 2.7.6 to 2.8.0
to see what would happen: it was the exact same behaviour, the zone
couldn't load.
How can I proceed here? There is obviously something wrong if the
update breaks the loading of the zone. Is this on my end or a bug in Knot?
Thanks
Simon
I perform benchmarks with knot-dns as an authoritative server and dnsperf
as a workload client. The Knot server has 32 cores. Interrupts from the 10Gb
network card are spread across all 32 cores. Knot is configured with
64 udp-workers, and each Knot thread is assigned to one core, so there are at
least two Knot threads per core. Then I start dnsperf with the
command
./dnsperf -s 10.0.0.4 -d out -n 20 -c 103 -T 64 -t 500 -S 1 -q 1000 -D
htop on the Knot server shows 3-4 cores completely unused. When I restart
dnsperf, the set of unused cores changes.
What is the reason for the unused cores?
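In case it helps narrow this down, one quick way to see which CPU each
knotd thread is currently placed on (standard Linux procps ps is assumed
here, and 'knotd' as the process name):

# The psr column shows the CPU each knotd thread last ran on; repeat this
# while dnsperf is running to see whether the same cores really stay idle.
ps -eLo pid,tid,psr,pcpu,comm | grep knotd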
When we load the root zone file to verify the UDP and TCP performance of
Knot, the measured TCP performance is greater than the UDP performance.
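To make the comparison concrete, a hedged sketch of how the two transports
could be measured against the same server with dnsperf, assuming a dnsperf
release that has the -m transport option; the server address and query file
name are placeholders:

# Same query load over UDP and then over TCP; only -m changes between runs.
dnsperf -s 192.0.2.1 -d root-queries.txt -m udp -c 100 -T 16 -q 1000
dnsperf -s 192.0.2.1 -d root-queries.txt -m tcp -c 100 -T 16 -q 1000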
--
Best Regards!!
champion_xie