Hi Vladimir,
I appreciate your response and it's great to know you validate by default.
I apologize for posting to the wrong list.
Best,
Henry
On Wed, Mar 12, 2025 at 9:14 AM Vladimír Čunát <vladimir.cunat(a)nic.cz>
wrote:
> Hello.
> On 10/03/2025 17.01, birgelee--- via knot-dns-users wrote:
>
> This ballot requires compliance with RFC 4035 (specifically an implementation of a "security-aware" resolver as defined in Section 4) and RFC 6840. To the best of my knowledge Knot would be a viable choice for conforming to this ballot, particularly since there is a reference to RFC 4035 in the config documentation and RFC 6840 specifies several key features of modern DNSSEC. Given the need for documentable compliance by CAs, a statement of intended support from the Knot team would be extremely helpful.
>
> This is about resolvers apparently, so we're slightly off-topic here, as
> we have a split knot-resolver-users(a)lists.nic.cz - but I expect this
> thread to be very short.
>
> Knot Resolver *does support* modern DNSSEC validation, as described in RFC
> 4035, 6840, and some others. And we validate by default, etc.
>
> --Vladimir
>
Hi all,
I know this topic has been dead for a bit, but I did want to specifically find out whether Knot is intended to be compliant with DNSSEC RFCs 4035 and 6840. I ask because I am a computer security researcher and I do a lot of work with the CA/Browser Forum. I recently proposed a draft ballot that would mandate that all publicly-trusted web CAs validate DNSSEC:
https://github.com/cabforum/servercert/pull/571
This ballot requires compliance with RFC 4035 (specifically an implementation of a "security-aware" resolver as defined in Section 4) and RFC 6840. To the best of my knowledge Knot would be a viable choice for conforming to this ballot, particularly since there is a reference to RFC 4035 in the config documentation and RFC 6840 specifies several key features of modern DNSSEC. Given the need for documentable compliance by CAs, a statement of intended support from the Knot team would be extremely helpful.
Best,
Henry
https://henrybirgelee.com/
is there any guidance on using mod-rrl on a public server with a
moderate load, say 6kqps? we have rtfm, and remain unsure of
what we are doing. we want cookies, and therefore need to turn
rrl on. but with it turned on, we seem to drop a *lot* of
replies, a lot.
mod-rrl:
  - id: default
    rate-limit: 200
    slip: 2
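For comparison, a sketch of a less aggressive configuration — the values and the whitelist prefix below are purely illustrative, not a tuning recommendation:

```
mod-rrl:
  - id: permissive
    rate-limit: 1000          # illustrative: much higher per-bucket rate
    slip: 2
    whitelist: [192.0.2.0/24] # example prefix exempted from limiting
```

The whitelist option exempts trusted resolvers from limiting entirely, which may be the simplest way to stop dropping legitimate replies while keeping RRL on.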
randy
I have static zones that are regenerated continually (every few
seconds), followed by a knotc zone-reload _zonename_.
Doing dig queries with ixfr= always resulted in a full AXFR until I enabled
the per-zone
zonefile-load: difference
We saw logs periodically like:
warning: [foo.bar.example.com.] failed to update zone file (operation
not permitted)
This is because the zone files are created by a different user than the one knotd runs as.
We are not doing automated signing nor dynamic updates.
This is with Knot 3.2.6.
1) Why would it try to write to the source zone file?
Anyways, we got rid of that warning with
zonefile-sync: -1
And the time between the zone-reload and serving the new zone data
improved too.
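Putting the pieces together, the per-zone combination described above would look something like this in knot.conf (the zone name is taken from the log excerpt and is a placeholder):

```
zone:
  - domain: foo.bar.example.com.
    zonefile-load: difference  # serve IXFR from computed differences
    zonefile-sync: -1          # never write back to the source zone file
```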
I noticed while experimenting with journal-db-max-size and
journal-max-size that the data.mdb was around 80% larger than the defined
size. As I removed and added records, the number of stored differences
shrank, and I couldn't do IXFRs from more than a few versions behind, which
I assume is the default journal-max-depth: 20 at work. I believe this means
the journaling was dropping versions and is not required to keep at least
20 versions (it is not a "minimum"). I am fine with that.
I was able to trigger an error like
error: [foo.bar.example.com.] zone event 'load' failed (not enough
space provided)
which I assume means that the journal, even after dropping old versions,
still doesn't have enough space for the most recent changes.
We are experimenting with increasing journal-max-size to work around
this.
It appears that the differences are fully handled in memory first before
being written to the journal file. I also see the journal file grow in
large jumps, never in small increments.
2) Since I have no need to reuse a journal file across a system restart or
for recovery — I am building new zone files frequently anyway — is there any
way to skip the journal file and have knotd do all of this in memory?
That way it would not be writing to the journal every few seconds while I
continue to offer outbound IXFR.
If not, maybe I will use memory file system for the journal files.
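If the journal must exist but can be volatile, one hypothetical setup is to point the journal database at a tmpfs-backed path; the paths below are examples, not recommendations:

```
database:
  storage: /var/lib/knot
  journal-db: /run/knot/journal  # assumed to be a tmpfs mount
```

Note the usual caveat with this sketch: after a reboot the journal history is gone, so secondaries that ask for an IXFR from an old serial would fall back to AXFR.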
By the way, this is for around ten zones, varying from around 1000
records to over 8 million records for the larger zones. They change
by anywhere from a few records every few seconds to maybe a thousand
records every 30 seconds.
Thanks
I am testing IXFR against servers I did not install and do not have easy
access to. version.server. says it is 3.2.6. I know there have been IXFR
changes since then, per the NEWS file and the git log. I don't see the
same behavior on my own systems, but they run different versions and are
also configured differently.
The knot.conf zones are not configured with "zonefile-load: difference",
and the response effectively contains the entire zone, as if it were an
AXFR rather than the changes. If I pass an IXFR SOA serial equal to the
latest, it returns no changes (the answer has only the SOA with the same serial).
I used dnspython to output the responses from IXFR queries (an IXFR
question with an SOA record in the authority section carrying the serial).
I noticed the output stops abruptly where "dig" does not.
So I used tcpdump many times to compare knot, named, and my other knot.
I found an odd behavior in this knot 3.2.6 response which dig ignores
but which makes my dnspython fail.
After the expected record it has:
1) an OPT record with the requestor's payload size (class 1232), EDNS rcode
and flags (an all-zeros TTL), then a 0x0000 RDLENGTH and an empty RDATA field.
2) then 28 bytes I don't understand such as:
40 11be dc80 0000 0101 fa00 0000 01
or
40 20be dc80 0000 0102 0300 0000 01
or
40 0fe1 6a81 0000 0102 0500 0000 00
or
12 8de1 6a81 0000 0100 9200 0000 00
3) then an IXFR record:
following other labels ...
0363 6f6d 00 — three characters "com" and end of domain
00 fb — IXFR record type
00 01 — IN (Internet) class
and then it ends there, with NO TTL, RDLENGTH, nor RDATA.
4) followed by the next label length, label ... etc. with rrtype, class,
TTL, RDLENGTH, RDATA and so on.
This pattern — the odd OPT record, the bytes I don't understand, and the
partial/broken IXFR record — may be repeated a few times. I assume these
were interspersed where the IXFR's SOA records should be.
I couldn't find an RFC that suggests interspersing OPT or IXFR
records. I find it odd that an OPT record appears in my ANSWER section.
I find it odd that the IXFR record is incomplete. And I don't know what
the other bytes in between are.
Is this recognizable to anyone?
The IXFR works fine as seen with dig or when I use named as my
secondary, but I assume named is ignoring the junk parts too.
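One note on parsing captures by hand: over TCP, each DNS message in a transfer is preceded by a two-byte big-endian length field (RFC 1035, section 4.2.2), and a zone transfer may span several such messages, each with its own header, question section, and (with EDNS) OPT record. A minimal sketch of splitting a raw captured stream on those prefixes before parsing — `split_messages` is a hypothetical helper, not part of dnspython:

```python
# Split a raw TCP DNS stream into individual wire-format messages.
# Each message is preceded by a 2-byte big-endian length (RFC 1035, 4.2.2).
import struct

def split_messages(stream: bytes) -> list[bytes]:
    msgs = []
    off = 0
    while off + 2 <= len(stream):
        # Read the 2-byte length prefix, then take that many payload bytes.
        (length,) = struct.unpack_from("!H", stream, off)
        off += 2
        msgs.append(stream[off:off + length])
        off += length
    return msgs
```

If the mystery bytes line up with these boundaries, they may simply be the length prefix, header, and question section of the following message in the stream rather than records of the current one.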
Hi Knots,
I use catalog zones to sync the set of zones my (hidden) master and slaves
handle. I'm trying to stop messing with zone files on my master, instead
switching exclusively to nsupdate (along with Tony Finch's nsdiff).
In my testing it seems updating the zone after adding it via a catalog is
not possible:
$ knotc zone-status dxld.at
[dxld.at.] role: master | serial: - | catalog: dxld.catalog. | re-sign: +9D15h6m14s
Yet the update fails:
$ knsupdate -y $SECRET <<EOF
> server ns0.dxld.at.
> zone dxld.at.
> add dxld.at. 3600 IN SOA ns0.dxld.at. hostmaster.dxld.at. 1 2m 5m 1w 5m
> send
update failed: SERVFAIL
Nothing is logged with `logging: any: debug` except an "ACL, allowed, action
update" entry.
As soon as I create the zone on the server with zone{-begin,-set,-commit}
it starts working ofc. I guess this is just not supported, but is there a
good reason? I would find it quite convenient to do all my DNS ops over
port 53 without touching ssh ;-)
Thanks,
--Daniel
Hello!
I have an issue.
Knot is configured as a secondary server, and when receiving a zone, a
"trailing data" error occurs, preventing the zone from being loaded from
the primary server.
```
Jan 30 11:03:40 hostname knotd[5407]: info: [domain.com.] refresh, remote
50788646-db98-4caa-b26e-95b30a470796, address 1.2.3.4@53, failed (trailing
data)
```
The same warning appears when using the `kdig` utility:
```bash
kdig @1.2.3.4 domain.com AXFR > /tmp/domain.com
;; WARNING: malformed reply packet (trailing data)
;; WARNING: malformed reply packet (trailing data)
```
The issue occurs specifically with large zones. If the zone requires 2
messages to be received (e.g., `Received 32720 B (2 messages, 442
records)`), one warning appears. If it requires 3 messages (e.g., `Received
49083 B (3 messages, 878 records)`), two warnings appear.
However, if I place this zone (`/tmp/domain.com`) into `/var/lib/knot` and
then execute:
```bash
knotc reload
knotc zone-refresh domain.com
```
Knot successfully loads the zone.
Unfortunately, due to confidentiality, I cannot share the contents of the
zone. Additionally, I do not have precise information about the software
installed on the primary server. However, if BIND is used as the secondary
server, there are no issues. A regular `dig` command also does not return
any errors.
Is there any way to make Knot ignore the "trailing data" error and
successfully load the zone?
Thank you for your help!