Hi Jan-Piet, Libor,
On Wed, Jan 29, 2025 at 08:49:06PM +0100, Jan-Piet Mens wrote:
> > - AXFR needs a static IP for master vs. nsupdate allows dynamic IP
>
> XFR from a primary (or secondary) needs a known address, correct, but
> directing an RFC2136 update to a server requires knowing that server's
> IP as well. Without specifying a target, the UPDATE will be sent to the
> SOA MNAME by default.

I should have explained my topology better. The nsupdate-based setup
looks like this:

    [Workstation] -RFC2136-> [Primary] -AXFR-> [Slaves...]

The tradeoffs I'm referring to are in that first hop. The Primary always
has a static IP, so that's no problem, but with AXFR between Workstation
and Primary the Workstation needs a static IP too. That's what I'm
avoiding by using nsupdate. Note that for auth I'd be using just TSIG
here, no IP ACL, or perhaps one that allows my entire /48 space.
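
For concreteness, that first hop is just something like this (a sketch;
the server name, key file and records are all made up):

    $ nsupdate -k /etc/dns/workstation.tsig <<'EOF'
    server primary.example.net.
    zone example.com.
    update delete www.example.com. A
    update add www.example.com. 3600 IN A 192.0.2.80
    send
    EOF

The Workstation can sit behind any dynamic address; the TSIG key is what
authenticates it.
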
> To me this is comparing apples and slices of chocolate :)

I wouldn't mind both an apple and some chocolate right now; all I have
is bananas on the long journey to FOSDEM :-)

> > - AXFR needs DNSSEC keys on machine keeping zonefiles vs. nsupdate
> >   can delegate signing to server (but doesn't have to)
>
> DNSSEC keys must exist on the signer, typically a primary; it's
> independent of whether that will be an XFR source or an UPDATE target.
> The signer needs the keys. nsupdate doesn't "delegate signing"; the
> update target will sign if the zone is signed.

Right, here I was confused. If I accept the "spin up a temporary knot"
approach, this is also possible with AXFR. Perhaps my point should have
been this: the nsupdate approach is (also) flexible with regard to who
the signer is.
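
In Knot terms, "the update target signs" is, as far as I can tell, just
the usual combination of an update ACL and policy-based signing on the
primary. A rough sketch (untested; key and zone names invented):

    key:
      - id: workstation-key
        algorithm: hmac-sha256
        secret: ...                  # base64 secret goes here

    acl:
      - id: workstation-update
        key: workstation-key
        action: update

    zone:
      - domain: example.com
        acl: workstation-update
        dnssec-signing: on
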
> > - AXFR makes multi-master complicated^TM - nsupdate doesn't care as
> >   long as SOA is increasing.
>
> This sounds to me as though you intend sending UPDATEs to each of your
> secondaries individually. Good luck doing that and keeping the result
> consistent, particularly if you're signing.

No, I'm intending to add a second primary (OK, technically a first-level
secondary if you're thinking in AXFR terms) if the basic setup works.
I'm still experimenting with that. Of course signing is going to make
things complicated. Perhaps that will make having keys on the
Workstation preferable to dealing with it, or I'll just drop the
multi-master idea. We'll see.

> > Consider a setup where zone files are in git and I want to push DNS
> > updates in a gitops'y way. I could in principle set up a knot AXFR
> > master as part of the CI build, perhaps along with some VPN to
> > satisfy the static IP requirement, send a NOTIFY, parse logs to
> > figure out if all slaves went and got them, and tear down the build,
> > but it's a major hassle.
>
> Indeed. That's why XFR was invented.

I don't think I get your meaning? If XFR was invented for this
particular use-case, I sure hope the setup wouldn't need to be as
convoluted and difficult as I'm describing :-)
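
Whereas the nsupdate variant of that CI job collapses to roughly this
(a sketch; the key file variable and batch file are made up, and
generating the batch from the git diff is left as an exercise):

    #!/bin/sh
    # CI deploy step: push the committed zonefile change as an UPDATE.
    # batch.upd holds the usual server/zone/update/send lines, derived
    # from the diff between the deployed and the committed zonefile.
    knsupdate -k "$CI_TSIG_KEYFILE" batch.upd
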
Libor,
On Thu, Jan 30, 2025 at 08:46:22AM +0100, Libor Peltan wrote:
> How I understand the situation is that Daniel's primary server (which
> is probably also a signer) is also a catalog consumer.

Correct.

> And the zone contents are probably derived from some database (or
> something), and the question is how to fill them into the primary, if
> zone files are not desired for any reason.

No database (or something), just zonefiles on my workstation(s). The
tricky bit is that I switch between working on my laptop and desktop
machines. That's my personal use-case, but you can just as well imagine
an organizational use-case where different people should be able to work
on the same zonefiles, synced via git instead of syncthing (which I
use). Note that in the git-based scenario you could use CI automation
for deploying, or people could just run nsupdate on their machines.
Doesn't matter, which is a nice property of this setup IMO.

> Point is that DDNS was always (and this is not limited to Knot DNS)
> intended as an _update_ facility -- modifying an existing zone. It does
> not support bootstrapping, when the zone has not yet been loaded from a
> zone file (or AXFRed from anywhere).

Right.

> What I would recommend is to use a generic zone file (which uses "@" in
> all places where the zone name would be) with dummy contents, so that
> the newly created (by catalog) zone is loaded with something, and after
> that fill in the right contents with DDNS.

I had considered the basic idea of using a dummy zonefile and discarded
it as janky :-P. My approach would have been a cronjob that parses the
catalog zone, but your approach of using just a generic zonefile makes
this significantly more appealing.
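
For anyone finding this thread later: my understanding is that such a
generic file boils down to something like the following (a sketch; the
attachment is authoritative):

    $TTL 3600
    @   IN SOA @ @ 1 21600 3600 604800 3600
    @   IN NS  @

Because every name is "@", the same file can serve as the initial
contents of any member zone the catalog creates.
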
I see one problem: say I have secondaries where the relevant zone
already exists, and now I redeploy the primary (after, say, the machine
goes up in flames). For a short while the secondaries will see the dummy
zone's serial. If I'm unlucky and the serial math works out badly,
whatever serial the secondaries have right now could conceivably compare
lower than whatever is in the dummy zone (remember: RFC1982), causing
them to replace their good contents with the dummy zone and escalating
an otherwise harmless hidden-primary outage significantly.
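
To spell out the serial math: per RFC1982, serial a is older than serial
b when (a - b) mod 2^32 is greater than 2^31. Say the secondaries hold
serial 4000000000 and the dummy zone ships with serial 1:

    (4000000000 - 1) mod 2^32 = 3999999999 > 2^31 = 2147483648

so 4000000000 compares as *older* than 1, and the secondaries would
happily transfer the dummy zone.
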
Now that scenario is impossible in my lab as I use date-based serials,
but I'd (ideally) like this setup to also be applicable to
high-update-rate zones where those won't work.

> In the attachment, you can find an example of a generic zone file (it
> can be much shorter, Knot might even forgive not including any apex
> NS). With that, the catalog member template can be configured along the
> following ideas:

Thanks! I'm going to use this to carry on my experiments, but I do think
an option to allow UPDATE to create zones (if they are in the catalog
already) would still be useful.
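
For the archives, the template I'm now experimenting with looks roughly
like this (untested, names invented; the catalog zone itself still needs
its usual remote/master configuration):

    template:
      - id: catz-member
        file: generic.zone              # the "@"-only dummy contents
        zonefile-load: difference-no-serial
        zonefile-sync: -1               # never write the dummy back out
        journal-content: all            # live zone data stays in the journal
        dnssec-signing: on
        acl: workstation-update

    zone:
      - domain: catz.example.
        catalog-role: interpret
        catalog-template: catz-member
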
> By the way, I would not recommend using SOA MNAME in 2025; better to
> always know the IP of the specific server you need to update.

The server has IP ACLs and requires TSIG for UPDATEs, so I don't see why
it matters where nsupdate gets the server name from. I know knsupdate
doesn't support this; I'd probably still script that with (k)dig if
needed ;-)

I'm not as well-read in the more recent DNS RFCs as I perhaps should be
(did anything important happen since 1987 anyway? ;-P). Is there a good
reason that I'm not seeing?
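
That is, something like (a sketch):

    # the first field of the SOA RDATA is the MNAME
    MNAME=$(kdig +short SOA example.com. | awk '{print $1}')
    knsupdate -k my.tsig <<EOF
    server $MNAME
    zone example.com.
    update add test.example.com. 60 TXT "hello"
    send
    EOF
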
Thanks,
--Daniel