Hi Benno,

On 15/03/2018 14:01, Benno Overeinder wrote:
>> I'm assuming you're migrating from BIND. I use BIND, Knot and NSD,
>> and in my experience, BIND uses the least amount of RAM, and NSD
>> uses the most. Knot is somewhere in-between.
>
> Is this still the case with the recent NSD 4.1.18 and later, which
> has improved memory usage, reducing it by 18%, and even by 35% with
> an aggressive option?
>
> It is an honest question, as we did not compare the memory usage of
> NSD, Knot and BIND.
You're right. I didn't check NSD's memory usage before replying. I've
just checked on 3 servers in our AS 197000 cluster, configured
identically with the same slave zones, and this is what I see:
BIND 9.11.2-P1:
# grep VmSize /proc/22365/status
VmSize: 10958428 kB
Knot 2.3.4:
# grep VmSize /proc/40430/status
VmSize: 15320232 kB
NSD 4.1.20:
# grep VmSize /proc/22400/status
VmSize: 12668536 kB
Another comparison in a K-root cluster shows (where the zones are very
small, so the process's own overhead accounts for most of the memory):
BIND 9.11.2-P1:
# grep VmSize /proc/20454/status
VmSize: 1597288 kB
Knot 2.3.4:
# grep VmSize /proc/48698/status
VmSize: 3456580 kB
NSD 4.1.20:
# grep VmSize /proc/8672/status
VmSize: 363008 kB
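For anyone who wants to repeat this quick check, a one-liner along
these lines should work (just a sketch: it assumes the daemons run
under their default process names, and it samples only one process per
daemon, as I did above):

# for d in named knotd nsd; do echo -n "$d: "; grep VmSize /proc/$(pgrep -xo $d)/status; done

Note that VmSize is the virtual size; VmRSS in the same status file
shows resident memory, which can differ.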
So this very simple examination suggests that NSD does indeed use less
memory than I claimed. With lots of zones loaded, it uses somewhat
more memory than BIND, but less than Knot. If the number of zones or
their sizes are small, then NSD is the leanest of the three.
I will also note that we are NOT using the "--enable-packed" option when
compiling NSD. It might give us further memory savings, but we aren't
sure the saving would be worth any performance degradation from
misaligned reads (just a conjecture; I haven't done any testing myself).
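If we ever do test it, it would just be a matter of rebuilding with the
flag (untested on our side):

# ./configure --enable-packed
# make && make install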
Regards,
Anand