This is roughly how the system allocator behaves, and there's not much we can do about it (we call free() and the memory is then out of our hands). Well, we can (and hopefully will), but it requires writing and using allocators tailored closely to what we need from them; the OS then behaves a bit better, because it deals only with used/unused pages instead of a zillion tiny allocations going through the generic allocator. It's quite a complex topic, but there's a lot more to come on memory consumption in future releases. Maybe I can even sneak something into the final release, cheeky.
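
To illustrate what I mean, here is a toy sketch (not Knot code, and assuming Linux/glibc; the sizes are made up for the example): freeing a zillion tiny blocks obtained from the generic allocator usually doesn't shrink the footprint the OS sees, while a page-backed pool obtained with mmap() can be handed back in a single munmap().

#define _DEFAULT_SOURCE
#include <stdlib.h>
#include <sys/mman.h>

#define N     (1 << 18)   /* number of tiny allocations (made-up figure) */
#define BLOCK 64          /* size of each allocation in bytes */

int main(void)
{
    /* 1) Zillions of tiny allocations through the generic allocator. */
    void **small = malloc(N * sizeof(void *));
    if (small == NULL)
        return 1;
    for (size_t i = 0; i < N; i++)
        small[i] = malloc(BLOCK);
    for (size_t i = 0; i < N; i++)
        free(small[i]);
    free(small);
    /* The heap pages backing these blocks typically stay mapped, so the
     * memory the OS sees as "used" barely drops after all the free() calls. */

    /* 2) One page-backed pool of the same total size. */
    size_t pool_size = (size_t)N * BLOCK;
    void *pool = mmap(NULL, pool_size, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (pool != MAP_FAILED)
        munmap(pool, pool_size);   /* whole pages go back to the OS at once */

    return 0;
}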

In any case, we are going to make the measurement tool and include it with the sources later on. I agree it's quite a useful idea, thanks!
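
In the meantime, here is a back-of-the-envelope sketch of what such a tool could do (this is not the real tool; the ~350 bytes per record is just an assumption derived from your 3.9 million records / 1.3 GB observation). It crudely counts every non-blank, non-comment line of the zone file as one record, so multi-line records and $INCLUDE are not handled.

#include <stdio.h>
#include <string.h>

/* Assumed average resident cost per record; NOT a Knot constant, just a
 * figure derived from the 3.9M-record / 1.3 GB observation in this thread. */
#define BYTES_PER_RECORD 350

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s ZONEFILE\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "r");
    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    char line[65536];
    unsigned long records = 0;
    while (fgets(line, sizeof(line), f) != NULL) {
        const char *p = line + strspn(line, " \t");
        /* Crude: count every non-blank, non-comment line as one record. */
        if (*p != '\0' && *p != '\n' && *p != '\r' && *p != ';')
            records++;
    }
    fclose(f);
    printf("%lu records, roughly %.0f MB of memory\n",
           records, records * (double)BYTES_PER_RECORD / (1024.0 * 1024.0));
    return 0;
}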

Kind regards,
Marek

On 7 June 2013 15:28, Anand Buddhdev <anandb@ripe.net> wrote:
On 07/06/2013 15:12, Marek Vavruša wrote:

Hi Marek,

Thanks for your explanation. I do understand that the virtual memory
usage will of course be higher than actual memory usage. I also notice
that not all memory is instantly freed when zones are removed. For
example, when I load an unsigned zone with about 3.9 million records, it
makes knot use 1.3 GB of RAM. If I then remove the zone from the config
file and reload knot, the RAM usage only drops down to 1.1 GB. If I then
add the zone back, and reload knot, the memory usage climbs as knot
loads the zone. It hits 2.1 GB, and then when the zone is fully loaded,
the memory drops to 1.9 GB. So now there appears to be another 600 MB of
RAM in use, probably held in unfreed resources. I don't know whether
knot should immediately release memory, or whether it does lazy
cleanups, so this may be a feature or a bug :)

Coming back to doing estimates: yes, a tool that makes an estimate based
on the zone file would be great. I asked for something similar for NSD4,
and Wouter has written such a tool. It's very useful for getting some
idea of how much extra memory I will need per zone, and what NSD's final
usage will be (of course it will use some more in practice, but that's
understood).

Regards,

Anand

> Hi Anand,
>
> the estimate highly depends on the zone contents of course, but it seems
> like a good upper bound.
> I remember measuring a roughly 500 MB (signed) zone and it also yielded
> about 1.3G, but the virtual memory was at about 2G due to how memory
> allocation works. So it's not entirely clear what essentially IS used
> memory and what is just waste or kept around.
>
> The best course of action would be to write a tool that is able to make
> a better estimate based on the zone file; would that be okay?