Feb 27 11:58:11 VM3 asyncio[834]: Using selector: EpollSelector
Feb 27 11:58:11 VM3 knot_resolver.manager.server[834]: Loading configuration from '/etc/knot-resolver/config.yaml' file.
Feb 27 11:58:11 VM3 knot_resolver.manager.server[834]: Changing working directory to '/run/knot-resolver'.
Feb 27 11:58:11 VM3 knot_resolver.manager.logging[834]: Changing logging level to 'WARNING'
Feb 27 11:58:14 VM3 kresd[876]: [system] path = /run/knot-resolver/control/0
Feb 27 11:58:14 VM3 kresd[878]: [system] path = /run/knot-resolver/control/0
Feb 27 11:58:14 VM3 kresd[880]: [system] path = /run/knot-resolver/control/1
Feb 27 11:58:14 VM3 kresd[882]: [system] path = /run/knot-resolver/control/2
Feb 27 11:58:14 VM3 kresd[884]: [system] path = /run/knot-resolver/control/3
Feb 27 11:58:14 VM3 supervisord[679]: captured stdio output from cache-gc[886]: Knot Resolver Cache Garbage Collector, version 6.0.11
Feb 27 11:58:14 VM3 systemd[1]: Started knot-resolver.service - Knot Resolver Manager.
Feb 27 13:23:03 VM3 supervisord[679]: captured stdio output from cache-gc[886]: Usage: 80.00%
Feb 27 13:23:30 VM3 supervisord[679]: captured stdio output from cache-gc[886]: Cache analyzed in 26233 msecs, 5011945 records, limit category is 62.
Feb 27 13:23:56 VM3 supervisord[679]: captured stdio output from cache-gc[886]: 555518 records to be deleted using 26.66 MBytes of temporary memory, 0 records skipped due to memory limit.
Feb 27 13:24:10 VM3 supervisord[679]: captured stdio output from cache-gc[886]: Deleted 553725 records (1793 already gone) types
Feb 27 13:24:10 VM3 supervisord[679]: captured stdio output from cache-gc[886]: DS HTTPS AAAA SOA NSEC DNSKEY A TXT NSEC3 SRV NS PTR MX TYPE11 NAPTR TYPE65521 SSHFP
Feb 27 13:24:10 VM3 supervisord[679]: captured stdio output from cache-gc[886]: It took 14252 msecs, 5555 transactions (OK)
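To watch how the cache fill level evolves between these cache-gc runs, the manager's Prometheus endpoint (enabled in the config quoted later in the thread via `monitoring: enabled: always`) can be polled. A minimal sketch in Python, assuming the management address/port from that config and the manager's /metrics HTTP endpoint; exact metric names vary by version, so it simply filters cache-related series:

# Minimal sketch (not from the thread): poll the manager's Prometheus
# endpoint and print cache-related series, so cache usage can be
# correlated with the cache-gc journal lines above. The address/port
# come from the poster's `management:` section; adjust for your setup.
import time
import urllib.request

METRICS_URL = "http://[aaaa:aaaa:aaaa::1]:8453/metrics"  # management interface from config.yaml

while True:
    with urllib.request.urlopen(METRICS_URL) as resp:
        body = resp.read().decode()
    for line in body.splitlines():
        # keep only cache-related series; names depend on the version
        if "cache" in line and not line.startswith("#"):
            print(line)
    time.sleep(60)  # one sample per minute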
On 29 Jan 2025, at 23:39, oui.mages_0w--- via knot-resolver-users <knot-resolver-users_at_lists_nic_cz_48qbhjm2vj0347_06aede6e@icloud.com> wrote:

Hi Martin,

No, VM3 is the second most used one; VM2 is the one closest to most customers.

If VM3 is down, there is no problem (no noticeable increase in memory usage) on either of the other ones, despite the increased queries/answers (mostly going to VM1).

VM1 is in yellow, VM2 in orange, VM3 in blue (the shades within each color are the different workers). From 6pm to 7pm, VM3 was off.

<image0.jpeg>

Best Regards,
Gabriel ROUSSEAU

On 29 Jan 2025, at 20:36, Martin Doubrava <martin_at_doubrava_net_48qbhjm2vj0347_3aafc352@icloud.com> wrote:

Hi Gabriel,

let me guess that VM3 is the nearest one to the customers? If you turn it off for a while, will the memory leak occur on another VM?
--
Best regards
Bc. Martin Doubrava
CEO
_______________________________
DOUBRAVA.NET s.r.o.
Náklo 178, 783 32 Náklo
Mobile: +420 771 280 361
Technical support: +420 776 778 173
Office: +420 588 884 000
E-mail: martin@doubrava.net
WWW: http://www.doubrava.net
You can also find us on Facebook: http://www.facebook.com/doubravanet

Hello to all,

First, a big thank you to all the devs, who are doing a great job providing a high-quality resolver.

I have 3 instances of Knot Resolver 6 running on different POPs, with similar setups and settings, for an ISP. They work in anycast: the VM closest to a customer is the one serving it. If one or two VMs fail, the traffic is automatically anycasted to the remaining ones.

The VMs are all on Ubuntu 24.04.1 LTS - Linux 6.8 - x86_64.

They all worked well until Knot Resolver 6.0.9, when one of the 3 started to have memleaks (only one, curiously). Reverting to Knot Resolver 6.0.8 solved the problem for that VM; the other two were fine with 6.0.9. I have the exact same problem with Knot Resolver 6.0.10: two VMs are fine, and one VM still has the memleak. So I have one VM on 6.0.8 (with the apt package on hold) and the other two on 6.0.10.

I cannot find what is different. The config in /etc/knot-resolver/config.yaml is similar; only the nsid changes between the VMs, plus the management interface (for Prometheus) and the &private interface IPv6 (aaaa:aaaa:aaaa::1, aaaa:aaaa:aaaa::2 and aaaa:aaaa:aaaa::3):

rundir: /run/knot-resolver
workers: 4
nsid: vmname
management:
  interface: aaaa:aaaa:aaaa::1@8453
monitoring:
  enabled: always
cache:
  storage: /var/cache/knot-resolver
  size-max: 1843M
logging:
  level: notice
network:
  listen:
    # unencrypted private DNS on port 53
    - interface: &private
        - 127.0.0.1
        - aaaa:aaaa:aaaa::1
    # unencrypted public DNS on port 53
    - interface: &anycasted
        - 111.111.111.111
        - aaaa:aaaa:aaaa::44
        - aaaa:aaaa:aaaa::64
    # DNS over TLS on port 853
    - interface: *anycasted
      kind: dot
    # DNS over HTTPS on port 443
    - interface: *anycasted
      kind: doh2
  tls:
    cert-file: '/etc/knot-resolver/tls/dns.mydomain.com.fullchain.pem'
    key-file: '/etc/knot-resolver/tls/dns.mydomain.com.privkey.pem'
dns64: true
views:
  - subnets: ['0.0.0.0/0', '::/0']
    answer: refused
  - subnets: ['127.0.0.0/8', '123.123.123.0/22', '111.111.111.111/32']
    answer: allow
    options:
      dns64: false
  - subnets: ['::1/128', 'aaaa:aaaa::/32', 'bbbb:bbbb::/32']
    dst-subnet: aaaa:aaaa:aaaa::1
    answer: allow
    options:
      dns64: false
  - subnets: ['::1/128', 'aaaa:aaaa::/32', 'bbbb:bbbb::/32']
    dst-subnet: aaaa:aaaa:aaaa::44
    answer: allow
    options:
      dns64: false
  - subnets: ['::1/128', 'aaaa:aaaa::/32', 'bbbb:bbbb::/32']
    dst-subnet: aaaa:aaaa:aaaa::64
    answer: allow
forward:
  - subtree: 10.in-addr.arpa
    servers: [ 'aaaa:aaaa:ffff:ffff::1', '22.22.22.22' ]
    options:
      authoritative: true
      dnssec: false
  - subtree: mydomain.com
    servers: [ 'aaaa:aaaa:ffff:ffff::1', '22.22.22.22' ]
    options:
      authoritative: true
      dnssec: false

Here is the memory behavior of the 3 VMs. Around 11am, I upgraded all 3 of them. Clearly, VM3 behaves differently and its used RAM keeps increasing. I finally reverted to 6.0.8 for that VM (at the very end of the third graph) and all is fine.

<VM1.png>
<VM2.png>
<VM3.png>

Any idea of what is going on? What do you need to help diagnose the issue?

Thank you for your attention, and best regards,
Gabriel ROUSSEAU
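As for data that would help diagnose the issue: a per-worker RSS log over time usually shows whether one kresd worker or all of them are growing. A minimal sketch, assuming Linux /proc and worker processes named kresd as in the journal excerpt at the top of the thread:

# Minimal sketch (not from the thread): log the RSS of every kresd
# worker once a minute, so the leaking process and its growth rate can
# be identified across an upgrade window.
import time
from pathlib import Path

def kresd_rss_kib():
    """Return {pid: VmRSS in KiB} for all processes named kresd."""
    out = {}
    for status in Path("/proc").glob("[0-9]*/status"):
        try:
            text = status.read_text()
        except OSError:
            continue  # process exited while we were scanning
        fields = dict(
            line.split(":", 1) for line in text.splitlines() if ":" in line
        )
        if fields.get("Name", "").strip() == "kresd":
            rss = fields.get("VmRSS", "0 kB").split()[0]
            out[int(status.parent.name)] = int(rss)
    return out

while True:
    print(time.strftime("%F %T"), kresd_rss_kib())
    time.sleep(60)

Run on the affected VM before and after the upgrade, the output makes it easy to compare growth rates between 6.0.8 and 6.0.10 and to tell a single runaway worker apart from uniform growth across all four.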