On 29 Jan 2025, at 23:39, oui.mages_0w--- via knot-resolver-users
<knot-resolver-users_at_lists_nic_cz_48qbhjm2vj0347_06aede6e(a)icloud.com> wrote:
Hi Martin,
No, VM3 is the second most used one. VM2 is the one closest to the most customers.
If VM3 is down, there is no problem (no noticeable increase in memory usage) on either of
the other ones, despite the increased queries/answers (which mostly go to VM1).
VM1 in yellow, VM2 in orange, VM3 in blue (shades in each color are the different
workers).
From 6pm to 7pm, VM3 was off.
<image0.jpeg>
Best Regards,
Gabriel ROUSSEAU
On 29 Jan 2025, at 20:36, Martin Doubrava
<martin_at_doubrava_net_48qbhjm2vj0347_3aafc352(a)icloud.com> wrote:
Hi Gabriel,
let me guess that VM3 is the one nearest to the customers? If you shut it down for a
while, does the memory leak appear on another VM?
--
Best regards
Bc. Martin Doubrava
CEO
_______________________________
DOUBRAVA.NET s.r.o.
Náklo 178, 783 32 Náklo
Mobile: +420 771 280 361
Technical support: +420 776 778 173
Office: +420 588 884 000
E-mail: martin(a)doubrava.net
WWW: http://www.doubrava.net
Find us on Facebook too: http://www.facebook.com/doubravanet
---------- Original e-mail ----------
From: oui.mages_0w--- via knot-resolver-users <knot-resolver-users(a)lists.nic.cz>
To: knot-resolver-users(a)lists.nic.cz
Cc: oui.mages_0w(a)icloud.com
Date: 29. 1. 2025 18:15:34
Subject: [knot-resolver-users] Re: Knot Resolver 6.0.10
Hello to all,
First, a big thank you to all the devs, who are doing a great job providing a high-quality
resolver.
I have 3 instances of Knot Resolver 6 running on different POPs, with a similar setup and
settings, for an ISP. They work in anycast; the VM closest to a customer is the one serving
them. If one or two VMs fail, the traffic is automatically anycasted to the remaining ones.
VMs are all on Ubuntu 24.04.1 LTS - Linux 6.8 - x86_64.
They all worked well until Knot Resolver 6.0.9, when one of the three started to leak
memory (curiously, only that one).
Reverting to Knot Resolver 6.0.8 solved the problem for that VM; the other two were fine
with Knot Resolver 6.0.9.
I have the exact same problem with Knot Resolver 6.0.10: two VMs are fine, and the same VM
still has the memory leak.
So I have one VM on 6.0.8 (with the apt package on hold) and the other two on 6.0.10.
I cannot find what is different.
The config in /etc/knot-resolver/config.yaml is nearly identical on all three. Only nsid
changes between the VMs, along with the management interface IPv6 (for Prometheus) and the
&private interface IPv6 (aaaa:aaaa:aaaa::1, aaaa:aaaa:aaaa::2 and aaaa:aaaa:aaaa::3):
rundir: /run/knot-resolver
workers: 4
nsid: vmname
management:
  interface: aaaa:aaaa:aaaa::1@8453
monitoring:
  enabled: always
cache:
  storage: /var/cache/knot-resolver
  size-max: 1843M
logging:
  level: notice
network:
  listen:
    # unencrypted private DNS on port 53
    - interface: &private
        - 127.0.0.1
        - aaaa:aaaa:aaaa::1
    # unencrypted public DNS on port 53
    - interface: &anycasted
        - 111.111.111.111
        - aaaa:aaaa:aaaa::44
        - aaaa:aaaa:aaaa::64
    # DNS over TLS on port 853
    - interface: *anycasted
      kind: dot
    # DNS over HTTPS on port 443
    - interface: *anycasted
      kind: doh2
  tls:
    cert-file: '/etc/knot-resolver/tls/dns.mydomain.com.fullchain.pem'
    key-file: '/etc/knot-resolver/tls/dns.mydomain.com.privkey.pem'
dns64: true
views:
  - subnets: ['0.0.0.0/0', '::/0']
    answer: refused
  - subnets: ['127.0.0.0/8', '123.123.123.0/22', '111.111.111.111/32']
    answer: allow
    options:
      dns64: false
  - subnets: ['::1/128', 'aaaa:aaaa::/32', 'bbbb:bbbb::/32']
    dst-subnet: aaaa:aaaa:aaaa::1
    answer: allow
    options:
      dns64: false
  - subnets: ['::1/128', 'aaaa:aaaa::/32', 'bbbb:bbbb::/32']
    dst-subnet: aaaa:aaaa:aaaa::44
    answer: allow
    options:
      dns64: false
  - subnets: ['::1/128', 'aaaa:aaaa::/32', 'bbbb:bbbb::/32']
    dst-subnet: aaaa:aaaa:aaaa::64
    answer: allow
forward:
  - subtree: 10.in-addr.arpa
    servers: [ 'aaaa:aaaa:ffff:ffff::1', '22.22.22.22' ]
    options:
      authoritative: true
      dnssec: false
  - subtree: mydomain.com
    servers: [ 'aaaa:aaaa:ffff:ffff::1', '22.22.22.22' ]
    options:
      authoritative: true
      dnssec: false
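To illustrate, here is roughly what changes on VM2, for example (a sketch based on the
description above; the nsid value below is just a placeholder):

nsid: vm2name            # placeholder name
management:
  interface: aaaa:aaaa:aaaa::2@8453
network:
  listen:
    # unencrypted private DNS on port 53
    - interface: &private
        - 127.0.0.1
        - aaaa:aaaa:aaaa::2

Everything else in the file is the same on all three VMs.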
Here is the memory behavior of the 3 VMs. Around 11am, I upgraded all three of them.
Clearly, VM3 behaves differently and its used RAM keeps increasing.
I finally reverted that VM to 6.0.8 (at the very end of the third graph) and all is
fine.
<VM1.png>
<VM2.png>
<VM3.png>
Any idea of what is going on?
What do you need to help diagnose the issue?
Thank you for your attention, and best regards,
Gabriel ROUSSEAU