We run 8-node Unbound clusters as recursive resolvers. The setup forwards internal queries (using forward-zone) to a separate PowerDNS authoritative cluster.
Recently, we’ve had some connectivity issues to Cloudflare (which provides a lot of external DNS services in our environment). When this happens, we’ve seen the requestlist balloon to around 1.5-2k entries as queries repeatedly time out.
The problem is that this affects forward-zones as well: we lose resolution for internal queries whenever the request list backs up like this.
We’re looking for suggestions on how to safeguard these internal forwards. We suspect stub-zone may be the more appropriate stanza for our use case, but are unsure whether that would bypass this requestlist queuing.
Any thoughts greatly welcome, thank you!
Our config is fairly simple:
server:
    num-threads: 4

    # Best performance is a "power of 2 close to the num-threads value"
    msg-cache-slabs: 4
    rrset-cache-slabs: 4
    infra-cache-slabs: 4
    key-cache-slabs: 4

    # Use 1.125GB of a 4GB node to start, but real usage may be 2.5x this, so
    # closer to 2.8GB/4GB (~70%)
    msg-cache-size: 384m
    # Should be 2x the msg cache
    rrset-cache-size: 768m

    # We have libevent! Use lots of ports.
    outgoing-range: 8192
    num-queries-per-thread: 4096

    # Use larger socket buffers for busy servers.
    so-rcvbuf: 8m
    so-sndbuf: 8m

    # Turn on port reuse
    so-reuseport: yes

    # This is needed to forward queries for private PTR records to upstream DNS servers
    unblock-lan-zones: yes

forward-zone:
    name: "int.domain.tld"
    forward-addr: "10.10.5.5"
    # No caching in Unbound
    forward-no-cache: yes
If you are "forwarding" to authoritative nameservers, then stub-zone is indeed the correct configuration, as it expects to send queries to authoritative nameservers rather than to resolvers. That won't help with this issue, though.
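For reference, a minimal stub-zone equivalent of the forward-zone above might look like this (a sketch only; the address is the PowerDNS authoritative server from the earlier config):

```
stub-zone:
    name: "int.domain.tld"
    # stub-addr points at an authoritative server, not a resolver
    stub-addr: 10.10.5.5
```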
In a situation where Unbound is overwhelmed with client queries, using 'forward-no-cache: yes' (or 'stub-no-cache: yes' if you use a stub-zone) does not help, since all of those queries need to be resolved.
Unbound tries to combat DoS from slow queries or high query rates by slowly filling the cache from fast queries; that eventually lowers the outgoing query rate and increases cache responses. (Cache responses do not contribute to the growth of the request list.)
In your case, where you don't cache the upstream information, Unbound cannot protect itself with cached answers, because all of the internal upstream queries need to be resolved every time.
I am guessing the queries to the configured upstreams are not slower than jostle-timeout, so they are not candidates to be dropped initially, but it doesn't help that each one of them always needs a full resolution.
I would first try 'forward-no-cache: no' (or 'stub-no-cache: no' if you switch to a stub-zone) and see if the situation gets better.
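Concretely, that just means letting Unbound cache the internal zone again; a sketch of the change (using the zone name and address from the earlier config):

```
forward-zone:
    name: "int.domain.tld"
    forward-addr: 10.10.5.5
    # 'no' is the default, i.e. the zone's answers are cached;
    # equivalently, drop the forward-no-cache line entirely
    forward-no-cache: no
```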
It would be possible to introduce a new configuration option per forward/stub zone to give some kind of priority, but I am unsure whether it would help in general, or in this case in particular.
Coming back to this, I notice that you have configured
num-queries-per-thread: 4096
and you say the highest you see the request list go is about 2K.
So dropping queries should not be the issue.
Maybe your CPU can't handle all of those recursion states, in which case lowering that number would also help?
Are you perhaps reaching memory limits, so that the system starts swapping?
Could you also provide some more information?
What is the query response time normally to your auth cluster?
The serve-expired options can be useful in upstream failure situations, but make sure you understand what the options are doing, because you will be serving expired answers.
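A minimal sketch of what that could look like (the TTL value here is illustrative, not a recommendation; check the unbound.conf(5) man page for your version before enabling):

```
server:
    # Allow answering from cache with records whose TTL has expired
    # when fresh data cannot be fetched
    serve-expired: yes
    # Limit how long past expiry a record may still be served (seconds)
    serve-expired-ttl: 3600
```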
Apologies for not CCing the list on my last response (it’s quoted at the end).
We have some further details about what’s happening. In our environment, the heaviest user of DNS tends to be rspamd (since we run a public mail service at hey.com) and similar automated processes.
At top-of-the-hour email load (when we ingest the most email), the rspamd workload to look up RBLs is typically at its highest. On a per-node basis, we generally go up to about 250 queries/s or so.
When any RBL nameserver starts to have issues, we seem to start queueing a lot of queries into the requestlist (up to 4k at a time per instance, it looks like, per my previous response).
The interesting thing is that Unbound seems quite conservative about returning SERVFAIL in these cases, and appears to keep trying for exactly 3 minutes before giving up and expunging these entries. We figured that out by correlating the requestlist count and exceeded metrics from unbound_exporter (Prometheus) with logged SERVFAIL responses.
The delta appears to reliably be 3 minutes from the start of the incident before it gives up. Here are some pool-level graphs showing this in detail for our two datacenters - https://cln.sh/kfzqNMZq
So our first question is whether there are any knobs we can use to bring this time down significantly? We would be happy to give up after 15 seconds (or even less!) to prioritize stability elsewhere.
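One candidate knob we are wondering about (assumption: this depends on the Unbound version, since discard-timeout only exists in relatively recent releases; check unbound.conf(5) for your build before relying on it) is capping how long Unbound keeps working on an unfinished recursion:

```
server:
    # Stop working on recursion requests that have waited this long;
    # value in milliseconds, so 15000 = the 15 seconds mentioned above
    discard-timeout: 15000
```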
Generally, we’re looking to safeguard Unbound from dropping unrelated queries when external nameservers that handle large query volumes have issues.