Why is Unbound not like a `dig +trace`?

Hi,

I have a question about Unbound, though perhaps it will turn out to be a question about the DNS protocol in the end; I'm not sure.

First, you can try:

     dig +trace ac-versailles.fr. CAA

It works _every time_ (it's a real domain). Since it's a `dig +trace`, it is a series of non-recursive DNS requests that follow referrals from a root server down to... the right server. :slight_smile:

But now, with Unbound, I have noticed that the way it resolves a name is not like a `dig +trace`. Let me show you. I have tested Unbound version 1.19.2 from an up-to-date Ubuntu 24.04. Of course, I can test a more recent version (I will...). Here is my little configuration:

Hi,

> Of course, I can test a more recent version (I will...).

I have done the same test with Unbound 1.24.0 (compiled from source) and I observe the same behaviour. To resolve "in.ac-versailles.fr. CAA", Unbound makes this kind of request:

     dig +norecurse @a.ns.ac-versailles.fr. in.ac-versailles.fr. A

unlike a `dig +trace in.ac-versailles.fr. CAA`, which makes this kind of request:

     dig +norecurse @a.ns.ac-versailles.fr. in.ac-versailles.fr. CAA

I would be curious to have some explanations. Of course, I'm not saying that Unbound is wrong. It's just that, with my level of DNS knowledge (which is definitely not expert level), I expected Unbound to behave like a `dig +trace`.

Note: in each of my tests, I start Unbound with an empty cache. In other words, I start it up and the query I run is the first query that Unbound receives.

Bye.

After some searches I think I have the answer.

According to RFC 1034 (section 5.3.3, perhaps), nothing forces a recursive DNS resolver to behave like a `dig +trace`, end of story.
Is that about right?

At least I have learned how to compile Unbound from source and run it in a Docker container. :slight_smile:

Bye.

Hi François,

> After some searches I think I have the answer.
>
> According to RFC 1034 (section 5.3.3, perhaps), nothing forces a recursive DNS resolver to behave like a `dig +trace`, end of story.
> Is that about right?
>
> At least I have learned how to compile Unbound from source and run it in a Docker container. :slight_smile:

Bye.

What you are seeing is qname-minimisation [1] in action.
When Unbound does not yet know the delegation points in the DNS tree, it tries to discover them step by step without revealing more information than necessary to the parent domains.
The query type used while doing so is "A", as you have seen.

You can read more about qname minimisation in RFC 9156 [2].
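As a rough sketch (simplified; a real resolver also caches delegations and may skip steps, and RFC 9156 allows other query types than "A"), the minimised query sequence for your case looks like this:

```python
def minimised_queries(qname, qtype):
    """Yield (name, type) pairs in the order a qname-minimising
    resolver might send them -- a simplified model of RFC 9156.

    The resolver walks down the tree one label at a time, using a
    "safe" query type (here "A", as observed with Unbound) so that
    each parent zone only learns the labels it needs to hand out a
    referral; the real query type is only used for the full name.
    """
    labels = qname.rstrip(".").split(".")
    for i in range(len(labels) - 1, -1, -1):
        # fr. -> ac-versailles.fr. -> in.ac-versailles.fr.
        yield ".".join(labels[i:]) + ".", "A"
    yield qname.rstrip(".") + ".", qtype  # finally, the real question

for name, qtype in minimised_queries("in.ac-versailles.fr", "CAA"):
    print(name, qtype)
```

This is why you see "in.ac-versailles.fr. A" on the wire before any CAA query is ever sent.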

Best regards,
-- Yorgos

[1] https://unbound.docs.nlnetlabs.nl/en/latest/manpages/unbound.conf.html#unbound-conf-qname-minimisation

[2] https://www.rfc-editor.org/rfc/rfc9156

Hi Yorgos,

> What you are seeing is qname-minimisation [1] in action.
> When Unbound does not yet know the delegation points in the DNS tree, it tries to discover them step by step without revealing more information than necessary to the parent domains.
> The query type used while doing so is "A", as you have seen.
>
> You can read more about qname minimisation in RFC 9156 [2].
>
> Best regards,
> -- Yorgos
>
> [1] https://unbound.docs.nlnetlabs.nl/en/latest/manpages/unbound.conf.html#unbound-conf-qname-minimisation
>
> [2] https://www.rfc-editor.org/rfc/rfc9156

Ok, many thanks for your answer. So this feature is a way to protect my privacy. :slight_smile:

I have done my tests again and of course, as you say:

* with "qname-minimisation: yes" (the default), a `dig in.ac-versailles.fr CAA` fails (timeout).
* with "qname-minimisation: no", a `dig in.ac-versailles.fr CAA` works. \o/
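For reference, here is the fragment of unbound.conf I used for the second test (the option lives in the `server:` clause; the default is `yes`):

```
server:
    qname-minimisation: no
```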

That's really interesting. We learn something new every day with DNS. :slight_smile:
Thanks again.

Bye.

Hi François,

> I have done my tests again and of course, as you say:
>
> * with "qname-minimisation: yes" (the default), a `dig in.ac-versailles.fr CAA` fails (timeout).
> * with "qname-minimisation: no", a `dig in.ac-versailles.fr CAA` works. \o/
>
> That's really interesting. We learn something new every day with DNS. :slight_smile:
> Thanks again.

You can still learn a little more here!

You shouldn't be getting a timeout with qname-minimisation enabled!
The domain in.ac-versailles.fr is not properly configured: when asked for "in.ac-versailles.fr A", it returns a delegation with designated servers at:
  prd-dns-int-01.in.ac-versailles.fr, and
  prd-dns-int-02.in.ac-versailles.fr

Those servers do not seem to reply, which causes the timeout you see with dig.

qname-minimisation exposes broken delegations by the way it operates.

Now, why the ac-versailles.fr nameservers reply with a NODATA answer specifically for "in.ac-versailles.fr CAA" queries, I don't know.

Best regards,
-- Yorgos

Hi,

> You can still learn a little more here!

:slight_smile:

> You shouldn't be getting a timeout with qname-minimisation enabled!
> The domain in.ac-versailles.fr is not properly configured: when asked for "in.ac-versailles.fr A", it returns a delegation with designated servers at:
> prd-dns-int-01.in.ac-versailles.fr, and
> prd-dns-int-02.in.ac-versailles.fr
>
> Those servers do not seem to reply, which causes the timeout you see with dig.
>
> qname-minimisation exposes broken delegations by the way it operates.
>
> Now, why the ac-versailles.fr nameservers reply with a NODATA answer specifically for "in.ac-versailles.fr CAA" queries, I don't know.

Indeed. In fact, it's the reverse. Let me explain. The "in.ac-versailles.fr" delegation (the "in" zone) is correct. It's a private zone for internal use, so it is not publicly reachable. Until very recently, that was perfectly fine for us. But for a few days now, our CA (Certificate Authority) has needed to make DNS requests like "type=CAA name=in.ac-versailles.fr" before delivering our SSL certificates (if we want a certificate for the FQDN foo.in.ac-versailles.fr, the CA must check the CAA records). This is new for us; our CA only recently started applying RFC 8659. I have explained the problem in detail here:

     https://lists.nlnetlabs.nl/pipermail/unbound-users/2025-September/008575.html

So we have:

* Our private zone "in.ac-versailles.fr" is publicly unreachable, but until very recently this was not a problem for us (on the contrary, we were quite happy that way).
* But now our CA needs to resolve "type=CAA name=in.ac-versailles.fr" to deliver our SSL certificates (even an empty or NXDOMAIN response is OK, but a timeout is not acceptable per the RFC).

So, on the nameservers of the "ac-versailles.fr" zone (i.e. the *public* zone), we have installed a kind of DNS proxy (dnsdist) so that:

1. If the request is "type=CAA name=in.ac-versailles.fr" (or any name under in.ac-versailles.fr), the proxy directly answers with an empty response (with the "aa" flag set; and yes, that's a lie, because the nameservers of the public zone have no authority over the "in" zone).
2. For any other request, the proxy passes the request to the "real" nameserver of the "ac-versailles.fr" zone.
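As a toy model (this is not actual dnsdist configuration, just an illustration of the two rules above; the return values are made-up labels), the routing logic amounts to:

```python
def route_query(qname, qtype):
    """Decide what the proxy in front of the public nameservers does.

    CAA queries for the private subtree get a synthesised empty,
    authoritative answer (the "lie"); everything else is forwarded
    to the real ac-versailles.fr nameserver.
    """
    name = qname.rstrip(".").lower()
    private = "in.ac-versailles.fr"
    if qtype == "CAA" and (name == private or name.endswith("." + private)):
        return "EMPTY-NOERROR-AA"  # synthesised: empty answer, aa flag set
    return "FORWARD"               # handled by the real nameserver

print(route_query("in.ac-versailles.fr.", "CAA"))      # EMPTY-NOERROR-AA
print(route_query("foo.in.ac-versailles.fr", "CAA"))   # EMPTY-NOERROR-AA
print(route_query("in.ac-versailles.fr", "A"))         # FORWARD
```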

So, currently:

* The "in.ac-versailles.fr" nameservers are publicly unreachable => that's fine.
* What is not really OK is that the nameservers of "ac-versailles.fr" (i.e. the public zone) answer an empty response to "type=CAA name=in.ac-versailles.fr" requests. That is the real anomaly, the lie :). They should respond with a referral to the nameservers of "in.ac-versailles.fr", but they no longer do, because of the proxy we put in place.

This proxy is a workaround so we can obtain SSL certificates from our CA; we lie to our CA a little bit. And like most workarounds, it's not 100% RFC-compliant. So, in this thread, with your explanations, I understand that our workaround doesn't work well with recursive DNS resolvers that perform qname-minimisation. :slight_smile:

But currently it seems to work well with our CA, which now validates our SSL certificates correctly.

Maybe there is a better solution to this problem, but for now it's the least bad one we've found. We only cheat on CAA records so that our CA will validate our SSL certificates. :slight_smile:

And by the way, CAs are in the process of generalizing the application of RFC 8659, and I think quite a few organizations with private zones like ours are facing the same problem.

I hope my explanations were clear, and thanks again for your help.