Google Public DNS

Hi,

I suppose many of us read Google’s announcement yesterday:

http://googleonlinesecurity.blogspot.nl/2013/03/google-public-dns-now-supports-dnssec.html

Now, Google Public DNS only validates when either the DO bit or, per RFC 6840, the AD bit is set in the query.

https://developers.google.com/speed/public-dns/faq#dnssec

Validation upon request, rather than the usual opt-out via the CD bit, so to speak.

In a way, I kind of like the idea. For some environments (such as the one at Google) it might, for now, be a good alternative. It sort of adheres to the idea that "everything stays the same, unless you want it to be different" (which at the same time may be considered undesirable...).

Anyway…

I was wondering what the opinions on this list are regarding Google's design choices, and whether this feature is being considered for Unbound (in addition to the already present 'val-permissive' mode)?

Regards,

The question to answer is: how many stub resolvers set the DO/AD flag, or even allow it to be set? It doesn't make much sense to me to implement this in Unbound too, since I consider it practically useless.

Ondřej Surý

Client applications can set it, because stub resolvers do permit it to
be set. It's the RES_USE_DNSSEC flag for the resolver options field in
the resolv.h interface; if your platform doesn't use resolv.h, pass.

Exim's current git head does this if the dns_use_dnssec option is set; I
added it last June.

Mind, I think that unbound's approach is sane and I'm happy it is as it
is, but still, if an application wants to _rely_ on DNSSEC, then it
should be setting the DO flag and checking AD. This affects forthcoming
DANE support, for instance.
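[For readers unfamiliar with the wire format: "setting the DO flag and checking AD" boils down to an EDNS0 OPT pseudo-RR in the query and one header flag bit in the response. A minimal Python sketch, with function names of my own invention (Exim does this via resolv.h, not like this):

```python
import struct

FLAG_RD = 0x0100  # header flag: recursion desired
FLAG_AD = 0x0020  # header flag: authentic data (set by validating resolver)

def build_query_with_do(qname, qtype=1):
    """Build a DNS query carrying an EDNS0 OPT record with the DO bit set."""
    header = struct.pack(">HHHHHH",
                         0x1234,   # query ID (fixed here, for illustration)
                         FLAG_RD,  # flags: RD set
                         1, 0, 0,  # QDCOUNT, ANCOUNT, NSCOUNT
                         1)        # ARCOUNT: the OPT pseudo-RR below
    question = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in qname.rstrip(".").split(".")
    ) + b"\x00" + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    # OPT pseudo-RR (RFC 6891): root name, TYPE=41, UDP payload size 4096,
    # extended RCODE 0, version 0, flags with DO (0x8000) set, empty RDATA.
    opt = b"\x00" + struct.pack(">HHBBHH", 41, 4096, 0, 0, 0x8000, 0)
    return header + question + opt

def response_has_ad(message):
    """Return True if the AD bit is set in a DNS message's header flags."""
    flags = struct.unpack(">H", message[2:4])[0]
    return bool(flags & FLAG_AD)
```

The application-side rule is then: treat an answer as secure only when response_has_ad() is true, and only when the resolver itself is trusted (see below).]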

I think if an application wants to _rely_ on DNSSEC, then it should be setting the DO bit and the CD bit, and doing its own validation.

Joe

In the general case I would agree. There might be specific cases where this doesn't make sense - for example, if you have an MTA with a local caching resolver, accessed over 127.0.0.1, trusting AD is reasonable.

I think it's OK to trust AD if the resolver is on the local host. However
checking that with the usual resolver API requires some fairly grotty
furtling around inside the res_state structure...
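[A rough sketch of that check, reading the configured nameservers from resolv.conf-style text rather than furtling inside res_state; function names are hypothetical, not from any existing API:

```python
import ipaddress

def nameservers_from_resolv_conf(text):
    """Extract nameserver addresses from resolv.conf-style text."""
    servers = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            servers.append(parts[1])
    return servers

def safe_to_trust_ad(nameservers):
    """Trust the AD bit only if every configured resolver is a loopback
    address, i.e. validation happened on this host."""
    return bool(nameservers) and all(
        ipaddress.ip_address(ns).is_loopback for ns in nameservers)
```

Note this only approximates the real requirement: a resolver on 127.0.0.1 is trustworthy because the path to it can't be spoofed, but a non-loopback address isn't necessarily untrustworthy (e.g. a link-local resolver on a trusted segment).]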

It's a bad idea for recursive clients to set CD because this makes
validation brittle if (for example) some of a domain's authority servers
have broken data. A validating iterative resolver can retry the query
against different authorities when it gets bad data; a validating stub
that makes recursive queries cannot, and if the upstream has cached
the bad data the client is stuck. Yes, I know RFC 6840 says you should set
CD, but it seems wrong to me, especially the phrase "all DNSSEC data that
exists", which reminds me of QTYPE=* breakage with cached partial answers.

http://www.ietf.org/mail-archive/web/ietf/current/msg73417.html

Tony.

This violates encapsulation and segregation of concerns.

For an MTA with a caching validating resolver on localhost (since all
but the validating part is common best practice today):

If validation logic goes into an MTA, then the MTA needs to be updated
to know about new signing algorithms, deal with yet more discovered
flaws in DNSSEC handling, and generally process UDP data received over
the network as the mail run-time user.

By contrast, letting the DNS resolver handle validation lets the work be
done right, by the experts, and updated accordingly. It lets trust
anchor management be done by DNS software, not mail software. It lets
the cache hold only the data that validates. Since all SMTP
mail-servers need a DNS cache near them to perform at all acceptably,
making sure that the cache is well managed and that the mail-server
works cooperatively with the cache is critical.

For Exim, we set DO, we check AD, and if the administrator sets things
up insecurely, that's their problem. The most I might do is add a check
that the resolver IP is 127/8 or ::1 or other local system IP address,
(in Exim speak, matches the @[] addresslist) and require a second option
dnssec_really_trust_offhost_validator when that doesn't match. Even
that would be dubious, leading to more debugging issues and insecurity
in practice, I suspect ... "DANE worked, until the localhost resolver
failed and resolution failed over to the resolver on the machine next
door, across a network with anti-spoof rules at the ingress".

I don't see any way I'd be happy moving the rest of the validation logic
into the MTA. We let Unbound do what Unbound is good at, and trust it.
Exim works _with_ other systems and is already pretty damned large for a
security-sensitive component, without deciding we can't trust any other
part of the OS and its facilities and replicating them internally.

In fact, I'm going to go so far as to say "Hell no!" -- we won't be
smoking that crack.

-Phil

> > I think if an application wants to _rely_ on DNSSEC, then it should be
> > setting the DO bit and the CD bit, and doing its own validation.
>
> This violates encapsulation and segregation of concerns.
>
> For an MTA with a caching validating resolver on localhost (since all
> but the validating part is common best practice today):
>
> If validation logic goes into an MTA, then the MTA needs to be updated
> to know about new signing algorithms, deal with yet more discovered
> flaws in DNSSEC handling, and generally process UDP data received over
> the network as the mail run-time user.

... or by linking against a libresolv-type API that includes validation, under the hood.

> I don't see any way I'd be happy moving the rest of the validation logic
> into the MTA. We let Unbound do what Unbound is good at, and trust it.
> Exim works _with_ other systems and is already pretty damned large for a
> security-sensitive component, without deciding we can't trust any other
> part of the OS and its facilities and replicating them internally.
>
> In fact, I'm going to go so far as to say "Hell no!" -- we won't be
> smoking that crack.

:-)

Joe