Does Unbound use otherwise non-trustworthy data simply because it has
valid DNSSEC signatures?
I'm asking because of this recent dnsop thread:
<https://mailarchive.ietf.org/arch/msg/dnsop/0bbEYp9RIGunDS4Vt_MvD2veMHg>
> Does Unbound use otherwise non-trustworthy data simply because it has
> valid DNSSEC signatures?
> I'm asking because of this recent dnsop thread:
> <https://mailarchive.ietf.org/arch/msg/dnsop/0bbEYp9RIGunDS4Vt_MvD2veMHg>
How can data be signed and validated and also "non-trustworthy"?
I see how data can be unwanted or superfluous, but if it validates then the daemon could obtain the same data using direct queries. So I am not sure what the actual problem is. "If crypto fails then evil could happen" isn't a very convincing argument against additional signed data and efforts to reduce latency in a proper implementation.
Paul
* Paul Wouters:
>> Does Unbound use otherwise non-trustworthy data simply because it has
>> valid DNSSEC signatures?
> How can data be signed and validated and also "non-trustworthy"?
Non-trustworthy according to DNS rules. For example, data for the
target in a completely different zone for which the server providing
the reply is not even authoritative.
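To make the rule concrete, here is a minimal sketch (hypothetical code, not Unbound's actual implementation) of the kind of "bailiwick" check meant here: a record in a response is only acceptable if its owner name sits at or below the zone the queried server is authoritative for.

```python
# Hypothetical sketch of an in-bailiwick check; function names are
# illustrative, not taken from any real resolver's source.

def in_bailiwick(owner: str, zone: str) -> bool:
    """Return True if `owner` equals or is a subdomain of `zone`."""
    owner_labels = owner.rstrip(".").lower().split(".")
    zone_labels = zone.rstrip(".").lower().split(".")
    if zone_labels == [""]:
        # The root zone is authoritative for everything.
        return True
    return owner_labels[-len(zone_labels):] == zone_labels

def scrub(records, zone):
    """Drop records whose owner name falls outside the server's zone.

    `records` is a list of (owner_name, rdata) tuples from one response.
    """
    return [r for r in records if in_bailiwick(r[0], zone)]
```

Under this rule, a server authoritative for example.com can supply www.example.com records, but any record for victim.net in the same response is discarded regardless of its signatures.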
> I see how data can be unwanted or superfluous, but if it validates
> then the daemon could obtain the same data using direct queries.
Only if the cryptographic validation is correct.
> So I am not sure what the actual problem is. "If crypto fails then
> evil could happen" isn't a very convincing argument against
> additional signed data and efforts to reduce latency in a proper
> implementation.
It absolutely is, because cryptography never works correctly. Most
people assume they don't have to worry too much about DNSSEC
validation bugs because there are other, non-cryptographic security
features an attacker would have to bypass as well.
If DNSSEC, as implemented, disables these security features and more,
then enabling DNSSEC increases risk.
Enabling DNSSEC is fine if it is an add-on measure, but if it throws
out pretty much all the other protocol protections, it's unlikely that
it's a win from a security perspective.
Hi, Florian:
It's been a while since I studied the Unbound architecture, but I
believe the answer to your question is "no", due to Unbound's
separation of iteration and validation into distinct modules. (E.g.,
'module-config: "validator iterator"'.) If I understand correctly, the
iterator module is responsible for "scrubbing" response messages, which
includes things like deleting out-of-zone information from the response,
and it doesn't scrub conditionally based on whether the validator module
is also present in the module stack.
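For reference, a minimal unbound.conf fragment showing the module stack mentioned above; 'module-config' is a real Unbound server option, and the comments describe the behavior as I understand it:

```
server:
    # Default module stack: the iterator fetches and scrubs responses,
    # then the validator checks DNSSEC signatures on what remains.
    module-config: "validator iterator"

    # Removing "validator" disables DNSSEC validation, but out-of-zone
    # data is still scrubbed by the iterator either way.
```

The point being that scrubbing is not conditional on the presence of the validator, so validly signed out-of-bailiwick data is dropped before the validator ever sees it.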
Why does this matter? If an attacker can steal a zone signing key and use it to forge
signatures, *and* a validator implementation does not enforce
out-of-bailiwick rules for validly signed data, then there is no need
for the forged data to also be available via direct queries. That is a
good reason to continue to reject out-of-bailiwick data even if it is
validly signed.