;; ANSWER SECTION:
secunia.com. 3532 IN NS a.ns.secunia.com.
secunia.com. 3532 IN NS b.ns.secunia.com.
secunia.com. 3532 IN NS c.ns.secunia.com.
secunia.com. 3532 IN NS d.ns.secunia.com.
;; ADDITIONAL SECTION:
a.ns.secunia.com. 3532 IN A 213.150.41.253
b.ns.secunia.com. 3532 IN A 213.150.41.254
c.ns.secunia.com. 3532 IN A 91.198.117.1
c.ns.secunia.com. 3532 IN A 91.198.117.1
d.ns.secunia.com. 3532 IN A 91.198.117.2
d.ns.secunia.com. 3532 IN A 91.198.117.2
Note the identical double records for c.ns.secunia.com and d.ns.secunia.com.
; <<>> DiG 9.4.2-P2.1 <<>> @127.0.0.1 c.ns.secunia.com A
; (1 server found)
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 50785
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 4, ADDITIONAL: 4
;; QUESTION SECTION:
;c.ns.secunia.com. IN A
;; ANSWER SECTION:
c.ns.secunia.com. 3555 IN A 91.198.117.1
c.ns.secunia.com. 3555 IN A 91.198.117.1
Is it simply a misconfigured zone or some other glitch?
Unbound is version 1.4.7, used as a validating resolver without an upstream cache.
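For reference, a minimal unbound.conf for that kind of setup might look roughly like the following; the interface and the trust-anchor path are illustrative, not taken from the actual configuration:

server:
    interface: 127.0.0.1
    # validator before iterator enables DNSSEC validation
    module-config: "validator iterator"
    # trust anchor used for validation (illustrative path)
    auto-trust-anchor-file: "/var/lib/unbound/root.key"
    # no forward-zone / forward-addr: Unbound recurses itself,
    # so there is no upstream cache involved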
> Is it simply a misconfigured zone or some other glitch?
Hi,
Looks to me like a misconfigured zone:
$ dig -b 193.27.54.7 @213.150.41.253 c.ns.secunia.com. +short
91.198.117.1
91.198.117.1
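To narrow it down, one could put the same question to each of the four listed nameservers directly and compare the answers (addresses taken from the additional section above):

for ns in 213.150.41.253 213.150.41.254 91.198.117.1 91.198.117.2; do
    dig @$ns c.ns.secunia.com. A +norecurse +short
done

If every server returns the duplicate, the problem is in the zone data itself rather than on a single host.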
Andreas
--
Andreas Schulze
Internetdienste | P532
Yes, but I'm a little bit baffled that identical records are returned. Does Unbound cache it this way, or are the records subtly different?
BIND 9.7 only provides one record for the same query.
> Yes, but I'm a little bit baffled that identical records are returned.
> Does Unbound cache it this way, or are the records subtly different?
Unbound just returns what the authoritative nameserver sent.
Duplicate A records like this are often produced by djbdns' tinydns-data
tool when its built-in shortcuts are used, e.g. multiple "&" records
carrying an address, each of which generates NS + A records.
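For instance, three lines like the following (the names and the 192.0.2.1 address are illustrative, not from the original mail)

# same nameserver and address repeated for three zones
&example.org:192.0.2.1:ns1.example.org
&example.com:192.0.2.1:ns1.example.org
&example.net:192.0.2.1:ns1.example.org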
would produce 1 NS record for example.{org,com,net} each and 3 A records
for ns1.example.org.
> BIND 9.7 only provides one record for the same query.
BIND removes duplicate nameserver addresses from responses, it seems.
Hauke.
JFTR, when using tinydns, I usually advise not to use the macros and
stick to one output record per line, i.e. use "Z", "&" and "+" instead of
a single "." and define all A records explicitly (see the example below).
It may be neat to save a few bytes per zone, but it can make problems
difficult to trace. And I don't like most of the defaults.
(yes, talking about djbdns syntax feels a bit like speaking Esperanto
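To make that concrete, here is roughly what the advice amounts to; the names, address and SOA contact below are illustrative only:

# the "." shorthand creates SOA + NS + A records in one go:
.example.org:192.0.2.1:ns1.example.org

# the same data spelled out, one record type per line:
Zexample.org:ns1.example.org:hostmaster.example.org
&example.org::ns1.example.org
+ns1.example.org:192.0.2.1

The explicit form costs a few more lines, but it is obvious which A records exist and how often each one is emitted.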
> Yes, but I'm a little bit baffled that identical records are returned.
> Does Unbound cache it this way, or are the records subtly different?
> Unbound just returns what the authoritative nameserver sent.
> Duplicate A records like this are often produced by djbdns' tinydns-data
> tool when its built-in shortcuts are used, e.g. multiple "&" records
> carrying an address, each of which generates NS + A records. [...]
> would produce 1 NS record for example.{org,com,net} each and 3 A records
> for ns1.example.org.
> BIND 9.7 only provides one record for the same query.
> BIND removes duplicate nameserver addresses from responses, it seems.
It is not limited to the nameserver record case, because querying the A record for d.ns.secunia.com also delivers two identical results. I guess BIND only delivers one result because its cache detects that the results are identical and stores only one instance, or maybe it simply isn't able to store two identical results in the cache anyway.
As it does not hurt, I will simply ignore it from now on.
> Yes, but I'm a little bit baffled that identical records are returned.
> Does Unbound cache it this way, or are the records subtly different?
> Unbound just returns what the authoritative nameserver sent.
Yes. It caches what the authority server sends. For speed reasons it
does not (try to) remove duplicates. Except in special corner cases
where it does remove duplicates (where it tries to make sense of RRSIGs
that are in the wrong section of the message, and when it thus adjusts
the message it removes duplicates).
> BIND removes duplicate nameserver addresses from responses, it seems.
Unbound does not introduce duplicates itself, but also does not remove
them if the authority server sends them like that.
Unbound preserves the exact order of the records as well.
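One way to observe that pass-through behaviour is to compare the answer received through the local Unbound with the answer from one of the authoritative servers directly, e.g.

dig @127.0.0.1 c.ns.secunia.com. A
dig @213.150.41.253 c.ns.secunia.com. A +norecurse

If the authority sends the A record twice, the resolver's answer section should show the same two records, in the same order.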
> As it does not hurt, I will simply ignore it from now on.
This is another challenge for the robustness principle, but RFC 2181
introduced the "RRSet" and deprecated (even recommended removing)
duplicate RRs. This was later confirmed (in a DNSSEC context, though)
by section 6.3 of RFC 4034. More importantly, it appears more
consumer/application friendly to me to suppress the duplicates. YMMV.
> Yes. It caches what the authority server sends. For speed reasons it
> does not (try to) remove duplicates. Except in special corner cases
> where it does remove duplicates (where it tries to make sense of RRSIGs
> that are in the wrong section of the message, and when it thus adjusts
> the message it removes duplicates).
> This is another challenge for the robustness principle, but RFC 2181
> introduced the "RRSet" and deprecated (even recommended removing)
> duplicate RRs. This was later confirmed (in a DNSSEC context, though)
> by section 6.3 of RFC 4034. More importantly, it appears more
> consumer/application friendly to me to suppress the duplicates. YMMV.
So, Unbound does not introduce duplicates itself. It does transmit the
upstream duplicates to clients. As a feature it could suppress the
duplicates; is that really worth it? It makes RR parsing O(n^2) in the
number of RRs in an RRset; or, for O(n log n) solutions, the overhead
becomes high as well; thus I think performance would suffer. I figured
an authority server that sends out duplicates can then have duplicates
for its domain and the issues ..
I think Unbound is doing the right thing. Authoritative servers sending duplicate records should be exposed to the end systems.
If a validator "fails" to remove duplicates before verifying, the RRset may or may not fail validation, as it is just as likely that the set was signed with duplicates in it.
The principle is that no DNS protocol element should change RRsets that originate at another protocol element.
> So, Unbound does not introduce duplicates itself. It does transmit the
understood, but "be conservative in what you send".
> As a feature it could suppress the
> duplicates; is that really worth it? It makes RR parsing O(n^2) in the
> number of RRs in an RRset; or, for O(n log n) solutions, the overhead
> becomes high as well; thus I think performance would suffer. I figured
First, I think n log n is closer to reality (sort, then compare and collapse;
a small sketch of that approach follows after this message), but then the
numbers n are small enough that O() probably isn't too helpful at all. Most
RRSets will contain a single RR anyway.
Second, to borrow from another thread, the performance penalty for this
protocol compliance feature is cheaper than the one introduced by "RTT
banding" (SCNR).
Doesn't the validator have to canonicalize the RRSet anyway?
And finally, the invalid RRSet has to be dealt with anyway, so why not
do it once at the recursor instead of multiple times at the stub or
within the consuming application?
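A minimal sketch of that sort-and-collapse idea, assuming one already has the RDATA of a single RRset as byte strings (this is not Unbound code, just an illustration of the O(n log n) approach):

def dedup_rrset(rdatas):
    """Return the RDATA list with exact duplicates removed, O(n log n)."""
    deduped = []
    for rd in sorted(rdatas):            # sort into a canonical order
        if not deduped or deduped[-1] != rd:
            deduped.append(rd)           # first occurrence: keep it
        # identical to the previous entry: a duplicate, drop it
    return deduped

# the duplicate from this thread: 91.198.117.1 twice
print(dedup_rrset([bytes([91, 198, 117, 1]), bytes([91, 198, 117, 1])]))

Note that sorting changes the record order, whereas Unbound currently preserves the exact order it received, as mentioned earlier in the thread.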
Quoting "W.C.A. Wijngaards" <wouter@NLnetLabs.nl>:
> Hi Peter,
> Yes. It caches what the authority server sends. For speed reasons it
> does not (try to) remove duplicates. Except in special corner cases
> where it does remove duplicates (where it tries to make sense of RRSIGs
> that are in the wrong section of the message, and when it thus adjusts
> the message it removes duplicates).
> This is another challenge for the robustness principle, but RFC 2181
> introduced the "RRSet" and deprecated (even recommended removing)
> duplicate RRs. This was later confirmed (in a DNSSEC context, though)
> by section 6.3 of RFC 4034. More importantly, it appears more
> consumer/application friendly to me to suppress the duplicates. YMMV.
> So, Unbound does not introduce duplicates itself. It does transmit the
> upstream duplicates to clients. As a feature it could suppress the
> duplicates; is that really worth it? It makes RR parsing O(n^2) in the
> number of RRs in an RRset; or, for O(n log n) solutions, the overhead
> becomes high as well; thus I think performance would suffer. I figured
> an authority server that sends out duplicates can then have duplicates
> for its domain and the issues ..
Even the potential harm done by simply providing duplicated RRs is low IMHO (a second connect to the same IP, for example), so I would not vote for making Unbound slower and more complex to solve this. It is an error originating in the zone after all, and it is not the resolver's duty to fix it.