Multicast address alerts in logs

Hello,

We are running several unbound nodes using anycast and I keep seeing
this error in the log files.

Jan 18 11:07:15 rcn-b3s5-01 unbound: [3856:0] notice: sendto failed:
Invalid argument
Jan 18 11:07:15 rcn-b3s5-01 unbound: [3856:0] notice: remote address is
244.254.254.254 port 53

Are these anything to worry about? What does this message actually mean?

Yes: it means those queries are not being sent out, so the lookups will
most probably fail.

The problem is that the source IP for those queries is probably
incorrect. I am pretty sure 'outgoing-interface:' works around it, but
the proper way to solve it is to put your *anycast* address on your
loopback interface :)
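
Something like this, for example (a sketch only; 192.0.2.53 stands in
for your real anycast address):

----
# put the anycast service address on the loopback interface
ip addr add 192.0.2.53/32 dev lo

# unbound.conf: have unbound listen on (and so reply from) that address
server:
    interface: 192.0.2.53
----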

My website covers anycasting with unbound (and DNS blacklisting):

http://www.digriz.org.uk/ha-ospf-anycast

Cheers

That's not a multicast address; it's an (unusable) class-E 240/4 address.
(Multicast is 224.0.0.0/4, i.e. 224-239; 244.254.254.254 falls in the
reserved 240-255 block.)

Weird to see traffic to it...

Maybe someone has some bad glue?

Sorry, I did not pay attention to your subject line and now see the
address is a multicast address. Nothing to worry about :)

Cheers

> That's not a multicast address; it's an (unusable) class-E 240/4 address.
>
> Weird to see traffic to it...
>
> Maybe someone has some bad glue?

That's entirely possible. These servers are handling queries from
thousands of servers and I'm sure that some domains have bad records.

Each server has interface tracking set to automatic and the anycast
IPs are on loopback interfaces. The messages don't seem to be causing
any problems but I'd like to figure out what's causing them.

There is a modification to unbound that allows it to use IPv4 multicast
addresses to discover other DNS servers. There is also a patch for BIND
that does the same thing.

I thought I was pretty careful w/ the code and didn't think it had
escaped into the wild.

If there is anyone w/ logs they would be willing to share, I'd like to
ensure it's not my code doing this...

--bill

But as I said "244.254.254.254" is not a multicast address.

> But as I said "244.254.254.254" is not a multicast address.

Sorry, I guess I got mixed up a bit. I'm not sure where this traffic
is coming from but it does appear to be happening on every node.

I still haven't been able to figure out what is causing these notices
in the system log. Does unbound have a log level setting that could
filter the messages out? Our DNS resolvers are working fine and I'd
rather not be spammed by pointless notices.
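
Would turning the verbosity down do it, e.g. something like this in
unbound.conf (just a guess from the man page)?

----
server:
    # 0 means errors only, so it should hide these notices
    # (along with any other operational messages)
    verbosity: 0
----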

A google search pops up something interesting:

http://forums.fedoraforum.org/showpost.php?p=51979&postcount=5

Hi Alexander, Michael,

> Leave tcpdump running on a resolver and wait for the misconfigured
> offender to appear. Use one of the following:

Thanks. I think I know what to do now.

* Phil Mayers:

> Jan 18 11:07:15 rcn-b3s5-01 unbound: [3856:0] notice: sendto failed:
> Invalid argument
> Jan 18 11:07:15 rcn-b3s5-01 unbound: [3856:0] notice: remote address is
> 244.254.254.254 port 53
>
> Maybe someone has some bad glue?

This is the most likely culprit:

; <<>> DiG 9.6-ESV-R3 <<>> +dnssec +norecurse @ns2.sosdg.org. exemptions.ahbl.org.
; (2 servers found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 39796
;; flags: qr; QUERY: 1, ANSWER: 0, AUTHORITY: 2, ADDITIONAL: 3

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags: do; udp: 2800
;; QUESTION SECTION:
;exemptions.ahbl.org. IN A

;; AUTHORITY SECTION:
exemptions.ahbl.org. 3628800 IN NS invalid.ahbl.org.
exemptions.ahbl.org. 3628800 IN NS localhost.ahbl.org.

;; ADDITIONAL SECTION:
invalid.ahbl.org. 3628800 IN A 244.254.254.254
localhost.ahbl.org. 3600 IN A 127.0.0.1

;; Query time: 173 msec
;; SERVER: 66.113.102.6#53(66.113.102.6)
;; WHEN: Mon Mar 21 09:43:35 2011
;; MSG SIZE rcvd: 126

Leave tcpdump running on a resolver and wait for the misconfigured
offender to appear. Use one of the following:
----
# capture raw DNS traffic to a file for later inspection
tcpdump -i bond0 -n -p port 53 -s 0 -w /tmp/dump.pcap
# or write to the file while also decoding the traffic live
tcpdump -i bond0 -n -p port 53 -s 0 -w - -U | tee /tmp/dump.pcap | tcpdump -r - -n
----
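
Once a notice shows up in syslog, you can read the capture back and
look at what arrived just before that timestamp, e.g.:

----
# replay the capture; add a filter expression if the file gets large
tcpdump -r /tmp/dump.pcap -n
----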

Good hunting :)

Cheers

--
Alexander Clouter
.sigmonster says: Future looks spotty. You will spill soup in late evening.

This may be problematic on DNS nodes that are handling thousands of
queries per second. Is there a way to make unbound log what lookups
are causing these messages?


Attached is a small patch that logs the UDP packet that unbound tried
to send to that (multicast) address. It logs all UDP send failures.

With 'echo <that hex> | drill -i -' you can see what query was being
asked.

This patch has not been tested (but it's tiny).

Best regards,
   Wouter

[Attachment: patch_log_failed_udp.diff (423 bytes)]

* Michael Watters <wattersmt@gmail.com> [2011-03-25 17:38:27-0400]:

> Leave tcpdump running on a resolver and wait for the misconfigured
> offender to appear. Use one of the following:
> ----
> tcpdump -i bond0 -n -p port 53 -s 0 -w /tmp/dump.pcap
> tcpdump -i bond0 -n -p port 53 -s 0 -w - -U | tee /tmp/dump.pcap | tcpdump -r - -n
> ----
>
> Good hunting :)

> This may be problematic on DNS nodes that are handling thousands of
> queries per second.

I doubt it. What matters is the amount of data going through and
whether your hard disk can keep up with the pace; I doubt you are
pushing 30MB/s :)

As it's high throughput, I recommend you go with the first command (the
second one will choke your computer/terminal).

> Is there a way to make unbound log what lookups are causing these
> messages?

Patch the source, I imagine; you might be able to do something with the
Python bindings, though.
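
For example, something along these lines ought to log every query as it
enters the module chain (a rough, untested sketch based on the pythonmod
example scripts; at thousands of queries/second the log is itself a
firehose, but you could correlate it with the notice timestamps):

----
# unbound.conf additions to load the python module:
#   server:
#       module-config: "validator python iterator"
#   python:
#       python-script: "/etc/unbound/log_queries.py"

# /etc/unbound/log_queries.py
def init(id, cfg):
    return True

def deinit(id):
    return True

def inform_super(id, qstate, superqstate, qdata):
    return True

def operate(id, event, qstate, qdata):
    if event in (MODULE_EVENT_NEW, MODULE_EVENT_PASS):
        # log the query name and type, then hand off to the next module
        log_info("query: %s %s" % (qstate.qinfo.qname_str,
                                   qstate.qinfo.qtype_str))
        qstate.ext_state[id] = MODULE_WAIT_MODULE
        return True
    if event == MODULE_EVENT_MODDONE:
        qstate.ext_state[id] = MODULE_FINISHED
        return True
    qstate.ext_state[id] = MODULE_ERROR
    return True
----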

Cheers