Is there anyone here who has a problem like mine?
My Unbound server is receiving a flood of queries for randomized subdomains, which jams the traffic and maxes out the CPU load.
Any solutions that can be shared? Thanks.
Here is a sample from my log:
Mar 31 17:56:47 ns1 unbound: [7679:1] info: 49.128.xxx.xxx cdexevevyp.www.136.xxx. A IN
Mar 31 17:56:47 ns1 unbound: [7679:0] info: 103.247.xxx.xxx cnsjwhclifax.www.136.xxx. A IN
Mar 31 17:56:47 ns1 unbound: [7679:0] info: 111.68.xxx.xxx avezsvuvehgnajun.www.136.xxx. A IN
Mar 31 17:56:47 ns1 unbound: [7679:1] info: 119.2.xxx.xxx epsruvodazqz.www.136.xxx. A IN
I tried to install unbound-bloomfilter but got an error.
Any information would be appreciated.
Thank you.
Thanks to Daisuke, who helped me privately to patch in the bloomfilter.
It now works on my Unbound; I am still trying it out.
The traffic is coming down a little bit now ... around 4 to 6 Mbps.
Yes, these domains change quite often, unfortunately.
This is an attack called water torture.
Actually, no. I've seen the "water torture" or "random qnames" attack
very often, and this is the first time I have seen such a rapid change
of the suffix in the wild.
Attacks are random and come from many source IPs (botnets), so it is harder to have an automatic system to block source IPs. Our kind of "workaround" was to increase the request list size from the default 1024 to a higher number and to set jostle-timeout to something like 4 seconds, so that requests do not stay too long in the request list once the box is under load.

Manual iptables rules are not maintainable; we only manually block IPs for the biggest hitters. I agree that what we are doing is _not_ a fix for the problem, because we just allocated more resources to deal with the junk, but jostle-timeout definitely helps. I asked about it almost a year ago on this mailing list.
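As a sketch, that tuning might look like the following in unbound.conf. The values here are illustrative, not recommendations; in Unbound the request list size is controlled by num-queries-per-thread, and jostle-timeout is given in milliseconds:

```
server:
    # Request list size per thread (default 1024), raised to absorb floods.
    num-queries-per-thread: 4096
    # Under load, jostle out queries that have been in the request list
    # longer than this many milliseconds instead of letting them pile up.
    jostle-timeout: 4000
```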
> Any solution that can be shared?
While trying to find my previous post, I actually realised that I had missed Daisuke's email.
> Attacks are random and with many source IPs (botnets).
Stable suffix or not? battossai claimed that the suffix changed every
second.
> Therefore it is harder to have an automatic system to block source IPs.
It's not the source IPs that you should block (they are probably forged,
so you would block innocent people) but the suffix (I sent the
iptables rule for that a few messages ago).
> Manual iptables rules are not maintainable,
In my experience, they are, if the attacker does not change the
suffix.
I have just subscribed here, but we have been dealing with this problem for about a year.
Here is our solution - a watchdog script that does "unbound-control dump_requestlist" at regular interval to see how many concurrent recursive queries are being worked upon.
If there is a flood, this will spike over a defined limit (depending on normal traffic), and the following action is taken:
The flooding queries typically have the same structure, <random_string>.<some_domain>, so that the server cannot use its cache and wastes resources doing a recursive query for each one.
When the number of concurrent queries spikes, the script counts them by domain, and those domains that exceed a defined share (usually over a quarter) are temporarily blacklisted via "unbound-control local_zone <zone> deny" (you can use "reject" too, or serve an authoritative NXDOMAIN answer if you prefer). This approach takes advantage of the fact that legitimate queries usually finish quickly, while only the bogus ones pile up and clog the server's memory.
This temporary blacklist is cleared automatically once a day. All blacklisted zones are logged and I review them regularly; there is an absolute minimum of false positives. The script also supports whitelisting of zones you never ever want to blacklist.
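The counting step of such a watchdog can be sketched in a few lines of Python. This is not the script from the thread; the function name, thresholds, and label depth are all assumptions to be tuned against your normal traffic:

```python
from collections import Counter

def domains_to_block(qnames, threshold=0.25, min_total=200, label_depth=2):
    """Given qnames pulled from `unbound-control dump_requestlist`,
    return the suffixes (last `label_depth` labels) whose share of the
    request list exceeds `threshold`.  All limits here are assumptions."""
    if len(qnames) < min_total:   # request list below the limit: no flood
        return []
    suffixes = Counter(
        ".".join(q.rstrip(".").split(".")[-label_depth:]) for q in qnames
    )
    return [d for d, n in suffixes.items() if n / len(qnames) > threshold]
```

Each returned domain would then be checked against the whitelist and fed to "unbound-control local_zone <zone> deny".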
Just my 2 cents here:
The pattern I am seeing on my side does not evolve as fast as once per
second, but the attacker does change domains every few hours or so.
However, the authoritative servers being hammered as a result do not
change that much.
(Most domains I am seeing are Chinese domains related to online gambling
and whatnot.)
And, in my situation, trying to maintain local zones or iptables rules
is a literal "whack-a-mole" game;
you can't humanly do that manually for an extended period of time.
It's like, these guys have troves of domains to use and abuse...
(Things get further tricky when some of these domains are set up with
wildcard records too.)
> And, in my situation, trying to maintain local zones or iptables rules
> is a literal "whack-a-mole" game;
> you can't humanly do that manually for an extended period of time.
> It's like, these guys have troves of domains to use and abuse...
However, you can maintain the local zone list in Unbound automatically fairly easily; we have been doing it for over a year with minimal need for manual intervention. If you wish, have a look at the attached perl script.
The only other option is to persuade the users of the compromised machines to clean their systems.
> However, you can maintain the local zone list in Unbound automatically fairly
> easily; we have been doing it for over a year with minimal need for
> manual intervention. If you wish, have a look at the attached perl script.
unbound-bloomfilter's attack detection mechanisms implement almost the
same thing as your script.
I used the public suffix list (currently embedded in the source code) to
determine the depth of the domain to block,
which corresponds to your "third_level_domains.conf".
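A minimal sketch of that suffix-based depth selection, with a tiny hard-coded stand-in for the real public suffix list the patch embeds (function and parameter names are my own, not from the patch):

```python
def zone_to_block(qname, public_suffixes):
    """Pick the zone to filter for a flooding qname: the registered
    domain, i.e. one label below the longest matching public suffix.
    `public_suffixes` is a tiny stand-in for the real list."""
    labels = qname.rstrip(".").lower().split(".")
    # Scan from most-specific to least-specific, so the first hit
    # is the longest matching public suffix.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in public_suffixes:
            return ".".join(labels[max(i - 1, 0):])
    # No suffix matched: fall back to the last two labels.
    return ".".join(labels[-2:])
```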
Note that the bloomfilter itself is a way to reduce the collateral damage
caused by filtering: it reduces the damage caused by wrong (false positive)
filtering and lets the server still accept legitimate queries for the
filtered domain.
> The only other option is to persuade the users of the compromised machines
> to clean their systems.
> The only other option is to persuade the users of the compromised machines to clean their systems.
Good point.
A question to make the problem clear (to me): the Unbound resolvers you
talk about here provide resolving services for your
own/trusted/known/paying users, but not for anybody in the world. Correct?