I think that the xfrd daemon suffers from a scalability problem with respect to the number of zones. For every zone, xfrd adds a netio_handler to the linked list of handlers. Every netio_dispatch call then sequentially scans the entire list for "valid" file descriptors and timeouts. With a large number of zones this scan is expensive and mostly superfluous, because almost all zone file descriptors/timeouts are usually not assigned.

The problem is most obvious during "nsdc reload". Because the server_reload function sends the SOA info of all zones to xfrd, xfrd performs a full scan of the linked list for every zone, so the resulting complexity of a reload is O(n^2). Just try "nsdc reload" with 65000 zones and you'll see that the xfrd daemon consumes 100% CPU for several _minutes_! I suspect the scalability problem is not limited to reload, though, because _every_ socket communication with xfrd goes through the same netio_dispatch.

Here is the "perf record" result of the xfrd process during reload:
Thanks for the perf measurements. I did not know that. I wrote that
code some time ago and decided against optimizing xfrd like this,
because the netio handler is also used by the server processes. Those
processes listen on only a limited number of sockets, so this approach
is more efficient for them. If this is the only bottleneck for a larger
number of zones, it may be relatively easy to fix.
Thank you for your response. I agree with you that server process
performance is more important than xfrd performance. It should be
sufficient to add xfrd zone handlers to the netio only when their
sockets/timeouts are set. I've found no other serious bottleneck for a
large number of zones yet.
I have also been trying to run some tests using 60k+ zones. I grabbed a very recent snapshot of these zones from BIND, so there shouldn't be too many zones that need updating. But it's been 30 minutes or more and all zones seem to be returning SERVFAIL. I see some zone transfer traffic in the logs. CPU on the nsd process shows 99.9%, with 3.3% memory usage. CentOS, 8 GB RAM, quad-core 5500. I also applied the memory patch posted earlier this month on 3.2.4. In BIND we use the serial-query-rate option; the default value is too low for how often our zones change. Does an option like that exist in NSD?
Any help would be appreciated. The performance of NSD on a single zone is phenomenal. 112k qps on this hardware.
Dan
Dan, I think that you hit the same problem with xfrd as I did. Which of
your nsd processes uses 100% CPU? There should be at least three -- a
main process, a child process, and an xfrd process. Xfrd is usually the
one whose RES memory usage in "top" differs from the others'.
If you want to be sure, try the attached patch. It logs every
netio_dispatch call with some useful numbers:
"netio dsp: nnnnnn, sum: xxxx, fds: yyyy, tms: zzzz", where xxxx = total
number of handlers scanned, yyyy = handlers with a socket, and zzzz =
handlers with a timeout. If your log is continually populated with
thousands of these lines with xxxx roughly equal to the number of your
zones, you have hit the scalability issue.
Be careful! Do not try the patch on a server that processes any queries:
the patch also logs at least one line for every DNS query ;-))
I guess that you run NSD as a slave... If so, could you please send me a
few typical "netio dsp" lines from your log? I'm curious what the yyyy
and zzzz numbers will be for you.
I only see one nsd process in top. After about a day it started returning responses for the zones. However, I tried an update and saw that nsd received the notify and wrote the changes, but it still hasn't updated what it's serving yet. I'll try the patch a little later today when I get some free time.