Could NSD handle the future ".eu"?

Opinions? Facts?

Stephane Bortzmeyer writes:

Opinions? Facts?

For someone who runs a .fr name server, getting a copy of the .de, .net and .com zones should be easy. The zone files don't have to be from the same day, either; they just have to be big and real.

Trying to load those three together on a spare machine would provide the answer in a hurry.

Arnt

rbl-plus.hea.net (a mirror of the MAPS RBL-plus zone) currently has
2,664,478 records. It works fine with NSD; the machine is a Dell PowerEdge
1650 with 1 GB of RAM, but NSD appears to use only about 180 MB
when loaded.
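
For what it's worth, those numbers work out to roughly 70 bytes per record, in the same ballpark as the ~100 bytes per RR estimate later in this thread. A quick sanity check (the record count and memory figure are from the message above; the rest is arithmetic):

    records = 2664478            # records in the RBL-plus zone
    memory  = 180 * 1024**2      # ~180 MB resident, in bytes
    print(memory / records)      # ~70.8 bytes per record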

Stephane Bortzmeyer wrote:

Opinions? Facts?

Here are some of my back-of-the-envelope calculations. Use at your own risk.

As far as NSD scalability is concerned, there are three main limitations that I'm aware of:

1. Performance.

Domain name lookup is O(log N) for N domains. Performance is not otherwise affected by the size of the database, so this should not be a big scalability limitation.
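
As a rough illustration (this is not NSD's actual data structure, and the table and lookup helper here are hypothetical), searching any sorted or balanced collection of names takes O(log N) comparisons, so growing the zone a hundredfold adds only a handful of steps per query:

    import bisect, math

    # Hypothetical sorted table of domain names; NSD's real internals
    # differ, but any balanced search needs only O(log N) comparisons.
    names = sorted(["example.eu.", "example.fr.", "nlnetlabs.nl."])

    def lookup(qname):
        i = bisect.bisect_left(names, qname)
        return i < len(names) and names[i] == qname

    print(lookup("nlnetlabs.nl."))           # True
    # Growing from 2 million to 300 million names adds few steps:
    print(math.log2(2e6), math.log2(3e8))    # ~20.9 vs ~28.2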

With large databases, zonec (the zone compiler) and reloads will also take some time.

2. Memory usage.

On a 64-bit machine NSD uses about 100 bytes per RR (based on the .nl zone). Memory usage is roughly tripled after signing the .nl zone with a 1024-bit key. The .nl zone used is rather old and has about 920,000 delegations and almost 2 million RRs.
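
Spelling that estimate out (the ~100 bytes/RR figure and the tripling after signing are from above; the rest is arithmetic):

    rrs      = 2000000           # almost 2 million RRs in the old .nl zone
    per_rr   = 100               # ~100 bytes per RR on a 64-bit machine
    unsigned = rrs * per_rr
    signed   = 3 * unsigned      # roughly tripled after signing
    print(unsigned / 1e6, signed / 1e6)   # ~200 MB unsigned, ~600 MB signed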

3. Internal limitations.

Currently NSD assigns each fully qualified domain name a unique 32-bit id. So the maximum number of domain names that NSD can handle is about 4*10^9 (four billion). I think this is less of a problem than the memory usage limitation. This limitation can be removed without too much effort.
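
The four billion figure is just the size of the 32-bit id space:

    # A 32-bit id can distinguish this many domain names:
    print(2**32)     # 4294967296, about 4*10^9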

Today you can order a four-processor AMD Opteron machine with 64 GB of memory for roughly $50,000 from HP.

Assuming you want to do on-the-fly database reloads (instead of restarting NSD), you will be able to load about 300 million RRs (NSD memory usage will then be about 30 GB). If you don't mind restarts or swapping while reloading, you can double the number of RRs.
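
A sketch of that calculation, assuming an on-the-fly reload keeps the old and the new database in memory at the same time (which is why it halves the usable RAM):

    ram    = 64 * 1024**3        # 64 GB machine
    per_rr = 100                 # ~100 bytes per RR (estimate above)
    print(ram // (2 * per_rr))   # ~343 million RRs with on-the-fly reloads
    print(ram // per_rr)         # ~687 million if you accept restarts/swapping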

With a four-processor system, performance should also be very good.

So I'd say .eu could be handled with NSD on current "low-cost" hardware.

Erik

Erik Rozendaal writes:

Assuming you want to do on-the-fly database reloads (instead of restarting NSD), you will be able to load about 300 million RRs (NSD memory usage will then be about 30 GB). If you don't mind restarts or swapping while reloading, you can double the number of RRs.

Isn't it possible to run the zone compiler on a separate, cheap machine, and transfer the compiled zone to the production server using e.g. ttcp?

A 64 GB single-CPU Opteron should be nice and cheap, and can also be used to test a new NSD before running it on the production server.

Arnt

Arnt Gulbrandsen wrote:

Isn't it possible to run the zone compiler on a separate, cheap machine, and transfer the compiled zone to the production server using e.g. ttcp?

A 64 GB single-CPU Opteron should be nice and cheap, and can also be used to test a new NSD before running it on the production server.

The main cost is the memory (about $35,000), not the CPUs. Also, each Opteron CPU is currently limited to 8 DIMMs, so there is a maximum of 16 GB per CPU (2 GB per DIMM is the current maximum, I think).
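
The DIMM arithmetic, using the per-CPU and per-DIMM limits from the message above, shows why a single-CPU box can't reach 64 GB but a four-way one can:

    cpus          = 4
    dimms_per_cpu = 8
    gb_per_dimm   = 2        # current maximum per DIMM
    print(dimms_per_cpu * gb_per_dimm)         # 16 GB per CPU
    print(cpus * dimms_per_cpu * gb_per_dimm)  # 64 GB for a four-way box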

Erik