Server experience

Ultrastudio.org server in preparation, before deployment in the server center
There are some perennial questions when trying to decide on the CPU, RAM, disk space and other capabilities when you plan to buy a dedicated server. There are articles on the web that try to provide some "generic rules", but too often after reading them you stay as confused as before. We have no ambitions for a broad, expert-grade analysis; we just want to share our own experience, which may be interesting for others to read.

How capable should it be?

Surely, everyone starting an exciting project expects a lot of visitors but, realistically, the site will need many years to become truly popular. During the first months the real visitors may load your server at two orders of magnitude below what your dedicated machine is capable of supporting. Does this mean that the reserve is redundant? Sometimes yes. But sometimes not.

First of all, of course, there are bots of all kinds. Initially they may make up the larger half of your server's audience, making sure it never gets to nap. Serious bots respect robots.txt, but even there - how can you set low crawling rates when your priority is to have the site indexed as completely as possible? With a bot crawling several times per second and no extra resources, the server may stall for the ordinary user until the bot finishes its work. This is not the best way to make an impression on that user.
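
For the bots that do honor robots.txt, crawling pressure can at least be hinted at from there. The Crawl-delay directive is non-standard (some crawlers honor it, others - Google among them - ignore it), and the paths below are purely hypothetical, so take this only as an illustration:

    User-agent: *
    # Ask compliant crawlers to pause between requests (seconds); not all of them obey
    Crawl-delay: 10
    # Keep expensive, low-value URLs out of the crawl entirely (hypothetical paths)
    Disallow: /search
    Disallow: /export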

Next, the site scanners come out. There are some really good services that will scan your server stack for security issues, allowing you to correct problems before a real cracker finds them. The GoDaddy team that we use is really cool and competent. Unfortunately, a proper scan also means that hundreds of pages must be fetched - you need a reserve of performance for that.

And finally, of course, there are script kiddies of all kinds. These attacks are not very frequent, but when they are arranged, they also mean multiple interactions per second. Our server has a sophisticated system that blocks hyperactive users after concluding that they are nothing else than DoS attack bots. Still, a bot must be served for a while before it discloses itself by running past any reasonable traffic limits.
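
Our own blocker is more elaborate, but the basic idea can be sketched as a tiny servlet filter that counts requests per IP in a fixed time window and refuses the client once a threshold is exceeded. The class name, window and threshold below are made up for illustration and are not our production code:

    import java.io.IOException;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import javax.servlet.*;
    import javax.servlet.http.HttpServletResponse;

    /** Minimal sketch: refuse an IP that exceeds MAX_REQUESTS within WINDOW_MS. */
    public class RateLimitFilter implements Filter {
        private static final int MAX_REQUESTS = 20;   // hypothetical threshold
        private static final long WINDOW_MS = 1000;   // one second window

        private final ConcurrentMap<String, Window> counters =
                new ConcurrentHashMap<String, Window>();

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            String ip = req.getRemoteAddr();
            Window w = counters.get(ip);
            if (w == null) {
                Window fresh = new Window();
                Window prev = counters.putIfAbsent(ip, fresh);
                w = (prev != null) ? prev : fresh;
            }
            if (w.countAndCheck()) {
                chain.doFilter(req, res);              // normal visitor: pass through
            } else {
                // Hyperactive client: refuse cheaply instead of rendering the whole page
                ((HttpServletResponse) res).sendError(HttpServletResponse.SC_SERVICE_UNAVAILABLE);
            }
        }

        public void init(FilterConfig cfg) {}
        public void destroy() {}

        /** Requests seen in the current fixed window for one IP. */
        private static final class Window {
            private long windowStart = System.currentTimeMillis();
            private int count = 0;

            synchronized boolean countAndCheck() {
                long now = System.currentTimeMillis();
                if (now - windowStart > WINDOW_MS) {   // start a new window
                    windowStart = now;
                    count = 0;
                }
                return ++count <= MAX_REQUESTS;
            }
        }
    }

A filter like this would be registered in web.xml; a real traffic shaper also needs a whitelist for legitimate crawlers and some way to expire old entries.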

Because of these three user groups, the expected project load is definitely not the maximal load for which your system must be designed. If you do not want to go offline several times a day, or to respond to the user after he has already closed the browser, you must have resources - and the more resources you have, the better. The more serious the project, the less attractive the much-advertised virtual hosting becomes, where you share a slice of a real machine with numerous other users.

Where to keep it?

"Why not at home?" - said one friend, but this idea never looked for us very great. Of course, at home you may (potentially) have a lot of place and cooling is not an issue, but your bandwidth is really not good for a server. Most important, it is often optimized for the fast download and slower upload - this is exactly how the desktop computer usually works and exactly the opposite how the server works. Also, desktops are just not build for 7*24 operation, and you likely need something more suitable. This "something" must be quite special, as the fans of the ordinary server are also too noisy for running in the living or working room. Housing at home is also requires to have a static IP address. Finally, it is just something "not how it should be done", and we decided to keep our server in the center where it would have good connection, reserved power, proper protection and so on.

On the other hand, the usual server center will likely not accept what most people understand as a "computer". Servers are dedicated machines with a specific form factor: the width is strictly fixed, the length can vary within some boundaries, and the height changes in fixed standard increments - so-called HE (or sometimes U) units. 4HE is roughly the shortest edge of a usual desktop, but most of the servers are slimmer - 1HE is common, unless you need a larger enclosure to fit more than four disk drives. Usually, the higher the server, the more it will cost to house. Depth is typically not that much of an issue, but you do need to check that the server will fit into the rack. Not only can it be too long; in our case it was too short, and we lost time looking for the proper adapter rails and then bringing the server into the center for a second time.

Which CPU?

A relatively recent Xeon usually costs more than any other part of your machine, and a recent high-end model may cost more than the rest of the computer in total. Old models may be cheaper per chip, but the new, expensive models have the same or even better price when expressed in dollars per unit of performance. Still, a dual socket server did not seem such a great idea, as doubling the CPUs nearly doubles the price.

Depending on where you plan to house your server, power may or may not be an issue. As our housing costs depend directly on power usage, we were also searching for a CPU that would offer good performance per watt. After many hours of googling, we found a wonderful site (spec.org) that opened our eyes: the power-to-performance curves of the new models are way steeper, offering much better performance per watt. Their maximal power consumption may be high, but it is never reached unless the machine runs at full load. After looking at these curves we picked the, at that time, very new and high-end Xeon 5570, even though it is, in general, a CPU for a dual socket motherboard.

Our 5016T-MTFB barebone, before starting work on it

How much RAM?

When planning the server, we already had our software stack, so it was possible to test how much memory it might need. These tests showed that the Java stack (be it Tomcat or Glassfish does not matter much) uses more memory under load, and that the maximal number of web interactions per second (WIPS) strongly depends on the amount of memory available. These tests were run on a machine that had 2 Gb of RAM available. For our final server we picked 12 Gb of unbuffered memory, which was not a bad choice, as even under extreme loads the memory usage never went much over 8 Gb. Most of this memory the operating system was using as a cache (which should still count as not useless hardware). Tomcat uses about 3 Gb under maximal loads, but this is when it is aware that over 10 Gb are available; on a more constrained machine it would probably run with less.
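
If you want to repeat such a measurement, the JVM can report its own heap and non-heap usage through the standard management beans. The sketch below is just the simplest possible probe (we watched the process from outside rather than like this); it could be called periodically from inside the web application during a load test:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    /** Minimal sketch: report current heap and non-heap usage of this JVM. */
    public class MemoryProbe {
        public static String report() {
            MemoryMXBean bean = ManagementFactory.getMemoryMXBean();
            MemoryUsage heap = bean.getHeapMemoryUsage();
            MemoryUsage nonHeap = bean.getNonHeapMemoryUsage();
            return String.format("heap %d of %d Mb used, non-heap %d Mb used",
                    heap.getUsed() >> 20, heap.getMax() >> 20, nonHeap.getUsed() >> 20);
        }

        public static void main(String[] args) throws InterruptedException {
            while (true) {                       // log a sample every five seconds
                System.out.println(report());
                Thread.sleep(5000);
            }
        }
    }

To see what Tomcat itself consumes, the same beans can be read remotely over JMX; running the probe in a separate JVM only shows that JVM's own memory.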

There is another issue that is often forgotten - memory bus speed. A recent processor works internally much faster than it can read from or write to memory; high CPU frequencies are only useful because the internal caches are much faster. If a program interacts heavily with memory, the limited data transfer speed can become a bottleneck. While the transfer speed depends on the processor and the memory itself, it also varies dramatically with the memory configuration - usually, a lot of memory chips means slower transfer rates.
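
A crude way to see this effect is a streaming test that just walks a large array and reports the achieved rate. It measures nothing precise (the JIT and the caches distort it), but it makes configuration differences visible between runs on the same box. The sizes below are arbitrary; run it with a heap of at least 1 Gb (-Xmx1g):

    /** Crude sketch of a memory streaming test; only compare numbers from the same machine. */
    public class StreamProbe {
        public static void main(String[] args) {
            long[] data = new long[64 * 1024 * 1024];   // 512 Mb working set, far above the CPU caches
            long sum = 0;
            for (int pass = 0; pass < 2; pass++) {      // warm-up so the JIT compiles the loop
                for (int i = 0; i < data.length; i++) sum += data[i];
            }
            long start = System.nanoTime();
            for (int i = 0; i < data.length; i++) sum += data[i];
            double seconds = (System.nanoTime() - start) / 1e9;
            double mbRead = data.length * 8.0 / (1024 * 1024);
            System.out.printf("read %.0f Mb in %.3f s (%.0f Mb/s), checksum %d%n",
                    mbRead, seconds, mbRead / seconds, sum);
        }
    }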

Also:

  • Unbuffered memory uses two to three times less power, but the maximal capacity is limited.
  • With more than 12 Gb of RAM (more than 3 of the 6 sockets filled), the BIOS of our box switches the memory bus speed from 1333 to 1066 MHz on its own initiative - without the slightest hint about this in the documentation. That is roughly a 20 % drop in bus speed, and it can easily slow your task by more or less the same degree. The documentation does contain a table explaining that monster memory sizes, while supported in general, reduce the bus speed to 800 MHz - enjoy virtualization!

Which hard drives?

We needed to pick the drive configuration at that unlucky time when everyone was yelling about how good SSDs are, but there was still no TRIM support. Without TRIM, performance degrades over time, and the project had no resources to replace worn-out SSDs on a regular basis. We also looked into various exotic solutions like PCI-Express SSDs, but these were even more expensive and still had no TRIM. In the end we noted that a good RAID controller also has about 512 Mb of cache memory; as this memory is true RAM, it should be even faster. Exactly at that moment we managed to get, at half price ("condition - new, packing - opened"), an Adaptec 5405Z, where this cache is protected by a supercapacitor and backed up into internal flash memory in case of power loss - it was a shame not to buy it, even if we are still not sure whether this card is worth having at the full price in a project of our size. To tell the truth, it was really problematic to get the driver working properly under Linux, but once the card started to work, it has worked really well.

After spending that much on the controller, the remaining options were already constrained by the budget. We bought four ambition-less (but still "server grade") 1 Tb SATA drives, configured as 2 Tb of RAID 1E, and they are spinning happily to this day (as you view this page). We only need a fraction of the space they provide, but SATA space is really cheap now in comparison with the rest. Maybe we will use it in some ambitious project later. The other standard (SAS) is claimed to be more reliable and is also a little bit faster, but it costs more per gigabyte, and we also wanted enough space for automated backups without the need to delete them periodically.

Remote management

As the server is housed in a dedicated center, far away in the city, we really needed remote management capabilities. The Winbond chip that we use allows checking all fans, all temperatures and all voltages, and watching all this daily is just a nice computer game. It also allows rebooting the server remotely and opening a direct console - the same as if you were really standing next to the machine. This is really great when something goes wrong badly enough that you can no longer fix it over SSH. Of course, if the server is in the room next to you, this may be much less of an issue.
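
Assuming the management chip speaks standard IPMI over the network (we have not verified every detail of our Winbond firmware, so take this only as an illustration), the usual ipmitool commands cover most of the above; the host address and credentials below are placeholders:

    # Read all sensors (fans, temperatures, voltages)
    ipmitool -I lanplus -H 192.0.2.10 -U admin -P secret sensor

    # Power-cycle a machine that no longer answers over SSH
    ipmitool -I lanplus -H 192.0.2.10 -U admin -P secret chassis power cycle

    # Attach to the serial console (Serial over LAN), as if standing next to the box
    ipmitool -I lanplus -H 192.0.2.10 -U admin -P secret sol activate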

Tests

With 12 Gb running at the promised 1333 MHz, a 2.93 GHz Xeon 5570 and RAID 1E on a dedicated controller, the maximal performance of our server was limited by the CPU, its usage reaching 100 % on all four cores at full load. This looked very different from our previous results collected on the test machine, though, admittedly, that one only had a dual core processor.

In general, the server supports about 10 full web interactions per second (where a single interaction includes downloading several real-world-size applets, images and JavaScript inserts).
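
Our figures came from a proper load-testing setup, but the idea behind a "web interactions per second" number can be sketched with a small multi-threaded client that hammers a URL for a fixed time and counts completed downloads. The target URL, thread count and duration below are placeholders, and a real interaction would fetch all page resources, not a single URL:

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    /** Crude sketch: fire GET requests from several threads and report the completed rate. */
    public class WipsProbe {
        public static void main(String[] args) throws Exception {
            final URL url = new URL("http://example.org/");   // placeholder target
            final int threads = 8;
            final long durationMs = 30000;
            final long end = System.currentTimeMillis() + durationMs;
            final AtomicLong completed = new AtomicLong();

            ExecutorService pool = Executors.newFixedThreadPool(threads);
            for (int t = 0; t < threads; t++) {
                pool.execute(new Runnable() {
                    public void run() {
                        byte[] buf = new byte[8192];
                        while (System.currentTimeMillis() < end) {
                            try {
                                HttpURLConnection c = (HttpURLConnection) url.openConnection();
                                InputStream in = c.getInputStream();
                                while (in.read(buf) != -1) { /* drain the response body */ }
                                in.close();
                                completed.incrementAndGet();
                            } catch (Exception e) {
                                // this sketch counts only successful interactions
                            }
                        }
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(durationMs + 10000, TimeUnit.MILLISECONDS);
            System.out.printf("%.1f interactions per second%n",
                    completed.get() * 1000.0 / durationMs);
        }
    }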

The disk system also looked efficient. When the server was still on our table, before the final setup, we tried to run common tools like OpenOffice and Eclipse on it - they started, and in general worked, even faster than when launched from an SSD that we tried on one of the desktops many months later. Sadly, the server is not suitable as a desktop replacement even for a fanatic - it is way, way too noisy for a room.

Final word

Most important - we are sure we offer much better response times than you can typically get with free web hosting. Or with a virtual private server. Or even with a dedicated server, if you pay for it only the price that we pay just for the housing. It is our memory, CPU and other resources that explain why we are among the fastest 5 % of sites on the web.

And we think that response times are important. This may be one of the reasons to join us in the Ultrastudio.org project.