Home Server – 2023 Edition

At the core of the home network is the server computer. It lives in the basement and handles data storage and other services that are accessible throughout the house and, in some cases, over the Internet. All of my music, photos, videos, TV recordings, documents, backups, and other digital files are stored on this server, and may be viewed and, given the proper permissions, modified by any computer, TV, or other device in the house. The home server also handles all of the Home Automation functions, as well as providing network services such as DNS, DHCP, and security. While other articles cover specific functions and how they are implemented, this article describes only the server hardware itself.

The home server is actually two servers built from identical components. Normally, both servers are running and handling different tasks. For example, one server might host the main storage pool and record TV shows, while the other hosts the web site and runs the firewall. They also replicate all data between them, so that if one server fails, the other can handle every task on its own. This works well: if one server is shut down for maintenance or other reasons, the other automatically and (almost) seamlessly takes over the tasks previously handled by the unavailable server.

In 2024, I plan to make one server handle everything, while the other will only “wake up” nightly to copy everything on the primary server that has changed since the last backup. The downside of this approach is that if the primary server is turned off or stops working, the backup server must be manually “woken up” and configured as the primary in order to take over. The upside is lower overall energy consumption.
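The nightly “wake up” can be done with a standard Wake-on-LAN magic packet; the copy itself would then be a routine incremental job (rsync or similar) once the backup server is online. As a minimal sketch, with a placeholder MAC address standing in for the backup server’s network card:

```python
import socket

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send a Wake-on-LAN magic packet: six 0xFF bytes followed by
    the target MAC address repeated 16 times, as a UDP broadcast."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

wake("aa:bb:cc:dd:ee:ff")  # placeholder MAC for the backup server
```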

Selection Process

The home server in its current state was originally built around 2017 using technology that was already five years old at that point. This allowed me to purchase quality, server-grade equipment on the used market for a fraction of what it would have cost new a few years earlier. It may seem like I missed out on the performance and power-consumption improvements of newer equipment, but that was not the case. In fact, I recently looked into upgrading the now 11-year-old components to newer, consumer-grade equipment and found that it did not make financial sense. The current server is still comparatively power-efficient, and more than capable of doing what I want it to do.

CPU

The main consideration when first building the server was finding a CPU that could handle the tasks I foresaw giving it, with some room for growth. I leaned toward professional-grade equipment for a few reasons:

  • While not a hard-and-fast rule, professional-grade equipment tends to be more stable than consumer components, though sometimes at the cost of some performance.
  • Server motherboards tend to have more connectivity options such as multiple network connections, more full-speed slots for expansion cards, more bandwidth for storage, and other things more suited to a server than a desktop PC.
  • Since some companies buy professional-grade equipment new and then upgrade it every few years, I could potentially get some quality items at a great discount.
  • It just seemed like fun.

I did consider consumer gear, which can also be found at a discount (and usually not as old, since many people seem to upgrade their computers every couple of years), but at the time there was no real fiscal reason to go that route.

My main performance concern at the time was being able to transcode three full-HD video streams at the same time, with some extra processing power left over for the server’s other tasks. The transcoding requirement covered the unlikely event that all three members of the household were watching different videos at the same time on phones or other devices that required transcoding.
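To make that sizing target concrete, here is a hedged sketch of one such transcode. In practice a media server issues the equivalent ffmpeg command on the fly; the file names and bitrates below are made up. Three of these running concurrently was the requirement.

```python
import subprocess

# Illustrative only: re-encode a full-HD recording down to a bitrate
# a phone can comfortably stream. Filenames and targets are
# hypothetical placeholders.
subprocess.run([
    "ffmpeg",
    "-i", "recording-1080p.mkv",  # source: full-HD TV recording
    "-c:v", "libx264",            # re-encode video as H.264
    "-preset", "veryfast",        # trade compression for CPU time
    "-b:v", "4M",                 # target video bitrate
    "-c:a", "aac",                # re-encode audio
    "-b:a", "128k",
    "stream.mp4",
], check=True)
```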

The CPU’s physical core count was a consideration, and my first thought was the more cores the better. It turned out, though, that I had no real need for a large number of cores. Instead, I realized I should focus on single-threaded performance, or how fast a single core can process something.

Motherboard

Once the CPU was selected, the question was whether I should just search the used market for a bundle discount. It might have been cheaper to find someone (or, more likely, some company) selling the CPU along with a motherboard and RAM, and maybe even the case as well. However, it proved too time-consuming to look through bundle listings and research the individual components that came with each one.

The first thing to look into for a motherboard was what chipset I wanted. There were three chipset options for the CPU I had chosen, and it was an easy choice to select the “mid-range” one. The entry-level option did not have enough bandwidth to support the number of storage devices I wanted without additional add-in cards (which would then limit the number of other things I could add to the server internals). The top-of-the-line model was basically the same as the one I chose, except that it added more business-type features that I wouldn’t use.

My main concerns for a motherboard were:

  • Every capability of the CPU and chipset had to be available. For example, if the CPU and chipset supported 12 USB ports, then the motherboard should expose all 12, whether through internal or external connectors.
  • It had to have at least four 1G network ports, because the main network switch that everything plugs into provides hardware bonding (link aggregation) of ports. That would allow me to use the four connections on the server as a single, larger connection; a quick way to check such a bond is shown in the sketch after this list.
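On the server side, the four ports would be joined into a single Linux bond interface to match the switch. Purely as a sketch, assuming a bond named bond0 (the name, like the use of the standard Linux bonding driver, is an assumption), the kernel reports per-port link state under /proc/net/bonding/:

```python
from pathlib import Path

def bond_status(name: str = "bond0") -> dict[str, str]:
    """Return each member port's link state for a Linux bond,
    parsed from the kernel's /proc/net/bonding/<name> report."""
    status: dict[str, str] = {}
    slave = None
    for line in Path(f"/proc/net/bonding/{name}").read_text().splitlines():
        if line.startswith("Slave Interface:"):
            slave = line.split(":", 1)[1].strip()
        elif slave and line.startswith("MII Status:"):
            status[slave] = line.split(":", 1)[1].strip()
            slave = None
    return status

print(bond_status())  # e.g. {'eth0': 'up', 'eth1': 'up', ...}
```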

RAM

Deciding on a type and manufacturer for the RAM was pretty easy, as motherboard manufacturers usually publish a list of specific memory modules they’ve tested with their product. I didn’t want to gamble on RAM not on that list and perhaps face issues. The motherboard supported 32G of RAM, so I wanted to find the fastest 32G memory configuration on the “qualified” list. A brief glance at the list yielded only one or two results that met my requirements, meaning I spent more time finding a deal on the RAM than researching my options. In the end, the RAM was bought new.

Power Supply

The first thing I did when selecting a power supply was to calculate the total number of watts the server would need with everything in it running at maximum speed. I also wanted to leave some room for growth, such as adding faster network connections or maybe a graphics card for transcoding. Since power supplies, at least at the time, were typically most efficient when running between 50 and 70% load, I didn’t want one that was too overpowered. On the other hand, I didn’t want something that would fail under full load, or when adding something like a video card. Luckily, there were calculators on the Internet that let you enter all your components and give you a wattage recommendation.
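The arithmetic behind those calculators is simple enough to sketch. The component wattages below are placeholder estimates (only the CPU’s 69 W TDP is a published figure); the point is sizing the supply so full load lands in that efficient 50-70% band:

```python
# Back-of-the-envelope PSU sizing in the spirit of the online
# calculators. All figures are estimates except the CPU's 69 W TDP.
components = {
    "CPU (Xeon E3-1240v2 TDP)": 69,
    "Motherboard + RAM": 50,
    "4 x 3.5in HDDs (active)": 4 * 8,
    "2 x 2.5in SSDs": 2 * 3,
    "Fans + add-in cards": 30,
}

load = sum(components.values())  # estimated full-tilt draw
headroom = 100                   # future GPU or faster NICs
wanted = load + headroom

# Size the PSU so that full load sits in the 50-70% band where
# supplies of that era were most efficient.
print(f"Estimated load: {load} W (+{headroom} W growth)")
print(f"PSU target: about {wanted / 0.70:.0f}-{wanted / 0.50:.0f} W")
```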

If I didn’t already have power supplies that met my needs, then the next step would have been to read some reviews from trusted sources about power supplies that provided the wattage I wanted. Instead, I just used the same power supplies that were in my current servers.

Case

I had been using a couple of rack-mount cases for a few years at this point, and I had no reason to replace them. They were large at 4U (7 inches) of height, but I liked that, as it gave me plenty of room to assemble and maintain them, and to add more things to them in the future.

Storage

I had tried various ways to have a “pool” of storage devices that could not only act as a single storage area, but do so in a way that allowed for redundancy and future expansion. This included hardware, software, and hybrid options. At the time I had decided to use unRAID as the software running the server. This software allowed me to add (and remove) drives of any size (as long as none was larger than the parity drives) at any time. It could also, with the correct configuration, keep working even if two hard drives failed. If I needed more space in the storage pool, I could just add another hard drive to the server, assuming I had some place to plug it in. Finally, it also allowed for a redundant, usually much higher-speed cache (such as an SSD) that could receive data quickly. That meant that when I uploaded a large video file to the server, the transfer completed much faster than it would have without a cache.
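unRAID later migrates cached files to the parity-protected array on a schedule (its “mover”). Purely as an illustration of that cache-then-move idea, not unRAID’s actual implementation, and with hypothetical paths:

```python
import shutil
from pathlib import Path

# Illustration of the cache-then-move idea only; unRAID's real mover
# is more involved. Both paths are hypothetical placeholders.
CACHE = Path("/mnt/cache/share")
ARRAY = Path("/mnt/array/share")

def move_cached_files() -> None:
    """Migrate files from the fast SSD cache to the parity-protected
    hard-drive pool, preserving the directory layout."""
    for src in sorted(CACHE.rglob("*")):  # snapshot the tree first
        if src.is_file():
            dst = ARRAY / src.relative_to(CACHE)
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(src), str(dst))

move_cached_files()  # typically run on a nightly schedule
```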

Given that I wanted to make full use of the cache and pool redundancy, I needed a minimum of two SSDs and four hard drives. My chipset and motherboard supported exactly six storage connections, two of which supported higher-speed devices. However, I also wanted a little room to add more storage in the future if needed, so I added a card to the server that provides two more storage connections. The cache drives also store the various virtual machines and Docker containers, which run things like the web server and home automation.

Miscellaneous

The case has space for seven external 5.25″ devices and one external 3.5″ device. The 3.5″ bay is populated with a trayless holder for two 2.5″ SSDs. Four of the 5.25″ openings hold six trayless slots for hard drives. The remaining three 5.25″ slots are populated with a large air-intake fan. Two smaller fans are mounted on the rear of the case to exhaust air from the system. Two add-in cards provide additional storage connections and USB ports.

Specs

A few things have changed with the two nearly identical servers over the past five years, but only components that typically need replacing over time, such as fans, hard drives, and the power supply. As of 2023, the servers are as follows:

  • CPU: Intel Xeon E3-1240v2
  • Motherboard: ASRock E3C204-V4L
  • RAM: 2 × Kingston KVR13E9K2/16I kits (16G per kit, 32G total)
  • Power Supply: Seasonic SS-430GM
  • Case: iStarUSA D-407PL
  • Cache Drives: 2 × 480G SSDs
  • Parity Drives: 2 × 4T NAS hard drives
  • Pool Drives: 2 × 4T NAS hard drives (2 drive bays available for future expansion)
