During the past few weeks, I have been working on a solution to consolidate the storage I’m using for both personal and business files. Weather data is massive, so any time I do serious weather server work for future Seasonality functionality, I end up using a ton of disk space. On the home side, media files are the dominant storage hog. Since switching to a DSLR camera and shooting RAW, I’ve accumulated over 100GB of photos in the past 6 months alone. I’ve also ripped all my music into iTunes, and have a Mac with an EyeTV Hybrid recording TV shows to watch later. All of this data adds up.

Before starting this project, I had extra hard drives dispersed across multiple computers. My Mac Pro (my primary development box) held the weather data, a Linux file server held media and some backups, the older Mac held the media files, and my laptop had just the essentials (as much as I could possibly fit on its 320GB disk).

Requirements

The requirements for my new storage solution are as follows…

  1. Provide a single point of storage for all large files and backups of local computers on the network.
  2. Support more than 10 disks, preferably closer to 20. I have 7 disks ready for this box right now, another 4 coming in the next 6 months, and want to have more room after that to grow.
  3. Offer fast access to the storage for all machines on the network. I have a gigabit network here, and I would like to see close to 100MB/sec of bandwidth to the data (see the quick math after this list).
  4. Rackmounted. I recently set up a rack for my network equipment in another room of the house. A file server should go in that rack, with the added benefit that I won’t have to listen to a bunch of disks and fans spinning while I work.
  5. Keep a reasonable budget. This setup shouldn’t break the bank. Fiber Channel SANs costing $5k+ need not apply.
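
To put requirement 3 in context, here’s the back-of-the-envelope math on what a gigabit link can actually deliver (my rough numbers, not a benchmark):

```python
# Rough ceiling for requirement 3: what a gigabit link can actually deliver.
# Back-of-the-envelope numbers, not measurements.

link_mbps = 1000                 # gigabit Ethernet, in megabits per second
raw_mb_per_sec = link_mbps / 8   # 125 MB/sec theoretical maximum

# Ethernet/IP/TCP and SMB framing eat a chunk of that; 10-20% overhead is a
# reasonable assumption for large sequential transfers.
for overhead in (0.10, 0.20):
    print(f"{overhead:.0%} overhead -> ~{raw_mb_per_sec * (1 - overhead):.0f} MB/sec")

# Prints roughly 112 and 100 MB/sec, which is why "close to 100MB/sec" is
# about the best a single gigabit client can hope for.
```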

There are several ways to attack this problem. Here are just a few of the options I considered.

Option 1: Buy a Drobo

The easiest solution to this problem is to just go out and buy a Drobo. A Drobo lets you pop in some hard drives and takes care of the rest (redundancy, etc.). Unfortunately, being the easiest option comes with some drawbacks. First is cost… A 4 bay Drobo goes for around $400, and the more formidable 8 bay model starts at $1500. With 10+ drives I would need to spend $1200 at a minimum, and that is more than I want to budget. The second disadvantage is speed. I’ve heard from many people who have the 4 bay model that copying files to/from the device takes a long time (maybe only around 20MB/sec of bandwidth). If I’m paying a premium, I want fast access to the storage.

Option 2: Buy an external multi-bay eSATA enclosure

This is an appealing option, especially if you want all the storage to just be available on a single machine. It’s directly attached, so it’s fast. The enclosures can be relatively inexpensive. The main problem with this option for me was that buying an enclosure with space for 10+ disks was more costly, and having that many disks spinning at my desk would be pretty loud and distracting. Furthermore, I would like to have a storage system that is all one piece, instead of having a separate computer and storage box.

Option 3: Buy a NAS

Cheap NAS boxes are a dime-a-dozen these days. I actually already went down this route a couple of years ago, when I bought a 1TB NAS made by MicroNet. The biggest drawback was that it was too slow. Transfer rates were usually only around 10MB/sec, which got to be a drag. The better NAS boxes these days offer iSCSI targets, giving you some speed benefits as well as the advantage of your other client computers seeing the disk as DAS (direct attached storage). Again though, check out the costs on a rackmount NAS supporting 10+ disks…they get to be pretty expensive. This time I’m going to try another route.

Option 4: Build out a more advanced Linux file server

This is the option I chose to go with. With my current Linux file server, I can get around 75MB/sec to a 3 disk RAID 5 using Samba. Rackmount enclosures supporting several disks are fairly inexpensive. All the storage is in one place and, if you know something about Linux, it’s pretty easy to manage.
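
If you want to sanity check numbers like that yourself, a quick sequential write from a client machine is enough to ballpark throughput. Here’s a rough Python sketch; the mount point is hypothetical, and a real benchmark tool (or plain dd) will give more careful results:

```python
"""Quick-and-dirty sequential write test against a mounted network share.
The mount point below is hypothetical -- point it at wherever the share is
mounted on the client. This is a rough sketch, not a proper benchmark."""

import os
import time

MOUNT_POINT = "/mnt/storage"       # hypothetical client-side mount of the share
TEST_FILE = os.path.join(MOUNT_POINT, "throughput_test.bin")
CHUNK = b"\0" * (4 * 1024 * 1024)  # write in 4MB chunks
TOTAL_MB = 1024                    # write 1GB so caches matter less

start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(TOTAL_MB // 4):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())           # make sure the data actually hit the server
elapsed = time.time() - start

print(f"Wrote {TOTAL_MB}MB in {elapsed:.1f}s -> {TOTAL_MB / elapsed:.1f} MB/sec")
os.remove(TEST_FILE)
```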

The Chassis

I’m using my current Linux file server as a base, because it’s already set up to fit my needs. I needed to find a new enclosure for it though, because my current case will only hold 4 hard drives. Since I recently started to rack up my network equipment, I began looking for a rackmount enclosure (3-4U) that would hold a bunch of disks. I ended up finding the Norco RPC-4020. It’s a 4U chassis with 20 hot-swap disk trays. The disk trays all connect to a backplane, and there are two different versions of the case depending on what kind of backplane you would like to have. The first (RPC-4020) has direct SATA ports on the backplane (20 of them, one for each disk). The second (RPC-4220) has 5 Mini-SAS ports on the backplane (one Mini-SAS port for each group of 4 disks), which makes cable management a little easier. I went with the cheaper (non-SAS) model in an effort to minimize my file server cost.

The Controllers

After finding this case, the next question I had to answer was what kind of hardware I would need on the host side of these 20 SATA cables. This ended up being a very difficult question to answer, because there are so many different controllers and options available. My motherboard only supports 4 hard drives, so I need controllers for 16 more disks. Disk controllers can get to be pretty expensive, especially when you start adding lots of ports to them. A 4 port SATA PCI-Express controller will run you about $100, and jumping up to 8 ports will put you in the $200-300 range for the cheapest cards. When buying a motherboard, try to find a model with decent on-board graphics. That way, you don’t have to waste a perfectly good 16 lane PCI-Express slot on a graphics card (you’ll need it for a disk controller later).

This is also the point where you will need to decide on hardware or software RAID. If you are going for hardware RAID, there’s just no way around it: you’ll have to spend a boatload of money on a RAID card with a bunch of ports on it. I’ve been using software RAID (levels 0, 1, 5, and 10) on both FreeBSD and Linux here for almost 10 years, and it has almost always worked beautifully. Every once in a while I have run into a hiccup, but I’ve never lost any data from it. Software RAID also has the benefit of letting you stick with several smaller disk controllers and combine the disks into a RAID at the OS level. So with that in mind, I chose to stick with software RAID.
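
One nice thing about Linux software RAID (md) is that array health is exposed through /proc/mdstat, so keeping an eye on it is easy to script. A minimal sketch of that kind of check might look like this (mdadm --detail is the authoritative tool; this just flags an array with a missing member):

```python
"""Tiny health check for Linux software RAID (md) arrays.
/proc/mdstat shows each array's members plus a status string like [UUU];
an underscore in that string (e.g. [U_U]) means a member is missing or failed.
This is only a sketch -- 'mdadm --detail' gives the authoritative view."""

import re

with open("/proc/mdstat") as f:
    mdstat = f.read()

# Status strings contain only U and _ characters, e.g. [UU], [UUU_].
degraded = re.findall(r"\[(?:U|_)*_(?:U|_)*\]", mdstat)

if degraded:
    print("WARNING: degraded array(s) found:", degraded)
else:
    print("All md arrays report every member present.")
```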

Depending on the type of computer you are converting, you might be better off buying a new motherboard with more SATA ports on it. Motherboards with 8 and even 12 SATA ports are readily available, and often are much less expensive than an equivalent RAID card. With more SATA ports on the motherboard, you have more open PCI(-Express/X) slots available for disk controllers, and more capacity overall.

Digression: SAS vs. SATA

There are many benefits to using SAS controllers over SATA controllers. I won’t give a comprehensive comparison between the two interfaces, but I will mention a couple of the more important points.

1. SAS controllers work with SATA drives, but not the other way around.
So if you get a SAS controller, then you can use any combination of SAS and/or SATA drives. On the other hand, if you just have a SATA controller, you can only use SATA drives and not SAS drives.

2. SAS is much easier to manage when cabling up several disks.
Mini-SAS ports (SFF-8087) carry signals for up to 4 directly attached disks. Commonly, people will buy Mini-SAS to 4xSATA converter cables (watch which ones you buy: “forward” cables go from a Mini-SAS controller to SATA disks, while “reverse” cables go from SATA controllers/motherboards to a Mini-SAS disk backplane). These cables provide a clean way to split a single Mini-SAS port out to 4 drives. It’s even better if you get a case (like the 4220 above) that has a backplane with Mini-SAS ports already. Then for 20 disks, you just have 5 cables going from the controller to the backplane.

PCI Cards

The least expensive way to add a few disks to your system is buying a standard PCI controller. These run about $50 for a decent one that supports 4 SATA disks. The major drawback here is speed, especially when you start adding multiple PCI disk controllers to the same bus. With a maximum bus bandwidth of just 133MB/sec, a few disks will quickly saturate the bus, leaving you with a pretty substantial bottleneck. Still, it’s the cheapest way to go, so if you aren’t looking for top-notch performance, it’s a consideration.

1 Lane PCI-Express Cards

PCI-Express 1x cards are a mid-range price consideration. The models that support only 2 disks start pretty inexpensive, and prices go all the way up to the couple-hundred-dollar range. Most of the time, these cards will not support more than 4 disks because of bandwidth limitations of a 1 lane PCI-Express connection: a 1x slot has about double the bandwidth of an entire PCI bus (250MB/sec). The other advantage is that when you add more than one card, each card gets that amount of bandwidth to itself, instead of sharing a single bus the way multiple PCI cards do.

4 Lane PCI-Express Cards

By far the most popular type of card for 4 or more disks, PCI-Express 4x cards have plenty of bandwidth (1000MB/sec) and are fairly reasonably priced. They start at around $100 for a 4 disk controller, and go on up to the $500 range. What you have to watch here is how many 4x (or wider) slots your motherboard actually has; it doesn’t do you any good to have several 4x cards without any slots to put them in. Fortunately, most newer motherboards are coming with multiple 4x or even 8x PCI-Express slots, so if you are buying new hardware you shouldn’t have a problem.
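
To put the bus numbers from the last three sections side by side, here’s the per-disk arithmetic. These are theoretical peaks, and the per-drive throughput figure in the comment is my own rough assumption:

```python
# Per-disk share of the host bus, using the peak numbers quoted above.
# Theoretical ceilings only; real-world throughput will be lower.

buses = {
    "PCI (shared bus)":  133,   # MB/sec for the entire bus
    "PCI-Express 1x":    250,   # MB/sec per slot
    "PCI-Express 4x":   1000,   # MB/sec per slot
}

for disks in (4, 8):
    print(f"--- {disks} disks on one controller ---")
    for name, mb_per_sec in buses.items():
        print(f"{name:>18}: {mb_per_sec / disks:6.1f} MB/sec per disk")

# Assuming roughly 100MB/sec of sequential throughput per modern 7200rpm
# drive (my estimate), plain PCI is saturated by a couple of disks, while a
# 4x slot keeps 8 drives reasonably well fed.
```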

For my file server, I ended up with a RocketRAID 2680 card. This PCI-Express 4x card has 2 Mini-SAS connectors on it, for support of up to 8 disks (without an expander, more on that later). Newegg had an amazing sale on this card, and I was able to pick it up for half price. A nice bonus is its compatibility with Macs, so if I ever change my mind I can always move the card to my Mac Pro and use it there.

Using Expanders

Expanders provide an inexpensive way to connect a large number of disks to a modest controller. Assuming your controller provides SAS connections (it’s better if it has a Mini-SAS port on the card), you can use a SAS expander to connect many more disks than the controller card supports out of the box. When considering a SAS expander, check that your RAID controller will support it. Most SAS controllers support up to 128 disks through expanders, but not all do.

A typical expander might look something like the Chenbro CK12804 (available for $250-300). Even though this looks like a PCI card, it’s not. The expander is made in this form factor to make it easy for you to mount in any PCI or PCI-Express slot that is available on your computer. There are no leads to make a connection between this card and your motherboard. Because of this, the expander draws power from an extra Molex connector (hopefully you have a spare from your power supply). You simply plug a Mini-SAS cable from your controller to the expander, and then plug several Mini-SAS cables from the expander to your disks. With this particular expander, you can plug 24 hard drives into a controller that originally only supported 4. A very nice way to add more capacity without purchasing expensive controllers.

The drawback is that you are running 24 drives over the same bandwidth as 4: a 4-lane 3Gb/s Mini-SAS link tops out at roughly 1200MB/sec, so you are splitting that among 24 disks, or about 50MB/sec per disk. That doesn’t seem too unreasonable, but if you are trying to squeeze as much performance out of your system as possible, this might not be the best route.

Power Supplies

When working with this many disks, the power supply you use really comes into play. Each 7200 rpm hard drive uses around 10-15 watts, so with 20 drives you are looking at 200-300 watts just for the disks. Throw in the extra controller cards, and it adds up to a hefty power requirement. Make sure your power supply can keep up by getting one that can output at least 500 watts (600-750 watts would be better).
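
Here’s the rough power math. The steady-state figure is the 10-15 watts mentioned above; the spin-up number is my own assumption (3.5" 7200 rpm drives briefly pull roughly double their running draw at power-on, unless the controller staggers spin-up):

```python
# Rough power budget for the disks in a fully loaded 20-bay chassis.
# The 10-15W steady-state figure is from above; the spin-up figure is an
# assumption about worst-case simultaneous spin-up.

drives = 20
steady_low, steady_high = 10, 15     # watts per drive while running
spinup_w = 25                        # watts per drive at power-on (assumed)

print(f"Disks, steady state: {drives * steady_low}-{drives * steady_high} W")
print(f"Disks, worst-case simultaneous spin-up: ~{drives * spinup_w} W")

# Add the motherboard, CPU, controller cards, and fans on top of that, and a
# 500W supply starts to look like the floor rather than a comfortable margin
# for a fully loaded chassis.
```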

Results

So putting all this together, what did I end up with? I upgraded my motherboard by buying an older one off a friend (they don’t sell new Socket 939 motherboards anymore); this one has 8 SATA ports on it. Then I bought a RocketRAID 2680 for another 8 disks. What about the last 4 bays? For now I’m not going to worry about them. If I ever need more than 16 disks in this computer, I’ll most likely get another 4 disk (1x PCI-Express) controller for the remaining drive trays. What did it cost me? The chassis, “new” motherboard, RAID card, and some Mini-SAS to SATA cables came in just over $500. Components I’m reusing from another computer include the power supply (550 watt), processor, memory, and of course the disks I already have. Pretty reasonable considering the storage capacity (up to 32TB with current 2TB drives, without using the last 4 drive bays).
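
For the curious, the capacity figure breaks down like this. The array layouts in the sketch are hypothetical; the actual RAID configuration is a topic for the software post:

```python
# Raw capacity of the build as described, plus what parity or mirroring would
# cost under a couple of hypothetical layouts using the RAID levels mentioned
# earlier. The actual array layout is not decided here.

bays_used = 16          # 8 motherboard SATA ports + 8 on the RocketRAID 2680
drive_tb = 2            # current 2TB drives

raw = bays_used * drive_tb
print(f"Raw capacity: {raw} TB")
print(f"Usable as one big RAID 5: {(bays_used - 1) * drive_tb} TB")
print(f"Usable as RAID 10 pairs:  {bays_used * drive_tb // 2} TB")
```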

Next will come the software setup for this server, which I’ll save for another blog post.