Life, Technology, and Meteorology


Distributing load across multiple volumes

When it was time to implement a new online service to store observations for tens of thousands of weather stations and make that data available to Seasonality users, I had a lot to think about with respect to the hardware configuration of the weather servers. The observations service requires a lot of disk I/O (not to mention storage space), but it’s pretty light on processor and memory requirements. I had spare cycles on the current weather servers, so I didn’t see the need to buy all new equipment. However, I wanted to be careful because I didn’t want the increased disk load to slow down the other services running on the servers.

Let’s back up a bit and talk about what kind of setup I currently have on the weather servers for Seasonality. I have a couple of servers in geographically diverse locations, each running VMware ESX with multiple virtual machines. Each virtual machine (VM) handles different types of load. For instance, one VM handles dynamic data like the weather forecasts, while a different VM serves out static data like the websites and map tiles. These VMs are duplicated on each server, so if anything goes down there is always a backup.

One of the servers is a Mac Mini. It had an SSD and a hard drive splitting the load. With the new observations service in the pipeline, I replaced the hard drive with a second SSD to prepare for the upgrade. With this particular server being marked as a backup most of the time, I didn’t have any load issues to worry about.

The other server is a more purpose-built Dell rack mount, with enterprise hardware and SAS disks, and this is the box that I lean on more for performance. Before the observations server I had two RAID mirrors set up on this server. One RAID was on a couple of 15K RPM disks and handled all the dynamic VMs that needed the extra speed, like the forecast server and the radar/satellite tile generator. The other RAID was on a couple of more typical 7200 RPM disks and hosted VMs for the base map tiles, email, development, etc. There were two more disk bays that I could put to use, but I had to decide the best way to use them.

One option was to fill the extra two disk bays with 7200 RPM disks, and expand the slower RAID to be a bit more spacious, and probably increase the speed a reasonable amount as well. The other option was to add two disks that didn’t match any of the other RAIDs, effectively adding a 3rd mirrored RAID to the mix.

I decided on the latter option, because I really wanted to make sure any bottlenecks would be isolated to the observations server. For the price/performance, I settled on 10K RPM disks to get some of the speed of the faster spindles, while not breaking the bank like 15K disks or SSDs would. The observations service would run completely on the new RAID, so it wouldn’t interfere with any of the current services running on the other volumes. So far it has worked beautifully, without any hiccups.

My point here is that it’s not always the best idea to create a single big volume and throw all your load at it. Sometimes that setup works well because of its simplicity and the extra speed you might get out of it. However, with most server equipment having enough memory and CPU cycles to create several virtual machines, usually the first limitation you will run into is a disk bottleneck. When splitting the load between multiple RAID volumes, you not only make it easier to isolate problem services that might be using more than their fair share, but you also limit the extent of any problems that do arise while still retaining the benefit of shared hardware.

Fixing BufferBloat

For years I’ve had issues with my internet connection crapping out whenever I’m uploading data. The connection is fine when browsing the web and downloading data at its rated speed of 60 mbps. However, whenever I try to upload a large file, it saturates my upload link and slows every other network connection to a crawl. Websites won’t load, DNS won’t resolve, and ping times are measured in seconds.

My theory for a long time was that the upload bandwidth was becoming so saturated that TCP ACKs for incoming data wouldn’t get sent out in a reasonable amount of time. So for quite a while I was looking for a way to prioritize outgoing TCP ACK packets, but I never figured out how to do it.

A few nights ago while looking for a different solution, I stumbled across the idea of BufferBloat causing issues when saturating an internet link. Apparently, modern networking equipment has a lot more packet buffer memory than older equipment. This seems like a good thing: with memory prices so cheap, it makes sense to include a large buffer to help keep the network link saturated. The increased buffer could be in your router, your cable modem, or any number of components at your ISP. Unfortunately, it can cause problems when your link is slow enough that the buffer fills quickly.

When a TCP connection is created, the sender ramps up its transmission rate based on how quickly TCP ACK packets come back. The faster the ACKs arrive, the faster the link appears to be, so the sender keeps pushing data faster until the link is saturated.

The problem is that networking equipment between your computer and your destination may be buffering a lot of packets. In my case, I have a 5 mbps upload link, and either my modem or my router buffers enough data while I’m uploading a large file that TCP ACK packets take several seconds to arrive back. During that time, the packets are just sitting in the buffer waiting to be sent. Once the bandwidth to send them is available, they transmit relatively quickly, but from the standpoint of my computer the response time is very slow. This kills the connection.
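To put a number on the delay, the queuing latency is just the amount of buffered data divided by the link rate. A quick sketch (the 256 KB buffer size here is a made-up example; real modem and router buffers vary):

```python
def queue_delay_s(buffer_bytes: float, link_mbps: float) -> float:
    """Seconds needed to drain a full buffer over the link."""
    link_bytes_per_s = link_mbps * 1_000_000 / 8
    return buffer_bytes / link_bytes_per_s

# A hypothetical 256 KB device buffer on a 5 mbps upload link:
print(f"{queue_delay_s(256 * 1024, 5):.2f} s")  # about 0.42 s of added latency
```

At 60 mbps the same buffer would only add about 35 ms, which is why the problem shows up on the slow upload side rather than on downloads.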

The fix is to limit the amount of outgoing bandwidth on your router using QoS. You want to limit your bandwidth to about 0.5 – 1 mbps less than your connection can handle. On the ZyXel ZyWALL 110, this is done through the BWM (bandwidth management) screen in the configuration. First, enable BWM with the checkbox at the top of the page. Then add a new rule:

Enable: checked
Description: Outgoing Bandwidth Limit
BWM Type: Shared

User: any
Schedule: none
Incoming Interface: lan1
Outgoing Interface: wan1
Source: any
Destination: any
DSCP Code: any
Service Type: Service Object
Service Object: any

DSCP Marking
Inbound Marking: preserve
Outbound Marking: preserve

Bandwidth Shaping
Inbound: 0 kbps (disabled), Priority 4, Maximum 0 kbps (disabled)
Outbound: 4000 kbps, Priority 4, Maximum 4000 kbps

802.1P Marking
Priority Code: 0
Interface: none

Log: no

The key above is in the Bandwidth Shaping section. Set your outbound guaranteed bandwidth and bandwidth limit to 0.5 – 1 mbps below your maximum upload speed. Here I set mine to 4000 kbps, which is a megabit less than my internet upload speed of 5 mbps.

Once I did this, my connection improved dramatically. I take a slight hit in upload rate for the single upload connection, but overall my internet connection is a lot more responsive for other connections and devices while uploading. If you think you might be experiencing this same issue, try running a speed test over at DSL Reports on your internet connection. Their speed test will check and give you a report on BufferBloat. Before the fix, my upload BufferBloat was around 600 ms. After the fix the BufferBloat is down to only 20 ms.
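Working backwards from those measurements, you can estimate how much data was sitting in the queue (a rough figure that ignores protocol overhead):

```python
def implied_queue_bytes(bloat_ms: float, link_mbps: float) -> float:
    """Standing queue size implied by added latency on a saturated link."""
    return (bloat_ms / 1000) * link_mbps * 1_000_000 / 8

before = implied_queue_bytes(600, 5)  # before the QoS rule
after = implied_queue_bytes(20, 5)    # after the QoS rule
print(f"{before / 1024:.0f} KB before, {after / 1024:.1f} KB after")
```

That’s roughly 366 KB of packets queued up in front of every DNS lookup and ping before the fix, versus about 12 KB after.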

On the New Mac Pro

Apple talked more about the new Mac Pro at its special event today, giving more details on when it will start shipping (December) and how much it will cost ($2999 for the base model). They also covered some additional hardware details that weren’t mentioned previously, and I thought I would offer my 2 cents on the package.


There have been a lot of complaints about the lack of expansion in the new Mac Pro, particularly when it comes to storage. With the current Mac Pro able to host up to 4 hard drives and 2 DVD drives, the single PCIe SSD slot in the new Mac Pro can be considered positively anemic. This has been the biggest issue in my eyes. Right now in my Mac Pro, I have an SSD for the OS and applications, a 3TB disk with my Home directory on it, and a 3TB disk for Time Machine. That kind of storage just won’t fit in a new Mac Pro.

I believe Apple’s thought here is that big storage doesn’t necessarily belong inside your Mac anymore. Your internal drive should be able to host the OS, applications, and recently used documents, and that’s about it. Any archival storage should be external, either on an external hard drive, on a file server, or in the cloud. Once you start thinking in this mindset, the lack of hard drive bays in the new Mac Pro starts to make sense.

Personally, if I decide to buy one, I’ll probably start migrating my media to a file server I host here in a rack and see just how much space I need for other documents. I already moved my iTunes library a couple months back (300GB), and if I move my Aperture photo libraries over, that will reduce my local data footprint by another 700-800GB (depending on how many current photo projects I keep locally). That’s an easy terabyte of data that doesn’t need to be on my Mac, as long as it’s available over a quick network connection.

VMware virtual machines are a little tricky, because they can use a lot of small random accesses to the disk, and that can be really slow when done over a network connection with a relatively high latency. The virtual disks can grow to be quite large though (I have a CentOS virtual machine to run weather models that uses almost 200GB). I’ll have to do some testing to see how viable it would be to move these to the file server.

All this assumes that you want to go the network storage route. To me, this is an attractive option because a gigabit network is usually fast enough, and having all your noisy whirring hard drives in another room sounds… well… peaceful. If you really need a lot of fast local storage though, you’ll have to go the route of a Thunderbolt or USB 3 drive array. If you have big storage requirements right now, you most likely have one of these arrays already.

CPU/GPU Configurations

The new Mac Pro comes with a single socket Xeon CPU and dual AMD FirePro GPUs. This is the reverse of the old Mac Pro, which had 2 CPU sockets and a single graphics card (in its standard configuration). The new Mac Pro is clearly geared more toward video and scientific professionals who can use the enhanced graphics power.

With 12 cores in a single Xeon, I don’t think the single socket CPU is a big issue. My current Mac Pro has 8 cores across 2 sockets, and other than when I’m compiling or doing video conversion, I have never come close to maxing all the cores out. Typical apps just aren’t there yet. You’re much better off having 4-6 faster cores than 8-12 slower cores. Fortunately, Apple gives you that option in the new Mac Pro. A lot of people have complained about paying for the extra GPU though. FirePro GPUs aren’t cheap, and a lot of people are wondering why there isn’t an option to just have a single GPU to save on cost.

I think the reason for this is the professional nature of the Mac Pro. The new design isn’t really user expandable when it comes to the graphics processors, so Apple decided to include as much GPU power as they thought their pro customers would reasonably want. The new Mac Pro supports up to three 4K displays, or up to six Thunderbolt displays. A lot of professionals use dual displays, and it’s increasingly common to have three or more. With dual GPUs this isn’t a problem in the new Mac Pro, whereas with a single GPU the display limit would be comparable to the iMac. Personally, I have 2 graphics cards in my Mac Pro, and have used up to 3 displays. Currently I only use 2 displays, so I could go either way on this issue. I do like the idea of having each display on its own GPU though, as that will just help everything feel snappier. This is especially true once 4K displays become standard on the desktop. That’s a lot of pixels to push, and the new Mac Pro is ready for it.

External Expansion

I’ve seen people comment on the lack of Firewire in the new Mac Pro. This, in my opinion, is a non-issue. Even Firewire 800 is starting to feel slow when compared to modern USB 3 or Thunderbolt storage. If you have a bunch of Firewire disks, then just buy a $30 dongle to plug into one of the Thunderbolt ports. Otherwise you should be upgrading to Thunderbolt or USB 3 drives. USB 3 enclosures are inexpensive and widely available.

Outside that, the ports are very similar to the old Mac Pro. One port I would have liked to see in the new Mac Pro was 10G ethernet. The cost per port of 10G is coming down rapidly, and with moving storage out onto the network, it would have been nice to have the extra bandwidth 10G ethernet offers. Apple introduced gigabit ethernet on Macs well before it was a common feature on desktop computers as a whole. Perhaps there will be a Thunderbolt solution to this feature gap sometime down the road.

Power Consumption and Noise

This alone is a good reason to upgrade from a current Mac Pro. The new Mac Pro will only use around 45W of power at idle, which isn’t much more than a Mac Mini and is about half of the idle power consumption of the latest iMacs (granted, the LCD in the iMac uses a lot of that). My 2009 Mac Pro uses about 200W of power at idle. Assuming you keep your Mac Pro on all the time, and are billed a conservative $0.08 per kilowatt hour, you can save about $100/year just by upgrading. That takes some of the sting out of the initial upgrade cost for sure.
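The math behind that estimate, assuming the machine idles around the clock:

```python
def annual_cost_usd(watts: float, usd_per_kwh: float = 0.08) -> float:
    """Electricity cost of running a device 24/7 for a year."""
    kwh_per_year = watts * 24 * 365 / 1000
    return kwh_per_year * usd_per_kwh

savings = annual_cost_usd(200) - annual_cost_usd(45)
print(f"${savings:.2f}/year")  # about $108 saved per year
```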

Using less energy means needing less cooling. The new Mac Pro only has a single fan in it, and it’s reportedly very quiet. Typically the unit only makes about 12dB of noise, compared to around 25dB in the current Mac Pro. With sound power doubling for every 3dB increase, that 13dB drop works out to roughly a twentieth of the sound energy. Surely the lack of a spinning HD helps here as well.
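Since the decibel scale is logarithmic, the difference converts to a power ratio of 10^(ΔdB/10). A quick check on those numbers:

```python
def sound_power_ratio(db_difference: float) -> float:
    """How many times more sound power the louder source emits."""
    return 10 ** (db_difference / 10)

print(f"{sound_power_ratio(25 - 12):.0f}x")  # the old Mac Pro emits ~20x the sound power
```

Perceived loudness is a separate question; a common rule of thumb is that a 10 dB increase sounds about twice as loud.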


Overall the new Mac Pro is a slick new package, but you already knew that. It isn’t for everybody, but it fits the needs of the professional customer pretty well moving forward. Personally, I haven’t decided if I will buy one yet. My Mac Pro is almost 5 years old at this point, and while it still does a good job as a development machine, I’m starting to feel its age. However, I haven’t decided whether I will replace it with a new Mac Pro, the latest iMac, or even a Retina MacBook Pro in some kind of docked configuration. There are benefits and drawbacks to each, so I’m going to wait until I can get my hands on each machine and take them for a spin.

Moving back a generation

This week, I’m doing something rare and actually stepping back a generation of hardware. I usually try to keep the latest and greatest around here, within reason, but with this I just can’t help it.

Back in 2008 I purchased a Cisco ASA 5505 firewall/router. It has worked perfectly since then, and I probably only use 10% of its amazing feature set. I have it configured to forward a bunch of ports (using NAT/PAT), provide VPN service for my devices while I’m out of the office, and do basic packet inspection to avoid DoS attacks and other issues. The router has never once crashed on me and has stayed online for hundreds of days at a time without any issue.

So why am I replacing it? Well, it turns out that Cisco’s licensing absolutely cripples the 5505. I have a 10 user license, which I thought would be plenty when I bought it. Of course, this was before all the extra mobile devices, game devices, webcams, and printers were added to the network. I quickly passed this 10 device limit and am well on my way to three times that. Everything has WiFi built-in these days, and 10 devices just doesn’t cut it anymore.

I looked into what it would cost to upgrade the ASA to a 50 user license and an unlimited license. The upgrade to a 50 user license is around $250, and the unlimited license is a $350 upgrade. That’s more than I spent on the router hardware itself.

For the past couple years, I’ve gotten around the limitation by segmenting the network. I put my main systems (development Mac, the file server, etc.) on the primary network connected to the ASA, and connected everything else to a second subnet that uses an Airport Extreme as a gateway. So the ASA only sees a few devices on the primary network, and everything else hides behind the Airport. This works pretty well, but the Airport Extreme bottlenecks communication between the two subnets, and devices on the primary network can’t connect to devices on the secondary network.

I’m tired of it. So this week when I saw someone on Craigslist was selling a PIX 515e firewall, I jumped at the chance to have an unrestricted network. Even though the PIX is a few years older, it’s a higher-end model so it can handle 50% more bandwidth than the ASA (up to 190 mbps). If I ever wanted to segment the network again, the PIX supports up to 25 VLANs. And the previous owner added a memory upgrade, so it runs the same OS version that my newer ASA has. There really isn’t a drawback I can see.

Of course I am still keeping the Airport Extreme on the network. I definitely don’t want to give up wireless. But now the Airport can act as a bridge and allow two-way traffic between wired and wireless clients. I also brought all wired devices from the secondary network back onto the primary, where they can talk to each other directly using a 24 port gigabit smart switch. It is a much faster and cleaner setup.

Here’s a shot of the home network rack since the upgrade.

Home Office Rack

Office Network Updates

Over the past several weeks, I’ve been spending a lot of time working on server-side changes. There are two main server tasks that I’ve been focusing on. The first task is a new weather forecast server for Seasonality users. I’ll talk more about this in a later post. The second task is a general rehash of computing resources on the office network.

Last year I bought a new server to replace the 5 year old weather server I was using at the time. This server is being coloed at a local ISP’s datacenter. I ended up with a Dell R710 with a Xeon E5630 quad-core CPU and 12GB of RAM. I have 2 mirrored RAID volumes on the server. The fast storage is handled by 2 300GB 15,000 RPM drives. I also have a slower mirrored RAID using 2 500GB 7200 RPM SAS drives that’s used mostly to store archived weather data. The whole system is running VMware ESXi with 5-6 virtual machines, and has been working great so far.

Adding this new server meant that it was time to bring the old one back to the office. For its time, the old server was a good box, but I was starting to experience reliability issues with it in a production environment (which is why I replaced it to begin with). The thing is, the hardware is still pretty decent (dual core Athlon, 4GB of RAM, 4x 750GB disks), so I decided I would use it as a development server. I mounted it in the office rack and started using it almost immediately.

A development box really doesn’t need a 4 disk RAID though. I currently have a Linux file server in a chassis with 20 drive bays. I can always use more space on the file server, so it made sense to consolidate the storage there. I moved the 4 750GB disks over to the file server (set up as a RAID 5) and installed just a single disk in the development box. This brings the total redundant file server storage up past 4 TB.
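Sanity-checking the added capacity is straightforward; in RAID 5, one disk’s worth of space goes to parity:

```python
def raid5_usable_gb(disks: int, disk_gb: float) -> float:
    """Usable capacity of a RAID 5 array: n - 1 disks of data, 1 disk of parity."""
    assert disks >= 3, "RAID 5 needs at least 3 disks"
    return (disks - 1) * disk_gb

print(raid5_usable_gb(4, 750))  # the four 750GB disks yield 2250 GB usable
```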

The next change was with the network infrastructure itself. I had 2 Netgear 8 port gigabit switches shuffling traffic around the local network, but one of them died a few days ago and had to be replaced. I considered just buying another 8 port switch, but with a constant struggle to find open ports and the desire to tidy my network a bit, I decided to replace both switches with a single 24 port Netgear Smart Switch. The new switch, which is still on its way, will let me set up VLANs to make network management easier. It also allows for port trunking, which I am anxious to try. Both my Mac Pro and the Linux file server have dual gigabit ethernet ports. It would be great to trunk the two ports on each box for 2 gigabits of bandwidth between those two hosts.

The last recent network change was the addition of a new wireless access point. I’ve been using a Linksys 802.11g wireless router for the last several years. In recent months, it has started to drop wireless connections randomly every couple of hours. This got to be pretty irritating on devices like laptops and the iPad where a wired network option really wasn’t available. I finally decided to break down and buy a new wireless router. There are a lot of choices in this market, but I decided to take the easy route and just get an Apple Airport Extreme. I was tempted to try an ASUS model with DD-WRT or Tomato Firmware, but in the end I decided I just didn’t have the time to mess with it. So far, I’ve been pretty happy with the Airport Extreme’s 802.11n performance over the slower 802.11g.

Looking forward to finalizing the changes above. I’ll post some photos of the rack once it’s completed.

Enabling Ping and Traceroute on the Cisco ASA 5505

Today I found some time to sit down and figure out why my ASA box was denying ping, traceroute and other ICMP traffic. Denying all ICMP traffic is the most secure option, and I think Cisco made a good choice by making this the default. However, I really wanted to be able to ping and traceroute from inside my network to the outside world, if for no other reason than to check the latency of my servers. Here’s how to do it in ASDM.

First, open an ASDM connection to your router. Go into the Configuration screens and click on Firewall to configure the firewall options. Then click on Service Policy Rules to configure the services that the firewall software will monitor. Select the global policy (first and only one in the list), and click on the Edit button. Switch to the Rule Actions (3rd) tab, and in the list check to enable ICMP. You can leave ICMP Error unchecked. Close that and Apply the changes.

Now, if you just want to be able to ping, stop here and you are done. However, traceroute will not work with this setup. For traceroute to work, you have to complete this follow-up task.

While still under the Firewall configuration switch to the Access Rules item. Add an access rule to permit ICMP traffic. Click the Add button, make sure the interface is set to outside, action is Permit, and Source/Destination is any. Under Service, click the … button and select the icmp line and click OK. Click OK again in the Add Access Rule dialog and Apply the results to finish the process.

Building a SoHo File Server: The Hardware

During the past few weeks, I have been working on a solution to help consolidate the storage I’m using for both personal and business files. Weather data is pretty massive, so any time I do some serious weather server work for future Seasonality functionality, I’m almost always using a ton of disk space to do it. On the home side, media files are the dominant storage sucker. With switching to a DSLR camera and shooting RAW, I’ve accumulated over 100GB of photos in the past 6 months alone. I’ve also ripped all my music into iTunes, and have a Mac with an EyeTV Hybrid recording TV shows to watch later. All of this data adds up.

Before starting this project, I had extra hard drives dispersed across multiple computers. My Mac Pro (used as a primary development box) had the weather data, a Linux file server held media and some backups, the older Mac had the media files, and my laptop had just the essentials (but as much as I could possibly fit on its 320GB disk).


The requirements for my new storage solution are as follows…

  1. Provide a single point of storage for all large files and backups of local computers on the network.
  2. Support more than 10 disks, preferably closer to 20. I have 7 disks ready for this box right now, another 4 coming in the next 6 months, and want to have more room after that to grow.
  3. Offer fast access to the storage to all machines on the network. I have a gigabit network here, and I would like to see close to 100MB/sec bandwidth to the data.
  4. Rackmounted. I recently set up a rack for my network equipment in another room of the house. A file server should go in that rack, which has the added benefit of keeping me from having to listen to a bunch of disks and fans spinning while I work.
  5. Keep a reasonable budget. This setup shouldn’t break the bank. Fiber Channel SANs costing $5k+ need not apply.
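On requirement 3, 100MB/sec is close to the practical ceiling of gigabit ethernet. A rough back-of-the-envelope (the 6% framing overhead is an assumption; the real number depends on MTU and protocol):

```python
def gigabit_payload_mb_per_s(overhead_fraction: float = 0.06) -> float:
    """Approximate usable payload rate on gigabit ethernet, in MB/sec."""
    raw_mb_per_s = 1_000_000_000 / 8 / 1_000_000  # 125 MB/sec line rate
    return raw_mb_per_s * (1 - overhead_fraction)

print(f"{gigabit_payload_mb_per_s():.1f} MB/sec")  # ~117.5 MB/sec best case
```

So 100MB/sec means running the link at well over 80% utilization, and SMB/Samba overhead usually eats into that further.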

There are several ways to attack this problem. Here are just a few of the options that could potentially solve this storage problem.

Option 1: Buy a Drobo

The easiest solution to this problem is to just go out and buy a Drobo. A Drobo lets you pop in some hard drives and takes care of the rest (redundancy, etc.). Unfortunately, being the easiest option comes with some drawbacks. First is cost… A 4 bay Drobo goes for around $400, and the more formidable 8 bay model starts at $1500. With 10+ drives I would need to spend $1200 at a minimum, and that is more than I want to budget. The second disadvantage is speed. I’ve heard from many people who have the 4 bay model that copying files to/from the device takes a long time (maybe only around 20MB/sec of bandwidth). If I’m paying a premium, I want fast access to the storage.

Option 2: Buy an external multi-bay eSATA enclosure

This is an appealing option, especially if you want all the storage to just be available on a single machine. It’s directly attached, so it’s fast. The enclosures can be relatively inexpensive. The main problem with this option for me was that buying an enclosure with space for 10+ disks was more costly, and having that many disks spinning at my desk would be pretty loud and distracting. Furthermore, I would like to have a storage system that is all one piece, instead of having a separate computer and storage box.

Option 3: Buy a NAS

Cheap NAS boxes are a dime-a-dozen these days. I actually already went down this route a couple of years ago. I bought a 1TB NAS made by MicroNet. The biggest drawback was that it was too slow. Transfer rates were usually only around 10MB/sec, which got to be a drag. The better NAS boxes these days offer iSCSI targets, giving you some speed benefits as well as the advantage of your other client computers seeing the disk as DAS (direct attached storage). Again though, check out some of the costs on a rackmount NAS supporting 10+ disks… they can get to be pretty expensive. This time I’m going to try another route.

Option 4: Build out a more advanced Linux file server.

This is the option I chose to go with. With my current Linux file server, I can get around 75MB/sec to a 3 disk RAID 5 using Samba. Rackmount enclosures supporting several disks are fairly inexpensive. All the storage is in one place and, if you know something about Linux, it’s pretty easy to manage.

The Chassis

I’m using my current Linux file server as a base, because it’s already set up to fit my needs. I needed to find a new enclosure for the Linux server though, because my current case will only hold 4 hard drives. I recently started to rack up my network equipment, so I began looking for a rackmount enclosure (3-4U) that would hold a bunch of disks. I ended up finding the Norco RPC-4020. It’s a 4U chassis with 20 hot-swap disk trays. The disk trays all connect to a backplane, and there are two different versions of the case depending on which backplane you would like. The first (RPC-4020) has direct SATA ports on the backplane (20 of them, one for each disk). The second (RPC-4220) has 5 Mini-SAS ports on the backplane (one Mini-SAS port for each group of 4 disks), which makes cable management a little easier. I went with the cheaper (non-SAS) model in an effort to minimize my file server cost.

The Controllers

After finding this case, the next question I had to answer was what kind of hardware I would need on the host side of these 20 SATA cables. This ended up being a very difficult question to answer, because there are so many different controllers and options available. My motherboard only supports 4 hard drives, so I need controllers for 16 more disks. Disk controllers can get to be pretty expensive, especially when you start adding lots of ports to them. A 4 port SATA PCI-Express controller will run you about $100, and jumping up to 8 ports will put you in the $200-300 range for the cheapest cards. When buying a motherboard, try to find a model with decent on-board graphics. That way, you don’t have to waste a perfectly good 16 lane PCI-Express slot on a graphics card (you’ll need it for a disk controller later).
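The port math for this build, as a sketch (the helper below is just illustrative arithmetic):

```python
import math

def controllers_needed(total_bays: int, onboard_ports: int, ports_per_card: int) -> int:
    """Add-in cards required to wire up every drive bay."""
    extra_ports = max(0, total_bays - onboard_ports)
    return math.ceil(extra_ports / ports_per_card)

# 20 bays, 4 motherboard SATA ports, 8-port controller cards:
print(controllers_needed(20, 4, 8))  # 2 cards cover the remaining 16 bays
```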

This is also the point where you will need to decide on hardware or software RAID. If you are going for hardware RAID, there’s just no way around it: you’ll have to spend a boatload of money on a RAID card with a bunch of ports. I’ve been using software RAID (levels 0, 1, 5, and 10) on both FreeBSD and Linux for almost 10 years, and it has almost always worked beautifully. Every once in a while I have run into a few hiccups, but I’ve never lost any data. Software RAID also has the benefit of letting you stick with several smaller disk controllers and combine the disks into a RAID only once you get to the OS level. So with that in mind, I chose to stick with software RAID.

Depending on the type of computer you are converting, you might be better off buying a new motherboard with more SATA ports on it. Motherboards with 8 and even 12 SATA ports are readily available, and often are much less expensive than an equivalent RAID card. With more SATA ports on the motherboard, you have more open PCI(-Express/X) slots available for disk controllers, and more capacity overall.

Digression: SAS vs. SATA

There are many benefits to using SAS controllers over SATA controllers. I won’t give a comprehensive comparison between the two interfaces, but I will mention a couple of more important points.

1. SAS controllers work with SATA drives, but not the other way around. So if you get a SAS controller, then you can use any combination of SAS and/or SATA drives. On the other hand, if you just have a SATA controller, you can only use SATA drives and not SAS drives.

2. SAS is much easier to manage when cabling up several disks. Mini-SAS ports (SFF-8087) carry signals for up to 4 directly attached disks. Commonly, people will buy Mini-SAS to 4xSATA converter cables (watch which ones you buy, “forward” cables go from Mini-SAS at the controller to SATA disks, “reverse” cables go from SATA controllers/motherboards to Mini-SAS disk backplanes). These cables provide a clean way to split a single Mini-SAS port out to 4 drives. Even better if you get a case (like the 4220 above) that has a backplane with Mini-SAS ports already. Then for 20 disks, you just have 5 cables going from the RAID controller to the backplane.

PCI Cards

The least expensive option for adding a few disks to your system is buying a standard PCI controller. These run about $50 for a decent one that supports 4 SATA disks. The major drawback here is the speed, especially when you start adding multiple PCI disk controllers to the same bus. With a max bus bandwidth of just 133MB/sec, a few disks will quickly saturate the bus leaving you with a pretty substantial bottleneck. Still, it’s the cheapest way to go, so if you aren’t looking for top-notch performance, it’s a consideration.

1 Lane PCI-Express Cards

PCI-Express 1x cards are a mid-range option price-wise. The models that support only 2 disks start pretty inexpensive, and prices go all the way up to the couple-hundred-dollar range. Most of the time, these will not support more than 4 disks per card because of the bandwidth limitations of a 1-lane PCI-Express bus, though a 1x slot still has about double the bandwidth of an entire PCI bus (250MB/sec). The other advantage shows up when adding more than one card: each card gets that amount of bandwidth to itself, instead of sharing it the way multiple cards in PCI slots must.

4 Lane PCI-Express Cards

By far the most popular type of card for 4 or more disks, PCI-Express 4x cards have plenty of bandwidth (1000MB/sec) and are fairly reasonably priced. They start at around $100 for a 4-disk controller and go on up to the $500 range. What you have to watch here is how many 4x slots your motherboard actually has; it doesn’t do you any good to have several 4x cards without any slots to put them in. Fortunately, most newer motherboards come with multiple 4x or even 8x PCI-Express slots, so if you are buying new hardware you shouldn’t have a problem.
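The relative bandwidth of the three bus types can be sketched in a few lines (the bus figures are the approximate numbers quoted above, and the per-disk share assumes every disk is transferring at once):

```python
# Approximate per-disk bandwidth when all disks on one controller are busy.
# Note: PCI bandwidth is shared across the whole bus, while each
# PCI-Express slot gets its own dedicated lanes.
buses = {
    "PCI (whole shared bus)": (133, 4),   # (MB/sec, disks per controller)
    "PCI-Express x1":         (250, 4),
    "PCI-Express x4":         (1000, 8),
}

for name, (bandwidth, disks) in buses.items():
    print(f"{name}: {bandwidth / disks:.2f} MB/sec per disk")
```

Even fully loaded with 8 disks, a 4x card leaves each disk roughly four times the bandwidth it would get on a saturated PCI bus.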

For my file server, I ended up with a RocketRAID 2680 card. This PCI-Express 4x card has 2 Mini-SAS connectors on it, for support of up to 8 disks (without an expander, more on that later). Newegg had an amazing sale on this card, and I was able to pick it up for half price. A nice bonus is its compatibility with Macs, so if I ever change my mind I can always move the card to my Mac Pro and use it there.

Using Expanders

Expanders provide an inexpensive way to connect a large number of disks to a less expensive controller. Assuming your controller provides SAS connections (it’s better if it has a Mini-SAS port on the card), you can get a SAS expander to connect several more disks than the controller card can support out of the box. When considering a SAS expander, you should check to make sure your RAID controller will support it. Most SAS controllers support up to 128 disks using expanders, but not all do.

A typical expander might look something like the Chenbro CK12804 (available for $250-300). Even though this looks like a PCI card, it’s not. The expander is made in this form factor to make it easy to mount in any PCI or PCI-Express slot you have available; there are no leads making a connection between the card and your motherboard. Because of this, the expander draws power from an extra Molex connector (hopefully you have a spare from your power supply). You simply plug a Mini-SAS cable from your controller into the expander, and then plug several Mini-SAS cables from the expander to your disks. With this particular expander, you can plug 24 hard drives into a controller that originally only supported 4. It’s a very nice way to add more capacity without purchasing expensive controllers.

The drawback is that you are running 24 drives with the same amount of bandwidth as 4. So you are splitting 1200MB/sec among 24 disks. 50MB/sec for a single disk doesn’t seem too unreasonable, but if you are trying to squeeze as much performance out of your system as possible, this might not be the best route.
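The arithmetic behind that figure, for reference (assuming 3Gb/s SAS, where each of the 4 lanes in a Mini-SAS link carries roughly 300 MB/sec):

```python
# Per-disk bandwidth behind a SAS expander: 24 disks share the single
# 4-lane Mini-SAS uplink to the controller (4 x 300 MB/sec = 1200 MB/sec).
LANES = 4
MB_S_PER_LANE = 300   # roughly, for a 3Gb/s SAS lane
DISKS = 24

per_disk = (LANES * MB_S_PER_LANE) / DISKS
print(per_disk)  # 50.0 MB/sec per disk when all are transferring
```

In practice not all 24 disks are busy at once, so the effective per-disk number is usually better than this worst case.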

Power Supplies

When working with this many disks, the power supply you use really comes into play. Each 7200 RPM hard drive uses around 10-15 watts of power, so with 20 drives you are looking at between 200 and 300 watts just for the disks. Throw in the extra controller cards, and that adds up to a hefty power requirement. So make sure your power supply can keep up by getting one that can output at least 500 watts (600-750 watts would be better).
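A rough sizing sketch, using the worst case of the per-drive range above (the controller and base-system wattages here are assumptions for illustration, and the 30% headroom accounts for things like spin-up surge):

```python
# Ballpark power-supply sizing for a 20-disk file server.
DRIVES = 20
WATTS_PER_DRIVE = 15   # high end of the 10-15 watt range per 7200 RPM disk
CONTROLLERS_W = 30     # assumed total draw of RAID/SATA cards
SYSTEM_W = 150         # assumed CPU, memory, motherboard, and fans
HEADROOM = 1.3         # ~30% margin for spin-up surge and supply aging

load = DRIVES * WATTS_PER_DRIVE + CONTROLLERS_W + SYSTEM_W
print(load, round(load * HEADROOM))  # 480 624
```

That lands right in the 600-750 watt range suggested above.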


So putting all this together, what did I end up with? Well, I upgraded my motherboard by buying an old one off a friend (they don’t sell new Socket 939 motherboards anymore); this one has 8 SATA ports on it. Then I bought a RocketRAID 2680 for another 8 disks. What about the last 4? For now I’m not going to worry about them. If I ever need more than 16 disks in this computer, I’ll most likely get another 4-disk (1x PCI-Express) controller and use it for the remaining drive trays. What did it cost me? The chassis, “new” motherboard, RAID card, and some Mini-SAS to SATA cables came in just over $500. Components that I’m reusing from another computer include the power supply (550 watt), processor, memory, and of course the disks I already have. Pretty reasonable considering the storage capacity (up to 32TB with current 2TB drives, without even using the last 4 drive bays).
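The capacity figure works out like this:

```python
# 8 motherboard SATA ports plus 8 ports on the RocketRAID 2680,
# each filled with a 2TB drive (the last 4 drive bays left empty).
MOTHERBOARD_PORTS = 8
RAID_CARD_PORTS = 8
DRIVE_TB = 2

raw_tb = (MOTHERBOARD_PORTS + RAID_CARD_PORTS) * DRIVE_TB
print(raw_tb)  # 32 TB raw, before any RAID redundancy
```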

Next will come the software setup for this server, which I’ll save for another blog post.

Setting up a Mac/iPhone VPN to a Cisco ASA Router

I bought a Cisco ASA 5505 about 6 months ago, and love it so far. While setting up a VPN between my iPod touch and the ASA was straightforward, I was less fortunate when trying to get the same thing working from my MacBook Pro. Here’s a description of how to configure the ASA VPN so both devices work.

First, let me give a brief outline of what I am trying to do. I want both my iPod touch and my MacBook Pro to be able to connect to the Cisco ASA box over a VPN interface. Once the VPN has been established, I want all of my internet traffic to go first to the ASA and then out to the rest of the internet from there (the opposite of split-tunneling, where only traffic destined for the internal network uses the tunnel). With a default VPN setup on the ASA, this works fine from the iPhone, but from the Mac I was only able to access the internal network; the rest of my internet traffic just wouldn’t get sent. Note that this configuration will not work with Mac OS X’s L2TP VPN client, so you’ll need to install the Cisco VPN client instead.

The solution isn’t too difficult. First, set up a fairly default VPN configuration on the ASA. Use the VPN Wizard in the ASDM console with the following settings…

Page 1: Set the VPN Tunnel Type to Remote Access and the VPN Tunnel Interface to outside. Check the box to enable inbound IPsec sessions to bypass interface access lists.

Page 2: Select Cisco VPN Client for the client type.

Page 3: Select Pre-shared key for the authentication method, typing a password into the Pre-Shared Key field. Type in a Tunnel Group Name to use, which will be used again later. I’ll use VPNGroup as an example.

Page 4: Authenticate using the local user database.

Page 5: Make sure your ASDM username is in the list on the right side, so you are able to connect to the VPN with that account.

Page 6: If you haven’t already, create an IP address pool to use for VPN connections. This is an IP range (with a matching subnet mask) within your internal network.

Page 7: Type your primary and secondary DNS servers into the box. I also set the default domain name to my own domain.

Page 8: Leave everything default: Encryption is 3DES, Authentication is SHA, and DH Group is 2.

Page 9: Again, leave everything default. Encryption is 3DES and Authentication is SHA.

Page 10: Leave everything as-is, except check the box at the bottom to enable split tunneling.

Page 11: Click Finish and you are done.

Now, your iPhone should be working just fine. Just go into the VPN preferences and set up a new IPSec configuration with your server, user account/password, and group name/pre-shared secret. Unfortunately, the Mac will not be able to access the entire internet when connected to the VPN. To fix this issue, some additional configuration needs to take place over a terminal connection to the ASA box. If you haven’t already, enable SSH access to the ASA and log in. Then run the following commands (comments in parentheses):

cisco-gw> enable
Password: your password here
cisco-gw# config terminal

(Append your VPN pool network and subnet mask as the final two arguments of the access-list command.)
cisco-gw(config)# access-list outside_nat extended permit ip
cisco-gw(config)# nat (outside) 1 access-list outside_nat

cisco-gw(config)# group-policy DfltGrpPolicy attributes
(Use the IP of your first DNS server as the value below.)
cisco-gw(config-group-policy)# dns-server value
cisco-gw(config-group-policy)# nem enable
cisco-gw(config-group-policy)# exit

(Replace VPNGroup below with your tunnel group name from earlier.)
cisco-gw(config)# group-policy VPNGroup attributes
cisco-gw(config-group-policy)# split-tunnel-policy tunnelall
cisco-gw(config-group-policy)# split-tunnel-network-list none
cisco-gw(config-group-policy)# exit

cisco-gw(config)# write memory

That’s it! Just open the Cisco VPN Client on your Mac and add a new connection profile with the group and user settings you configured on the ASA.

Wireless Network

When upgrading to the ASA 5505 router, I was left in a situation where there would be two routers on my home office network: the ASA acting as the main wired router, and my old Linksys router acting as a host for wireless clients. The ASA was connected to the cable modem from my provider and handled the internal wired network. The wireless router was a host on that internal network, with its WAN interface on the wired LAN and its own separate LAN network behind it for the wireless clients. This works fine when accessing hosts on the internet, but it was less than ideal when trying to access the wired internal network from a wireless computer. Because of the firewall and NAT happening on the Linksys device, wireless devices were second-class citizens on the LAN.

There was this little radio button on the Linksys router that would switch the device from Gateway mode to Router mode. Hmm, that looked promising, so I tried it. This was nice, because NAT was no longer active…a host on the wired network could talk to a host on the wireless network. The drawback was that I would have to add a separate route on every wired host to send traffic for the wireless network through the Linksys instead of through the default ASA gateway. With the relatively small size of my network here, that’s not much of a problem, but I still felt there should be a better way.

Since I wanted to stick with a single default route pointing at the ASA, I looked into adding another VLAN to the ASA box, to see if it could route packets for the wireless network down the port that connects to the wireless router. Unfortunately, my ASA is only licensed for 3 VLANs, which are all in use (outside link, inside link, and DMZ). I could spend a few hundred bucks upgrading my ASA license to support more VLANs, but it just didn’t seem worth it.

Another option is to add a managed switch to the internal network and use that to setup VLANs. New hardware is always fun, but again this would cost a couple hundred bucks and there has to be another way…

Finally, the solution became immediately obvious…so obvious that it’s amazing I hadn’t thought of it before. Instead of connecting a wire from an internal port on the ASA to the WAN port on the Linksys, I tried connecting from the same internal port on the ASA to an internal LAN port on the Linksys, leaving the WAN port on the Linksys unused.

This setup works perfectly. I changed the internal network settings of the Linksys to match the ASA’s internal network and gave the Linksys an unused internal IP on that network. The ASA was already running a DHCP server on the network, so I disabled the Linksys DHCP server. Wireless hosts are now first-class citizens on this network…

ASA Port Forwarding

I came across the first less-than-trivial configuration situation on the ASA router this morning—port forwarding. On consumer routers, this is absolutely simple to setup, just specify what port number you want to forward and select the internal IP to forward it to. On the ASA, it’s a bit more complicated, and I decided to document it here in case anyone is Googling around for an answer. For this example, we are forwarding incoming traffic on port 8080 to a device on the internal network using the same port number.

First, you have to add the port to be forwarded to the outside interface’s access list. In ASDM, go to the Configuration panel under the Firewall section. Then click on Access Rules, select the outside interface in the table, and click the Add button. Here, use the following settings:

  • Interface: outside
  • Action: Permit
  • Source: any
  • Destination: any
  • Service: tcp/8080 (or any other port number you would like to forward)
  • Description: (optional)
  • Enable Logging: (optional)

Click OK to add the access rule. Then click Apply at the bottom to upload the configuration to the router.

Now that we are allowing traffic on that port, we need to tell the router where to send the traffic. Click on the NAT Rules section and click the Add button to add a Static NAT Rule, using the following settings:

  • Original Interface: inside
  • Original Source: (replace with internal IP)
  • Translated Interface: outside
  • Translated IP: Use Interface IP Address
  • Enable Port Address Translation (PAT)
  • PAT Protocol: TCP
  • PAT Original Port: 8080 (replace with your port, on the outside interface)
  • PAT Translated Port: 8080 (replace with your port, on the internal device)

Again, hit OK to add the NAT rule and apply the settings to the router.

That’s it, you’re done!
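For reference, the same forward can be sketched from the CLI. This uses the pre-8.3 ASA NAT syntax; the access list name and the internal host address 192.168.0.10 are hypothetical examples, so substitute your own:

```
! Permit inbound TCP 8080 on the outside interface (skip the
! access-group line if an access list is already applied there).
access-list outside_access_in extended permit tcp any interface outside eq 8080
access-group outside_access_in in interface outside

! Static PAT: forward port 8080 on the outside interface IP to the
! internal host, same port.
static (inside,outside) tcp interface 8080 192.168.0.10 8080 netmask 255.255.255.255
```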


© 2022 *Coder Blog
