*Coder Blog

Life, Technology, and Meteorology

Month: January 2014

Pegasus2 Impressions

With the lack of drive bays in the new Mac Pro, Apple is definitely leaning toward external storage for its future models.  My Mac Pro won’t arrive until next month, but in the meantime I had to figure out what kind of storage system I was going to buy.

As I mentioned in a previous post, I had considered using my local file server as a large storage pool.  After trying it out for the past couple months, I wanted something that was a bit faster and more reliable though.  I decided to look at my direct attached storage (DAS) options.  Specifically, I was looking at Thunderbolt enclosures.

My desktop machine currently holds between 3 and 4TB of active data, so single-disk options weren’t going to cut it.  I would need at least 2 disks in a striped RAID 0.  I’m not particularly comfortable with RAID 0 setups, because if any one of the drives fails, you lose data.  With good automatic Time Machine backups, though, that shouldn’t be too much of an issue.  Ideally, I wanted something with 3-4 drives and a built-in hardware RAID 5 controller.  That way, I would have a little bit of redundancy.  It wouldn’t be a replacement for good backups, but if a disk went offline, I could keep working until a replacement arrived.
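To put a rough number on that RAID 0 risk, here’s a back-of-the-envelope sketch.  The 3% annual failure rate is purely illustrative, not a measured figure for any particular drive:

```python
def stripe_loss_probability(disks: int, afr: float = 0.03) -> float:
    """RAID 0 has no redundancy: losing any one drive loses the array.

    afr is an assumed per-drive annual failure rate (3% here, for illustration).
    """
    return 1 - (1 - afr) ** disks

print(round(stripe_loss_probability(2), 4))  # ~5.9% chance per year for a 2-disk stripe
print(round(stripe_loss_probability(4), 4))  # ~11.5% for a 4-disk stripe
```

Every drive you add to a stripe multiplies your exposure, which is why I wanted parity in the mix.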

The only 3-disk enclosure I found was the CalDigit T3.  It looks like a really slick device, and I was pretty close to ordering one.  The main drawback is that it doesn’t support RAID 5.  I would have to run either a 2-disk RAID 0 with the extra drive for Time Machine, or a 3-disk RAID 0 (which is pretty risky) to get the amount of storage I need.  I decided this wasn’t going to work for me.

Once you get into 4-disk enclosures, the prices start to go up.  There are two options I considered here.  The first is the Areca ARC-5026.  Areca earned its good reputation manufacturing top-end RAID cards for the enterprise market.  The 5026 is a 4-bay RAID enclosure with Thunderbolt and USB 3 ports on the back.  The drawbacks are that it’s pretty expensive ($799 for just the enclosure) and it doesn’t exactly have a nice look to it.  It reminds me of a beige-box PC, and I wasn’t sure I wanted something like that sitting on my desk.

The other option I looked at was the Promise Pegasus2.  It’s also a 4-disk RAID system (with 6- and 8-disk options).  Promise offers a diskless version that is less expensive than the Areca.  It doesn’t support USB 3 like the Areca does, but it supports Thunderbolt 2 instead of Thunderbolt 1.  And the case is sharp.  Between the faster host interface and the cost savings, I decided to get the Pegasus.

The diskless model took about 2 weeks to arrive.  The outside of the box claimed it was the 8TB R4 model, so Promise isn’t making a separate box for the diskless version.  I suspect Apple twisted Promise’s arm a little to get them to release this model.  Apple knew there would be some backlash from Mac Pro upgraders who needed an external replacement for their previous internal drives.  Apple promoted Promise products back when the Xserve RAID was retired, and I imagine Apple asked Promise to return the favor here.  The only place you can buy the diskless R4 is the Apple Store; it isn’t sold by any other Promise retailer.

Since the enclosure doesn’t include any drives, I decided on Seagate 3TB Barracuda disks.  They are on Promise’s supported drive list, and in my experience Seagate makes the most reliable hard drives.  With RAID 5, I would have about 9TB of usable space: more than I need right now, but a good amount to grow into.  Installing the drives was straightforward: eject each tray, attach each drive with its set of 4 screws, and latch the trays back in.  Then I plugged the enclosure into my Mac with the included 3-foot black Thunderbolt cable and turned it on.
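The 9TB figure is easy to sanity-check: RAID 5 dedicates one disk’s worth of capacity to parity, so usable space is one drive short of the raw total.

```python
def raid5_usable_tb(disks: int, disk_tb: float) -> float:
    """RAID 5 usable capacity: one disk's worth of space goes to parity."""
    return (disks - 1) * disk_tb

print(raid5_usable_tb(4, 3.0))  # 9.0 -- four 3TB drives yield ~9TB usable
```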

Since this is the diskless version, the default setup mounts all four disks as if there were no RAID.  This is counter to the Pegasus models that include drives, where the default configuration is a RAID 5.  This model instead uses pass-through mode (JBOD), so you can take drives right out of your old computer and use them in the new enclosure.  I had to jump through a few hoops, but getting the RAID set up wasn’t too bad.  First I had to download the Promise Utility from their website.  Once the software is installed, you can open the utility and use the advanced configuration to set up a new RAID volume.  The default settings for creating a RAID 5 weren’t ideal.  Here’s what you should use for a general case…

Stripe Size:  128KB
Sector Size:  512 bytes
Read Cache Mode:  Read Ahead
Write Cache Mode:  Write Back

The Pegasus2 has 512MB of RAM, which is used for caching.  It’s a battery-backed cache, so using Write Back mode instead of Write Through should be okay for most cases.  Only use Write Through if you really want to be ultra-safe with your data and don’t care about the performance hit.

Once you get the RAID set up, it starts syncing the volume.  The initial sync took about 8 hours to complete.  The RAID controller limits the rebuild speed to 100MB/sec per disk.  This is a good idea in general because it leaves some bandwidth free, letting you start using the volume right away during the rebuild.  However, it makes me wonder how much time could be saved if there weren’t a limit (I found no way to disable or increase it in their software).
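That 8-hour figure lines up with the rebuild cap.  Since the disks sync in parallel, the time is governed by one 3TB drive at 100MB/sec (using decimal units, as drive capacities are marketed):

```python
disk_bytes = 3 * 10**12   # 3 TB per drive, decimal TB
rate = 100 * 10**6        # 100 MB/sec per-disk rebuild cap
hours = disk_bytes / rate / 3600
print(round(hours, 1))    # ~8.3 -- consistent with the observed ~8 hour sync
```

So the controller’s limit, not the drives themselves, is almost certainly what sets the sync time here.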

Drive noise is low to moderate.  The documentation claims there are two fans: one big one for the drives and one small one for the power supply.  Looking through the power supply vent, though, it doesn’t look like there’s actually a fan there.  Maybe it’s further inside and that is just a vent.  The bigger fan spins at around 1100-1200rpm (this is during the rebuild, but idle is no lower than 1000rpm).  It’s definitely not loud, but it’s not really quiet either.  Sitting about 2 feet away from the Pegasus, it makes slightly less noise than my old Mac Pro (I keep the tower on my desk about 3 feet away), though the Pegasus noise is a bit higher-pitched.  When the new Mac Pro gets here, the Pegasus will sit further away from me, so I’ll wait until then to fully judge the noise.

Overall I’m very happy with the system so far.  Initial benchmarks are good.  Since I don’t have the new Mac Pro yet, I’m testing on a 2011 MacBook Air over a Thunderbolt 1 connection.  Using the AJA System Test, I saw rates of around 480MB/sec reads and 550MB/sec writes.  Switching to BlackMagic, the numbers bounced around a lot more, but it came up with results around 475MB/sec reads and 530MB/sec writes.  With RAID 5 writes being notoriously slow because of the parity calculation, I’m a little surprised the Pegasus writes faster than it reads.  The RAID controller must be handling the parity calculation and caching well.  It will be interesting to see if benchmarks improve at all when connected to the new Mac Pro over Thunderbolt 2.

File Server Upgrade

Last month, the RAID card in my file server died.  I tried to replace it with a newer model, but found that not all PCI Express cards match well with all motherboards: mine was old enough that the new card simply wouldn’t work with it.  Since the server components (other than the drives) were almost 10 years old, I decided it was time to rebuild the internals.

I already had a solid base from the old file server.  The case is a Norco RPC-4020, a 4U enclosure with 20 drive bays.  The most I’ve ever used is 12 bays, but with the increasing size of modern drives, I’m whittling that down to 8.  The drives I have are fairly modern, so this build doesn’t factor in any additional drive cost.  Other than the drives, though, the rest of the server’s guts needed a good refresh.  Here’s what I put in there:

Motherboard:  Asus Z87-Pro
I went with this Asus board because it had a good balance of performance and economy (and Asus’ reliability).  The board has 8 SATA ports, which is great for a file server where you’re trying to stuff in a bunch of disks.  I also liked that the board uses heatsinks instead of fans for cooling: fewer moving parts to wear out.  Finally, it has plenty of PCIe slots in case I want to add RAID/HBA cards for more drives, or a 10GBASE-T Ethernet card down the line.

CPU:  Intel Core i5-4570S
This is one of the low-power models in the Haswell (4th generation) line, with a moderate 65 watt TDP.  I was debating between this chip and the 35 watt Core i3-4330T.  If this server only served files, I would have bought the Core i3, but I also use the box to host a moderately sized database and do some server-side development.  The Core i5 is a quad core instead of a dual core, and I decided it was worth stepping up.  You’ll notice that a GPU isn’t included in the list here; that’s because I’m just using the integrated GPU.  One less component to worry about.

Memory:  2x4GB Crucial Ballistix Sport DDR3-1600
I’ve never been into overclocking, so I just went with memory that runs at the CPU’s native 1600MHz.  Crucial is always a safe bet when it comes to memory, and this particular kit has a relatively low CL9 latency.
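For anyone curious what CL9 works out to in absolute terms: DDR3-1600 transfers data at 1600MT/sec on an 800MHz I/O clock, and CAS latency is counted in those clock cycles.

```python
io_clock_mhz = 800   # DDR3-1600: 1600 MT/s on an 800 MHz I/O clock
cas_cycles = 9       # CL9
latency_ns = cas_cycles / io_clock_mhz * 1000
print(latency_ns)    # 11.25 ns to open a column in an active row
```

That puts it in the same ballpark as faster (and pricier) DDR3 kits, which mostly trade higher clocks for proportionally higher cycle counts.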

Power Supply:  Antec EA-550 Platinum 550 watt
The power supply is a make-or-break part of a server, especially when you have a lot of disks.  I wanted something very efficient that still supplied plenty of power.  This unit is 93% efficient, meaning far more of the energy makes it to the computer components themselves instead of being wasted as heat.  The one drawback is that it’s a 4-rail unit with all the Molex/SATA power connectors on a single rail.  That’s not quite ideal for servers with a lot of disks (you need enough headroom to cover the power spike as the disks spin up), but it handles 8 drives just fine with some room to grow.
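To see what that 93% buys you, compare the wall draw against an older supply.  The 80% figure below is an illustrative number for a typical aging unit, not a measurement of my old server’s PSU:

```python
def wall_draw(dc_load_w: float, efficiency: float) -> float:
    """Power pulled from the outlet to deliver a given DC load."""
    return dc_load_w / efficiency

# Delivering a 100 W DC load:
print(round(wall_draw(100, 0.93), 1))  # ~107.5 W at the wall for this unit
print(round(wall_draw(100, 0.80), 1))  # 125.0 W for an assumed 80%-efficient unit
```

The difference is dissipated as heat inside the supply, which is also why efficient units can run their fans slower.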

Boot Drive:  USB 3 internal motherboard header and flash drive
I really wanted the OS to stay off the data drives this time around.  The best way I found to do that is to use the USB 3 header built into most modern motherboards.  Typically this header is for cases with USB 3 ports on the front, but my case only has a single USB 2 port up front, so the header was going unused.  I found a small Lian Li adapter that converts the 20-pin header on the motherboard into 2 internal USB 3 ports.  Then I picked up a 128GB PNY Turbo USB 3 flash drive on sale.  The motherboard has no problem booting off the USB drive, and while latency is higher, the raw throughput of this particular flash drive is pretty good.

The Lian Li adapter is great because I don’t have to worry about the flash drive coming unplugged from the back of the case.  It’s inside the server, where it won’t be messed with.

Once I had all the components installed, I had to cable everything up.  You use about a million tie-wraps when cleaning up the cabling, but it looks nice in the end.  The cables are nowhere near as elegant as the cabling inside a Mac, but for a PC I think it turned out pretty well.  Here’s a shot of the inside of the server:

The power savings over the old server components were pretty dramatic.  The old system had a standard 550 watt power supply and an Athlon X2 CPU, and its load would typically hover between 180 and 240 watts.  The new server idles at 80 watts and only occasionally breaks 100 watts when it’s being stressed a bit.  It’s great to get all this extra performance while using less than half the power.
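Taking rough midpoints of those two ranges, the savings add up quickly for a box that runs 24/7:

```python
old_w, new_w = 210, 90   # rough midpoints of the observed load ranges
kwh_per_year = (old_w - new_w) * 24 * 365 / 1000
print(round(kwh_per_year))  # ~1051 kWh saved per year
```

At typical US residential electricity rates, that’s on the order of a hundred dollars a year, so the new components pay for a meaningful chunk of themselves.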

Overall, it turned out to be a great build.  Component cost was under $600 (not including the case or drives), while still using quality parts.  Looking forward to this one lasting another 10 years.


GSGradientEditor

A fairly significant feature in Seasonality Pro is the ability to edit the gradients used to display weather data on a map.  When looking around for sample open source gradient editors online, I didn’t come across anything I could really use, so I decided to write my own and offer it under an MIT license.  I posted the source code (link below) on GitHub.  Here’s what it looks like:

I’ve included a lot of documentation as well as a sample Xcode project to show how to use it over on the GitHub page:

GSGradientEditor on GitHub

I looked at quite a few different graphics apps when working on the UI.  I wanted to see not only how other implementations looked, but how they worked.  With iOS 7 being more gesture-centric, I wanted to make sure that interaction with GSGradientEditor felt intuitive.  I found the Inkpad app the most helpful during this process.  In the end, I’m happy with how GSGradientEditor turned out.


© 2017 *Coder Blog
