Setting up a Small Desktop RAID System

September 25th, 2014

With the exodus of mass internal storage hitting even the top of the line in the 2013 Mac Pro, a lot more people are going to start looking for external solutions for their storage needs. Many will just buy an external hard drive or two, but others like myself will start to consider larger external storage arrays. One of the best solutions for people who need 5-15TB of storage is a 4 disk RAID 5 system. As I mentioned in a previous post, I went with a Pegasus2 and set it up as a RAID 5. That brings up a lot of questions about individual RAID settings, though, so I thought I would put together a primer on the typical RAID settings you should care about when purchasing a Pegasus or comparable desktop RAID system.

Stripe Size
Stripe size is probably the setting that has the biggest impact on the performance of your RAID. A lot of people will run a benchmark or two with different stripe sizes, incorrectly conclude that bigger stripe sizes are always faster, and use them. In reality, the best performing stripe size depends heavily on your workload.

A quick diversion into RAID theory is required before we can talk about stripe sizing. With RAID 5, each drive is split up into blocks of a certain size called stripes. In a 4 disk RAID 5, 3 disks will have real data in their stripes, and the 4th disk will have parity data in its stripe (in reality, the parity stripes in a RAID 5 alternate between drives, so not all the parity is on the same disk). The parity stripe allows a disk to fail while still keeping your array online. You give up 25% of the space to gain a certain amount of redundancy.
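To make the parity idea concrete, here’s a minimal sketch in Swift (with tiny made-up 4-byte stripes; real stripes are 32KB to 1MB and the parity rotates between drives) showing how a controller can compute a parity stripe by XOR-ing the data stripes together, and how the same XOR rebuilds a lost stripe:

  // RAID 5 parity math in miniature: parity = XOR of all the data stripes.
  func parity(of stripes: [[UInt8]]) -> [UInt8] {
      guard let first = stripes.first else { return [] }
      var result = [UInt8](repeating: 0, count: first.count)
      for stripe in stripes {
          for (i, byte) in stripe.enumerated() {
              result[i] ^= byte
          }
      }
      return result
  }

  let disk0: [UInt8] = [0x11, 0x22, 0x33, 0x44]   // data stripe
  let disk1: [UInt8] = [0xAA, 0xBB, 0xCC, 0xDD]   // data stripe
  let disk2: [UInt8] = [0x01, 0x02, 0x03, 0x04]   // data stripe
  let diskP = parity(of: [disk0, disk1, disk2])   // parity stripe

  // If disk1 fails, XOR-ing the survivors (including parity) rebuilds its stripe.
  let rebuilt = parity(of: [disk0, disk2, diskP])
  assert(rebuilt == disk1)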

When you read data from the volume, the RAID will determine which disk your data is on, read the stripe and return the requested data. This is pretty straightforward, and the impact of stripe size during reading is minimal.

However, when writing data to the disk, stripe size can make a big performance difference. Here’s what happens every time you change a file on disk:

  1. Your Mac sends the file to the RAID controller to write the change to the volume.
  2. The RAID controller reads the stripe of data off the disk where the data will reside.
  3. The RAID controller updates the contents of the stripe and writes it back to the disk.
  4. The RAID controller then reads the corresponding stripes from the other data disks in the volume.
  5. The RAID controller recalculates the parity stripe.
  6. The new parity stripe is written to the disk holding parity for that stripe set.

This results in 3 stripe reads and 2 stripe writes every time you write even the smallest file to the disk. Most RAIDs will default to a 128KB stripe size, and will typically offer a range of stripe sizes anywhere from 32KB to 1MB. In the example above, assuming a 128KB stripe size, even a change to a 2KB file results in 640KB of data being read from or written to the disks. If a 1MB stripe size is used instead of 128KB, then 5MB of data would be accessed on the disks just to change that same 2KB file. So as you can see, the stripe size greatly determines the amount of disk I/O required to perform even simple operations.
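If you want to play with these numbers yourself, here’s a rough back-of-the-envelope sketch in Swift. It assumes the 3-read/2-write sequence described above for a 4 disk RAID 5; real controllers have further tricks, so treat it as an illustration rather than an exact model:

  // Disk I/O touched when changing one small file, per the steps above:
  // 3 stripe reads plus 2 stripe writes in a 4 disk RAID 5.
  func ioPerSmallWrite(stripeKB: Int) -> Int {
      let reads = 3, writes = 2
      return (reads + writes) * stripeKB
  }

  for stripeKB in [32, 64, 128, 256, 512, 1024] {
      print("\(stripeKB)KB stripes: \(ioPerSmallWrite(stripeKB: stripeKB))KB of disk I/O")
  }
  // 128KB stripes: 640KB of disk I/O to change a 2KB file
  // 1024KB stripes: 5120KB (5MB) of disk I/O for the same change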

So why not just choose the smallest stripe size? Well, hard drives are really good at reading contiguous blocks of data quickly. If you are reading/writing large files, grouping those accesses into larger stripe sizes will greatly increase the transfer rate.

In general, if you use mostly large files (video, uncompressed audio, large images), then you want a big stripe size (512KB – 1MB). If you have mostly very small files, then you want a small stripe size (32KB – 64KB). If you have a pretty good mix between the two, then 128KB – 256KB is your best bet.

Read Ahead Cache
A lot of RAID systems will give you the option of enabling a read ahead cache. Enabling this can dramatically increase your read speeds, but only in certain situations. In other situations, it can increase the load on your hard drives without any benefit.

Let’s talk about what happens in the read cache when read ahead is disabled. The read cache will only store data that you recently requested from the RAID volume. If you request the same data again, then the cache will already have that data ready for you on demand without requiring any disk I/O. Handy.

Now how is it different when read ahead caching is enabled? Well, with read ahead caching, the RAID controller tries to guess what data you’ll want next. It does this by reading more data off the disks than you requested. So for example, if your Mac reads the first part of a bigger file, the RAID controller will read the subsequent bytes of that file into cache, on the assumption that you’ll want to read the next part of the file soon.

This comes in handy in some situations. Like I mentioned earlier, hard drives are good at reading big contiguous blocks of data quickly. So if you are playing a big movie file, for instance, the RAID controller might read the entire movie into cache as soon as the first part of the file is requested. Then as you play the movie, the cache already has the data you need available. The subsequent data is not only available more quickly, but the other disks in your RAID volume are also free to handle other requests.
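Here’s a toy model of the difference, just to illustrate the behavior. The stripe size, prefetch policy, and class name are all made up; a real controller is much smarter about when and how much to prefetch:

  // Toy read cache with optional read ahead.
  final class ReadCache {
      private var cache = [Int: [UInt8]]()    // stripe index -> cached stripe
      private let readAhead: Bool
      private(set) var diskReads = 0

      init(readAhead: Bool) { self.readAhead = readAhead }

      private func readFromDisk(_ stripe: Int) -> [UInt8] {
          diskReads += 1
          return [UInt8](repeating: 0, count: 128 * 1024)   // pretend 128KB stripe
      }

      func read(stripe: Int) -> [UInt8] {
          if let hit = cache[stripe] { return hit }          // cache hit, no disk I/O
          let data = readFromDisk(stripe)
          cache[stripe] = data
          if readAhead {
              cache[stripe + 1] = readFromDisk(stripe + 1)   // prefetch the next stripe
          }
          return data
      }
  }

  let cache = ReadCache(readAhead: true)
  _ = cache.read(stripe: 0)    // hits the disks for stripes 0 and 1
  _ = cache.read(stripe: 1)    // served straight from cache
  print(cache.diskReads)       // 2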

However, read ahead can also result in wasted I/O. A lot of the time, you won’t have any need for the subsequent blocks on the disk. For instance, if you are reading a small file that is entirely contained in a single stripe on the volume, there is no point in reading the next stripe. It just puts more load on the physical disks and takes up space in the cache, without any benefit.

Personally, I enable read ahead caching. It’s not always a win, but it can greatly speed up access times when working with bigger files (when the speed is needed most).

Write Back Cache
There are two write cache modes: write through, and write back. Your choice here can have a dramatic impact on the write speed to your RAID. Here’s how each mode works.

Write Through: When writing data to the disk, the cache is not used. Instead, OS X will tell the RAID to write the data to the drive, and the RAID controller waits for the data to be completely written to the drives before letting OS X know the operation was completed successfully.

Write Back: This uses the cache when writing data to the disk. In this case, OS X tells the RAID to write a given block of data to the disk. The RAID controller saves this block quickly in the cache and tells OS X the write was successful immediately. The data is not actually written to the disks until some time later (not too much later, just as soon as the disks can seek to the right location and perform the write operation).
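A rough sketch of where the host ends up waiting in each mode. The class and method names are invented purely for illustration; the point is only when the write gets acknowledged:

  enum WriteCacheMode { case writeThrough, writeBack }

  final class ToyController {
      let mode: WriteCacheMode
      private var cache: [[UInt8]] = []   // blocks waiting to be flushed

      init(mode: WriteCacheMode) { self.mode = mode }

      private func commitToDisks(_ block: [UInt8]) {
          // seeks, parity updates, and platter writes happen here (slow)
      }

      // Returns when OS X is told the write succeeded.
      func write(_ block: [UInt8]) {
          switch mode {
          case .writeThrough:
              commitToDisks(block)        // host waits for the physical write
          case .writeBack:
              cache.append(block)         // acknowledged immediately; flushed
          }                               // to the disks a moment later
      }

      func flush() {
          for block in cache { commitToDisks(block) }
          cache.removeAll()
      }
  }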

Enabling the write back cache is “less safe” than write through mode. The safety issue comes into play during a power outage. If the power goes out between the time that the RAID told OS X the data was written, and the time when the data is actually on the disks themselves, data corruption could take place.

More expensive RAID systems, like the Pegasus2, have a battery-backed cache. The benefit here is that if a power outage happens as described above, the battery should power the cache until the power goes back on and the RAID controller can finish writing the cache to disks. This effectively overcomes the drawback of enabling write back caching.

Another potential drawback of enabling write back caching is a performance hit to read speed. The reason is that there is less cache available for reads (because some of it is being used for writes). The hit should be pretty minor though, and only applies when a lot of write operations are in progress. Otherwise, the amount of data in the write back cache will be minimal.

The big advantage of using a write back cache is speed though.  When write back caching is enabled, OS X doesn’t have to wait for data to be written to the disks, and can move on to other operations.  This performance benefit can be substantial, and gives the RAID controller more flexibility to optimize the order of write operations to the disks based on the locations of data being written.  Personally, I enable write back caching.

Wrap-up

That about covers it.  Small desktop RAID systems are a nice way to get a consolidated block of storage with a little redundancy and a lot more performance than a stack of individual disks can provide.  I hope this overview has helped you choose the options that are best for your desktop RAID system.  In the end, there is no single right answer for the settings everyone should use.  Choose the settings that best fit your workload and your performance/safety requirements.

Apple’s Rumored 12″ Notebook

September 22nd, 2014

The rumors are growing for a new 12 inch MacBook Air.  According to MacRumors, this laptop would be slimmer than the current MacBook Airs, fanless, and come in the same silver/gold/space gray color variations as the iPhone.

Sounds like a clamshell iPad with a keyboard to me.

The new notebook has a much thinner design that appears to sacrifice many of the usual ports seen on Apple’s current notebooks and may adopt the new reversible USB Type C connector that has seen its specifications recently finalized.

The MacBook Air has very few ports to begin with (video, USB, headphone and an SD card slot on the 13″).   If you are sacrificing many of the usual ports, you end up with no ports at all, like the iPad.

Interestingly, the report raises some questions about charging on the notebook, indicating that the usual MagSafe port has been removed in favor of a new, unspecified charging method.

Hmm, like a Lightning cable?  It’s reversible too.

In line with previous rumors, the machine is reportedly fanless, suggesting it will adopt an ultra low-power processor such as the Broadwell-Y Core M processors recently announced by Intel.

The A8 is another ultra low-power processor…

Many people prefer the iPad as a productivity machine.  With a standard keyboard attached, you can definitely get some serious work done.  A 12″ iPad with a permanent keyboard attached sounds like a great little mobile computer.

First Impression of the iPhone 6

September 19th, 2014

I’m one of the 4 million people who pre-ordered a new iPhone 6 this past weekend. It arrived earlier today, and I wanted to share some observations after using it for the first couple of hours.

This is a big upgrade for me. I’m upgrading from an iPhone 4, which feels absolutely ancient. At one point, I’m sure it felt amazingly quick, but not anymore. (I have to remind myself that someday this iPhone 6 is going to feel just as slow.) To give you an example, sometimes my iPhone 4 was so slow that it missed incoming calls because sliding to answer wouldn’t happen before voicemail picked up the call. Needless to say, the new iPhone 6 is crazy quick in comparison.

Like most people, I was torn between ordering the iPhone 6 or the 6+, sight unseen. There is a pretty big size difference between the two, and some functionality is lost by choosing the 6. I made some cardboard cutouts to help make my decision easier. The cutout for the 6+ felt way too big in my pocket, so I decided to go with the more comfortable 6. Now that I’m using it, I’m really glad I didn’t order the 6+. The 6, even as the smaller option, is so much bigger than previous iPhones. This is especially true coming from an iPhone 4. I have space for 2 more rows of app icons on each home screen, which is great.

One-handed operation is pretty tough though. My first instinct when trying to reach the top of the screen with my thumb was to shimmy the phone down in my hand. Well, that’s a big mistake, because I almost dropped it while doing this. So then I tried the new gesture that Apple built in: double-tapping the home button to shift the screen contents downward. This helps a lot, but it’s going to take some time before I have the muscle memory to do it without thinking. I’ll probably use the phone two-handed most of the time.

As an app developer, I think we are in a bit of a transition when it comes to choosing the best location to place often-used buttons in the user interface. Up until now, the top navigation bar was a good location to put navigation controls. However, with devices becoming larger, that may no longer be the case. Instead, it might make more sense to have a more prominent toolbar on the bottom of the screen. I know I’ll be spending a lot of time thinking about this over the coming months.

One last note on size concerns the depth. It’s amazing to physically grasp just how thin the iPhone 6 is. It’s almost difficult to pick up off a table because there isn’t much depth to hold onto. Once you are holding it though, it feels very solid and comfortable.

Moving on, this morning was the first time I’ve used TouchID. I don’t know how I lived without it. Actually I do: I would avoid the inconvenience of a delayed unlock process by not setting a password on my devices. TouchID makes security easy by reading your fingerprint on the home button to unlock the device, and I can’t wait for more 3rd party apps to make use of it in iOS 8.

Speaking of buttons, the power button is now on the side rather than the top of the device. My first impression of this change isn’t good. More than once I’ve tried to tap that button to turn off the screen, and instead did a combo press with the volume button on the other side. The OS gives the volume button priority (which is probably the best choice) and the device stays on. Taking screenshots using the home/power button combination is more difficult too. Aesthetically, the power button placement is nice (it keeps the top panel clean and lays out the side buttons symmetrically), but usability would be greatly improved if the button wasn’t directly across from the volume buttons.

Finally, I was originally going to use the iPhone 6 without a case, but a few of my observations above are making me reconsider. The device will be easier to pick up off a table if it has a little more depth, which a case would add. And after the near-drop from using it one-handed, I would feel more comfortable having some protection if the worst were to happen and I actually did drop it on a concrete floor or other hard surface. I’ve never really dropped a phone in the past, but I’m not sure my luck will hold with this model. The third reason a case is a good idea is the protruding camera lens on the back of the device. When you set the iPhone 6 down on a table, it rests on the camera lens, which invites the possibility of scratching the lens glass, something that as a photographer I would hate to have happen. I will use it for a few weeks before making a final decision, but I expect I’ll be plunking some money down for a case in the near future.

Overall, the iPhone 6 is an awesome upgrade and I couldn’t be happier right now. It’s just going to get better as HealthKit comes online and Apple Pay starts rolling out to commercial locations next month.

10 Years

April 1st, 2014

Today is the 10th anniversary of Gaucho Software.  I registered the domain gauchosoft.com on April 1, 2004 and after moving to Michigan a couple months later, I started building the website and getting to work on product development.

Since then I’ve developed several apps, writing hundreds of thousands of lines of code and creating hundreds of graphics in the process.  I can’t begin to count how many press releases I’ve sent out.  And I’m proud to say Gaucho Software has participated in a number of fundraising efforts for worthy causes, from supporting the recovery after Hurricane Katrina and the earthquake in Haiti, to raising money to bring clean water to developing countries.

It’s hard to believe it’s been so long.  Thinking back, my first development box was the then-brand-new liquid-cooled 2.5GHz Power Mac G5.  It was running OS X 10.3 Panther, and Xcode had just come out (previously it was called Project Builder, which I used to develop the first versions of XRG back in 2002).  I suppose that alone is telling of how much time has passed.

I wanted to thank everyone who has supported Gaucho Software.  Thanks to the companies who have trusted me with their contract work.  A big thanks to everyone who has purchased my apps.  And a very special thanks to my wife, Katrina, and the rest of my family for their endless help and support.

Here’s to the next 10…

Flooding Moby Dick

March 2nd, 2014

This weekend, a pretty heavy storm hit the California coast. One city hit particularly hard was Santa Barbara, where two restaurants at different beaches several miles apart were flooded by waves. Luckily, there were only minor injuries. The event caught me by surprise because of the coastal layout of that region.  You see, the Santa Barbara coast in general faces south.  So you don’t get a whole lot of big waves hitting the region.  That makes an event like this especially rare.  Even during the El Niño year of 1997-98, when strong storms battered the coast all winter, we never saw anything quite like this.

The most surprising incident of the two was at the Moby Dick restaurant on Stearns Wharf. Here’s a frame from a YouTube video taken by someone in the restaurant as the wave hit.  Click the image to see the full video on YouTube.

And here’s a news article talking about what happened (KEYT.com).

The interesting thing about this destruction is where it happened. Stearns Wharf is actually on a beach facing southeast, so for a swell coming in from the west off the Pacific to be strong enough to wrap around the coast and strike a southeast-facing beach this hard is quite astounding.

Let’s take a look at the swell map from that morning to see what was actually happening. CDIP has a nice view of the swell state that morning:

Swell Map from CDIP

I’ve marked Stearns Wharf on that map. As you can see, the swell was coming from directly west, which is just about the worst possible case. Any northwestern component to the swell would force it to wrap not only around the peninsula in Santa Barbara but also around Point Conception. Any southwestern component to the swell would result in the Channel Islands blocking Santa Barbara from getting hit. A swell coming from exactly west can slot right through to the Santa Barbara area, perhaps even piling the water up higher because of the channeling between the coast and the islands offshore.

You may think a westerly swell direction would be normal, but usually the swell in this area of California comes from the northwest.  This is due to the strongest winds of storms like these typically being further north, off the coast of Oregon and Washington.

From the video, it sounds like this event happened at an abnormally high tide of 6 feet (high tides are usually between 3-5 feet), and a 12 foot swell was actually reaching the coast in downtown Santa Barbara. Whenever you have a combined effect of high tide and high swells like this, disaster is sure to follow.

Hopefully Moby Dick can get things cleaned up there before too long. There are definitely a few restaurant patrons who will have a story to tell for quite some time.

Pegasus2 Impressions

January 31st, 2014

With the lack of drive bays in the new Mac Pro, Apple is definitely leaning toward external storage with its future models.  My Mac Pro won’t arrive until next month, but in the meantime I had to figure out what kind of storage system I was going to buy.

As I mentioned in a previous post, I had considered using my local file server as a large storage pool.  After trying it out for the past couple months, I wanted something that was a bit faster and more reliable though.  I decided to look at my direct attached storage (DAS) options.  Specifically, I was looking at Thunderbolt enclosures.

My data storage requirements on my desktop machine are currently between 3-4TB of active data, so single disk options weren’t going to cut it.  I would need at least 2 disks in a striped RAID 0.  I’m not particularly comfortable with RAID 0 setups, because if any one of the drives fails, you lose data.  However, with good automatic Time Machine backups, that shouldn’t be too much of an issue.  Ideally, though, I wanted something with 3-4 drives and a built-in hardware RAID 5 controller.  This way, I would have a little bit of redundancy.  It wouldn’t be a replacement for good backups, but if a disk went offline, I could keep working until a replacement arrives.

The only 3 disk enclosure I found was the Caldigit T3.  This looks like a really slick device, and I was pretty close to ordering one.  The main drawback of the unit is that it doesn’t support RAID 5.  I would have to either have a 2 disk RAID 0 with an extra drive for Time Machine, or a 3 disk RAID 0 (which is pretty risky) to support the amount of storage I need.  I decided this wasn’t going to work for me.

Once you get into the 4 disk enclosures, the prices start to go up.  There are two options I considered here.  First is the Areca ARC-5026.  Areca earned a good reputation by manufacturing top-end RAID cards for enterprise.  The 5026 is a 4 bay RAID enclosure with Thunderbolt and USB 3 ports on the back.  The drawback is that it’s pretty expensive ($799 for just the enclosure), and it doesn’t exactly have a nice look to it.  It reminds me of a beige-box PC, and I wasn’t sure I wanted something like that sitting on my desk.

The other option I looked at was a Promise Pegasus2.  It’s also a 4 disk RAID system (with 6 and 8 disk options).  They offer a diskless version that is less expensive than the Areca.  It doesn’t support USB 3 like the Areca, but it does support Thunderbolt 2 instead of Thunderbolt 1.  And the case is sharp.  Between the faster host interface and the cost savings, I decided to get the Pegasus.

The diskless model took about 2 weeks to arrive.  The outside of the box claimed it was the 8TB R4 model, so Promise isn’t making a separate box for the diskless version.  I suspect that Apple twisted Promise’s arm a little bit to get them to release this model.  Apple knew there was going to be some backlash from Mac Pro upgraders who needed an external replacement for their previous internal drives.  Apple promoted Promise products back when the Xserve RAID was retired, and I imagine Apple asked Promise to return the favor here.  The only place you can buy the diskless R4 is the Apple Store.  It isn’t sold by any other Promise retailers.

Since the enclosure doesn’t include any drives, I decided on Seagate 3TB Barracuda disks.  They are on the Promise supported drive list, and in my experience Seagate makes the most reliable hard drives.  With a RAID 5, I would have about 9TB of usable space.  More than I need right now, but it’s a good amount to grow into.  Installing the hard drives was pretty straightforward: eject each tray, attach each drive with the set of 4 screws, and latch them back in.  Then I plugged it into my Mac with the included 3 foot black Thunderbolt cable and turned it on.
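As a quick aside, the 9TB figure comes from RAID 5 spending one disk’s worth of capacity on parity:

  let disks = 4
  let diskSizeTB = 3.0
  let usableTB = Double(disks - 1) * diskSizeTB   // 9TB usable out of 12TB raw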

This being the diskless version, the default setup is to mount all four disks individually, as if there were no RAID.  This is counter to the Pegasus models that include drives, where the default configuration is a RAID 5.  This model instead uses a pass-through mode (JBOD), so you can take drives right out of your old computer and use them with the new enclosure.  I had to jump through a few hoops, but getting the RAID set up wasn’t too bad.  I had to download the Promise Utility from their website first.  Once you install the software, you can open up the utility and then go into the advanced configuration to set up a new RAID volume.  The default settings for creating a RAID 5 weren’t ideal.  Here’s what you should use for a general case…

Stripe Size:  128KB
Sector Size:  512 bytes
Read Cache Mode:  Read Ahead
Write Cache Mode:  Write Back

The Pegasus2 has 512MB of RAM, which is used for caching.  It’s a battery-backed cache, so using Write Back mode instead of Write Through should be okay for most cases.  Only use Write Through if you really want to be ultra-safe with your data and don’t care about the performance hit.

Once you get the RAID set up, it starts syncing the volume.  The initial sync took about 8 hours to complete.  The RAID controller limits the rebuild speed to 100MB/sec per disk.  This is a good idea in general, because it leaves some bandwidth free so you can start using the volume right away during the rebuild.  However, it makes me wonder how much time could be saved if there wasn’t a limit (I found no way to disable or increase the limit using their software).
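As a rough sanity check on that 8 hour figure, assuming the sync has to walk every byte of each 3TB disk, with all disks processed in parallel at the 100MB/sec per-disk cap:

  let megabytesPerDisk = 3_000_000.0   // 3TB per disk, in decimal MB
  let rebuildRate = 100.0              // MB/sec per-disk limit
  let hours = megabytesPerDisk / rebuildRate / 3600.0
  print(hours)                         // ~8.3 hours, in line with what I saw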

Drive noise is low to moderate.  The documentation claims there are two fans, one big one for the drives and one small one for the power supply.  Looking through the power supply vent though, it doesn’t look like there’s actually a fan there.  Maybe it’s further inside and that is just a vent.  The bigger fan spins at around 1100-1200rpm (this is while doing the rebuild, but idle is no lower than 1000rpm).  It’s definitely not loud, but it’s not really quiet either.  Sitting about 2 feet away from the Pegasus, it makes slightly less noise than my old Mac Pro (I keep the tower on my desk about 3 feet away).  The noise from the Pegasus is a bit higher pitched though.  When the new Mac Pro gets here, I’ll have the Pegasus further away from me, so I’ll wait to fully judge the amount of noise at that point.

Overall I’m very happy with the system so far.  Initial benchmarks are good.  Since I don’t have the new Mac Pro yet, I’m testing on a 2011 MacBook Air over a Thunderbolt 1 connection.  Using the AJA System Test, I saw rates of around 480MB/sec reads and 550MB/sec writes.  Switching to BlackMagic, the numbers bounced around a lot more, but it came up with results around 475MB/sec reads and 530MB/sec writes.  With RAID 5 having notoriously slow writes because of the parity calculation, I’m a little surprised the Pegasus writes faster than it reads.  The RAID controller must be handling the parity calculation and caching well.  It will be interesting to see if benchmarks improve at all when connected to the new Mac Pro over Thunderbolt 2.

File Server Upgrade

January 20th, 2014

Last month, the RAID card in my file server died.  I tried to replace the card with a newer model, but found that not all PCI Express cards match well with all motherboards.  The motherboard was old enough that the new card simply wouldn’t work with it.  Being that the server components (other than the drives) were almost 10 years old, I decided it was time to rebuild the internal components.

I already had a solid base from the old file server.  The case is a Norco RPC-4020.  It’s a 4U enclosure with 20 drive bays.  The most I’ve ever used was 12 bays, but with the increasing size of modern drives, I am whittling it down to 8.  The drives I have are pretty modern, so this build doesn’t factor in any additional drive cost.  Other than the drives though, the rest of the server’s guts needed a good refresh.  Here’s what I put in there:

Motherboard:  Asus Z87-Pro
I went with this Asus because it had a good balance of performance and economy (and Asus’ reliability).  The board has 8 SATA ports, which is great for a file server when you are trying to stuff a bunch of disks in there.  I also liked how the board uses heatsinks instead of fans for cooling.  Fewer moving parts to wear out.  Finally, this board has plenty of PCIe slots in case I want to add RAID/HBA cards for more drives, or a 10GBASE-T Ethernet card down the line.

CPU:  Intel Core i5-4570S
This is one of the low power models in the Haswell (4th generation) line.  TDP is a moderate 65 watts.  I was debating between this chip and the 35 watt Core i3-4330T.  If this server just served files, then I would have bought the Core i3, but I also use the box to host a moderately-sized database and do some server-side development.  The Core i5 chip is a quad core instead of a dual core, and I decided it would be worth it to step up.  You’ll notice that a GPU isn’t included in the list here, and that’s because I’m just using the embedded GPU.  One less component to worry about.

Memory:  2x4GB Crucial Ballistix Sport DDR3-1600
I’ve never been into overclocking, so I just went with whatever memory ran at the CPU’s native 1600MHz.  Crucial is always a safe bet when it comes to memory.  This particular memory has a relatively low CL9 latency.

Power Supply:  Antec EA-550 Platinum 550 watt
The power supply is a make-or-break part of a server, especially when you have a lot of disks.  I wanted something that was very efficient, while also supplying plenty of power.  This power supply is 93% efficient, meaning a lot more energy makes it to the computer components instead of being wasted as heat.  The one drawback of this power supply is that it’s a 4 rail unit and all the Molex/SATA power connectors are on a single rail.  So it’s not quite ideal for servers with a lot of disks (you need enough power on that rail to cover the spike as the disks spin up), but it handles 8 drives just fine with some room to grow.
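To put the efficiency rating in perspective, here’s roughly what it means at this server’s ~80W idle (treating 80W as the component-side load, which is an assumption on my part):

  let componentWatts = 80.0
  let platinumWallWatts = componentWatts / 0.93   // ~86W from the outlet, ~6W lost as heat
  let typicalWallWatts  = componentWatts / 0.80   // ~100W for an older 80%-efficient unit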

Boot Drive:  USB 3 internal motherboard header and flash drive
I really wanted the OS to stay off the data drives this time around.  The best way I found to do that is to use the USB 3 header built in to most modern motherboards.  Typically this header is for cases that have USB 3 ports on the front, but my case only has a single USB 2 port on the front so this header was going unused.  I found a small Lian Li adapter to convert the 20 pin port on the motherboard to 2 internal USB 3 ports.  Then I picked up a 128GB PNY Turbo USB 3 flash drive on sale.  The motherboard has no problem booting off the USB drive, and while latency is higher, raw throughput of this particular flash drive is pretty good.

The Lian Li adapter is great because I don’t have to worry about the flash drive coming unplugged from the back of the case.  It’s inside the server, where it won’t be messed with.

Once I had all the components installed, I had to cable everything up.  You use about a million tie-wraps when cleaning up the cabling, but it looks nice in the end.  The cables are nowhere near as elegant as the cabling inside a Mac, but for a PC I think it turned out pretty good.  Here’s a shot of the inside of the server:

The power savings over the old server components were pretty dramatic.  The old system had a standard 550 watt power supply and was using an Athlon X2 CPU.  Typically, the load would hover between 180-240 watts.  This new server idles at 80 watts and will occasionally break 100 watts when it’s being stressed a little bit.  It’s great to get all this extra performance while using less than half the power.

Overall, it turned out to be a great build.  Component cost was less than $600 (not including the case or drives), while still using quality parts.  Looking forward to this one lasting another 10 years.

GSGradientEditor

January 17th, 2014

A fairly significant feature in Seasonality Pro is the ability to edit the gradients used to show weather data on a map.  When looking around for some sample open source gradient editors online, I didn’t come across anything I could really use.  So I decided to write my own and offer it under an MIT license.  I posted the source code (link below) on GitHub.  Here’s what it looks like:

I’ve included a lot of documentation as well as a sample Xcode project to show how to use it over on the GitHub page:

GSGradientEditor on GitHub

I looked at quite a few different graphics apps when working on the UI.  I wanted to see not only how other implementations looked, but how they worked.  With iOS 7 being more gesture-centric, I wanted to make sure that interaction with GSGradientEditor was intuitive.  I found the Inkpad app most helpful during this process.  In the end, I like how GSGradientEditor turned out.

Enjoy!

GSShapefile

October 30th, 2013

It’s been a while since I’ve open sourced any code (cough…XRG…cough), and I thought it was about time to contribute something new.

This code is a small collection of classes that will parse ESRI Shapefiles. As I’m getting further into the development of Seasonality Pro (which you can follow at the new Seasonality: Behind the Scenes blog), I thought it would be important to be able to show Shapefile data on a map. There are a few basic implementations out there (see iOS-Shapefile and Cocoa Shapefile), but I wanted a more modern code design, and something that was flexible enough to add expanded support in the future. So I dug up the Shapefile spec and got started.

The result after hacking on it for a few days is GSShapefile. It should work on both the Mac and iOS platforms, as long as you have ARC enabled in your project. GSShapefile takes an NSData object as its input, so it doesn’t make any assumptions about whether the data is coming from a local file or somewhere online. After the file is parsed, you can retrieve the shape records and the points associated with each shape. It really should be pretty easy to integrate with your own code.

I hope somebody finds it helpful.

On the New Mac Pro

October 23rd, 2013

Apple talked more about the new Mac Pro at its special event today, giving more details on when it will start shipping (December) and how much it will cost ($2999 for the base model). They also covered some additional hardware details that weren’t mentioned previously, and I thought I would offer my 2 cents on the package.

Storage

There have been a lot of complaints about the lack of expansion in the new Mac Pro, particularly when it comes to storage. With the current Mac Pro able to host up to 4 hard drives and 2 DVD drives, the single PCIe SSD slot in the new Mac Pro can be considered positively anemic. This has been the biggest issue in my eyes. Right now in my Mac Pro, I have an SSD for the OS and applications, a 3TB disk with my Home directory on it, and a 3TB disk for Time Machine. That kind of storage just won’t fit in a new Mac Pro.

I believe Apple’s thought here is that big storage doesn’t necessarily belong inside your Mac anymore. Your internal drive should be able to host the OS, applications, and recently used documents, and that’s about it. Any archival storage should be external, either on an external hard drive, on a file server, or in the cloud. Once you start thinking in this mindset, the lack of hard drive bays in the new Mac Pro starts to make sense.

Personally, if I decide to buy one, I’ll probably start migrating my media to a file server I host here in a rack and see just how much space I need for other documents. I already moved my iTunes library a couple months back (300GB), and if I move my Aperture photo libraries over, that will reduce my local data footprint by another 700-800GB (depending on how many current photo projects I keep locally). That’s an easy terabyte of data that doesn’t need to be on my Mac, as long as it’s available over a quick network connection.

VMware virtual machines are a little tricky, because they can use a lot of small random accesses to the disk, and that can be really slow when done over a network connection with a relatively high latency. The virtual disks can grow to be quite large though (I have a CentOS virtual machine to run weather models that uses almost 200GB). I’ll have to do some testing to see how viable it would be to move these to the file server.

All this assumes that you want to go the network storage route. To me, this is an attractive option because a gigabit network is usually fast enough, and having all your noisy whirring hard drives in another room sounds… well… peaceful. If you really need a lot of fast local storage though, you’ll have to go the route of a Thunderbolt or USB 3 drive array. If you have big storage requirements right now, you most likely have one of these arrays already.

CPU/GPU Configurations

The new Mac Pro comes with a single Xeon CPU socket and dual AMD FirePro GPUs. This is the reverse of the old Mac Pro, which had 2 CPU sockets and a single graphics card (in its standard configuration). The new Mac Pro is certainly geared more toward video and scientific professionals who can use the enhanced graphics power.

With 12 cores in a single Xeon, I don’t think the single socket CPU is a big issue. My current Mac Pro has 8 cores across 2 sockets, and other than when I’m compiling or doing video conversion, I have never come close to maxing all the cores out. Typical apps just aren’t there yet. You’re much better off having 4-6 faster cores than 8-12 slower cores. Fortunately, Apple gives you that option in the new Mac Pro. A lot of people have complained about paying for the extra GPU though. FirePro GPUs aren’t cheap, and a lot of people are wondering why there isn’t an option to just have a single GPU to save on cost.

I think the reason for this is the professional nature of the Mac Pro. The new design isn’t really user expandable when it comes to the graphics processors, so Apple decided to include as much GPU power as they thought would be reasonably desired by their pro customers. The new Mac Pro supports up to three 4K displays, or up to six Thunderbolt displays. A lot of professionals use dual displays, and it’s increasingly common to have three or more displays. With dual GPUs this isn’t a problem in the new Mac Pro, while if they configured just a single GPU the display limit would be comparable to the iMac. Personally, I have 2 graphics cards in my Mac Pro, and have used up to 3 displays. Currently I only use 2 displays though, so I could go either way on this issue. I do like the idea of having each display on its own GPU though, as that will just help everything feel snappier. This is especially true once 4K displays become standard on the desktop. That’s a lot of pixels to push, and the new Mac Pro is ready for it.

External Expansion

I’ve seen people comment on the lack of FireWire in the new Mac Pro. This, in my opinion, is a non-issue. Even FireWire 800 is starting to feel slow when compared to modern USB 3 or Thunderbolt storage. If you have a bunch of FireWire disks, then just buy a $30 adapter to plug into one of the Thunderbolt ports. Otherwise you should be upgrading to Thunderbolt or USB 3 drives. USB 3 enclosures are inexpensive and widely available.

Outside of that, the ports are very similar to the old Mac Pro. One port I would have liked to see in the new Mac Pro is 10G Ethernet. The cost per port of 10G is coming down rapidly, and with storage moving out onto the network, it would have been nice to have the extra bandwidth 10G Ethernet offers. Apple introduced gigabit Ethernet on Macs well before it was a common feature on desktop computers as a whole. Perhaps there will be a Thunderbolt solution to fill this gap sometime down the road.

Power Consumption and Noise

This alone is a good reason to upgrade from a current Mac Pro. The new Mac Pro will only use around 45W of power at idle, which isn’t much more than a Mac Mini and is about half of the idle power consumption of the latest iMacs (granted, the LCD in the iMac uses a lot of that). My 2009 Mac Pro uses about 200W of power at idle. Assuming you keep your Mac Pro on all the time, and are billed a conservative $0.08 per kilowatt hour, you can save about $100/year just by upgrading. That takes some of the sting out of the initial upgrade cost for sure.
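Here’s the arithmetic behind that estimate:

  let oldIdleWatts = 200.0
  let newIdleWatts = 45.0
  let pricePerKWh = 0.08
  let kWhSavedPerYear = (oldIdleWatts - newIdleWatts) * 24 * 365 / 1000   // ~1358 kWh
  let dollarsSavedPerYear = kWhSavedPerYear * pricePerKWh                 // about $109 per year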

Using less energy means needing less cooling. The new Mac Pro only has a single fan in it, and it’s reportedly very quiet. Typically the unit only makes about 12dB of noise, compared to around 25dB in the current Mac Pro. With sound intensity doubling for every 3dB increase, that 13dB drop works out to roughly 20 times less sound energy from the new machine. Surely the lack of a spinning HD helps here as well.

Overall

Overall the new Mac Pro is a slick new package, but you already knew that. It isn’t for everybody, but it fits the needs of the professional customer pretty well moving forward. Personally, I haven’t decided if I will buy one yet. My Mac Pro is almost 5 years old at this point, and while it still does a good job as a development machine, I’m starting to feel its age. I’m torn between replacing it with a new Mac Pro, the latest iMac, or even a Retina MacBook Pro in some sort of docked configuration. There are benefits and drawbacks to each option, so I’m going to wait until I can get my hands on each machine and take them for a spin.