Coder Blog

Life, Technology, and Meteorology

Category: Macintosh

My Desktop Storage Needs Help

Back when I purchased this 2013 Mac Pro, I wrote about needing a desktop RAID system for storage. The Mac Pro at the time came with a base configuration of a 256GB SSD, and peaked with a 1TB model that was about $1000 more. I decided to stick with the base storage configuration and then I purchased a Pegasus2 desktop RAID box and put in four 3TB drives in a RAID 5 setup.

As time went on, the drives started to die, and I replaced them with 4TB models, so I currently have about 12TB of usable storage on my desktop, with about 2/3rds of it occupied. With respect to capacity, it’s doing just fine, though it’s starting to fill up quicker since I switched to recording video in 4K with my iPhone and 5D Mark IV.

While capacity has been okay, recently I’m running into a limitation of this setup with respect to speed. I’m blaming SSDs. SSDs are very common now in desktops and ubiquitous in laptops because of their performance benefits. The problem is apps are starting to be coded with the assumption they are being run on flash storage. This is especially problematic with syncing apps like Dropbox.

Let me explain. I currently have a two-Mac working environment. I use my desktop most of the time, but I also have a 13″ MacBook Pro that I’ll work on while on the road. I use syncing tools like Dropbox and Resilio to keep files up-to-date between the two devices.

One app I use frequently is Lightroom. It's important to me that I have my photos available on both Macs, but a Lightroom library has to be stored locally. So I put my Lightroom library on Dropbox, and the RAW photo masters are in a separate directory on my desktop Mac. If I'm going to be editing photos on my laptop, I'll generate Smart Previews for those albums on my desktop and be able to develop them on the MacBook Pro. This works really well.

The problem I’m running into recently is during Lightroom imports. Whenever I import photos into Lightroom, Dropbox detects the changes in the library and kicks off indexing/syncing. This puts more I/O load on the disk that is already busy from the import, and things grind to a halt. The sync/import activity even keeps me from playing a single 1080p HD movie from the library…on a current Mac Pro…with RAID storage. The disk just can’t keep up. It’s even worse if something like a Time Machine backup happens to be running, or if Backblaze is trying to push files offsite.

Clearly something needs to change. The problem is that I would like something faster but still with enough capacity to grow. I like the Pegasus box a lot, so it’s tempting to just buy 4 SSDs to use in that. However, SSDs are still pretty cost prohibitive in larger capacities. I certainly couldn’t afford 4TB models to retain my current storage capacity. Even four 2TB SSDs would be a good chunk of change. I could buy some 1TB models and have 3TB of usable space in a RAID 5, but that would require a complete restructuring of where I store data. I would have to keep only my recent data on the desktop, and archive bigger data like photos and videos to a NAS with more storage. Then backups become an issue, etc.

With the new Mac Pro just around the corner (rumored for 2019 release), I’ll be upgrading to a new Mac somewhat soon, but not immediately. So I haven’t decided if I should do something about storage now or wait. But every couple of days I have plenty of extra time to plan my next steps while waiting for my disks to stop thrashing.

Setting up a Small Desktop RAID System

With mass internal storage gone from even the top of the line in the 2013 Mac Pro, a lot more people are going to start looking for external solutions for their storage needs. Many will just buy an external hard drive or two, but others like myself will start to consider larger external storage arrays. One of the best solutions for people who need 5-15TB of storage is a 4 disk RAID 5 system. As I mentioned in a previous post, I went with a Pegasus2, and set it up in a RAID 5. This brings up a lot of questions about individual RAID settings though, so I thought I would put together a primer on the typical RAID settings you should care about when purchasing a Pegasus or comparable desktop RAID system.

Stripe Size
Stripe size is probably the setting that has one of the biggest impacts on performance of your RAID. A lot of people will run a benchmark or two with different stripe sizes and incorrectly determine that bigger stripe sizes are faster, and use them. In reality, the best performing stripe size highly depends on your workload.

A quick diversion to RAID theory is required before we can talk about stripe sizing. With RAID 5, each drive is split up into blocks of a certain size called stripes. In a 4 disk RAID 5, 3 disks will have real data in their stripes, and the 4th disk will have parity data in its stripe (in reality, the parity stripes in a RAID 5 alternate between drives, so not all the parity is on the same disk). The parity stripe allows a disk to fail while still keeping your array online. You give up 25% of the space to gain a certain amount of redundancy.
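The parity itself is just a byte-wise XOR across the data stripes in a set, which is what makes single-disk recovery possible. Here's a minimal sketch in Python (illustrative only; a real controller works on raw disk blocks and rotates the parity between drives, as noted above):

```python
# RAID 5 parity is the byte-wise XOR of the data stripes in a set.
# XORing any N-1 stripes (data and/or parity) reconstructs the missing one.

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length byte blocks."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

# One stripe set on a 4 disk RAID 5: three data stripes plus parity.
d0 = b"stripe-0"
d1 = b"stripe-1"
d2 = b"stripe-2"
parity = xor_blocks([d0, d1, d2])

# The disk holding d1 fails: rebuild its stripe from the survivors.
rebuilt = xor_blocks([d0, d2, parity])
assert rebuilt == d1
```

This is also why a second failure during a rebuild is fatal: with two stripes missing from a set, the XOR no longer carries enough information to recover either one.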

When you read data from the volume, the RAID will determine which disk your data is on, read the stripe and return the requested data. This is pretty straightforward, and the impact of stripe size during reading is minimal.

However, when writing data to the disk, stripe size can make a big performance difference. Here’s what happens every time you change a file on disk:

  1. Your Mac sends the file to the RAID controller to write the change to the volume.
  2. The RAID controller reads the stripe of data off the disk where the data will reside.
  3. The RAID controller updates the contents of the stripe and writes it back to the disk.
  4. The RAID controller then reads the stripes of data in the same set from the other disks in the volume.
  5. The RAID controller recalculates the parity stripe.
  6. The parity stripe is written to the final disk in the volume.

On a 4 disk array, this results in 3 stripe reads and 2 stripe writes every time you write even the smallest file to the disk. Most RAIDs will default to a 128KB stripe size, and will typically give you a stripe size range anywhere from 32KB to 1MB. In the example above, assuming a 128KB stripe size, even a change to a 2KB file will result in well over half a megabyte of data being read/written on the disks. If a 1MB stripe size is used instead of 128KB, then 5MB of data would be accessed on the disks just to change that same 2KB file. So as you can see, the stripe size largely determines the amount of disk I/O required to perform even simple operations.
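Counting the steps above for a 4 disk array (one read of the target stripe, reads from the other data disks, then the data and parity writes), the I/O amplification is easy to estimate. A back-of-the-envelope sketch, assuming the naive read-modify-write sequence described (real controllers can optimize parts of this away):

```python
# Estimate the disk I/O caused by one small file change on a RAID 5,
# following the naive read-modify-write steps above.

def io_for_small_write(stripe_size, n_disks=4):
    """Bytes of disk I/O to update data that fits within one stripe."""
    reads = 1 + (n_disks - 2)  # target stripe + the other data stripes
    writes = 2                 # updated data stripe + new parity stripe
    return (reads + writes) * stripe_size

KB = 1024
for stripe in (32 * KB, 128 * KB, 1024 * KB):
    amplified = io_for_small_write(stripe)
    print(f"{stripe // KB}KB stripes -> {amplified // KB}KB of I/O for a tiny write")
```

With 128KB stripes that works out to 640KB of disk traffic just to change a 2KB file; with 1MB stripes it balloons to 5MB.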

So why not just choose the smallest stripe size? Well, hard drives are really good at reading contiguous blocks of data quickly. If you are reading/writing large files, grouping those accesses into larger stripe sizes will greatly increase the transfer rate.

In general, if you use mostly large files (video, uncompressed audio, large images), then you want a big stripe size (512KB – 1MB). If you have mostly very small files, then you want a small stripe size (32KB – 64KB). If you have a pretty good mix between the two, then 128KB – 256KB is your best bet.

Read Ahead Cache
A lot of RAID systems will give you the option of enabling a read ahead cache. Enabling this can dramatically increase your read speeds, but only in certain situations. In other situations, it can increase the load on your hard drives without any benefit.

Let’s talk about what happens in the read cache when read ahead is disabled. The read cache will only store data that you recently requested from the RAID volume. If you request the same data again, then the cache will already have that data ready for you on demand without requiring any disk I/O. Handy.

Now how is it different when read ahead caching is enabled? Well, with read ahead caching, the RAID controller will try and guess what data you’ll want to see next. It does this by reading more data off the disks than you request. So for example, if your Mac reads the first part of a bigger file, the RAID controller will read the subsequent bytes of that file into cache, assuming that you might need them soon (if you wanted to read the next part of the big file, for example).

This comes in handy in some situations. Like I mentioned earlier, hard drives are good at reading big contiguous blocks of data quickly. So if you are playing a big movie file, for instance, the RAID controller might read the entire movie into cache as soon as the first part of the file is requested. Then as you play the movie, the cache already has the data you need available. The subsequent data is not only available more quickly, but the other disks in your RAID volume are also free to handle other requests.
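That sequential-read win can be seen in a toy cache model. Everything here is hypothetical: a real controller caches raw stripes and tunes its read-ahead depth dynamically, but the effect is the same:

```python
# Toy read-ahead cache: fetching block N also pulls the next few blocks
# into cache, so sequential access after the first read is mostly hits.

class ReadAheadCache:
    def __init__(self, read_ahead=4):
        self.read_ahead = read_ahead  # extra blocks fetched per miss
        self.cache = set()
        self.disk_reads = 0

    def read(self, block):
        if block in self.cache:
            return "hit"
        # Miss: read the requested block plus the next read_ahead blocks.
        for b in range(block, block + 1 + self.read_ahead):
            self.cache.add(b)
            self.disk_reads += 1
        return "miss"

cache = ReadAheadCache(read_ahead=4)
sequential = [cache.read(b) for b in range(8)]  # like streaming a movie
# Two misses and six hits -- but 10 disk reads for 8 requested blocks,
# since blocks 8 and 9 were prefetched and never used.
```

The same model also shows the downside described below: a single random read of one block costs five disk reads here, four of them wasted.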

However, the read ahead results in wasted I/O. A lot of times, you won’t have any need for the subsequent blocks on the disk. For instance, if you are reading a small file that is entirely contained in a single stripe on the volume, there is no point in reading the next stripe. It just puts more load on the physical disks and takes more space in the cache, without any benefit.

Personally, I enable read ahead caching. It's not always a win, but it can greatly speed up access times when working with bigger files (when the speed is needed most).

Write Back Cache
There are two write cache modes: write through, and write back. Your choice here can have a dramatic impact on the write speed to your RAID. Here’s how each mode works.

Write Through: When writing data to the disk, the cache is not used. Instead, OS X will tell the RAID to write the data to the drive, and the RAID controller waits for the data to be completely written to the drives before letting OS X know the operation was completed successfully.

Write Back: This uses the cache when writing data to the disk. In this case, OS X tells the RAID to write a given block of data to the disk. The RAID controller saves this block quickly in the cache and tells OS X the write was successful immediately. The data is not actually written to the disks until some time later (not too much later, just as soon as the disks can seek to the right location and perform the write operation).
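The difference between the two modes can be sketched with a toy latency model. The timing constants below are invented for illustration, not measured from the Pegasus or any real hardware:

```python
# Toy model of write latency as seen by the OS under each cache mode.

DISK_WRITE_MS = 8.0    # pretend time to seek and commit a block to platters
CACHE_WRITE_MS = 0.05  # pretend time to store a block in controller RAM

def write_through_latency(n_blocks):
    """Write through: the OS waits for every block to reach the disks."""
    return n_blocks * DISK_WRITE_MS

def write_back_latency(n_blocks):
    """Write back: the OS only waits for the cache; disks are written later."""
    return n_blocks * CACHE_WRITE_MS

# For a burst of 100 small writes, the OS is blocked for 800ms in
# write through mode versus 5ms in write back mode.
```

The disks still do the same total work in both modes; write back just moves the wait out of the application's path and lets the controller reorder the physical writes.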

Enabling the write back cache is “less safe” than write through mode. The safety issue comes into play during a power outage. If the power goes out between the time that the RAID told OS X the data was written, and the time when the data is actually on the disks themselves, data corruption could take place.

More expensive RAID systems, like the Pegasus2, have a battery-backed cache. The benefit here is that if a power outage happens as described above, the battery should power the cache until the power goes back on and the RAID controller can finish writing the cache to disks. This effectively overcomes the drawback of enabling write back caching.

Another potential drawback of enabling write back caching is a performance hit to read speed. The reason for this is that there is less cache available for reading (because some is being used for writes). The hit should be pretty minor though, and only applicable when a lot of write operations are in progress. Otherwise, the amount of data in the write back cache will be minimal.

The big advantage of using a write back cache is speed though.  When write back caching is enabled, OS X doesn’t have to wait for data to be written to the disks, and can move on to other operations.  This performance benefit can be substantial, and gives the RAID controller more flexibility to optimize the order of write operations to the disks based on the locations of data being written.  Personally, I enable write back caching.


That about covers it.  Small desktop RAID systems are a nice way to get a consolidated block of storage with a little redundancy and a lot more performance than just a stack of disks can provide.  I hope this overview has helped you choose the options that are best for your desktop RAID system.  In the end, there is no right answer to the settings everyone should use.  Choose the settings that best fit your workload and performance/safety requirements.

Pegasus2 Impressions

With the lack of drive bays in the new Mac Pro, Apple is definitely leaning toward external storage with its future models.  My Mac Pro won’t arrive until next month, but in the mean time I had to figure out what kind of storage system I was going to buy.

As I mentioned in a previous post, I had considered using my local file server as a large storage pool.  After trying it out for the past couple months, I wanted something that was a bit faster and more reliable though.  I decided to look at my direct attached storage (DAS) options.  Specifically, I was looking at Thunderbolt enclosures.

My data storage requirements on my desktop machine are currently between 3-4TB of active data, so single disk options weren't going to cut it.  I need at least 2 disks in a striped RAID 0.  I'm not particularly comfortable with RAID 0 setups, because any one of the drives can fail and you would lose data.  However, with good automatic Time Machine backups, that shouldn't be too much of an issue.  Ideally though, I want something with 3-4 drives that includes a built-in hardware RAID 5 controller.  This way, I would have a little bit of redundancy.  It wouldn't be a replacement for good backups, but if a disk went offline, I could keep working until a replacement arrives.

The only 3 disk enclosure I found was the Caldigit T3.  This looks like a really slick device, and I was pretty close to ordering one.  The main drawback of the unit is that it doesn’t support RAID 5.  I would have to either have a 2 disk RAID 0 with an extra drive for Time Machine, or a 3 disk RAID 0 (which is pretty risky) to support the amount of storage I need.  I decided this wasn’t going to work for me.

Once you get into the 4 disk enclosures, the prices start to go up.  There are two options I considered here.  First is the Areca ARC-5026.  Areca earned a good reputation by manufacturing top-end RAID cards for enterprise.  The 5026 is a 4 bay RAID enclosure with Thunderbolt and USB 3 ports on the back.  The drawback is that it’s pretty expensive ($799 for just the enclosure), and it doesn’t exactly have a nice look to it.  It reminds me of a beige-box PC, and I wasn’t sure I wanted something like that sitting on my desk.

The other option I looked at was a Promise Pegasus2.  It’s also a 4 disk RAID system (with 6 and 8 disk options).  They offer a diskless version that is less expensive than the Areca.  It doesn’t support USB 3 like the Areca, but it does support Thunderbolt 2 instead of Thunderbolt 1.  And the case is sharp.  Between the faster host interface and the cost savings, I decided to get the Pegasus.

The diskless model took about 2 weeks to arrive.  The outside of the box claimed it was the 8TB R4 model, so Promise isn't making a separate box for the diskless version.  I suspect that Apple twisted Promise's arm a little bit to get them to release this model.  Apple knew there was going to be some backlash from Mac Pro upgraders who needed an external replacement for their previous internal drives.  Apple promoted Promise products back when the Xserve RAID was retired, and I imagine Apple asked Promise to return the favor here.  The only place you can buy the diskless R4 is the Apple Store.  It isn't sold at any other Promise retailers.

Since the enclosure doesn't include any drives, I decided on Seagate 3TB Barracuda disks.  They are on the Promise supported drive list, and in my experience Seagate makes the most reliable hard drives.  With a RAID 5, I would have about 9TB of usable space.  More than I need right now, but it's a good amount to grow into.  Installing the hard drives was pretty straightforward: eject each tray, attach each drive with the set of 4 screws, and latch them back in.  Then I plugged it into my Mac with the included 3 foot black Thunderbolt cable and turned it on.

This being the diskless version, the default setup is to mount all four disks as if there was no RAID.  This is counter to the Pegasus models that include drives, where the default configuration is a RAID 5.  This model instead uses a pass-through mode (JBOD), so you can take drives right out of your old computer and use them with the new enclosure.  I had to jump through a few hoops, but getting the RAID set up wasn't too bad.  I had to download the Promise Utility from their website first.  Once you install the software, you can open up the utility and then do the advanced configuration to set up a new RAID volume.  The default settings for creating a RAID 5 weren't ideal.  Here's what you should use for a general case…

Stripe Size:  128KB
Sector Size:  512 bytes
Read Cache Mode:  Read Ahead
Write Cache Mode:  Write Back

The Pegasus2 has 512MB of RAM, which is used for caching.  It’s a battery-backed cache, so using Write Back mode instead of Write Through should be okay for most cases.  Only use Write Through if you really want to be ultra-safe with your data and don’t care about the performance hit.

Once you get the RAID set up, it starts syncing the volume.  The initial sync took about 8 hours to complete.  The RAID controller limits the rebuild speed to 100MB/sec per disk.  This is a good idea in general, because it leaves you some bandwidth to start using the volume right away during the rebuild.  However, it makes me wonder how much time could be saved if there wasn't a limit (I found no way to disable or increase the limit using their software).

Drive noise is low to moderate.  The documentation claims there are two fans, one big one for the drives and one small one for the power supply.  Looking through the power supply vent though, it doesn't look like there's actually a fan there.  Maybe it's further inside and that is just a vent.  The bigger fan spins at around 1100-1200rpm (this is while doing the rebuild, but idle is no lower than 1000rpm).  It's definitely not loud, but it's not really quiet either.  Sitting about 2 feet away from the Pegasus, it makes slightly less noise than my old Mac Pro (I keep the tower on my desk about 3 feet away).  The noise from the Pegasus is a bit higher-pitched though.  When the new Mac Pro gets here, I'll have the Pegasus further away from me, so I'll wait to fully judge the amount of noise at that point.

Overall I’m very happy with the system so far.  Initial benchmarks are good.  Since I don’t have the new Mac Pro yet, I’m testing on a 2011 MacBook Air over a Thunderbolt 1 connection.  Using the AJA System Test, I saw rates of around 480MB/sec reads and 550MB/sec writes.  Switching to BlackMagic, the numbers bounced around a lot more, but it came up with results around 475MB/sec reads and 530MB/sec writes.  With RAID 5 having notoriously slow writes because of the parity calculation, I’m a little surprised the Pegasus writes faster than it reads.  The RAID controller must be handling the parity calculation and caching well.  It will be interesting to see if benchmarks improve at all when connected to the new Mac Pro over Thunderbolt 2.

On the New Mac Pro

Apple talked more about the new Mac Pro at its special event today, giving more details on when it will start shipping (December) and how much it will cost ($2999 for the base model). They also covered some additional hardware details that weren't mentioned previously, and I thought I would offer my 2 cents on the package.


There’s been a lot of complaints about the lack of expansion in the new Mac Pro, particularly when it comes to storage. With the current Mac Pro able to host up to 4 hard drives and 2 DVD drives, the single PCIe SSD slot in the new Mac Pro can be considered positively anemic. This has been the biggest issue in my eyes. Right now in my Mac Pro, I have an SSD for the OS and applications, a 3TB disk with my Home directory on it, and a 3TB disk for Time Machine. That kind of storage just won’t fit in a new Mac Pro, which only has a single PCIe SSD slot.

I believe Apple’s thought here is that big storage doesn’t necessarily belong internally on your Mac anymore. Your internal drives should be able to host the OS, applications, and recently used documents, and that’s about it. Any archival storage should be external, either on an external hard drive, on a file server, or in the cloud. Once you start thinking in this mindset, the lack of hard drive bays in the new Mac Pro starts to make sense.

Personally, if I decide to buy one, I’ll probably start migrating my media to a file server I host here in a rack and see just how much space I need for other documents. I already moved my iTunes library a couple months back (300GB), and if I move my Aperture photo libraries over, that will reduce my local data footprint by another 700-800GB (depending on how many current photo projects I keep locally). That’s an easy terabyte of data that doesn’t need to be on my Mac, as long as it’s available over a quick network connection.

VMware virtual machines are a little tricky, because they can use a lot of small random accesses to the disk, and that can be really slow when done over a network connection with a relatively high latency. The virtual disks can grow to be quite large though (I have a CentOS virtual machine to run weather models that uses almost 200GB). I’ll have to do some testing to see how viable it would be to move these to the file server.

All this assumes that you want to go the network storage route. To me, this is an attractive option because a gigabit network is usually fast enough, and having all your noisy whirring hard drives in another room sounds… well… peaceful. If you really need a lot of fast local storage though, you’ll have to go the route of a Thunderbolt or USB 3 drive array. If you have big storage requirements right now, you most likely have one of these arrays already.

CPU/GPU Configurations

The new Mac Pro comes with a single socket Xeon CPU and dual socket AMD FirePro GPUs. This is the reverse of the old Mac Pro, which had 2 CPU sockets and a single graphics card (in its standard configuration). The new Mac Pro is certainly geared more toward video and scientific professionals who use the enhanced graphics power.

With 12 cores in a single Xeon, I don’t think the single socket CPU is a big issue. My current Mac Pro has 8 cores across 2 sockets, and other than when I’m compiling or doing video conversion, I have never come close to maxing all the cores out. Typical apps just aren’t there yet. You’re much better off having 4-6 faster cores than 8-12 slower cores. Fortunately, Apple gives you that option in the new Mac Pro. A lot of people have complained about paying for the extra GPU though. FirePro GPUs aren’t cheap, and a lot of people are wondering why there isn’t an option to just have a single GPU to save on cost.

I think the reason for this is the professional nature of the Mac Pro. The new design isn’t really user expandable when it comes to the graphics processors, so Apple decided to include as much GPU power as they thought would be reasonably desired by their pro customers. The new Mac Pro supports up to three 4K displays, or up to six Thunderbolt displays. A lot of professionals use dual displays, and it’s increasingly common to have three or more displays. With dual GPUs this isn’t a problem in the new Mac Pro, while if they configured just a single GPU the display limit would be comparable to the iMac. Personally, I have 2 graphics cards in my Mac Pro, and have used up to 3 displays. Currently I only use 2 displays though, so I could go either way on this issue. I do like the idea of having each display on its own GPU though, as that will just help everything feel snappier. This is especially true once 4K displays become standard on the desktop. That’s a lot of pixels to push, and the new Mac Pro is ready for it.

External Expansion

I’ve seen people comment on the lack of Firewire in the new Mac Pro. This, in my opinion, is a non-issue. Even Firewire 800 is starting to feel slow when compared to modern USB 3 or Thunderbolt storage. If you have a bunch of Firewire disks, then just buy a $30 dongle to plug into one of the Thunderbolt ports. Otherwise you should be upgrading to Thunderbolt or USB 3 drives. USB 3 enclosures are inexpensive and widely available.

Outside of that, the ports are very similar to the old Mac Pro. One port I would have liked to see in the new Mac Pro is 10G ethernet. The cost per port of 10G is coming down rapidly, and with storage moving out onto the network, it would have been nice to have the extra bandwidth 10G ethernet offers. Apple introduced gigabit ethernet on Macs well before it was a common feature on desktop computers as a whole. Perhaps there will be a Thunderbolt solution to this feature gap sometime down the road.

Power Consumption and Noise

This alone is a good reason to upgrade from a current Mac Pro. The new Mac Pro will only use around 45W of power at idle, which isn’t much more than a Mac Mini and is about half of the idle power consumption of the latest iMacs (granted, the LCD in the iMac uses a lot of that). My 2009 Mac Pro uses about 200W of power at idle. Assuming you keep your Mac Pro on all the time, and are billed a conservative $0.08 per kilowatt hour, you can save about $100/year just by upgrading. That takes some of the sting out of the initial upgrade cost for sure.
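The arithmetic behind that savings estimate, using the idle figures above and the $0.08/kWh rate:

```python
# Yearly cost savings from dropping idle power draw from 200W to 45W,
# assuming the machine idles around the clock at $0.08 per kWh.

old_idle_w = 200
new_idle_w = 45
rate_per_kwh = 0.08
hours_per_year = 24 * 365

saved_kwh = (old_idle_w - new_idle_w) * hours_per_year / 1000
savings = saved_kwh * rate_per_kwh
print(f"{saved_kwh:.0f} kWh saved, about ${savings:.0f} per year")
```

That comes out to roughly $109 per year at the conservative rate, and the number climbs quickly at higher electricity prices.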

Using less energy means needing less cooling. The new Mac Pro only has a single fan in it, and it’s reportedly very quiet. Typically the unit only makes about 12dB of noise, compared to around 25dB in the current Mac Pro. With sound intensity doubling for every 3dB increase, that 13dB drop works out to roughly 20 times less sound energy (perceived loudness halves closer to every 10dB, so it should sound less than half as loud). Surely the lack of a spinning HD helps here as well.


Overall the new Mac Pro is a slick new package, but you already knew that. It isn’t for everybody, but it fits the needs of the professional customer pretty well moving forward. Personally, I haven’t decided if I will buy one yet. My Mac Pro is almost 5 years old at this point, and while it still does a good job as a development machine, I’m starting to feel its age. However, I haven’t decided whether I will replace it with a new Mac Pro, the latest iMac, or even a Retina MacBook Pro in some kind of docked configuration. There are benefits and drawbacks to each configuration, so I’m going to wait until I can get my hands on each machine and take them for a spin.

Living in a Sandboxed World

No matter what your view on Apple’s new sandboxing requirement for the Mac App Store, if you want to keep updating your MAS apps, you’re going to need to sandbox them. I was able to sandbox Seasonality Core pretty easily. I don’t access any files outside of Application Support and the prefs file. My entitlements just require outgoing network connections, which was pretty easy to enable in the target settings.

However, I distribute two versions of Seasonality Core. One is the Mac App Store application, and the other is a version for my pre-Mac App Store customers. The question arose: should I sandbox the non-Mac App Store application? I wanted the answer to this question to be yes, but unfortunately the serial number licensing framework I am using kept me from doing this. So I was forced to sandbox the Mac App Store version, but keep the non-Mac App Store version outside the sandbox. Crap.

You might be wondering what the big deal is here. Can’t my Mac App Store customers just use one app, and pre-Mac App Store customers use the other one? Well, yes, but there are a few situations where some customers might use both versions of the app.

If someone uses both the Mac App Store version and the non-Mac App Store version, things go south quickly. The first time the sandboxed Mac App Store version is run, all of Seasonality Core’s data files will be migrated into the sandbox. That means the next time the non-Mac App Store version is opened, it won’t be able to see any of the past data Seasonality Core has collected. That’s not good.

So how did I get around this? After taking a quick poll on Twitter, it sounded like the best option for me would be to have the non-Mac App Store version reach inside my app’s sandbox if it exists. To do this, I just had to build some extra code into the method that returns my Application Support path. Here’s the new implementation:

+ (NSString *) seasonalityCoreSupportPath {
    NSFileManager *fm = [NSFileManager defaultManager];

#ifndef MAC_APP_STORE
    // Check if ~/Library/Containers/BundleID/Data/Library/Application Support/Seasonality Core exists.
    NSString *sandboxedAppSupportPath = [NSString pathWithComponents:
        [NSArray arrayWithObjects:@"~", @"Library", @"Containers",
            [[NSBundle mainBundle] bundleIdentifier],
            @"Data", @"Library", @"Application Support", @"Seasonality Core", nil]];
    sandboxedAppSupportPath = [sandboxedAppSupportPath stringByExpandingTildeInPath];

    BOOL isDir;
    if ([fm fileExistsAtPath:sandboxedAppSupportPath isDirectory:&isDir]) {
        // We found a sandboxed Application Support directory, return it.
        if (isDir)
            return sandboxedAppSupportPath;
    }
#endif

    NSArray *appSupportURLs = [fm URLsForDirectory:NSApplicationSupportDirectory
                                         inDomains:NSUserDomainMask];
    NSString *appSupportDirectory = nil;
    if (appSupportURLs.count > 0) {
        NSURL *firstPath = [appSupportURLs objectAtIndex:0];
        appSupportDirectory = [firstPath path];
    }
    return [appSupportDirectory stringByAppendingPathComponent:@"Seasonality Core"];
}

The new code only runs if MAC_APP_STORE isn’t defined (a project definition I set elsewhere for the different builds). We check to see if there is a sandbox for the app, and if so it will return the sandboxed directory. Otherwise it returns the standard Application Support directory.

This is a pretty complete solution, except that I wanted to make sure the user’s preferences were shared between the two app versions as well. NSUserDefaults won’t know to check for the existence of a sandbox. Daniel Jalkut graciously offered this solution, which I have since adapted into my own code as follows:

+ (BOOL) gsImportNewerPreferencesForBundle:(NSString *)bundleName fromSandboxContainerID:(NSString *)containerID {
    BOOL didMigrate = NO;
    NSArray *libraryFolders = NSSearchPathForDirectoriesInDomains(NSLibraryDirectory, NSUserDomainMask, YES);
    if (libraryFolders.count) {
        // Get a path to our app's preference file.
        NSString *prefsFile = [NSString pathWithComponents:[NSArray arrayWithObjects:
            [libraryFolders objectAtIndex:0], @"Preferences", bundleName, nil]];
        prefsFile = [prefsFile stringByAppendingPathExtension:@"plist"];

        // Get a path to the same preference file in the given sandbox container.
        NSString *containerPrefsFile = [NSString pathWithComponents:[NSArray arrayWithObjects:
            [libraryFolders objectAtIndex:0], @"Containers", containerID,
            @"Data", @"Library", @"Preferences", bundleName, nil]];
        containerPrefsFile = [containerPrefsFile stringByAppendingPathExtension:@"plist"];

        NSFileManager *fm = [NSFileManager defaultManager];
        if ([fm fileExistsAtPath:containerPrefsFile]) {
            NSDate *prefsModDate = [[fm attributesOfItemAtPath:prefsFile error:nil] objectForKey:NSFileModificationDate];
            NSDate *containerModDate = [[fm attributesOfItemAtPath:containerPrefsFile error:nil] objectForKey:NSFileModificationDate];
            if ((prefsModDate == nil) || ([prefsModDate compare:containerModDate] == NSOrderedAscending)) {
                // Copy the file.
                [fm copyItemAtPath:containerPrefsFile toPath:prefsFile error:nil];

                // Reset so the next call to [NSUserDefaults standardUserDefaults]
                // recreates an object to the new prefs file.
                [NSUserDefaults resetStandardUserDefaults];

                NSLog(@"Found newer preferences in %@ - importing", containerPrefsFile);
                didMigrate = YES;
            }
        }
    }
    return didMigrate;
}

I call the above preferences migration code directly from main(), so it executes before any part of the main app might hit NSUserDefaults. It has worked pretty well thus far.

When the App Store thinks an app is installed when it really isn’t

I was trying to figure out a problem where the Mac App Store incorrectly thought an application was installed on my Mac. For the life of me, I couldn’t figure out why it thought the app was installed when it wasn’t. I tried deleting caches, restarting the Mac, and Spotlighting for all apps with that name, all to no avail.

It turned out the problem was in the LaunchServices database. The App Store checks LaunchServices to see which apps are installed, and apparently LaunchServices still had a record of an application bundle even though it had been deleted. Here’s how to use the Terminal to check which apps are in the LaunchServices database:

A/Support/lsregister -dump

If you search through that verbose output and find an app that isn’t really there, you should rebuild the LaunchServices database. You can do that with the following command:

/A/Support/lsregister -kill -r -domain local -domain system -domain user

Hope this saves someone an hour or two of problem-solving…

A New Mac

Gaucho Desk - July 2009

I had been waiting for Nehalem Mac Pros to be released since the middle of last year, so I was pretty excited to hear that Apple released them earlier this spring while I was in Thailand. Of course, I didn’t have time while abroad to actually put in an order, so I waited until I got back and ordered one at the beginning of April with these specs…

  • 8 x 2.26GHz Nehalem Xeons
  • 12GB of 1066MHz DDR3 RAM
  • Radeon 4870 graphics card with 512MB of VRAM
  • 2 1TB hard drives in a RAID 0 configuration
  • 1 640GB hard drive for Time Machine

It was a tough decision between the 8-core 2.26GHz model and the 4-core 2.66GHz model. The 4-core model was a bit cheaper, and it’s also faster at single-threaded processes, but with the drawback of having only half as many cores. Since compiling code is the primary job of this Mac, and compiling takes advantage of as many CPUs as you can throw at it, I decided to go for the 8-core model.

I’m glad I did, because for compiling this machine is a beast. You can check out a screencast I captured below showing XRG while compiling Subversion. With 16 CPUs (8 real + 8 Hyper-Threading), I had to create a new CPU graph in XRG to show them more efficiently. The top shows a composite of all CPU activity, and the bottom shows a bar chart with instantaneous CPU usage for each of the 16 CPU cores.


All in all, this Mac is more than 6 times faster at compiling than the dual 2.5GHz G5 it replaces, which definitely saves me quite a bit of time day in and day out while working.

When ordering this Mac, I also ordered a second 24″ LCD. Having two 24″ displays (an HP LP2475w and a Samsung 2493HM) makes everything a lot smoother: plenty of space to spread out all the windows I’m working with. While coding, I can have Xcode take a full display and run the app on the second display, never having to worry about covering up the debugging interface while testing something.

I posted two other photos of the machine. All in all, it’s a dream system for me. Here’s hoping this dream lasts for a very long time.

Customer Service

There have been two instances of excellent customer service that I’ve experienced recently. The service offered in both instances was so good that I decided to blog about them.

The first experience took place just before WWDC this year. Usually after a year of hammering on a laptop battery, I pick up a fresh battery before the conference, simply because it’s important that the laptop works all day while taking notes in the sessions. Usually I replace the battery with a standard Apple battery, but this year I decided to give a third party a shot. FastMac has a battery for the MacBook Pro that claims to last longer than the Apple one, and it’s about $20-30 cheaper too. I ordered it and waited patiently for it to arrive.

After hearing nothing for about a week (and WWDC getting dangerously close), I decided to give them a call. The person I talked to was apologetic, stating that they had run out of stock just before my order was placed. Bummer. Fortunately, FastMac did have new Apple batteries in stock, and not only did they offer to switch my order, they added rush shipping to make sure it would arrive before the conference, and knocked the price down to $10 less than they were charging for their own battery. The unit arrived with a day or two to spare, and overall it was a great example of a company going the extra mile.

The second experience happened just a few days ago. I’m installing a new network here at the office, and part of that new network was a Cisco router. As usual, I ordered the new equipment from NewEgg. It arrived, and seemed to work okay out of the box, but for some reason I was unable to connect to the device using the ASDM or the web configuration interface. I called up Cisco, and the tech I spoke with spent an hour and a half on the phone with me trying to troubleshoot the issue. The nice thing was their use of WebEx to help troubleshoot, so they could share my desktop here and work with the router themselves directly. In the end, we determined that the router I received had a corrupted flash chip, because we were unable to write any new data to the flash disk.

I went through NewEgg’s online exchange interface, and it was looking like I needed to pay to ship the damaged router back to them (shipping of the replacement device was free). I was a bit put off by this. While I agree it wasn’t NewEgg’s fault that I received a bum router, I also shouldn’t have to pay extra for something that wasn’t my fault either. When I called up NewEgg to ask an unrelated question, the representative I was speaking with noticed that I had been charged shipping to return the damaged device. Not only did he refund the return shipping amount, but he also put through an order for the new device to ship before they received the damaged one. To top it all off, he upgraded the shipping on the replacement to next-day air for free.

In this last situation, both Cisco and NewEgg get major props for great service. The new router arrived and it’s worked perfectly from the get-go.

CFNetwork Versions

Sometimes it’s important to know what version of Mac OS X your users are running, especially when making decisions on what versions of OS X to support in future software releases. In the case of Seasonality 2.0, I have decided to take advantage of all the developer changes in Leopard, both to make Seasonality a better application and to shorten my development time (thus giving me more time to work on additional features).

Previously, I haven’t collected any OS statistics on user data. It would be easy to do, since Seasonality downloads forecast and image data from a web server here at Gaucho Software, but I haven’t written the code required. However, some of these Seasonality data requests use the typical CFNetwork methods of downloading data, and these connections provide the current CFNetwork version in the HTTP User-Agent, which shows up in my web server logs.

The problem is that I have been unable to find any kind of mapping between CFNetwork versions and the corresponding version of Mac OS X. I decided to take it upon myself to generate (and hopefully maintain) such a list here. Most of this data is from viewing the Darwin source code on Apple’s web site, but some of these are just from personal observations, and some are educated guesses (marked with a question mark).

HTTP User-Agent       Version of Mac OS X
CFNetwork/454.4       Mac OS X 10.6.0
CFNetwork/438.14      Mac OS X 10.5.8
CFNetwork/438.12      Mac OS X 10.5.7
CFNetwork/422.11      Mac OS X 10.5.6
CFNetwork/339.5       Mac OS X 10.5.5
CFNetwork/330.4       Mac OS X 10.5.4
CFNetwork/330         Mac OS X 10.5.3
CFNetwork/221.5       Mac OS X 10.5.2
CFNetwork/221.2       Mac OS X 10.5.2 Developer Seed?
CFNetwork/220         Mac OS X 10.5.1?
CFNetwork/217         Mac OS X 10.5?
CFNetwork/129.22      Mac OS X 10.4.11
CFNetwork/129.20      Mac OS X 10.4.9 – 10.4.10
CFNetwork/129.18      Mac OS X 10.4.8
CFNetwork/129.16      Mac OS X 10.4.7
CFNetwork/129.13      Mac OS X 10.4.6
CFNetwork/129.10      Mac OS X 10.4.4 – 10.4.5 (Intel)
CFNetwork/129.9       Mac OS X 10.4.4 – 10.4.5 (PPC)
CFNetwork/129.5       Mac OS X 10.4.3
CFNetwork/128.2       Mac OS X 10.4.2
CFNetwork/128         Mac OS X 10.4.0 – 10.4.1
CFNetwork/4.0         Mac OS X 10.3 or earlier

Edit: A more modern list of User-Agent strings can be found here.

Signs that Apple is Becoming a Big Evil Corporation

Over the past few years it seems that Apple is slowly selling its soul. It’s a bit unsettling, as many people look to Apple as an example of a virtuous company. That’s how it started, anyway: just two guys in a garage designing cool computers. Now it seems with every record quarter, new kick-ass product, and innovation made in media, Apple sells itself out. Before I continue down this road, let me make it perfectly clear that I still think Apple has its virtues. Apple products are top-notch, and I’m happy the Mac market share is expanding at the rate it is, simply because more Mac users means more potential Seasonality users. The Mac platform is a good place to be right now. So before you send me that hate mail, just remember that I’m telling it as I see it, and this is how I’m seeing it.

What does it mean to become a big, evil corporation? To me, a big, evil corporation is one that is driven completely by profits, doesn’t care what it sells as long as consumers buy it, and doesn’t care what it does to produce the products it sells. Actually, that’s pretty much against everything Apple stands for: free ideas, thinking different, and building tools that innovate to help you innovate. Imagine a bright, shiny, chrome shield of virtue; Apple’s coat of arms, if you will. Here are a few things I’ve noticed keeping that shield from being shiny and new.

The iTMS

I spent some time thinking back to where it all started, and the best turning point I could think of was the introduction of the iTunes Music Store. I still remember the day this store opened. I remember downloading a fresh version of iTunes and checking out the hip store that was so easy to shop at. 1-click shopping to purchase any of thousands of songs. Sure there was DRM, but there had to be, or else none of the music companies would go for it. “Apple did good though! Only $0.99 a song, and you can play it on 5 devices, AND burn it to a CD!” DRM was certainly a bullet Apple had to bite, otherwise the iTMS would never exist as it does today. However, Apple as a corporation is about free thinking, selling products that you can use for endless innovation. DRM is most certainly against those ideals, having been practically invented for the music and movie industries, which are the epitomes of big, evil business… Chink.


.Mac
Remember when .Mac came out and they weren’t charging for it? “Really, it was free?” Yep. Does it make sense for this service to be free today? Maybe not. There are certainly a lot of people who take advantage of .Mac, and someone has to pay for all that bandwidth and server hardware. It’s not really Apple’s responsibility to provide such a service at no cost to all Mac OS X users. So what’s my dig with .Mac? What I don’t like about it is how many features in Mac OS X, and iLife especially, are completely crippled for users who don’t subscribe. Why can’t I use any online-accessible directory as an export location for iWeb or iPhoto galleries? To add insult to injury, Apple really puts pressure on developers to make use of .Mac in their software applications. Really then, it’s developers who sell .Mac for Apple. Want to sync X, Y, and Z apps with each other? Sorry, you can’t do that unless you have a .Mac account…

The fact of the matter is, I wouldn’t even mind paying for .Mac if it were competitive in the hosting industry. For $100/year, you’re getting a paltry amount of online disk space (this was made somewhat better recently), a couple of email accounts, and some web space with a small helping of bandwidth. Compare that to Google, who gives away disk space, email, and shared documents for free. .Mac is just another hook Apple uses to get more recurring money from customers, instead of being a solid innovation in the online sharing community like it should be. Sounds big-evil-corporation-y to me… Chink.

“Because of Accounting…”

It seems we are seeing these paid hardware unlocks from Apple much more frequently. First, last year with MacBook Pros and 802.11n, and now with the recent iPod touch update. It’s blamed on some accounting rules, but really, since when can a company not decide to give something away for free? Why can only products sold on a subscription basis be given free feature updates? Is Leopard a subscription-based product? In the case of the MacBook Pro, the network card was already there, but you had to pay an extra $2 to use it at that speed. Why? Why can’t the users who bought those machines just find out their laptops are even more awesome than before? $2 is a pain in the butt. Katrina has one of these laptops, and we just never paid: not because of the money or the principle of the thing (the latter of which is certainly a valid reason for us to avoid a product), but because it was just another line item on our todo list that got lost below all the other important stuff.

Now the big rage is this iPod touch update. 5 new apps for the iPod touch, only $20…what a deal!</sarcasm> Actually, it is a deal. Those 5 apps make the iPod touch twice as useful as it was before; useful enough that I bought one yesterday to replace my Dell PDA. Now, I’m in the software industry, so I understand that paid updates are important. You can’t just give away free updates forever, because you have to pay for that continued development. Furthermore, when current iPod touch users purchased their devices, they paid for the current features, so it was worth it to them at the time. If the iPod touch didn’t fit your needs before, then why’d you buy it to begin with?

On the other hand, these 5 applications have been on the iPhone for quite some time. They aren’t really new developments (though there are some new features in each of the apps), and the touch only came out back in September. If someone just bought a product from you less than 6 months ago, you shouldn’t be sticking them with an upgrade fee. Apple screwed up by crippling the iPod touch from the start “to protect iPhone sales,” so they should be the ones biting the bullet. It’s just a collection of bits anyway, nothing physical that would actually cost Apple money to offer.

Admittedly, this brings up a tricky topic: upgrade fees. Since I just bought an iPod touch, if Apple decides a year from now to release and charge for a big software update, how will I feel? Actually, that would be completely fine by me. In fact, I’d be happy if they were still upgrading the software on my device after a year. So where do you draw the line? Somewhere between 5 and 12 months, I think. Seriously though, Apple here is releasing software version 1.1.3… This is a point release, what most of the industry would consider a bug fix. To charge for a point release is absurd, so at the very least, the iPhone/iPod touch development team needs to get its version numbering in check.

Upgrade fees are a fact of life, but these few select examples rub me the wrong way. I’ll be the first in line to buy a new version of OS X every year, but the fact that Apple has to point the finger at “accounting” is kind of a clue that the company is going evil. Chink.

iPhone nonSDK

Just over a year ago, Apple dropped the bombshell that its new iPod phone would be running a “stripped down version of OS X.” I couldn’t believe it… I thought OS X was just too big for an embedded device, and I’m glad I was proven wrong. That meant developing for the iPhone wouldn’t be much different than developing for the Mac. Awesome. Until developers asked Apple how we could go about writing software for the iPhone. Their answer: web apps… Great.

Now, to give Apple credit, an SDK is expected sometime next month, and I’m anxiously awaiting it, but they didn’t get it right from the start. The iPhone is an iPod, but it’s a lot more than that…it’s a mobile device, and developers expect to be able to write software for mobile devices. Actually, development of new software for mobile devices is what really drives the platforms forward. Not having this is a slap in the face, almost as big as DRM. I’m just hoping they get it right the second time around. It would be a shame if they place too many restrictions or force developers to get “approval” to write apps for the platform. The jury is still out on this one, but it still strikes me as a big evil company lock-out. Chink (but you might be able to buff this one out).

Time Capsule

To wrap up my argument, I present the new Time Capsule announcement. If you’re unfamiliar, Time Capsule is basically a network-attached hard drive for Macs, a NAS. Apple is marketing it as a Time Machine backup device: a hard drive any of the Macs on your network can use to back up files. It’s certainly a product that Apple should produce, and it really seems like something they could do a nice job with. So what’s the problem?

For this one, you need a little bit of background. You see, when Time Machine was originally announced at WWDC 2006 (!), Apple claimed that backing up over a network would be supported. To me, this was it. I like running servers, and setting up a Time Machine server sounded like a nice idea to me. Even better, last year the NAS market took off, and I ended up purchasing a 1TB NAS from Micronet, expecting to have no problem backing up over Samba, NFS, or whatever other network protocol Mac OS X supported (WebDAV?). Fast-forward to October 2007 when Leopard was released, and what do we get? You can back up over the network to a Mac running OS X Leopard Server, and that’s it.

So now you have all these NAS devices on the market, and none of them work with Time Machine without the use of an unsupported hack. Supposedly, Time Machine requires backup to an HFS+ formatted device for its hard-link support. Well, my NAS is formatted ext3, which also supports hard links. And why can’t Time Machine fall back to using a device without hard-link support and just take more disk space by writing more than one copy of the files? Or perhaps it would even be useful for users to have just a single backup copy of their files on a device that doesn’t fully support incremental backups with hard links. It raises the question: was Time Machine built to truly bring backup to Mac OS X users at large, or was it designed into OS X to sell the upcoming Time Capsules? Chink.

Of the Coat of Arms

So our new Apple coat of arms is a little more battered than when it started. These are definitely areas that need improvement. Am I optimistic we’ll see these issues resolved? I try to be, but when the frequency of these events is increasing, it’s difficult to stay positive. Maybe it doesn’t matter… Apple still designs a lot of cool products; maybe that’s enough. This is true to some extent: as long as Apple continues to sell its products, it will continue to survive as a corporation, and we’ll still be Mac users. The drawback, I think, comes down to customer loyalty. Apple is known as one of the strongest brands worldwide, simply because of their customer loyalty. Practices such as the above that step on their customers are sure to lessen that brand loyalty, though. I won’t purchase or design products that use .Mac, simply because I see it as a way Apple locks users into that service. I had no interest in purchasing the iPhone/iPod touch until it was announced that I could write apps for it, and the jury is still out on whether that was a waste of my money. Steve Jobs noted 2007 as a great year in Apple history (indeed), and look how much has happened in 2008 already. I sincerely hope I don’t have any more chinks to add to this list next January…


© 2020 *Coder Blog
