*Coder Blog

Life, Technology, and Meteorology


Fixing BufferBloat

For years I've had issues with my internet connection crapping out whenever I'm uploading data. The connection is fine when browsing the web or downloading data at the rated speed of 60 mbps. However, whenever I upload a large file, it saturates my upload link and slows every other network connection to a crawl. Websites won't load, DNS won't resolve, and ping times are measured in seconds.

My theory for a long time was that the upload bandwidth was becoming so saturated that TCP ACKs for incoming data weren't getting sent out in a reasonable amount of time. So I spent a while looking for a way to prioritize TCP ACK packets, but I never figured out how to do it.

A few nights ago while looking for a different solution, I stumbled across the idea of BufferBloat causing issues when saturating an internet link. Apparently, modern networking equipment has a lot more packet buffer memory than older equipment. This seems like a good thing: with memory prices so cheap, it makes sense to include a large buffer to help keep the network link saturated. The increased buffer could be in your router, your cable modem, or any number of components at your ISP. Unfortunately, it can cause problems when your link is slow enough to fill the buffer quickly.

When a TCP connection is created, the sending side ramps up its transmission rate by gauging TCP ACK response times. The faster the ACK packets arrive, the faster the connection appears to be, so the sender pushes data even faster until the link is saturated.

The problem is that networking equipment between the network interface on your computer and your destination may be buffering a lot of network packets. In my case, I have a 5 mbps upload link, so either my modem or router is buffering enough data while I’m uploading a large file that TCP ACK packets are taking several seconds to arrive back. During that time, the packets are just sitting in the buffer waiting to be sent. Once the bandwidth to send the packets is available, they transmit relatively quickly, but from the standpoint of my computer the response time is very slow. This kills the connection.
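
You can watch this happen in real time. Start a ping to a well-connected host (8.8.8.8 is just a convenient one), then kick off a large upload in another window:

ping 8.8.8.8

On an idle link, the round-trip times should sit in the tens of milliseconds; once an upload saturates the buffer, they balloon into the hundreds or thousands.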

The fix is to limit the amount of outgoing bandwidth on your router using QoS. You want to limit your bandwidth to about 0.5 – 1 mbps less than your connection can handle. On the ZyXEL ZyWALL 110, this is done through the BWM (bandwidth management) screen in the configuration. First, enable BWM with the checkbox at the top of the page. Then add a new rule:

Configuration
Enable: checked
Description: Outgoing Bandwidth Limit
BWM Type: Shared

Criteria
User: any
Schedule: none
Incoming Interface: lan1
Outgoing Interface: wan1
Source: any
Destination: any
DSCP Code: any
Service Type: Service Object
Service Object: any

DSCP Marking
Inbound Marking: preserve
Outbound Marking: preserve

Bandwidth Shaping
Inbound: 0 kbps (disabled), Priority 4, Maximum 0 kbps (disabled)
Outbound: 4000 kbps, Priority 4, Maximum 4000 kbps

802.1P Marking
Priority Code: 0
Interface: none

Log: no

The key above is in the Bandwidth Shaping section. Set your outbound guaranteed bandwidth and bandwidth limit to 0.5 – 1 mbps below your maximum upload speed. Here I set mine to 4000 kbps, which is a megabit less than my internet upload speed of 5 mbps.
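
If your router runs Linux rather than something like the ZyWALL (or you just want to experiment on a Linux box before touching your router), the same shaping idea can be sketched with tc. This is a minimal sketch, assuming eth0 is your WAN-facing interface and the same 4000 kbps cap:

# Cap egress at 4000 kbps so the upstream buffer never gets a chance to fill.
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 4000kbit ceil 4000kbit

Newer Linux kernels also include the fq_codel queueing discipline, which was designed specifically to combat BufferBloat.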

Once I did this, my connection improved dramatically. I take a slight hit in upload rate for the single upload connection, but overall my internet connection is a lot more responsive for other connections and devices while uploading. If you think you might be experiencing the same issue, try the speed test over at DSL Reports; it will measure your connection's BufferBloat and include it in the report. Before the fix, my upload BufferBloat was around 600 ms. After the fix, it's down to only 20 ms.

It’s been a hell of a week

Last year, I dropped a chunk of change on a Pegasus2 RAID box to store all my data on. It has been a great device since then, until earlier this week when a drive died in the RAID. Now, the Pegasus was configured as a RAID 5, so it should have just kept chugging away until I was able to replace the disk. That’s why I spent the money on it, after all, so a drive failure wouldn’t keep me from getting work done. Unfortunately, the Pegasus instead started to fail itself, throwing I/O errors to the system for days before finally failing the disk out of the volume. I can’t count how much time I spent trying to figure out where the hell the sporadic beach balls were coming from, including reinstalling the OS, twice.

I’ve been going back and forth with Promise about it, and we still haven’t gotten anywhere. The Pegasus did finally kick the drive, so now the volume is accessible again, but I have 0 confidence in the device itself or the content that is currently on there. I have backups, so at least that isn’t a problem.

But now I'm in the middle of damage control. Yesterday I ordered a replacement for the failed disk in the Pegasus, shipped next day air. Of course I ordered the 5900 RPM drive instead of the 7200 RPM one, which is partly my own stupid mistake for not checking, and partly due to Seagate no longer listing spindle speed in their product specs. Back it goes to Amazon today. As the silver lining of this story, Amazon refunded the entire cost of the drive plus the extra I paid for next day shipping (amazing customer service, and kudos to them!). The correct drive is on its way to me and will be here tomorrow.

Beyond that, I ordered a new 6TB drive to store all my data until I’m satisfied that the Pegasus is back online and reliable again. That is set to arrive here tomorrow as well.

Like I said, a hell of a week. The Apple TV Dev Kit arrived in the middle of all this, but unfortunately I haven’t had a development Mac or the time to work with it yet. In the end, here’s how I’m seeing the score:

Me: -1 (for purchasing the wrong drive)
Apple: +1 (for shipping the Dev Kits so quickly)
Seagate: -5 (for no longer listing drive RPM in their product specs)
Amazon: +10 (for going above and beyond in customer service)
Pegasus: -20 (for dying in the exact situation I spent extra money to avoid)
Promise: ? (still don’t know how they are going to handle my Pegasus failure)

Setting up a Small Desktop RAID System

With mass internal storage gone even from the top of the line in the 2013 Mac Pro, a lot more people are going to start looking for external solutions for their storage needs. Many will just buy an external hard drive or two, but others like myself will consider larger external storage arrays. One of the best solutions for people who need 5-15TB of storage is a 4 disk RAID 5 system. As I mentioned in a previous post, I went with a Pegasus2 and set it up in a RAID 5. This brings up a lot of questions about individual RAID settings though, so I thought I would put together a primer on the typical RAID settings you should care about when purchasing a Pegasus or comparable desktop RAID system.

Stripe Size
Stripe size is probably the setting with the biggest impact on the performance of your RAID. A lot of people will run a benchmark or two with different stripe sizes and incorrectly conclude that bigger stripe sizes are always faster. In reality, the best performing stripe size depends heavily on your workload.

A quick diversion into RAID theory is required before we can talk about stripe sizing. With RAID 5, each drive is split up into blocks of a certain size called stripes. In a 4 disk RAID 5, 3 disks will have real data in their stripes, and the 4th disk will have parity data in its stripe (in reality, the parity stripes in a RAID 5 alternate between drives, so not all the parity is on the same disk). The parity stripe allows a disk to fail while still keeping your array online. You give up 25% of the space to gain a certain amount of redundancy.

When you read data from the volume, the RAID will determine which disk your data is on, read the stripe and return the requested data. This is pretty straightforward, and the impact of stripe size during reading is minimal.

However, when writing data to the disk, stripe size can make a big performance difference. Here’s what happens every time you change a file on disk:

  1. Your Mac sends the file to the RAID controller to write the change to the volume.
  2. The RAID controller reads the stripe of data off the disk where the data will reside.
  3. The RAID controller updates the contents of the stripe and writes it back to the disk.
  4. The RAID controller then reads the data stripes in the same set from the other data disks in the volume.
  5. The RAID controller recalculates the parity stripe.
  6. The parity stripe is written to the final disk in the volume.

With a 4 disk RAID 5, this works out to 3 stripe reads and 2 stripe writes every time you write even the smallest file to the disk. Most RAIDs will default to a 128KB stripe size, and will typically offer a stripe size range anywhere from 32KB to 1MB. In the example above, assuming a 128KB stripe size, even a change to a 2KB file will result in 640KB of data being read or written on the disks. If a 1MB stripe size is used instead of 128KB, then 5MB of data would be accessed on the disks just to change that same 2KB file. So as you can see, the stripe size greatly determines the amount of disk I/O required to perform even simple operations.
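
To make the arithmetic concrete, here's a quick back-of-the-envelope script (a sketch; the five stripe-sized operations come from the steps above):

# I/O required to change one small file on a 4 disk RAID 5:
# 3 stripe reads + 2 stripe writes = 5 stripe-sized operations.
for stripe_kb in 32 128 1024; do
  echo "${stripe_kb}KB stripes: $((5 * stripe_kb))KB of I/O to change a 2KB file"
done

That prints 160KB of I/O for 32KB stripes, 640KB for 128KB stripes, and 5120KB (5MB) for 1MB stripes.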

So why not just choose the smallest stripe size? Well, hard drives are really good at reading contiguous blocks of data quickly. If you are reading/writing large files, grouping those accesses into larger stripe sizes will greatly increase the transfer rate.

In general, if you use mostly large files (video, uncompressed audio, large images), then you want a big stripe size (512KB – 1MB). If you have mostly very small files, then you want a small stripe size (32KB – 64KB). If you have a pretty good mix between the two, then 128KB – 256KB is your best bet.

Read Ahead Cache
A lot of RAID systems will give you the option of enabling a read ahead cache. Enabling this can dramatically increase your read speeds, but only in certain situations. In other situations, it can increase the load on your hard drives without any benefit.

Let’s talk about what happens in the read cache when read ahead is disabled. The read cache will only store data that you recently requested from the RAID volume. If you request the same data again, then the cache will already have that data ready for you on demand without requiring any disk I/O. Handy.
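
You can see a plain read cache at work from any shell, using the OS disk cache as a stand-in for the RAID's (bigfile here is just any large file you have lying around):

time cat bigfile > /dev/null   # first read comes off the disks
time cat bigfile > /dev/null   # second read is served from cache, much faster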

Now how is it different when read ahead caching is enabled? With read ahead caching, the RAID controller tries to guess what data you'll want next. It does this by reading more data off the disks than you requested. For example, if your Mac reads the first part of a big file, the RAID controller will read the subsequent bytes of that file into cache, assuming you might need them soon.

This comes in handy in some situations. Like I mentioned earlier, hard drives are good at reading big contiguous blocks of data quickly. So if you are playing a big movie file, for instance, the RAID controller might read the entire movie into cache as soon as the first part of the file is requested. Then as you play the movie, the cache already has the data you need available. The subsequent data is not only available more quickly, but the other disks in your RAID volume are also free to handle other requests.

However, read ahead can result in wasted I/O. A lot of the time, you won't have any need for the subsequent blocks on the disk. For instance, if you are reading a small file that is entirely contained in a single stripe on the volume, there is no point in reading the next stripe. It just puts more load on the physical disks and takes more space in the cache, without any benefit.

Personally, I enable read ahead caching. It's not always a win, but it can greatly speed up access times when working with bigger files (which is when the speed is needed most).

Write Back Cache
There are two write cache modes: write through, and write back. Your choice here can have a dramatic impact on the write speed to your RAID. Here’s how each mode works.

Write Through: When writing data to the disk, the cache is not used. Instead, OS X will tell the RAID to write the data to the drive, and the RAID controller waits for the data to be completely written to the drives before letting OS X know the operation was completed successfully.

Write Back: This uses the cache when writing data to the disk. In this case, OS X tells the RAID to write a given block of data to the disk. The RAID controller saves this block quickly in the cache and tells OS X the write was successful immediately. The data is not actually written to the disks until some time later (not too much later, just as soon as the disks can seek to the right location and perform the write operation).

Enabling the write back cache is “less safe” than write through mode. The safety issue comes into play during a power outage. If the power goes out between the time that the RAID told OS X the data was written, and the time when the data is actually on the disks themselves, data corruption could take place.

More expensive RAID systems, like the Pegasus2, have a battery-backed cache. The benefit here is that if a power outage happens as described above, the battery should power the cache until the power goes back on and the RAID controller can finish writing the cache to disks. This effectively overcomes the drawback of enabling write back caching.

Another potential drawback of enabling write back caching is a performance hit to read speed. The reason is that there is less cache available for reading (because some is being used for writes). The hit should be pretty minor though, and only applicable when a lot of write operations are in progress. Otherwise, the amount of data in the write back cache will be minimal.

The big advantage of using a write back cache is speed though.  When write back caching is enabled, OS X doesn’t have to wait for data to be written to the disks, and can move on to other operations.  This performance benefit can be substantial, and gives the RAID controller more flexibility to optimize the order of write operations to the disks based on the locations of data being written.  Personally, I enable write back caching.
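
You can get a feel for the same tradeoff at the OS level, using the system's write cache as a stand-in for the RAID controller's (a rough sketch; the numbers will vary with your hardware):

time dd if=/dev/zero of=test bs=1m count=512                   # returns once data is in the cache
time sh -c 'dd if=/dev/zero of=test bs=1m count=512 && sync'   # waits for data to reach the disk

The first command behaves like write back: it finishes as soon as the OS has buffered the data. The second is closer to write through, since sync forces the buffered data out to disk before the timer stops.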

Wrap-up

That about covers it.  Small desktop RAID systems are a nice way to get a consolidated block of storage with a little redundancy and a lot more performance than just a stack of disks can provide.  I hope this overview has helped you choose the options that are best for your desktop RAID system.  In the end, there is no right answer to the settings everyone should use.  Choose the settings that best fit your workload and performance/safety requirements.

Flooding Moby Dick

This weekend, a pretty heavy storm hit the California coast. One city hit particularly hard was Santa Barbara, where two restaurants at different beaches several miles apart were flooded by waves. Luckily, there were only minor injuries. The event caught me by surprise because of the coastal layout of that region.  You see, the Santa Barbara coast in general faces south.  So you don't get a whole lot of big waves hitting the region.  That makes an event like this especially rare.  Even during the El Niño year of 1997-98, when strong storms battered the coast all winter, we never saw anything quite like this.

The more surprising of the two incidents was at the Moby Dick restaurant on Stearns Wharf. Here's a frame from a YouTube video taken by someone in the restaurant as the wave hit.  Click the image to see the full video on YouTube.

There's also a news article about what happened over at KEYT.com.

The interesting thing about this destruction is where it happened. Stearns Wharf is actually on a beach facing southeast, so for a swell coming in from the west off the Pacific to be strong enough to wrap around the coast and strike a southeast-facing beach this hard is quite astounding.

Let’s take a look at the swell map from that morning to see what was actually happening. CDIP has a nice view of the swell state that morning:

Swell Map from CDIP

I've marked Stearns Wharf on that map. As you can see, the swell was coming from directly west, which is just about the worst possible case. Any northwestern component to a swell would force it to wrap not only around the peninsula in Santa Barbara but also around Point Conception. Any southwestern component to the swell would result in the Channel Islands blocking Santa Barbara from getting hit. A swell coming from exactly west can slot right through to the Santa Barbara area, perhaps even resulting in a higher tide because of the channeling of the water between the coast and the islands offshore.

You may think a westerly swell direction would be normal, but usually the swell in this area of California comes from the northwest.  This is due to the strongest winds of storms like these typically being further north, off the coast of Oregon and Washington.

From the video, it sounds like this event happened at an abnormally high tide of 6 feet (high tides are usually between 3-5 feet), and a 12 foot swell was actually reaching the coast in downtown Santa Barbara. Whenever you have a combined effect of high tide and high swells like this, disaster is sure to follow.

Hopefully Moby Dick can get things cleaned up there before too long. There are definitely a few restaurant patrons who will have a story to tell for quite some time.

Pegasus2 Impressions

With the lack of drive bays in the new Mac Pro, Apple is definitely leaning toward external storage with its future models.  My Mac Pro won't arrive until next month, but in the meantime I had to figure out what kind of storage system I was going to buy.

As I mentioned in a previous post, I had considered using my local file server as a large storage pool.  After trying it out for the past couple months, I wanted something that was a bit faster and more reliable though.  I decided to look at my direct attached storage (DAS) options.  Specifically, I was looking at Thunderbolt enclosures.

My data storage requirements on my desktop machine are currently between 3-4TB of active data, so single disk options weren't going to cut it.  I would need at least 2 disks in a striped RAID 0.  I'm not particularly comfortable with RAID 0 setups, because if any one of the drives fails, you lose data.  However, with good automatic Time Machine backups, that shouldn't be too much of an issue.  Ideally though, I wanted something with 3-4 drives that includes a built-in hardware RAID 5 controller.  This way, I would have a little bit of redundancy.  It wouldn't be a replacement for good backups, but if a disk went offline, I could keep working until a replacement arrives.

The only 3 disk enclosure I found was the Caldigit T3.  This looks like a really slick device, and I was pretty close to ordering one.  The main drawback of the unit is that it doesn’t support RAID 5.  I would have to either have a 2 disk RAID 0 with an extra drive for Time Machine, or a 3 disk RAID 0 (which is pretty risky) to support the amount of storage I need.  I decided this wasn’t going to work for me.

Once you get into the 4 disk enclosures, the prices start to go up.  There are two options I considered here.  First is the Areca ARC-5026.  Areca earned a good reputation by manufacturing top-end RAID cards for enterprise.  The 5026 is a 4 bay RAID enclosure with Thunderbolt and USB 3 ports on the back.  The drawback is that it’s pretty expensive ($799 for just the enclosure), and it doesn’t exactly have a nice look to it.  It reminds me of a beige-box PC, and I wasn’t sure I wanted something like that sitting on my desk.

The other option I looked at was a Promise Pegasus2.  It’s also a 4 disk RAID system (with 6 and 8 disk options).  They offer a diskless version that is less expensive than the Areca.  It doesn’t support USB 3 like the Areca, but it does support Thunderbolt 2 instead of Thunderbolt 1.  And the case is sharp.  Between the faster host interface and the cost savings, I decided to get the Pegasus.

The diskless model took about 2 weeks to arrive.  The outside of the box claimed it was the 8TB R4 model, so Promise isn't making a separate box for the diskless version.  I suspect that Apple twisted Promise's arm a little bit to get them to release this model.  Apple knew there was going to be some backlash from Mac Pro upgraders who needed an external replacement for their previous internal drives.  Apple promoted Promise products back when the Xserve RAID was retired, and I imagine Apple asked Promise to return the favor here.  The only place you can buy the diskless R4 is the Apple Store.  It isn't sold at any other Promise retailers.

Since the enclosure doesn't include any drives, I decided on Seagate 3TB Barracuda disks.  They are on the Promise supported drive list, and in my experience Seagate makes the most reliable hard drives.  With a RAID 5, I would have about 9TB of usable space.  More than I need right now, but it's a good amount to grow into.  Installing the hard drives was pretty straightforward: eject each tray, attach each drive with the set of 4 screws, and latch them back in.  Then I plugged it into my Mac with the included 3 foot black Thunderbolt cable and turned it on.

This being the diskless version, the default setup is to mount all four disks as if there were no RAID.  This is counter to the Pegasus models that include drives, where the default configuration is a RAID 5.  This model instead uses a pass-through mode (JBOD), so you can take drives right out of your old computer and use them with the new enclosure.  I had to jump through a few hoops, but getting the RAID set up wasn't too bad.  I had to download the Promise Utility from their website first.  Once you install the software, you can open up the utility and go into the advanced configuration to set up a new RAID volume.  The default settings for creating a RAID 5 weren't ideal.  Here's what you should use for a general case…

Stripe Size:  128KB
Sector Size:  512 bytes
Read Cache Mode:  Read Ahead
Write Cache Mode:  Write Back

The Pegasus2 has 512MB of RAM, which is used for caching.  It’s a battery-backed cache, so using Write Back mode instead of Write Through should be okay for most cases.  Only use Write Through if you really want to be ultra-safe with your data and don’t care about the performance hit.

Once you get the RAID set up, it starts syncing the volume.  The initial sync took about 8 hours to complete.  The RAID controller limits the rebuild speed to 100MB/sec per disk, which checks out: 3TB per disk at 100MB/sec is a little over 8 hours.  The limit is a good idea in general, because it leaves you some bandwidth to start using the volume right away during a rebuild.  However, it makes me wonder how much time could be saved if there wasn't a limit (I found no way to disable or increase the limit using their software).

Drive noise is low to moderate.  The documentation claims there are two fans, one big one for the drives and one small one for the power supply.  Looking through the power supply vent though, it doesn't look like there's actually a fan there.  Maybe it's further inside and that is just a vent.  The bigger fan spins at around 1100-1200rpm (this is while doing the rebuild, but idle is no lower than 1000rpm).  It's definitely not loud, but it's not really quiet either.  Sitting about 2 feet away from the Pegasus, it makes slightly less noise than my old Mac Pro (I keep the tower on my desk about 3 feet away).  The noise from the Pegasus is a bit higher pitched though.  When the new Mac Pro gets here, I'll have the Pegasus further away from me, so I'll wait until then to fully judge the noise.

Overall I’m very happy with the system so far.  Initial benchmarks are good.  Since I don’t have the new Mac Pro yet, I’m testing on a 2011 MacBook Air over a Thunderbolt 1 connection.  Using the AJA System Test, I saw rates of around 480MB/sec reads and 550MB/sec writes.  Switching to BlackMagic, the numbers bounced around a lot more, but it came up with results around 475MB/sec reads and 530MB/sec writes.  With RAID 5 having notoriously slow writes because of the parity calculation, I’m a little surprised the Pegasus writes faster than it reads.  The RAID controller must be handling the parity calculation and caching well.  It will be interesting to see if benchmarks improve at all when connected to the new Mac Pro over Thunderbolt 2.

The Surf Bike

Marlinspike’s blog entry “The Worst” is a good read, and calls to mind some lessons learned while I was earning my degree at UCSB.

On the UCSB campus, everyone bikes…and I mean everyone. I would guess there are around 20,000 bikes on campus, to the point where biking anywhere means navigating a ton of traffic. During my time there, I had 5 different bikes.

The first two were bikes from childhood that I used during my Freshman year. I put so many miles on them that eventually, even after repairing the parts that broke, they were worn out to the point that I needed a new bike.

So I bought my third bike, a Raleigh something or other. It was a pretty sharp looking bike. Nothing overly expensive, but nice enough that it was stolen about a year after I bought it. Having it stolen broke my heart, because I made sure to always lock it with one of those U-locks, and it was taken from the bike racks just outside my dorm room.

I decided from then on out to never trust bike locks. My fourth bike was a Trek, and it was the first bike I had that let me really get into mountain biking (which I still enjoy as a hobby today). It was more expensive than any of my other bikes, and for that reason, I never locked it up anywhere. I stored it in my dorm room (and later inside my apartment) when I was at home. On campus, I worked in the Engineering building, so I was able to bike to work and park the bike in my office there, just walking to the rest of my classes. It worked out pretty well, but as Marlinspike would say, the bike owned me.

Then about halfway into my Junior year a bike showed up on the back patio of our apartment. It was at least 20-30 years old, and half rusted out. It was the ugliest damn bike I have ever seen. To this day, we have no idea where it came from. We left it there for a couple of weeks, to see if anyone would find it and reclaim their property. Nobody did, so I moved the bike around to the front of our apartment and parked it in the bike rack. No lock, nothing.

The bike became our apartment’s “surf bike”, because it was perfect for when we wanted to go out surfing. There weren’t bike racks to use at our local surf spots, so usually we had to spend a lot of time walking to the ocean. With the surf bike, we didn’t need to lock it up, so we just took it to the beach and left it there while we were out, and rode it back when we were done. It was liberating.

I really started to enjoy the care-free attitude of the surf bike, so a few months later I started to use it as my daily ride too. For over a year, I rode it to campus and back every day, never locking it anywhere, and nobody ever took it. There were a few squeaks in the gearing, but it never broke down on me. It really was the perfect college bike.

I used the bike all the way through the end of senior year. When it was time to move home for the summer, it didn't feel right to take it with me. So we left it there, for the next fortunate person to discover and love.

Photo by Ryon Edwards

Apple’s Mythical iTV

I don't know if Apple is planning to release an "iTV" at some point, but if it is, this is the wrong way for competitors to respond. According to Chris Moseley, AV product manager at Samsung:

We’ve not seen what they’ve done but what we can say is that they don’t have 10,000 people in R&D in the vision category.

They don’t have the best scaling engine in the world and they don’t have world renowned picture quality that has been awarded more than anyone else.

TVs are ultimately about picture quality. Ultimately. How smart they are…great, but let’s face it that’s a secondary consideration. The ultimate is about picture quality and there is no way that anyone, new or old, can come along this year or next year and beat us on picture quality.

He makes a good point: TVs are about picture quality. The thing is, I’ve never seen another brand of computer monitor (perhaps high-end NEC displays, but those are on a different pricing level) hold a candle to the picture quality you get from any of Apple’s modern LCDs. Picture quality just isn’t going to be an issue if Apple ever ships an iTV.

The quote above reminded me a lot of a statement Palm’s then-CEO Ed Colligan made back in 2006 about the rumored iPhone:

We've learned and struggled for a few years here figuring out how to make a decent phone. PC guys are not going to just figure this out. They're not going to just walk in.

We all know how that turned out…

Using IOKit to Detect Graphics Hardware

After Seasonality Core 2 was released a couple of weeks ago, I received email from a few users reporting problems they were experiencing with the app. The common thread in all the problems was having a single graphics card (in this case, it was the nVidia 7300). When the application launched, there would be several graphics artifacts in the map view (which is now written in OpenGL), and even outside the Seasonality Core window. It really sounded like I was trying to use OpenGL to do something that wasn’t compatible with the nVidia 7300.

I’m still in the process of working around the problem, but I wanted to make sure that any work-around would not affect the other 99% of my users who don’t have this graphics card. So I set out to try and find a method of detecting which graphics cards are installed in a user’s Mac. You can use the system_profiler terminal command to do this:

system_profiler SPDisplaysDataType

But running an external process from within the app is slow, and it can be difficult to parse the data reliably. Plus, if the system_profiler command goes away, the application code won’t work. I continued looking…

Eventually, I found that I might be able to get this information from IOKit. If you run the command ioreg -l, you’ll get a lengthy tree of hardware present in your Mac. I’ve used IOKit in my code before, so I figured I would try to do that again. Here is the solution I came up with:

// Check the PCI devices for video cards.  
CFMutableDictionaryRef match_dictionary = IOServiceMatching("IOPCIDevice");

// Create a iterator to go through the found devices.
io_iterator_t entry_iterator;
if (IOServiceGetMatchingServices(kIOMasterPortDefault, 
                                 match_dictionary, 
                                 &entry_iterator) == kIOReturnSuccess) 
{
  // Actually iterate through the found devices.
  io_registry_entry_t serviceObject;
  while ((serviceObject = IOIteratorNext(entry_iterator))) {
    // Put this services object into a dictionary object.
    CFMutableDictionaryRef serviceDictionary;
    if (IORegistryEntryCreateCFProperties(serviceObject, 
                                          &serviceDictionary, 
                                          kCFAllocatorDefault, 
                                          kNilOptions) != kIOReturnSuccess) 
    {
      // Failed to create a service dictionary, release and go on.
      IOObjectRelease(serviceObject);
      continue;
    }
				
    // If this is a GPU listing, it will have a "model" key
    // that points to a CFDataRef.
    const void *model = CFDictionaryGetValue(serviceDictionary, CFSTR("model"));
    if (model != NULL) {
      if (CFGetTypeID(model) == CFDataGetTypeID()) {
        // Create a string from the CFDataRef.
        NSString *s = [[NSString alloc] initWithData:(NSData *)model 
                                            encoding:NSASCIIStringEncoding];
        NSLog(@"Found GPU: %@", s);
        [s release];
      }
    }
		
    // Release the dictionary created by IORegistryEntryCreateCFProperties.
    CFRelease(serviceDictionary);

    // Release the serviceObject returned by IOIteratorNext.
    IOObjectRelease(serviceObject);
  }

  // Release the entry_iterator created by IOServiceGetMatchingServices.
  IOObjectRelease(entry_iterator);
}
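
As a quick sanity check, you can see the same model properties the code digs out by filtering the full registry dump from the command line:

ioreg -l -w0 | grep '"model"'

Each GPU (along with a few other devices that carry a "model" property) will show up with its model string, which should match what the snippet logs.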

File Vault 2 on the 2011 MacBook Air

I recently upgraded to a 2011 MacBook Air. With the new Sandy Bridge processors having encryption routines built-in, I decided to try out File Vault 2 on Lion to see what kind of performance impact it would have while accessing the disk. Here’s the configuration of my MacBook Air for reference:

11″ MacBook Air (Summer 2011)
1.8GHz Core i7
256GB SSD (Samsung model)

First the baseline. Before enabling File Vault, I did a quick test of writing a 10GB file and then reading it back.

Blade:~ mike$ time dd if=/dev/zero of=test bs=1073741824 count=10
10+0 records in
10+0 records out
10737418240 bytes transferred in 41.938686 secs (256026577 bytes/sec)
real 0m42.021s
user 0m0.005s
sys 0m6.827s
Blade:~ mike$ time dd if=test of=/dev/null bs=1073741824 count=10
10+0 records in
10+0 records out
10737418240 bytes transferred in 37.969485 secs (282790726 bytes/sec)
real 0m38.055s
user 0m0.005s
sys 0m5.069s

After enabling File Vault, the Mac restarted, and it took about 50 minutes to finish the initial encryption.  While the encoding was taking place, I was seeing roughly 95-120MB/sec transfer rates (190-240MB/sec combined read/write bandwidth), and it averaged about 45-50% CPU usage (% of a single core) during the encoding process.

So how was performance with encryption enabled? Check this out…

Blade:~ mike$ time dd if=/dev/zero of=test bs=1073741824 count=10
10+0 records in
10+0 records out
10737418240 bytes transferred in 45.342448 secs (236807202 bytes/sec)

real 0m45.418s
user 0m0.001s
sys 0m7.721s
Blade:~ mike$ time dd if=test of=/dev/null bs=1073741824 count=10
10+0 records in
10+0 records out
10737418240 bytes transferred in 40.052954 secs (268080558 bytes/sec)

real 0m40.133s
user 0m0.001s
sys 0m4.032s

So read operations take a 5.5% hit, and write operations are 8.1% slower with File Vault 2 enabled. That's a performance penalty I can live with for the extra peace of mind of my data being secure while traveling. Even better, the amount of CPU used was about the same whether encryption was enabled or disabled.
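
For the record, those percentages come straight from the dd transfer times above:

echo 'scale=3; (45.342448 - 41.938686) * 100 / 41.938686' | bc   # writes: 8.116% slower
echo 'scale=3; (40.052954 - 37.969485) * 100 / 37.969485' | bc   # reads: 5.487% slower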

Performance summary (bandwidth is the dd transfer rate converted to MB/s; CPU time is the user plus sys time of each dd run):

File Vault   Read Bandwidth   Read CPU   Write Bandwidth   Write CPU
Disabled     269.69 MB/s      5.074s     244.16 MB/s       6.832s
Enabled      255.66 MB/s      4.033s     225.84 MB/s       7.722s

XRG on a Quad G5

Today I was pleasantly surprised when Edward Miller sent some screenshots of XRG running on his new Quad G5 system. This is the coolest thing I've seen in a while, so I thought I would post a crop of XRG's CPU graph on a quad-processor system…

XRG was designed to work with n-processor systems years ago, but I couldn’t imagine seeing more than 2 processors in a Mac for a very long time. At the time it wasn’t really something I could test either…it worked on 2 processors, but who knew what would happen when there were more than 2 CPUs. It’s really cool to see this feature in action on 4 cores, and I can’t wait to see it on 8 core Macs.
