*Coder Blog

Life, Technology, and Meteorology

File Server Upgrade

Last month, the RAID card in my file server died. I tried to replace the card with a newer model, but found that not all PCI Express cards work well with all motherboards. The motherboard was old enough that the new card simply wouldn’t work with it. Since the server components (other than the drives) were almost 10 years old, I decided it was time to rebuild the internals.

I already had a solid base from the old file server.  The case is a Norco RPC-4020.  It’s a 4U enclosure with 20 drive bays.  The most I’ve ever used was 12 bays, but with the increasing size of modern drives, I am whittling it down to 8.  The drives I have are pretty modern, so this build doesn’t factor in any additional drive cost.  Other than the drives though, the rest of the server’s guts needed a good refresh.  Here’s what I put in there:

Motherboard:  Asus Z87-Pro
I went with this Asus because it had a good balance of performance and economy (and Asus’ reliability). The board has 8 SATA ports, which is great for a file server where you’re trying to stuff a bunch of disks in the box. I also liked that the board uses heatsinks instead of fans for cooling: fewer moving parts to wear out. Finally, this board has plenty of PCIe slots in case I want to add RAID/HBA cards for more drives, or a 10GBASE-T Ethernet card down the line.

CPU:  Intel Core i5-4570S
This is one of the low power models in the Haswell (4th generation) line.  TDP is a moderate 65 watts.  I was debating between this chip and the 35 watt Core i3-4330T.  If this server just served files, then I would have bought the Core i3, but I also use the box to host a moderately-sized database and do some server-side development.  The Core i5 chip is a quad core instead of a dual core, and I decided it would be worth it to step up.  You’ll notice that a GPU isn’t included in the list here, and that’s because I’m just using the embedded GPU.  One less component to worry about.

Memory:  2x4GB Crucial Ballistix Sport DDR3-1600
I’ve never been into overclocking, so I just went with whatever memory ran at the CPU’s native 1600MHz. Crucial is always a safe bet when it comes to memory. This particular memory also has a relatively low CL9 latency.

Power Supply:  Antec EA-550 Platinum 550 watt
The power supply is a make-or-break part of a server, especially when you have a lot of disks.  I wanted something that was very efficient, while also supplying plenty of power.  This power supply is 93% efficient, meaning a lot more energy is making it to the computer components themselves instead of being wasted in the form of heat.  The one drawback of this power supply is that it’s a 4 rail unit and all the Molex/SATA power connectors are on a single rail.  So it’s not quite ideal for servers with a lot of disks (you need enough to cover the power spike as the disks spin up), but it handles 8 drives just fine with some room to grow.

Boot Drive:  USB 3 internal motherboard header and flash drive
I really wanted the OS to stay off the data drives this time around. The best way I found to do that is to use the USB 3 header built into most modern motherboards. Typically this header is for cases that have USB 3 ports on the front, but my case only has a single USB 2 port on the front, so the header was going unused. I found a small Lian Li adapter to convert the 20-pin port on the motherboard to 2 internal USB 3 ports. Then I picked up a 128GB PNY Turbo USB 3 flash drive on sale. The motherboard has no problem booting off the USB drive, and while latency is higher than an SSD’s, the raw throughput of this particular flash drive is pretty good.

The Lian Li adapter is great because I don’t have to worry about the flash drive coming unplugged from the back of the case.  It’s inside the server, where it won’t be messed with.

Once I had all the components installed, I had to cable everything up. I used about a million tie-wraps cleaning up the cabling, but it looks nice in the end. The cables are nowhere near as elegant as the cabling inside a Mac, but for a PC I think it turned out pretty well. Here’s a shot of the inside of the server:

The power savings over the old server components were pretty dramatic. The old system had a standard 550 watt power supply and an Athlon X2 CPU. Typically, the load would hover between 180 and 240 watts. This new server idles at 80 watts and only occasionally breaks 100 watts when it’s being stressed a bit. It’s great to get all this extra performance while using less than half the power.

Overall, it turned out to be a great build. Component cost was less than $600 (not including the case or drives), while still using quality parts. I’m looking forward to this one lasting another 10 years.

GSGradientEditor

A fairly significant feature in Seasonality Pro is the ability to edit the gradients used to show weather data on a map.  When looking around for some sample open source gradient editors online, I didn’t come across anything I could really use.  So I decided to write my own and offer it under an MIT license.  I posted the source code (link below) on GitHub.  Here’s what it looks like:

I’ve included a lot of documentation as well as a sample Xcode project to show how to use it over on the GitHub page:

GSGradientEditor on GitHub

I looked at quite a few different graphics apps when working on the UI.  I wanted to see not only how other implementations looked, but how they worked.  With iOS 7 being more gesture-centric, I wanted to make sure that interaction with GSGradientEditor was intuitive.  I found the Inkpad app most helpful during this process.  In the end, I like how GSGradientEditor turned out.

Enjoy!

GSShapefile

It’s been a while since I’ve open sourced any code (cough…XRG…cough), and I thought it was about time to contribute something new.

This code is a small collection of classes that will parse ESRI Shapefiles. As I’m getting further into the development of Seasonality Pro (which you can follow at the new Seasonality: Behind the Scenes blog), I thought it would be important to be able to show Shapefile data on a map. There are a few basic implementations out there (see iOS-Shapefile and Cocoa Shapefile), but I wanted a more modern code design, and something that was flexible enough to add expanded support in the future. So I dug up the Shapefile spec and got started.

The result after hacking on it for a few days is GSShapefile. It should work on both the Mac and iOS platforms, as long as you have ARC enabled in your project. GSShapefile takes an NSData object as its input, so it doesn’t make any assumptions about whether the data is coming from a local file or somewhere online. After the file is parsed, you can retrieve the shape records and the points associated with each shape. It should be pretty easy to integrate with your own code.
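For a taste of what the parser deals with, here is a minimal sketch (an illustration based on the published spec, not GSShapefile’s actual code) that reads the fixed 100-byte main file header. One quirk of the format is mixed endianness: the file code and file length are big-endian, while the version and shape type are little-endian.

#import <Foundation/Foundation.h>

// Read the 100-byte shapefile main file header from an NSData object.
// Field offsets and byte orders come straight from the ESRI spec.
static BOOL ReadShapefileHeader(NSData *data) {
    if (data.length < 100) return NO;  // The header is always 100 bytes.
    const uint8_t *bytes = data.bytes;
    uint32_t raw;

    // Bytes 0-3: file code, big-endian, always 9994.
    memcpy(&raw, bytes + 0, sizeof(raw));
    if (CFSwapInt32BigToHost(raw) != 9994) return NO;

    // Bytes 24-27: total file length in 16-bit words, big-endian.
    memcpy(&raw, bytes + 24, sizeof(raw));
    uint32_t lengthInBytes = CFSwapInt32BigToHost(raw) * 2;

    // Bytes 28-31: version, little-endian, always 1000.
    memcpy(&raw, bytes + 28, sizeof(raw));
    uint32_t version = CFSwapInt32LittleToHost(raw);

    // Bytes 32-35: shape type (e.g. 1 = Point, 3 = PolyLine, 5 = Polygon).
    memcpy(&raw, bytes + 32, sizeof(raw));
    uint32_t shapeType = CFSwapInt32LittleToHost(raw);

    NSLog(@"Shapefile v%u, %u bytes, shape type %u", version, lengthInBytes, shapeType);
    return YES;
}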

I hope somebody finds it helpful.

On the New Mac Pro

Apple talked more about the new Mac Pro at its special event today, giving more details on when it will start shipping (December) and how much it will cost ($2999 for the base model). They also covered some additional hardware details that weren’t mentioned previously, and I thought I would offer my 2 cents on the package.

Storage

There have been a lot of complaints about the lack of expansion in the new Mac Pro, particularly when it comes to storage. With the current Mac Pro able to host up to 4 hard drives and 2 DVD drives, the single PCIe SSD slot in the new Mac Pro can be considered positively anemic. This has been the biggest issue in my eyes. Right now in my Mac Pro, I have an SSD for the OS and applications, a 3TB disk with my Home directory on it, and a 3TB disk for Time Machine. That kind of storage just won’t fit in a new Mac Pro.

I believe Apple’s thinking here is that big storage doesn’t necessarily belong inside your Mac anymore. Your internal drive should be able to host the OS, applications, and recently used documents, and that’s about it. Any archival storage should be external: on an external hard drive, on a file server, or in the cloud. Once you adopt this mindset, the lack of hard drive bays in the new Mac Pro starts to make sense.

Personally, if I decide to buy one, I’ll probably start migrating my media to a file server I host here in a rack and see just how much space I need for other documents. I already moved my iTunes library a couple months back (300GB), and if I move my Aperture photo libraries over, that will reduce my local data footprint by another 700-800GB (depending on how many current photo projects I keep locally). That’s an easy terabyte of data that doesn’t need to be on my Mac, as long as it’s available over a quick network connection.

VMware virtual machines are a little tricky, because they can use a lot of small random accesses to the disk, and that can be really slow when done over a network connection with a relatively high latency. The virtual disks can grow to be quite large though (I have a CentOS virtual machine to run weather models that uses almost 200GB). I’ll have to do some testing to see how viable it would be to move these to the file server.

All this assumes that you want to go the network storage route. To me, this is an attractive option because a gigabit network is usually fast enough, and having all your noisy whirring hard drives in another room sounds… well… peaceful. If you really need a lot of fast local storage though, you’ll have to go the route of a Thunderbolt or USB 3 drive array. If you have big storage requirements right now, you most likely have one of these arrays already.

CPU/GPU Configurations

The new Mac Pro comes with a single-socket Xeon CPU and dual AMD FirePro GPUs. This is the reverse of the old Mac Pro, which had 2 CPU sockets and a single graphics card (in its standard configuration). The new Mac Pro is certainly geared more toward the video and scientific professionals who can use the enhanced graphics power.

With 12 cores in a single Xeon, I don’t think the single socket CPU is a big issue. My current Mac Pro has 8 cores across 2 sockets, and other than when I’m compiling or doing video conversion, I have never come close to maxing all the cores out. Typical apps just aren’t there yet. You’re much better off having 4-6 faster cores than 8-12 slower cores. Fortunately, Apple gives you that option in the new Mac Pro. A lot of people have complained about paying for the extra GPU though. FirePro GPUs aren’t cheap, and a lot of people are wondering why there isn’t an option to just have a single GPU to save on cost.

I think the reason for this is the professional nature of the Mac Pro. The new design isn’t really user expandable when it comes to the graphics processors, so Apple decided to include as much GPU power as they thought their pro customers would reasonably want. The new Mac Pro supports up to three 4K displays, or up to six Thunderbolt displays. A lot of professionals use dual displays, and it’s increasingly common to have three or more. With dual GPUs this isn’t a problem in the new Mac Pro, while a single-GPU configuration would have a display limit comparable to the iMac’s. Personally, I have 2 graphics cards in my Mac Pro and have used up to 3 displays. Currently I only use 2 displays though, so I could go either way on this issue. I do like the idea of having each display on its own GPU, as that will just help everything feel snappier. This is especially true once 4K displays become standard on the desktop. That’s a lot of pixels to push, and the new Mac Pro is ready for it.

External Expansion

I’ve seen people comment on the lack of FireWire in the new Mac Pro. This, in my opinion, is a non-issue. Even FireWire 800 is starting to feel slow when compared to modern USB 3 or Thunderbolt storage. If you have a bunch of FireWire disks, just buy a $30 dongle to plug into one of the Thunderbolt ports. Otherwise you should be upgrading to Thunderbolt or USB 3 drives; USB 3 enclosures are inexpensive and widely available.

Outside of that, the ports are very similar to the old Mac Pro’s. One port I would have liked to see in the new Mac Pro is 10G Ethernet. The cost per port of 10G is coming down rapidly, and with storage moving out onto the network, it would have been nice to have the extra bandwidth 10G Ethernet offers. Apple introduced Gigabit Ethernet on Macs well before it was a common feature on desktop computers as a whole. Perhaps there will be a Thunderbolt solution to this feature gap sometime down the road.

Power Consumption and Noise

This alone is a good reason to upgrade from a current Mac Pro. The new Mac Pro will only use around 45W of power at idle, which isn’t much more than a Mac Mini and is about half the idle power consumption of the latest iMacs (granted, the LCD in the iMac uses a lot of that). My 2009 Mac Pro uses about 200W of power at idle. Assuming you keep your Mac Pro on all the time and are billed a conservative $0.08 per kilowatt-hour, you can save about $100/year just by upgrading (155W saved over 8,760 hours is roughly 1,360 kWh, or around $109). That takes some of the sting out of the initial upgrade cost for sure.

Using less energy means needing less cooling. The new Mac Pro only has a single fan in it, and it’s reportedly very quiet. Typically the unit only makes about 12dB of noise, compared to around 25dB in the current Mac Pro. With sound intensity doubling for every 3dB increase, the new Mac Pro puts out roughly a sixteenth of the sound power of the old one. Surely the lack of a spinning HD helps here as well.

Overall

Overall the new Mac Pro is a slick new package, but you already knew that. It isn’t for everybody, but it fits the needs of the professional customer pretty well moving forward. Personally, I haven’t decided if I will buy one yet. My Mac Pro is almost 5 years old at this point, and while it still does a good job as a development machine, I’m starting to feel its age. However, I haven’t decided whether I will replace it with a new Mac Pro, the latest iMac, or even a Retina MacBook Pro in some sort of docked configuration. There are benefits and drawbacks to each option, so I’m going to wait until I can get my hands on each machine and take them for a spin.

Seasonality Updates

I thought now might be a good time to post an update about how development is progressing in the family of Seasonality apps.

Seasonality Core

Seasonality Core 2.4 is going to be released sometime in the next week. This update has some nice improvements. One is an update to Particle Mode that makes it look much more impressive; it’s the same feature showing the same data, but in a cooler way. The mapping code is also gaining other improvements, like allowing wrap-around at +/- 180° longitude (New Zealand users rejoice!). The graphs aren’t being ignored either: a new hover bubble layout makes it easier to inspect the data. The hover bubble will also expand when you hold down the Option key on your keyboard to show all the conditions at the hovered time. It’s a really nice way to look at what’s happening at a certain moment.

As far as future plans for Seasonality Core go, the next major update will most likely be Seasonality Core 3.0. It’s too early to discuss features, but there are a couple of areas that I think need improvement. One is increasing the number of supported locations and making locations easier to search and configure. This is a lot of work that requires server-side changes as well, so it’s hard to say when it will be ready. The second change I would like to make is to bring back some of the customizability from Seasonality Go, where it’s great how you can customize your own screen layouts. I would love for Seasonality Core to be able to do this as well.

Seasonality Go

Seasonality Go 2.2 was just released last month. We worked on the user interface a lot to start the transition to iOS 7, and I think it looks a lot nicer now. Another big new feature was the ability to select a color theme. Just head into the Tools (wrench) menu under Settings to choose a color that looks best to you. Beyond these visual changes, lots of optimizations were made to the code behind the scenes. The app runs a lot more smoothly now, especially when switching screens or switching between Seasonality Go and other apps.

The next major update to Seasonality Go will most likely have the same location changes I discussed above in Seasonality Core. I’m also planning to continue improving the interface to show less clutter and more weather. This will bring back some of the look and feel improvements I’ve been working on in Seasonality Pro.

Seasonality Pro

Seasonality Pro will be an iPad weather app for professional meteorologists. I’ve had the project on my mind for several years now, and over the past several months I’ve finally found more time to work on it.

I have been receiving a lot of questions about how Seasonality Pro development is progressing. It’s certainly taken me longer than I was originally expecting. During the past several months I have been splitting my time between Seasonality Core, Seasonality Go, and Seasonality Pro. Quite a few features I’ve added to Seasonality Core and Seasonality Go recently provide major underlying functionality that will be used in Seasonality Pro. It has been a good way of making progress on Pro while still providing updates to the other apps. Now that a lot of the foundation code is ready, I’ve recently started to switch gears: instead of working on Seasonality Pro indirectly through features added to the other apps, I am now spending a lot more time directly on its interface and layout. Version 1.0 is still a ways off, but it’s looking good so far and solid progress is being made.

As always, if you would like to provide feedback about any of the Seasonality apps, please send me an email. There are email links in the Help menus in both Seasonality Core and Seasonality Go.

Storyboard Change

This Storyboard change bit me this morning. I was trying to do some initial calculations from my view geometry before it is displayed at app launch. Previously, when using XIB files to define an interface, you would be able to see your view geometry as early as viewWillAppear: in your UIViewController.

With Storyboards, view geometry will not be ready until later. In my case, the view bounds returned within viewWillAppear: were all zeros. It took a bit of Googling to find the answer, but I eventually came across this question on Stack Overflow:

Gettting [sp] subViews’ frames in Storyboard

In short, if you need your view geometry as early as possible, use viewDidLayoutSubviews in your UIViewController.
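For example, here’s a minimal sketch of the pattern (the class name is just for illustration, and the one-time guard is my own addition, since viewDidLayoutSubviews can be called multiple times over the life of a view controller):

#import <UIKit/UIKit.h>

@interface MyViewController : UIViewController
@end

@implementation MyViewController {
    BOOL _didSetupGeometry;
}

- (void)viewDidLayoutSubviews {
    [super viewDidLayoutSubviews];

    // By this point the Storyboard-loaded view has its real bounds,
    // unlike in viewWillAppear: where they can still be all zeros.
    if (!_didSetupGeometry) {
        _didSetupGeometry = YES;
        CGRect bounds = self.view.bounds;
        NSLog(@"Usable view geometry: %@", NSStringFromCGRect(bounds));
        // ...do initial geometry-dependent calculations here...
    }
}

@end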

The intersection of two NSIndexSets

If you are looking for code to compare two NSIndexSets and return their intersection, this is for you. The algorithm is pretty simple, so I decided to wrap it up and post it to a Gist. It’s implemented as a category on NSIndexSet.

NSIndexSet+GSIndexSetAddditions
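The Gist has the full category, but the core of the algorithm is short enough to sketch here (an illustration of the approach, not necessarily the exact code in the Gist): enumerate the smaller set and keep each index the other set also contains.

#import <Foundation/Foundation.h>

@interface NSIndexSet (IntersectionExample)
- (NSIndexSet *)example_intersectionWithIndexSet:(NSIndexSet *)otherSet;
@end

@implementation NSIndexSet (IntersectionExample)

- (NSIndexSet *)example_intersectionWithIndexSet:(NSIndexSet *)otherSet {
    // Enumerating the smaller set keeps the work proportional to its size.
    NSIndexSet *smaller = (self.count <= otherSet.count) ? self : otherSet;
    NSIndexSet *larger = (smaller == self) ? otherSet : self;

    NSMutableIndexSet *intersection = [NSMutableIndexSet indexSet];
    [smaller enumerateIndexesUsingBlock:^(NSUInteger idx, BOOL *stop) {
        if ([larger containsIndex:idx]) {
            [intersection addIndex:idx];
        }
    }];
    return intersection;
}

@end

Using it is then just a matter of calling [setA example_intersectionWithIndexSet:setB].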

A New Lens

I’ve been into photography for about 10 years now. Back in 2008 I took the step from a point and shoot up to a DSLR and bought a Canon 450D (aka the Rebel XSi). It’s been a great camera over the years, but for quite some time I have been feeling like I have outgrown it. I wanted a new camera with better low-light capabilities and all the extra options like micro focus adjustments and more megapixels to work with. Last week, I found a pretty sweet deal on a Canon 5D Mark II, so I ordered one.

Part of this upgrade was bittersweet though. For the past couple of years, my favorite lens has been the EF-S 17-55mm f2.8 IS. Though it’s not an L series lens, the image quality and sharpness is incredible. It’s just an excellent standard zoom lens for Canon crop sensor cameras. Unfortunately, with the 5D being a full frame camera, the 17-55mm simply won’t work with it.

I had to decide what to do for a new walk-around lens. I thought about going the all-primes route. I love my 50mm prime, and the image quality of primes is top-notch. But primes don’t fit my style of photography as well, so I decided to stick with the zooms. I was left with a few options. If I wanted to keep an f2.8 aperture, I could grab a 24-70mm f2.8L II, or buy the older 24-70mm f2.8L on the used market. The first version of that lens is pretty heavy. The second version is lighter, and crazy sharp, but costs over twice as much. If I decided that f4 would be a good enough aperture, then I could go for the just-announced 24-70mm f4L IS, or the 24-105mm f4L IS (which is the kit lens of the 5D Mark II). Finally, Tamron has a nice 24-70mm f2.8 VC lens, which is the only full frame lens in this zoom range that offers both a wide f2.8 aperture and image stabilization.

While the Tamron was tempting, I decided to stick with the Canon options. I also threw out the 24-70mm f2.8L II pretty quickly, because it’s well outside my budget.

That left the used 24-70mm f2.8L and the f4 zooms. Each of these lenses comes with a compromise. The 24-70mm f2.8L doesn’t have IS, and is a bigger and heavier lens; I wasn’t sure I wanted to carry around that extra weight all the time. The 24-70mm f4L IS is a new lens, and is rumored to be sharper than both of the others, but costs about $500 more. The 24-105mm f4L IS has the extra zoom range, but isn’t as sharp as the 24-70mm f4L IS. So I could go for speed, sharpness, or range. This is a pretty tough decision for a lot of photographers.

I decided to go for the 24-105mm f4L IS. From the reviews, it sounded about as sharp as the 24-70mm f2.8L, while weighing a good amount less and offering more zoom range. That extra zoom range was a big deal to me, because I often pack my 70-200mm lens while traveling, and with the extra reach of the 24-105, I might be able to leave that longer lens at home from now on.

After making my decision, I wanted to do an image comparison between the new 24-105mm lens and my current 17-55mm. So I set up a quick composition and mounted my XSi on a tripod. The camera settings I decided on were JPEG images (to remove any raw processing variables), mirror lock-up enabled, a 2 second timer to trigger the shutter, and IS disabled on both lenses. I focused manually using live view on the digit on the 1 ball, and the exposure ended up being 0.5 seconds at f4 and ISO 100. These JPEGs are straight out of the camera, with the exception of the white balance: I chose the white balance settings from one of the photos in Aperture and stamped all the other photos with it, so every image starts with the same color settings.

Test Setup

I took photos at f2.8, f4.0, 17mm, 24mm, 55mm, and 105mm. Of course, neither lens was capable of all of those combinations, but I did every shot I could. Below I’m going to focus on the 24mm and 55mm focal lengths, because those overlap between the two lenses.

24mm: On the 24-105mm lens, 24mm is as wide as it gets, so the 17-55mm has an advantage of not being all the way at the end of its zoom range. The 17-55mm is also stopped down at f4, giving it an even greater advantage here. So how did the 24-105mm lens do? Well, while both lenses produced similar results, the 17-55mm was a bit sharper and the 24-105mm lens produced richer colors.

17-55mm f4.0 at 21mm

55mm: Here the 17-55mm is all the way at the long end of its zoom range, while the 24-105mm lens is pretty much in the middle. The 17-55mm still has the advantage of being stopped down though. Here, the 17-55mm lens was still sharper, but not by as much as in the 24mm frame. The color from the 24-105mm lens was still better, but again not by as much as in the 24mm frame. This comparison was probably the closest result, and I had to look pretty hard to find any difference at all between the frames.

24-105mm f4.0 at 55mm

Zooming the 24-105mm all the way out to 105mm resulted in a softer image. Some other reviews online mention the 24-105mm lens starts to get soft at around 90mm, and my 105mm shot definitely agreed with that.

How about some uneven comparisons between the two lenses? I looked at 24mm using f2.8 on the 17-55mm lens, and f4 on the 24-105mm. Here, the sharpness is about the same. If I had to pick a winner in this situation, the 17-55mm would be it, but not by much. Again, color on the 24-105mm was nicer.

Comparing the two at 55mm with the 17-55mm lens set at f2.8 and the 24-105mm lens set at f4 was the outlying result. Here the 24-105mm lens was sharper. Not by much, but it was noticeable. The 24-105 also retained its nicer color rendering.

The results were close enough (and only visible by pixel-peeping) that I don’t think I’ll notice the difference day to day with the new 24-105mm lens. But what an impressive showing by the 17-55mm. It’s just amazingly sharp, and I’ve loved using it over the past couple of years. If you’re looking for the best image quality on a standard zoom out of a crop sensor camera, you won’t find much better than the 17-55mm f2.8 IS.

What about other factors? Well, the biggest difference between these two lenses is the build quality. I didn’t take this into account as much when I originally bought the 17-55 (back then I compared it to the 24-105mm as well and decided I wanted the wider field of view). When I unpacked the 24-105mm lens earlier today, I was floored at how much more solid it felt. The zoom rings are really smooth, and the whole lens feels substantial. I was expecting a difference between the two, but the build quality gap was much greater than I anticipated. The 24-105mm lens is also dust and weather resistant, so you’ll be in better shape if you are out in the elements with this one.

The other difference is the maximum aperture. With f2.8, the 17-55mm lens is much more flexible. It won’t play as big of a role in this switch for me though. Full frame cameras will typically be more sensitive to light because of the bigger image sensor. So I could easily bump my ISO up one stop on the 5D to get the equivalent light of the f2.8 on my 450D (and still have shots with less noise than on the 450D).

I have posted full resolution copies of all the photos to my Flickr account so you can compare them for yourself. You really can’t go wrong with either lens. They both have great image quality; one’s a little wider, one’s a little longer, but overall their similarities far outnumber their differences.

The Surf Bike

Marlinspike’s blog entry “The Worst” is a good read, and calls to mind some lessons I learned while earning my degree at UCSB.

On the UCSB campus, everyone bikes…and I mean everyone. I would guess there are around 20,000 bikes on campus, to the point where biking anywhere means navigating a ton of traffic. During my time there, I had 5 different bikes.

The first two were bikes from childhood that I used during my Freshman year. I put so many miles on them that, even after repairing the parts that broke, they eventually wore out to the point where I needed a new bike.

So I bought my third bike, a Raleigh something or other. It was a pretty sharp looking bike. Nothing overly expensive, but nice enough that it was stolen about a year after I bought it. Having it stolen broke my heart, because I made sure to always lock it with one of those U-locks, and it was taken from the bike racks just outside my dorm room.

I decided from then on out to never trust bike locks. My fourth bike was a Trek, and it was the first bike I had that let me really get into mountain biking (which I still enjoy as a hobby today). It was more expensive than any of my other bikes, and for that reason, I never locked it up anywhere. I stored it in my dorm room (and later inside my apartment) when I was at home. On campus, I worked in the Engineering building, so I was able to bike to work and park the bike in my office there, just walking to the rest of my classes. It worked out pretty well, but as Marlinspike would say, the bike owned me.

Then about halfway into my Junior year a bike showed up on the back patio of our apartment. It was at least 20-30 years old, and half rusted out. It was the ugliest damn bike I have ever seen. To this day, we have no idea where it came from. We left it there for a couple of weeks, to see if anyone would find it and reclaim their property. Nobody did, so I moved the bike around to the front of our apartment and parked it in the bike rack. No lock, nothing.

The bike became our apartment’s “surf bike”, because it was perfect for when we wanted to go out surfing. There weren’t bike racks to use at our local surf spots, so usually we had to spend a lot of time walking to the ocean. With the surf bike, we didn’t need to lock it up, so we just took it to the beach and left it there while we were out, and rode it back when we were done. It was liberating.

I really started to enjoy the care-free attitude of the surf bike, so a few months later I started to use it as my daily ride too. For over a year, I rode it to campus and back every day, never locking it anywhere, and nobody ever took it. There were a few squeaks in the gearing, but it never broke down on me. It really was the perfect college bike.

I used the bike all the way through the end of senior year. When it was time to move home for the summer, it didn’t feel right to take it with me. So we left it there, for the next fortunate person to discover and love.

DSC_0451
Photo by Ryon Edwards

Overhead while using GCD

Today I spent some time optimizing the Particle Mode simulation code in Seasonality Core. While doing some measurements, I discovered that quite a bit of time was spent in GCD code while starting new tasks. I use dispatch_apply to iterate through the particles and run the position and color calculations for the next frame. In the tests below, I was simulating approximately 200,000 particles on the Macs, and 11,000 particles on the iPad.

I decided to try breaking the tasks up into fewer blocks, running dispatch_apply on groups of around 50 particles instead of on individual particles. After making this change, the simulation ran in up to 59% less CPU time than before.
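In code, the change looks roughly like this (a simplified sketch rather than the actual Seasonality Core source, with the per-particle math reduced to a placeholder):

#import <Foundation/Foundation.h>

// Before: one dispatch_apply iteration per particle. With ~200,000
// particles, the per-block scheduling overhead adds up quickly:
//
//   dispatch_apply(particleCount, queue, ^(size_t i) {
//       // ...update particle i...
//   });
//
// After: batch ~50 particles per iteration so GCD schedules far fewer
// blocks, each doing a meaningful chunk of work.
static const size_t kParticlesPerBatch = 50;

void UpdateAllParticles(size_t particleCount) {
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    size_t batchCount = (particleCount + kParticlesPerBatch - 1) / kParticlesPerBatch;

    dispatch_apply(batchCount, queue, ^(size_t batch) {
        size_t start = batch * kParticlesPerBatch;
        size_t end = MIN(start + kParticlesPerBatch, particleCount);
        for (size_t i = start; i < end; i++) {
            // ...compute particle i's next position and color...
        }
    });
}

Here are some informal numbers, just by looking at Activity Monitor and roughly estimating: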

CPU Usage

Device                               Before   After   Time Savings
Mac Pro (2009, Octo 2.26GHz Xeon)     390%    160%       59%
Retina MBP (2012, Quad 2.6GHz i7)     110%     90%       18%
MacBook Air (2011, Duo 1.8GHz i7)     130%    110%       15%
iPad 3 (fewer particles)               85%     85%        0%

As you can see, the benefits from the new code on the Mac Pro are substantial. With my earlier code, I had wondered why the simulation took so many more resources on the Mac Pro than on the laptops. Clearly the overhead of scheduling work across threads is a lot higher on the older Xeon CPU. This brings the Mac Pro’s processing times closer to what the other, more modern processors can accomplish.

Perhaps an even more surprising result is the lack of a speedup on the iPad. While measuring both runs, the two versions averaged about the same usage. Perhaps if I had a more formal way to measure the processing time, a small difference might become apparent, but overall the difference was minimal. I’m guessing that Apple has built logic into the A-series CPUs that allows for near-zero cost in context switching. Makes you wonder how much quicker something like this would run if Apple built their own desktop-class CPUs.
