*Coder Blog

Life, Technology, and Meteorology

20 Years of XRG

Today marks an important milestone for one of my apps: it has been 20 years since I published the first public release of XRG.  XRG 0.1.1 was released on October 24, 2002 and was built for OS X 10.2, but also supposedly worked on OS X 10.1.  The release DMG file was a whopping 169 KB.

XRG was the project I used to teach myself how to develop Cocoa apps using Objective-C.  Over the years, the code has been refactored several times over, but I’m sure there is still some original code scattered about the project.  When XRG was introduced, modern Objective-C features didn’t exist yet.  Garbage Collection was a fad that I was thankful to never use, and ARC wasn’t even a figment of anyone’s imagination, so retain/release calls were everywhere.  Of course I switched to ARC when it became available, and I’ve also refactored the code over the years to use Objective-C literals, dot notation for property access, auto-synthesized properties, and object subscripting.

The one thing I haven’t done is add any Swift code to the project.  I decided years ago that XRG will remain my love letter to the Objective-C language.  Objective-C was the perfect language for the OS X versions of the day, and there’s just something about writing Obj-C code that feels comforting.  I certainly wouldn’t use it for a new codebase for a multitude of reasons, but keeping XRG written entirely in Objective-C helps keep me proficient in the language, and also avoids all the headaches of a hybrid Obj-C/Swift codebase.  To be honest, as long as Objective-C remains a first-class citizen on macOS, I’m not sure I’ll ever rewrite it in Swift.

Most importantly, I can thank XRG for jump-starting my career in Mac and iOS development.  If I hadn’t created XRG, I wouldn’t have been introduced to the Indie developer scene that inspired me to create Seasonality.  If I hadn’t created Seasonality, it’s very likely I wouldn’t be an iOS developer today.  It just boggles my mind to consider what path I’d be on if I hadn’t started working on XRG.

Twenty years later, and it’s still the first app I install on a new Mac.  Since I use it all the time, I’ve continued to maintain and improve it when time allows.  It has been an amazing ride…here’s to the next decade.

Fan Noise on the 14″ MacBook Pro M1 Max

For some reason, the noise a computer makes can really affect my experience using it.  And since most of the time I’m using my computer for work, it can affect my work attitude tremendously.  Yesterday I posted my initial impressions of fan noise on the new 14” MacBook Pro M1 Max.  At that point, I only had a couple hours to work on it, and mostly just installed some software updates and apps.  After using it a second day, I’ve had a chance to push the hardware a little bit more and get a better gauge on the fan noise.  I’m posting this for folks who may be on the fence between the 14” and the 16” due to noise and thermals.

In my office, I typically use a laptop in clamshell mode, hooked up to a Pro Display XDR.  This will trigger the fans to run at around 2500 rpm at idle on the M1 Max.  I can’t hear any fan noise until they start spinning above 3000 rpm, so at 2500 rpm they are inaudible to me.  This is in my relatively warm office, around 75°F/24°C.  Day-to-day tasks that use the GPU, such as Mission Control, feel absolutely fluid on the 6K display, which I couldn’t say about my previous 2019 MacBook Pro.

Let’s move on to tasks that present more of a load on the system.  I spend 90% of my day writing software.  What is the Mac like when building an Xcode project with 100k lines of mixed ObjC/Swift code?  To begin with, the build took just 17 seconds to complete, compared to 61 seconds on a 2019 16” MacBook Pro Core i9.  This is going to save me a ton of time every day, for sure.  While working on some larger projects, I can spend more than an hour every day just waiting for Xcode to compile.  Getting 45 minutes of time back every day is a game changer.

Returning to the thermals…the thing is, 17 seconds isn’t long enough to raise the fan speed at all on the M1 Max.  So I had to repeatedly build and clean the project for several minutes to get a gauge on how it would do on a bigger project.  The fan speed started increasing slowly after a couple of minutes, eventually reaching 4000-5000 rpm.  The fans are definitely audible at that point, though it’s a much less objectionable sound than the higher pitch from the 2019 16” MacBook Pro.  After I stopped compiling, the fan speed dropped back down below the audible 3000 rpm threshold after a couple of minutes.

Next I moved on to Final Cut Pro, and exported a 15 minute 4K video.  Again after a few minutes this pushed the fans up to around 5000 rpm. Background rendering also raised the fan speed, but if I was just scrubbing the timeline and doing minor edits, the fan speed stayed below 3000 rpm.

Moving on to gaming, which admittedly I don’t usually have much time for these days.  I opened Cities: Skylines and left it running on a smaller town.  It hit the GPU around 65% and CPU around 300% on default graphics settings at 1920×1080 in a window.  Fan speeds settled around 4400 rpm, so the fans are audible…though difficult to pick out above the sound of the game (which was relatively low at the time).

To compare it to a few other Macs I have experience with: I would say the 14” MacBook Pro M1 Max is similar in noise level to the 2018 Mac mini i7.  It’s much quieter than the 2019 MacBook Pro 16” i9/5500M.  It’s quieter at idle, but louder under load than the 2013 Mac Pro.  Performance, of course, beats all these other Macs significantly…It’s worlds faster.

Overall, I’m impressed at the thermal headroom of this Mac.  I haven’t heard the fans when doing most day to day work… Keep the CPU below 200% usage on the efficiency cores and you’ll never hear the fans.  If I start pushing the CPU or GPU, it has to be for more than a few minutes before the fans really start needing to spool.  And typically my work will come in quick bursts (compiling), giving it plenty of time to cool before the fans even start to ramp up.  

That being said, I’m still not totally sure if I’ll keep this model or return it for the 16”.  The 16” is not the size I want, but my understanding is that it’s rare for the fans on that Mac to rise above 2000 rpm even under full load.  So I have to weigh whether it’s worth the $200 cost difference and larger size/weight while traveling to eliminate some fan noise.  I’m leaning toward keeping the 14” but still have some time in the return window before I need to decide.  Either option offers a fantastic machine on which to do some real work.

Memory Bandwidth in Apple’s M1

The memory bandwidth on the new Macs is impressive. Benchmarks peg it at around 60GB/sec, about 3x faster than a 16” MBP. Since the M1 tops out at 16GB of RAM, it can turn over the entire contents of RAM nearly 4 times every second. Think about that…

Some say we’re moving into a phase where we don’t need as much RAM, simply because as SSDs get faster there is less of a bottleneck for swap. Indeed, SSDs have made significant strides, especially with the newest Samsung 980 NVMe drives pushing 5-7GB/sec. This is closer to the memory bandwidth than we’ve ever been with consumer-grade hardware, and that’s still only about a third of the speed of main memory in a 16” MBP.  However, with the huge jump in performance on the M1, the SSD is back to being an order of magnitude slower than main memory.
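
To put rough numbers on that comparison, here’s a quick back-of-the-envelope sketch in Swift. The bandwidth figures are just the approximate ones mentioned above, not careful measurements:

```swift
// Approximate figures from the discussion above (GB/sec and GB), not measurements.
let m1MemoryBandwidth = 60.0        // M1 unified memory, per benchmarks
let intelMBPMemoryBandwidth = 20.0  // roughly a third of the M1 figure
let ssdBandwidth = 6.0              // a fast NVMe SSD (~5-7 GB/sec)
let m1MaxRAM = 16.0                 // the most RAM an M1 can have

// How many times per second could the M1 rewrite all of its RAM?
let ramTurnoverPerSecond = m1MemoryBandwidth / m1MaxRAM          // ≈ 3.75

// How much slower is a fast SSD than main memory on each machine?
let ssdVsIntelRAM = intelMBPMemoryBandwidth / ssdBandwidth       // ≈ 3.3x slower
let ssdVsM1RAM = m1MemoryBandwidth / ssdBandwidth                // = 10x slower

print("RAM turnover: \(ramTurnoverPerSecond)x per second")
print("SSD vs 16-inch MBP RAM: \(ssdVsIntelRAM)x slower")
print("SSD vs M1 RAM: \(ssdVsM1RAM)x slower")
```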

So we’re left with the question: will SSD performance increase faster than memory bandwidth? And at what point does the SSD to RAM speed ratio become irrelevant?

Theoretically, SSD swap is “fast enough” if it can load data from a backgrounded app into main memory before the user notices a delay when clicking an icon in their Dock. Once this threshold is reached, there’s not much of a distinction between an app being open or not.
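
As a rough illustration of where that threshold sits, here’s a sketch that estimates how long paging a backgrounded app back in from swap would take. The app size and the “feels instant” budget are assumptions picked for the example:

```swift
// Assumed numbers, purely for illustration.
let appMemoryToPageInGB = 1.0       // memory a backgrounded app needs swapped back in
let ssdReadGBPerSec = 5.0           // a fast NVMe swap device
let feelsInstantSeconds = 0.1       // roughly where a delay starts to be noticeable

let swapInSeconds = appMemoryToPageInGB / ssdReadGBPerSec   // 0.2 seconds

if swapInSeconds <= feelsInstantSeconds {
    print("Swap is effectively invisible for this app")
} else {
    print("Paging back in takes ~\(swapInSeconds)s, which is still noticeable")
}
```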

I do believe that a limited amount of RAM is becoming less of an issue as time goes on. As I’m writing this, I’m 5GB into swap on my 16” MacBook Pro with 32GB of memory. In years past, a Mac 5GB into swap would have felt like it was crawling. However, today I haven’t noticed a single hiccup, and honestly wouldn’t even be aware of the swap usage if XRG wasn’t sitting on my desktop telling me so.

Would I buy a Mac with 16GB of RAM to use as a primary development machine today? No, probably not. While I don’t typically notice a speed decrease due to swap usage, I don’t think that storage threshold has been reached quite yet. However, I’m looking forward to Apple’s answer to the higher-end market and am confident they have M-series chips with more RAM in the pipeline.

My Desktop Storage Needs Help

Back when I purchased this 2013 Mac Pro, I wrote about needing a desktop RAID system for storage. The Mac Pro at the time came with a base configuration of a 256GB SSD, and peaked with a 1TB model that was about $1000 more. I decided to stick with the base storage configuration and then I purchased a Pegasus2 desktop RAID box and put in four 3TB drives in a RAID 5 setup.

As time went on, the drives started to die, and I replaced them with 4TB models, so I currently have about 12TB of usable storage on my desktop, with about 2/3rds of it occupied. With respect to capacity, it’s doing just fine, though it’s starting to fill up quicker since I switched to recording video in 4K with my iPhone and 5D Mark IV.

While capacity has been okay, recently I’m running into a limitation of this setup with respect to speed. I’m blaming SSDs. SSDs are very common now in desktops and ubiquitous in laptops because of their performance benefits. The problem is apps are starting to be coded with the assumption they are being run on flash storage. This is especially problematic with syncing apps like Dropbox.

Let me explain. I currently have a two-Mac working environment. I use my desktop most of the time, but I also have a 13″ MacBook Pro that I’ll work on while on the road. I use syncing tools like Dropbox and Resilio to keep files up-to-date between the two devices.

One app I use frequently is Lightroom. It’s important to me that I have my photos available on both Macs, but a Lightroom library has to be stored locally. So I put my Lightroom library on Dropbox, and the RAW photo masters are in a separate directory on my desktop Mac. If I’m going to be editing photos on my laptop, I’ll generate Smart Previews for those albums on my desktop and be able to develop them on the MacBook Pro. This works really well.

The problem I’m running into recently is during Lightroom imports. Whenever I import photos into Lightroom, Dropbox detects the changes in the library and kicks off indexing/syncing. This puts more I/O load on the disk that is already busy from the import, and things grind to a halt. The sync/import activity even keeps me from playing a single 1080p HD movie from the library…on a current Mac Pro…with RAID storage. The disk just can’t keep up. It’s even worse if something like a Time Machine backup happens to be running, or if Backblaze is trying to push files offsite.

Clearly something needs to change. The problem is that I would like something faster but still with enough capacity to grow. I like the Pegasus box a lot, so it’s tempting to just buy 4 SSDs to use in that. However, SSDs are still pretty cost prohibitive in larger capacities. I certainly couldn’t afford 4TB models to retain my current storage capacity. Even four 2TB SSDs would be a good chunk of change. I could buy some 1TB models and have 3TB of usable space in a RAID 5, but that would require a complete restructuring of where I store data. I would have to keep only my recent data on the desktop, and archive bigger data like photos and videos to a NAS with more storage. Then backups become an issue, etc.
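
For reference, the capacity math behind those options is just “one disk’s worth of space goes to parity.” A tiny helper makes the configurations easy to compare (the drive counts and sizes are the ones discussed above):

```swift
// Usable capacity of a RAID 5: one disk's worth of space is given up to parity.
func raid5UsableTB(disks: Int, sizeTB: Double) -> Double {
    return Double(disks - 1) * sizeTB
}

print(raid5UsableTB(disks: 4, sizeTB: 4.0))   // current setup: 12.0 TB usable
print(raid5UsableTB(disks: 4, sizeTB: 2.0))   // four 2TB SSDs: 6.0 TB usable
print(raid5UsableTB(disks: 4, sizeTB: 1.0))   // four 1TB SSDs: 3.0 TB usable
```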

With the new Mac Pro just around the corner (rumored for 2019 release), I’ll be upgrading to a new Mac somewhat soon, but not immediately. So I haven’t decided if I should do something about storage now or wait. But every couple of days I have plenty of extra time to plan my next steps while waiting for my disks to stop thrashing.

Distributing load across multiple volumes

When it was time to implement a new online service to store observations for tens of thousands of weather stations and make that data available to Seasonality users, I had a lot to think about with respect to the hardware configuration of the weather servers. The observations service requires a lot of disk I/O (not to mention storage space), but it’s pretty light on processor and memory requirements. I had spare cycles on the current weather servers, so I didn’t see the need to buy all new equipment. However, I wanted to be careful because I didn’t want the increased disk load to slow down the other services running on the servers.

Let’s back up a bit and talk about what kind of setup I currently have on the weather servers for Seasonality. I have a couple of servers in geographically diverse locations, each running VMware ESX with multiple virtual machines. Each virtual machine (VM) handles different types of load. For instance, one VM handles dynamic data like the weather forecasts, while a different VM serves out static data like the websites and map tiles. These VMs are duplicated on each server, so if anything goes down there is always a backup.

One of the servers is a Mac Mini. It had an SSD and a hard drive splitting the load. With the new observations service in the pipeline, I replaced the hard drive with a second SSD to prepare for the upgrade. With this particular server being marked as a backup most of the time, I didn’t have any load issues to worry about.

The other server is a more purpose-built Dell rack mount, with enterprise hardware and SAS disks, and this is the box that I lean on more for performance. Before the observations server I had two RAID mirrors set up on this server. One RAID was on a couple of 15K RPM disks and handled all the dynamic VMs that needed the extra speed, like the forecast server and the radar/satellite tile generator. The other RAID was on a couple of more typical 7200 RPM disks and hosted VMs for the base map tiles, email, development, etc. There were two more disk bays that I could put to use, but I had to decide the best way to use them.

One option was to fill the extra two disk bays with 7200 RPM disks, and expand the slower RAID to be a bit more spacious, and probably increase the speed a reasonable amount as well. The other option was to add two disks that didn’t match any of the other RAIDs, effectively adding a 3rd mirrored RAID to the mix.

I decided on the latter option, because I really wanted to make sure any bottlenecks would be isolated to the observations server. For the price/performance, I settled on 10K RPM disks to get some of the speed of the faster spindles, while not breaking the bank like 15K drives or SSDs would. The observations service would be run completely on the new RAID, so it wouldn’t interfere with any of the current services running on the other volumes. So far it has worked beautifully, without any hiccups.

My point here is that it’s not always the best idea to create a single big volume and throw all your load at it. Sometimes that setup works well because of its simplicity and the extra speed you might get out of it. However, with most server equipment having enough memory and CPU cycles to create several virtual machines, usually the first limitation you will run into is a disk bottleneck. When splitting the load between multiple RAID volumes, you not only make it easier to isolate problem services that might be using more than their fair share, but you also limit the extent of any problems that do arise while still retaining the benefit of shared hardware.

Fixing BufferBloat

For years I’ve had issues with my internet connection crapping out whenever I’m uploading data. The connection will be fine when browsing the web and downloading data at the rated speed of 60 mbps. However, whenever I tried to upload a large file, it would saturate my upload link and slow every other network connection to a crawl. Websites wouldn’t load, DNS wouldn’t resolve, and ping times would be measured in seconds.

My theory for a long time was that the upload bandwidth was becoming so saturated, that TCP ACKs for incoming data wouldn’t get sent out in a reasonable amount of time. So for a long time I was looking for a way to prioritize TCP SYN/ACK packets. However, I never ended up figuring out how to do this.

A few nights ago while looking for a different solution, I stumbled across the idea of BufferBloat causing issues when saturating an internet link. Apparently, modern networking equipment has a lot more connection buffer memory than older equipment. This seems like a good thing: with memory being so cheap, it makes sense to include a large buffer to help keep the network link saturated. The increased buffer could be in your router, cable modem, or any number of components at your ISP. Unfortunately, this can cause problems when your link is slow enough to fill the buffer quickly.

When a TCP connection is created, the sender ramps up its transmission rate by gauging TCP ACK response times. The faster the ACK packets arrive, the faster the connection appears to be, so the sender keeps pushing data faster until the link is saturated.

The problem is that networking equipment between the network interface on your computer and your destination may be buffering a lot of network packets. In my case, I have a 5 mbps upload link, so either my modem or router is buffering enough data while I’m uploading a large file that TCP ACK packets are taking several seconds to arrive back. During that time, the packets are just sitting in the buffer waiting to be sent. Once the bandwidth to send the packets is available, they transmit relatively quickly, but from the standpoint of my computer the response time is very slow. This kills the connection.
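
The latency hit is easy to estimate: everything sitting in the buffer has to drain at the uplink rate before a newly queued packet (or an ACK) gets out. Here’s a quick sketch using my 5 mbps uplink and an assumed buffer size:

```swift
// Queuing delay = bytes sitting in the buffer / uplink rate.
let uplinkMbps = 5.0                                  // my upload speed
let uplinkBytesPerSec = uplinkMbps * 1_000_000 / 8.0  // 625,000 bytes/sec

// Assume the modem/router queues a couple of megabytes during a big upload.
let bufferedBytes = 2_000_000.0

let addedLatencySeconds = bufferedBytes / uplinkBytesPerSec
print("Every packet (and ACK) waits ~\(addedLatencySeconds) seconds")   // ~3.2 seconds
```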

The fix is to limit the amount of outgoing bandwidth on your router using QoS. What you want to do is to limit your bandwidth to about 0.5 – 1 mbps less than your connection can handle. On the ZyXel ZyWALL 110, this is done through the BWM (bandwidth management) screen in the configuration. First, enable BWM with the checkbox at the top of the page. Then add a new rule:

Configuration
Enable: checked
Description: Outgoing Bandwidth Limit
BWM Type: Shared

Criteria
User: any
Schedule: none
Incoming Interface: lan1
Outgoing Interface: wan1
Source: any
Destination: any
DSCP Code: any
Service Type: Service Object
Service Object: any

DSCP Marking
Inbound Marking: preserve
Outbound Marking: preserve

Bandwidth Shaping
Inbound: 0 kbps (disabled), Priority 4, Maximum 0 kbps (disabled)
Outbound: 4000 kbps, Priority 4, Maximum 4000 kbps

802.1P Marking
Priority Code: 0
Interface: none

Log: no

The key above is in the Bandwidth Shaping section. Set your outbound guaranteed bandwidth and bandwidth limit to 0.5 – 1 mbps below your maximum upload speed. Here I set mine to 4000 kbps, which is a megabit less than my internet upload speed of 5 mbps.

Once I did this, my connection improved dramatically. I take a slight hit in upload rate for the single upload connection, but overall my internet connection is a lot more responsive for other connections and devices while uploading. If you think you might be experiencing this same issue, try running a speed test over at DSL Reports on your internet connection. Their speed test will check and give you a report on BufferBloat. Before the fix, my upload BufferBloat was around 600 ms. After the fix the BufferBloat is down to only 20 ms.

It’s been a hell of a week

Last year, I dropped a chunk of change on a Pegasus2 RAID box to store all my data on. It has been a great device since then, until earlier this week when a drive died in the RAID. Now, the Pegasus was configured as a RAID 5, so it should have just kept chugging away until I was able to replace the disk. That’s why I spent the money on it, after all, so a drive failure wouldn’t keep me from getting work done. Unfortunately, the Pegasus instead started to fail itself, throwing I/O errors to the system for days before finally failing the disk out of the volume. I can’t count how much time I spent trying to figure out where the hell the sporadic beach balls were coming from, including reinstalling the OS, twice.

I’ve been going back and forth with Promise about it, and we still haven’t gotten anywhere. The Pegasus did finally kick the drive, so now the volume is accessible again, but I have 0 confidence in the device itself or the content that is currently on there. I have backups, so at least that isn’t a problem.

But now I’m in the middle of damage control. Yesterday I ordered a new disk for the failed one in the Pegasus, shipping it next day air. Of course I ordered the 5900 RPM drive instead of the 7200 RPM one, which is partly my stupid mistake for not realizing and partly due to Seagate not being explicit with the spindle speed in their products anymore. Back that goes to Amazon today (and as the silver lining of this story, Amazon refunded me the entire cost of the drive plus the extra I paid for next day shipping; amazing customer service, and kudos to them!). The correct drive is on its way to me and will be here tomorrow.

Beyond that, I ordered a new 6TB drive to store all my data until I’m satisfied that the Pegasus is back online and reliable again. That is set to arrive here tomorrow as well.

Like I said, a hell of a week. The Apple TV Dev Kit arrived in the middle of all this, but unfortunately I haven’t had a development Mac or the time to work with it yet. In the end, here’s how I’m seeing the score:

Me: -1 (for purchasing the wrong drive)
Apple: +1 (for shipping the Dev Kits so quickly)
Seagate: -5 (for no longer listing drive RPM in their product specs)
Amazon: +10 (for going above and beyond in customer service)
Pegasus: -20 (for dying in the exact situation I spent extra money to avoid)
Promise: ? (still don’t know how they are going to handle my Pegasus failure)

Waiting for Review

[Screenshot: Seasonality Pro showing a status of “Waiting for Review”]

You have no idea how surreal it is for me to see this right now.  For non-developers, this is what you see after submitting an application to Apple to review for the App Store.  It always feels satisfying to click that final Submit button, but this time is a little more special for me.  You see, Seasonality Pro has been the longest project I have ever worked on.

The ideas for Seasonality Pro started spinning in my head before the iPad even came out.  In the fall of 2008, I was taking a synoptic meteorology course and thought how cool it would be to have an app that showed model data beautifully, was easy to use, and offered completely customizable maps.  Over the following couple of years, iPhones got faster, the iPad came out, and the idea of what the app could be solidified in my mind.

But I didn’t work on it…  The task was just too large.  Where would the data come from?  What data formats would I have to parse?  Where could I get the necessary custom maps and how do I draw them?  How do you draw contours, and shaded layers, and calculate derived layers from several model data fields?  How would it perform?  There were just too many unknowns; I couldn’t start working on it.

And then I could…  Over time enough of the pieces fell into place that I started an Xcode project in September of 2013.  A month later, I had my first base map plotted.  And the pieces started coming together faster when I started working on it full time in 2014.  By the end of the summer, I had a pretty good app going (basic plots, etc), but there were still so many details left to be done.  I had to take a few weeks break and spend some time updating my other apps before I could finish up Pro.

A few weeks turned into a few months, but by November 2014 I was back on it.  I presented my work at the American Meteorological Society annual meeting in January 2015.  The reception was good.  It was a relief to finally show it to people who were in the target market and see their eyes light up.  The project was even closer to being finished, but I still hadn’t run a beta.

The beta started in late January.  Lots of bugs were squashed, and lots of adjustments were made to improve the feel of the app.   The beta stretched for months longer than a usual beta.  It was a complex app (close to 100,000 lines of code for even this first version), and finishing it felt like a big mountain to climb with the last 20% of the work taking 80% of the time.

So now we’re into May, but it’s done.  Seasonality Pro 1.0 has been submitted.  A labor of love for so many years, finally being realized.  Will I make back the investment put into it?  It’s hard to say.  A lot of people think it would be crazy to work on an app longer than a few months, not knowing if it was going to make it in the App Store.  For me though, these are the types of projects worth working on.  Bringing a product like this to market advances the field of meteorology, and it’s not something that just anyone (or any company) can do.  With millions of apps on the store, there is nothing else like it.

Here’s hoping for a speedy app review…

Setting up a Small Desktop RAID System

With mass internal storage disappearing even from the top of the line in the 2013 Mac Pro, a lot more people are going to start looking for external solutions for their storage needs. Many will just buy an external hard drive or two, but others like myself will start to consider larger external storage arrays. One of the best solutions for people who need 5-15TB of storage is a 4 disk RAID 5 system. As I mentioned in a previous post, I went with a Pegasus2, and set it up in a RAID 5. This brings up a lot of questions about individual RAID settings though, so I thought I would put together a primer on typical RAID settings you should care about when purchasing a Pegasus or comparable desktop RAID system.

Stripe Size

Stripe size is probably the setting that has one of the biggest impacts on performance of your RAID. A lot of people will run a benchmark or two with different stripe sizes and incorrectly determine that bigger stripe sizes are faster, and use them. In reality, the best performing stripe size highly depends on your workload.

A quick diversion to RAID theory is required before we can talk about stripe sizing. With RAID 5, each drive is split up into blocks of a certain size called stripes. In a 4 disk RAID 5, 3 disks will have real data in their stripes, and the 4th disk will have parity data in its stripe (in reality, the parity stripes in a RAID 5 alternate between drives, so not all the parity is on the same disk). The parity stripe allows a disk to fail while still keeping your array online. You give up 25% of the space to gain a certain amount of redundancy.
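
The parity itself is just a bitwise XOR of the data stripes, which is what allows the array to rebuild a failed disk. Here’s a minimal sketch of the idea, with small byte arrays standing in for full stripes:

```swift
// XOR two equal-length byte buffers together.
func xorBytes(_ a: [UInt8], _ b: [UInt8]) -> [UInt8] {
    return zip(a, b).map { $0 ^ $1 }
}

// Three data stripes from one stripe set in a 4-disk RAID 5 (toy-sized).
let stripe0: [UInt8] = [0x10, 0x20, 0x30, 0x40]
let stripe1: [UInt8] = [0x01, 0x02, 0x03, 0x04]
let stripe2: [UInt8] = [0xAA, 0xBB, 0xCC, 0xDD]

// The fourth disk holds the parity stripe: the XOR of the other three.
let parity = xorBytes(xorBytes(stripe0, stripe1), stripe2)

// If the disk holding stripe1 dies, XOR the survivors with the parity
// stripe to reconstruct its contents.
let rebuiltStripe1 = xorBytes(xorBytes(stripe0, stripe2), parity)
print(rebuiltStripe1 == stripe1)   // true
```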

When you read data from the volume, the RAID will determine which disk your data is on, read the stripe and return the requested data. This is pretty straightforward, and the impact of stripe size during reading is minimal.

However, when writing data to the disk, stripe size can make a big performance difference. Here’s what happens every time you change a file on disk:

  1. Your Mac sends the file to the RAID controller to write the change to the volume.
  2. The RAID controller reads the stripe of data off the disk where the data will reside.
  3. The RAID controller updates the contents of the stripe and writes it back to the disk.
  4. The RAID controller then reads the stripes of data in the same set from the other disks in the volume.
  5. The RAID controller recalculates the parity stripe.
  6. The parity stripe is written to the final disk in the volume.

In a 4-disk array, this results in 3 stripe reads and 2 stripe writes every time you write even the smallest file to the disk. Most RAIDs will default to a 128KB stripe size, and will typically give you a stripe size range anywhere from 32KB to 1MB. In the example above, assuming a 128KB stripe size, even a change to a 2KB file will result in about 640KB of data being read and written on the disks. If a 1MB stripe size is used instead of 128KB, then 5MB of data would be accessed on the disks just to change that same 2KB file. So as you can see, the stripe size greatly determines the amount of disk I/O required to perform even simple operations.
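
Here’s a small sketch of that math, so you can plug in your own stripe size and disk count and see how much physical I/O a tiny write turns into. It models the read-everything-then-rewrite sequence described above, not any particular controller’s optimizations:

```swift
// Physical disk I/O generated by one small write to a RAID 5 volume,
// following the sequence above: read the target stripe, read the other
// data stripes in the set, then write the updated stripe and the parity stripe.
func raid5BytesTouchedKB(stripeSizeKB: Int, disks: Int) -> Int {
    let reads = disks - 1      // the target stripe plus the other data stripes
    let writes = 2             // the updated data stripe plus the parity stripe
    return (reads + writes) * stripeSizeKB
}

// A 2KB file change on a 4-disk array at various stripe sizes:
for stripeKB in [32, 64, 128, 256, 512, 1024] {
    let totalKB = raid5BytesTouchedKB(stripeSizeKB: stripeKB, disks: 4)
    print("\(stripeKB)KB stripes -> \(totalKB)KB of disk I/O")
}
// 128KB stripes touch 640KB of disk; 1MB stripes touch 5120KB (~5MB).
```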

So why not just choose the smallest stripe size? Well, hard drives are really good at reading contiguous blocks of data quickly. If you are reading/writing large files, grouping those accesses into larger stripe sizes will greatly increase the transfer rate.

In general, if you use mostly large files (video, uncompressed audio, large images), then you want a big stripe size (512KB – 1MB). If you have mostly very small files, then you want a small stripe size (32KB – 64KB). If you have a pretty good mix between the two, then 128KB – 256KB is your best bet.

Read Ahead Cache

A lot of RAID systems will give you the option of enabling a read ahead cache. Enabling this can dramatically increase your read speeds, but only in certain situations. In other situations, it can increase the load on your hard drives without any benefit.

Let’s talk about what happens in the read cache when read ahead is disabled. The read cache will only store data that you recently requested from the RAID volume. If you request the same data again, then the cache will already have that data ready for you on demand without requiring any disk I/O. Handy.

Now how is it different when read ahead caching is enabled? Well, with read ahead caching, the RAID controller will try and guess what data you’ll want to see next. It does this by reading more data off the disks than you request. So for example, if your Mac reads the first part of a bigger file, the RAID controller will read the subsequent bytes of that file into cache, assuming that you might need them soon (if you wanted to read the next part of the big file, for example).

This comes in handy in some situations. Like I mentioned earlier, hard drives are good at reading big contiguous blocks of data quickly. So if you are playing a big movie file, for instance, the RAID controller might read the entire movie into cache as soon as the first part of the file is requested. Then as you play the movie, the cache already has the data you need available. The subsequent data is not only available more quickly, but the other disks in your RAID volume are also free to handle other requests.

However, the read ahead results in wasted I/O. A lot of times, you won’t have any need for the subsequent blocks on the disk. For instance, if you are reading a small file that is entirely contained in a single stripe on the volume, there is no point in reading the next stripe. It just puts more load on the physical disks and takes more space in the cache, without any benefit.

Personally, I enable read ahead caching. It’s not always a win-win, but it can greatly speed up access times when working with bigger files (when the speed is needed most).
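
If it helps to see the idea in code, here’s a toy model of a read ahead cache: every miss pulls in the requested stripe plus the next few, so a sequential reader gets cache hits while a random small read just wastes the extra fetches. The stripe numbers and read ahead depth are arbitrary for the example:

```swift
// Toy read ahead cache keyed by stripe number.
struct ReadAheadCache {
    let readAheadDepth: Int          // how many extra stripes to prefetch on a miss
    private var cached = Set<Int>()

    init(readAheadDepth: Int) {
        self.readAheadDepth = readAheadDepth
    }

    // Returns true on a cache hit; on a miss, "reads" the requested stripe
    // plus the next readAheadDepth stripes into the cache.
    mutating func read(stripe: Int) -> Bool {
        if cached.contains(stripe) { return true }
        for s in stripe...(stripe + readAheadDepth) {
            cached.insert(s)         // stands in for a physical disk read
        }
        return false
    }
}

var cache = ReadAheadCache(readAheadDepth: 4)

// Sequential access (streaming a big movie file): only one read in five misses.
let sequentialHits = (0..<10).map { cache.read(stripe: $0) }
print(sequentialHits)   // [false, true, true, true, true, false, true, true, true, true]
```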

Write Back Cache

There are two write cache modes: write through, and write back. Your choice here can have a dramatic impact on the write speed to your RAID. Here’s how each mode works.

Write Through: When writing data to the disk, the cache is not used. Instead, OS X will tell the RAID to write the data to the drive, and the RAID controller waits for the data to be completely written to the drives before letting OS X know the operation was completed successfully.

Write Back: This uses the cache when writing data to the disk. In this case, OS X tells the RAID to write a given block of data to the disk. The RAID controller saves this block quickly in the cache and tells OS X the write was successful immediately. The data is not actually written to the disks until some time later (not too much later, just as soon as the disks can seek to the right location and perform the write operation).
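
Here’s a rough sketch of the difference in behavior. The types and method names are made up purely for illustration; the point is just when the “write succeeded” acknowledgement happens relative to the physical write:

```swift
import Foundation

enum WriteCacheMode { case writeThrough, writeBack }

// Toy model of a RAID controller's write path.
final class ToyRAIDController {
    let mode: WriteCacheMode
    private var cache: [Data] = []     // pending writes held in the write back cache

    init(mode: WriteCacheMode) { self.mode = mode }

    // Called by the OS; returns when the controller acknowledges the write.
    func write(_ block: Data) {
        switch mode {
        case .writeThrough:
            commitToDisks(block)       // wait for the slow physical write first
        case .writeBack:
            cache.append(block)        // acknowledge immediately, flush later
        }
    }

    // In write back mode the controller flushes shortly after acknowledging,
    // as soon as the disks can seek to the right locations.
    func flush() {
        cache.forEach(commitToDisks)
        cache.removeAll()
    }

    private func commitToDisks(_ block: Data) {
        // Stand-in for writing the data and parity stripes to the member disks.
    }
}

// Write back returns to the caller right away; write through blocks until
// the data is physically on the drives.
let controller = ToyRAIDController(mode: .writeBack)
controller.write(Data(repeating: 0, count: 4096))
controller.flush()
```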

Enabling the write back cache is “less safe” than write through mode. The safety issue comes into play during a power outage. If the power goes out between the time that the RAID told OS X the data was written, and the time when the data is actually on the disks themselves, data corruption could take place.

More expensive RAID systems, like the Pegasus2, have a battery-backed cache. The benefit here is that if a power outage happens as described above, the battery should power the cache until the power goes back on and the RAID controller can finish writing the cache to disks. This effectively overcomes the drawback of enabling write back caching.

Another potential drawback of enabling write back caching is a performance hit to the read speed. The reason for this is that there is less cache available for reading (because some is being used for writes). The hit should be pretty minor though, and only applicable when a lot of write operations are in progress. Otherwise, the amount of data in the write back cache will be minimal.

The big advantage of using a write back cache is speed though.  When write back caching is enabled, OS X doesn’t have to wait for data to be written to the disks, and can move on to other operations.  This performance benefit can be substantial, and gives the RAID controller more flexibility to optimize the order of write operations to the disks based on the locations of data being written.  Personally, I enable write back caching.

Wrap-up

That about covers it.  Small desktop RAID systems are a nice way to get a consolidated block of storage with a little redundancy and a lot more performance than just a stack of disks can provide.  I hope this overview has helped you choose the options that are best for your desktop RAID system. In the end, there is no right answer to the settings everyone should use. Choose the settings that best fit your workload and performance/safety requirements.

Apple’s Rumored 12″ Notebook

The rumors are growing for a new 12 inch MacBook Air.  According to MacRumors, this laptop would be slimmer than the current MacBook Airs, fanless, and come in the same silver/gold/space gray color variations as the iPhone.

Sounds like a clamshell iPad with a keyboard to me.

The new notebook has a much thinner design that appears to sacrifice many of the usual ports seen on Apple’s current notebooks and may adopt the new reversible USB Type C connector that has seen its specifications recently finalized.

The MacBook Air has very few ports to begin with (video, USB, headphone and an SD card slot on the 13″).   If you are sacrificing many of the usual ports, you end up with no ports at all, like the iPad.

Interestingly, the report raises some questions about charging on the notebook, indicating that the usual MagSafe port has been removed in favor of a new, unspecified charging method.

Hmm, like a Lightning cable?  It’s reversible too.

In line with previous rumors, the machine is reportedly fanless, suggesting it will adopt an ultra low-power processor such as the Broadwell-Y Core M processors recently announced by Intel.

The A8 is another ultra low-power processor…

Many people prefer the iPad as a productivity machine.  With a standard keyboard attached, you can definitely get some serious work done.  A 12″ iPad with a permanent keyboard attached sounds like a great little mobile computer.


© 2024 *Coder Blog
