*Coder Blog

Life, Technology, and Meteorology


Waiting for Review


You have no idea how surreal it is for me to see this right now.  For non-developers, this is what you see after submitting an application to Apple for App Store review.  It always feels satisfying to click that final Submit button, but this time is a little more special for me.  You see, Seasonality Pro has been the longest project I have ever worked on.

The ideas for Seasonality Pro started spinning in my head before the iPad even came out.  In the fall of 2008, I was taking a synoptic meteorology course and thought how cool it would be to have an app that would show model data in a beautiful way that would be easy to use and offer completely customizable maps.  Over the following couple of years, iPhones got faster, the iPad came out, and the idea of what the app could be solidified in my mind.

But I didn’t work on it…  The task was just too large.  Where would the data come from?  What data formats would I have to parse?  Where could I get the necessary custom maps and how do I draw them?  How do you draw contours, and shaded layers, and calculate derived layers from several model data fields?  How would it perform?  There were just too many unknowns; I couldn’t start working on it.

And then I could…  Over time enough of the pieces fell into place that I started an Xcode project in September of 2013.  A month later, I had my first base map plotted.  And the pieces started coming together faster when I started working on it full time in 2014.  By the end of the summer, I had a pretty good app going (basic plots, etc.), but there were still so many details left to be done.  I had to take a few weeks' break and spend some time updating my other apps before I could finish up Pro.

A few weeks turned into a few months, but by November 2014 I was back on it.  I presented my work at the American Meteorological Society annual meeting in January 2015.  The reception was good.  It was a relief to finally show it to people who were in the target market and see their eyes light up.  The project was even closer to being finished, but I still hadn’t run a beta.

The beta started in late January.  Lots of bugs were squashed, and lots of adjustments were made to improve the feel of the app.  The beta stretched on for months, far longer than usual.  It was a complex app (close to 100,000 lines of code for even this first version), and finishing it felt like a big mountain to climb, with the last 20% of the work taking 80% of the time.

So now we’re into May, but it’s done.  Seasonality Pro 1.0 has been submitted.  A labor of love for so many years, finally being realized.  Will I make back the investment put into it?  It’s hard to say.  A lot of people think it would be crazy to work on an app longer than a few months, not knowing if it was going to make it in the App Store.  For me though, these are the types of projects worth working on.  Bringing a product like this to market advances the field of meteorology, and it’s not something that just anyone (or any company) can do.  With millions of apps on the store, there is nothing else like it.

Here’s hoping for a speedy app review…

GSGradientEditor

A fairly significant feature in Seasonality Pro is the ability to edit the gradients used to show weather data on a map.  When looking around for some sample open source gradient editors online, I didn't come across anything I could really use.  So I decided to write my own and offer it under an MIT license.  I posted the source code on GitHub (link below).

I’ve included a lot of documentation as well as a sample Xcode project to show how to use it over on the GitHub page:

GSGradientEditor on GitHub

I looked at quite a few different graphics apps when working on the UI.  I wanted to see not only how other implementations looked, but how they worked.  With iOS 7 being more gesture-centric, I wanted to make sure that interaction with GSGradientEditor was intuitive.  I found the Inkpad app most helpful during this process.  In the end, I like how GSGradientEditor turned out.

Enjoy!

Seasonality Updates

I thought now might be a good time to post an update about how development is progressing in the family of Seasonality apps.

Seasonality Core

Seasonality Core 2.4 is going to be released sometime in the next week. This update has some nice improvements. One is an update to Particle Mode that makes it look much more impressive. It's the same feature showing the same data, but in a cooler way. The mapping code is also gaining other improvements, like allowing wrap-around at +/- 180° (New Zealand users rejoice!). The graphs aren't being ignored either, with a new hover bubble layout to make it easier to inspect the data. The hover bubble will also expand to show all the conditions at the hovered time when you hold down the Option key. It's a really nice way to look at what's happening at a certain time.

As for future plans for Seasonality Core: the next major update will most likely be Seasonality Core 3.0. It's too early to discuss features, but there are a couple of areas that I think need improvement. One is increasing the number of supported locations and making locations easier to search and configure. This is a lot of work that requires server-side changes as well, so it's hard to say when this will be ready. The second change I would like to make to Seasonality Core is to bring over some of the customizability from Seasonality Go. With Seasonality Go, it's great how you can customize your own screen layouts. I would love for Seasonality Core to be able to do this as well.

Seasonality Go

Seasonality Go 2.2 was just released last month. We worked on the user interface a lot to start the transition to iOS 7, and I think it looks a lot nicer now. Another big new feature was the ability to select a color theme. Just head into the Tools (wrench) menu under Settings to choose a color that looks best to you. Beyond these visual changes, lots of optimizations were made to the code behind the scenes. The app runs a lot more smoothly now, especially when switching screens or switching between Seasonality Go and other apps.

The next major update to Seasonality Go will most likely have the same location changes I discussed above for Seasonality Core. I'm also planning to continue improving the interface to show less clutter and more weather. This will bring over some of the look and feel improvements I've been working on in Seasonality Pro.

Seasonality Pro

Seasonality Pro will be an iPad weather app for professional meteorologists. I've had the project on my mind for several years now, and over the past several months I've finally been finding more time to work on it.

I have been receiving a lot of questions about how Seasonality Pro development is progressing. It's certainly taken me longer to complete than I was originally expecting. During the past several months I have been splitting my time between Seasonality Core, Seasonality Go, and Seasonality Pro. There are quite a few features I've added to Seasonality Core and Seasonality Go recently that provide major underlying functionality that will be used in Seasonality Pro. It has been a good way of making progress on Pro, while still providing updates to the other apps. Now that a lot of the foundation code is ready for Seasonality Pro, I've recently started to switch gears. Instead of working on Seasonality Pro indirectly through features added to the other apps, I am now spending a lot more time directly working on the interface and layout of Seasonality Pro. Version 1.0 is still a ways off, but it's looking good so far and solid progress is being made.

As always, if you would like to provide feedback about any of the Seasonality apps, please send me an email. There are email links in the Help menus in both Seasonality Core and Seasonality Go.

Overhead while using GCD

Today I spent some time optimizing the Particle Mode simulation code in Seasonality Core. While doing some measurements, I discovered that quite a bit of time was spent in GCD code while starting new tasks. I use dispatch_apply to iterate through the particles and run the position and color calculations for the next frame. In the tests below, I was simulating approximately 200,000 particles on the Macs, and 11,000 particles on the iPad.

I decided to try breaking the tasks up into fewer blocks, and run the dispatch_apply for groups of around 50 particles instead of running it for each particle. After making this change, the simulation ran in up to 59% less CPU time than before. Here are some informal numbers, just by looking at Activity Monitor and roughly estimating:

CPU Usage
Device                               Before   After   Time Savings
Mac Pro (2009, Octo 2.26GHz Xeon)    390%     160%    59%
Retina MBP (2012, Quad 2.6GHz i7)    110%     90%     18%
MacBook Air (2011, Duo 1.8GHz i7)    130%     110%    15%
iPad 3 (fewer particles)             85%      85%     0%

As you can see, the benefits from the new code running on the Mac Pro are substantial. With my earlier code, I had wondered why the simulation took so many more resources on the Mac Pro than on the laptops. Clearly the overhead in thread creation was a lot higher on the older Xeon CPU. This brings the Mac Pro's processing times closer to what the other more modern processors can accomplish.

Perhaps an even more surprising result is the lack of a speedup on the iPad. While measuring both runs, the two versions averaged about the same usage. Perhaps if I had a more formal way to measure the processing time, a small difference might become apparent, but overall the difference was minimal. I'm guessing that Apple has built logic into the A-series CPUs that allows for near-zero cost in context switching. Makes you wonder how much quicker something like this would run if Apple built their own desktop-class CPUs.
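For reference, here's roughly what the batching change looks like in code. This is a minimal sketch with assumed names (particleCount, UpdateParticle), not the actual Seasonality source:

// Batch the work: one dispatch_apply block per group of ~50 particles,
// instead of one block per particle.
size_t groupSize = 50;
size_t groupCount = (particleCount + groupSize - 1) / groupSize;
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_apply(groupCount, queue, ^(size_t group) {
  size_t start = group * groupSize;
  size_t end = MIN(start + groupSize, particleCount);
  for (size_t i = start; i < end; i++) {
    UpdateParticle(i);  // hypothetical per-particle position/color calculation
  }
});

With only groupCount blocks to schedule instead of one per particle, the GCD dispatch overhead is amortized across each group of 50.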

Using IOKit to Detect Graphics Hardware

After Seasonality Core 2 was released a couple of weeks ago, I received email from a few users reporting problems they were experiencing with the app. The common thread in all the problems was having a single graphics card (in this case, it was the nVidia 7300). When the application launched, there would be several graphics artifacts in the map view (which is now written in OpenGL), and even outside the Seasonality Core window. It really sounded like I was trying to use OpenGL to do something that wasn’t compatible with the nVidia 7300.

I’m still in the process of working around the problem, but I wanted to make sure that any work-around would not affect the other 99% of my users who don’t have this graphics card. So I set out to try and find a method of detecting which graphics cards are installed in a user’s Mac. You can use the system_profiler terminal command to do this:

system_profiler SPDisplaysDataType

But running an external process from within the app is slow, and it can be difficult to parse the data reliably. Plus, if the system_profiler command goes away, the application code won’t work. I continued looking…

Eventually, I found that I might be able to get this information from IOKit. If you run the command ioreg -l, you’ll get a lengthy tree of hardware present in your Mac. I’ve used IOKit in my code before, so I figured I would try to do that again. Here is the solution I came up with:

// Check the PCI devices for video cards.  
CFMutableDictionaryRef match_dictionary = IOServiceMatching("IOPCIDevice");

// Create an iterator to go through the found devices.
io_iterator_t entry_iterator;
if (IOServiceGetMatchingServices(kIOMasterPortDefault, 
                                 match_dictionary, 
                                 &entry_iterator) == kIOReturnSuccess) 
{
  // Actually iterate through the found devices.
  io_registry_entry_t serviceObject;
  while ((serviceObject = IOIteratorNext(entry_iterator))) {
    // Put this services object into a dictionary object.
    CFMutableDictionaryRef serviceDictionary;
    if (IORegistryEntryCreateCFProperties(serviceObject, 
                                          &serviceDictionary, 
                                          kCFAllocatorDefault, 
                                          kNilOptions) != kIOReturnSuccess) 
    {
      // Failed to create a service dictionary, release and go on.
      IOObjectRelease(serviceObject);
      continue;
    }
				
    // If this is a GPU listing, it will have a "model" key
    // that points to a CFDataRef.
    const void *model = CFDictionaryGetValue(serviceDictionary, CFSTR("model"));
    if (model != NULL) {
      if (CFGetTypeID(model) == CFDataGetTypeID()) {
        // Create a string from the CFDataRef.
        NSString *s = [[NSString alloc] initWithData:(NSData *)model 
                                            encoding:NSASCIIStringEncoding];
        NSLog(@"Found GPU: %@", s);
        [s release];
      }
    }
		
    // Release the dictionary created by IORegistryEntryCreateCFProperties.
    CFRelease(serviceDictionary);

    // Release the serviceObject returned by IOIteratorNext.
    IOObjectRelease(serviceObject);
  }

  // Release the entry_iterator created by IOServiceGetMatchingServices.
  IOObjectRelease(entry_iterator);
}

Creating Seasonality Map Tiles

In a weather app, maps are important. So important, that as a developer of weather apps, I've learned far more than I ever care to know about topography. When I originally created the maps for Seasonality, I had to balance download size with resolution. If I bumped up the resolution too far, then the download size would be too big for users on slower internet connections. If I used too low of a resolution, the maps would look crappy. I ended up settling on a 21600×10800 pixel terrain map, which after decent image compression resulted in Seasonality being a 16-17 MB download. At the time, most apps were around 5 MB or less, so Seasonality was definitely a more substantial download.

That compromise was pretty good back in 2005, but now that a half-decade has passed, it's time to revisit the terrain I'm including in the app. Creating a new terrain image set is a whole lot of work though, so I thought I would share what goes into the process here.

First, you have to find a good source of map data. For Seasonality, I’ve always liked the natural terrain look. The NASA Blue Marble imagery is beautiful, and free to use commercially, so that was an easy decision. For the original imagery I used the first generation Blue Marble imagery. Now I am using the Blue Marble Next Generation for even higher resolution.

Next you have to decide how you are going to tile the image. I've chosen a pretty simple tiling method, where individual tiles are 512×512 pixels, and zoom levels change by a power of 2. Square tiles are best for OpenGL rendering, and while a larger (1024, or even 2048 pixel) tile would work, 512×512 pixel tiles are faster to load into memory, and if downloading over the network they will transfer faster as well. From there, you have to figure out how many tiles will be at each zoom level. I've chosen to use a 4×2 tile grid as a base, so the smallest image of the entire globe will be 2048×1024 pixels and made up of 8 tiles. As the user zooms in further, they will hit 4096×2048, 8192×4096, 16384×8192 pixel zoom levels and so on. I've decided to provide terrain all the way up to 65536×32768 pixels.
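To make the scheme concrete, here is a quick sketch of the tile math, just for illustration:

#include <stdio.h>

int main(void) {
  // Zoom 0 is a 4x2 grid of 512x512 tiles; each zoom level doubles
  // both dimensions, up to 65536x32768 pixels at zoom 5.
  for (int zoom = 0; zoom <= 5; zoom++) {
    int tilesAcross = 4 << zoom;  // 4, 8, 16, ...
    int tilesDown = 2 << zoom;    // 2, 4, 8, ...
    printf("zoom %d: %dx%d pixels, %d tiles\n",
           zoom, tilesAcross * 512, tilesDown * 512, tilesAcross * tilesDown);
  }
  return 0;
}

At zoom 0 that's 8 tiles covering a 2048×1024 pixel globe, and at zoom 5 it works out to 8,192 tiles for the full 65536×32768 pixel image.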

Now that you have an idea of what tiles need to be provided, you need to actually create the images. This is the most time consuming part of the process. Things to consider include the image format and compression amounts to use on all the tiles, and these are dependent on the type of display you are trying to generate. Creating all the tiles manually would take forever, so it’s best to automate this process.

The Blue Marble imagery comes in 8 tiles of 21600×21600 each (the full set of images for every month of the year is around 25 GB). I start by creating the biggest tile zoom level and moving down from there. For my 65536×32768 zoom level, I’ll resize each of the 8 tiles into 16384×16384 pixel images. I use a simple Automator action in Mac OS X to do this. I created an action that takes the selected files in the Finder and creates copies of the images and resizes the copies to the specified resolution.
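(As an aside, the same resize can be scripted with the sips tool that ships with Mac OS X; the filename here is just an example:)

sips -z 16384 16384 world.topo.A1.png --out A1_16384.png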

Now that I have 8 tiles at the correct resolution, I need to create the 512×512 tiles for the final product. For Seasonality, I also need to draw all the country/state borders at this point, because otherwise the maps are blank. I created a custom Cocoa app that will read in a map image with specified latitude/longitude ranges, draw the boundaries, and write out the tiled images to a folder. My app has the restriction of only handling a single image at a time, so I'll have to drag each of the 8 tiles in separately for each zoom level. It's not ideal, but I don't do this too often either. For the 65536×32768 zoom level, I end up with 8192 individual tile images. Smaller zoom levels result in far fewer tiles, but you can see why automation is helpful here.
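The core of that tiling app boils down to cropping and writing a grid of images. Here's a minimal sketch of the loop using Core Graphics (assumed function name, border drawing omitted, and error handling skipped for brevity):

#import <Cocoa/Cocoa.h>

// Slice a large map image into 512x512 PNG tiles in the given folder.
void WriteTiles(CGImageRef map, NSString *folder) {
  size_t tileSize = 512;
  size_t cols = CGImageGetWidth(map) / tileSize;
  size_t rows = CGImageGetHeight(map) / tileSize;
  for (size_t row = 0; row < rows; row++) {
    for (size_t col = 0; col < cols; col++) {
      // Crop out one 512x512 tile.
      CGRect rect = CGRectMake(col * tileSize, row * tileSize, tileSize, tileSize);
      CGImageRef tile = CGImageCreateWithImageInRect(map, rect);

      // Write it out as tile_<row>_<col>.png.
      NSString *name = [NSString stringWithFormat:@"tile_%lu_%lu.png",
                        (unsigned long)row, (unsigned long)col];
      NSURL *url = [NSURL fileURLWithPath:[folder stringByAppendingPathComponent:name]];
      CGImageDestinationRef dest = CGImageDestinationCreateWithURL((CFURLRef)url, kUTTypePNG, 1, NULL);
      CGImageDestinationAddImage(dest, tile, NULL);
      CGImageDestinationFinalize(dest);

      CFRelease(dest);
      CGImageRelease(tile);
    }
  }
}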

It’s a lot of work, but in the end the results are great. For Seasonality, along with higher resolution terrain, I’m also bringing in the Blue Marble’s monthly images. If everything goes as planned, Seasonality will show the “average” terrain for every month of the year. Users will be able to see the foliage change as well as the snow line move throughout the seasons.

Packing in the inodes

The new forecast server I'm working on for Seasonality users is using the filesystem hierarchy as a form of database instead of PostgreSQL.  This will slow down the forecast generation code a bit, because I'm writing a ton of small files instead of letting Postgres optimize disk I/O.  However, reading from the database will be lightning fast, because filesystems are very efficient at traversing directory structures.

The problem I ran into was that I was quickly hitting the maximum number of files on the filesystem.  The database I'm working on creates millions of files to store its data, and I was running out of inodes.
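On Linux, you can see how close you are to the limit with the df -i command, which reports inode usage per filesystem alongside the usual block usage:

df -i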

Earlier today I installed a fresh copy of Ubuntu on a virtual machine where the final forecast server will reside.  Of course I forgot to increase the number of inodes before installing the OS on the new partition.  Unfortunately, there is no way to add more inodes to a Linux ext4 filesystem without reformatting the volume.  Luckily I caught the problem pretty early and didn’t get too far into the system setup.

To fix the issue, I booted off the Ubuntu install ISO again and chose the repair boot option.  Then I had it start a console without selecting a root partition (if you select a root partition, it gets mounted, and when I tried to unmount it the partition was reported as in use).  This let me format the partition with an increased number of inodes using the -N flag in mkfs:

mkfs.ext4 -N 100000000 /dev/sda1

That ought to be enough. 🙂  After that, I was able to install Ubuntu on the new partition (just making sure not to format that same partition again during the install, which would wipe out the new inode table).

The forecast server is coming along quite well.  I’m hoping to post more about how it all works in the near future.

Office Network Updates

Over the past several weeks, I’ve been spending a lot of time working on server-side changes. There are two main server tasks that I’ve been focusing on. The first task is a new weather forecast server for Seasonality users. I’ll talk more about this in a later post. The second task is a general rehash of computing resources on the office network.

Last year I bought a new server to replace the 5 year old weather server I was using at the time. This server is being coloed at a local ISP's datacenter. I ended up with a Dell R710 with a Xeon E5630 quad-core CPU and 12GB of RAM. I have 2 mirrored RAID volumes on the server. The fast storage is handled by 2 300GB 15000 RPM drives. I also have a slower mirrored RAID using 2 500GB 7200 RPM SAS drives that's used mostly to store archived weather data. The whole system is running VMware ESXi with 5-6 virtual machines, and has been working great so far.

Adding this new server meant that it was time to bring the old one back to the office. For its time, the old server was a good box, but I was starting to experience reliability issues with it in a production environment (which is why I replaced it to begin with). The thing is, the hardware is still pretty decent (dual core Athlon, 4GB of RAM, 4x 750GB disks), so I decided I would use it as a development server. I mounted it in the office rack and started using it almost immediately.

A development box really doesn't need a 4 disk RAID though. I currently have a Linux file server in a chassis with 20 drive bays. I can always use more space on the file server, so it made sense to consolidate the storage there. I moved the 4 750GB disks over to the file server (set up as a RAID 5) and installed just a single disk in the development box. This brings the total redundant file server storage up past 4 TB.

The next change was with the network infrastructure itself. I have 2 Netgear 8 port gigabit switches to shuffle traffic around the local network. Well, one of them died a few days ago so I had to replace it. I considered just buying another 8 port switch to replace the dead one, but with a constant struggle to find open ports and the desire to tidy my network a bit, I decided to replace both switches with a single 24 port Netgear Smart Switch. The new switch, which is still on its way, will let me set up VLANs to make my network management easier. The new switch also allows for port trunking, which I am eager to try. Both my Mac Pro and the Linux file server have dual gigabit ethernet ports. It would be great to trunk the two ports on each box for 2 gigabits of bandwidth between those two hosts.

The last recent network change was the addition of a new wireless access point. I’ve been using a Linksys 802.11g wireless router for the last several years. In recent months, it has started to drop wireless connections randomly every couple of hours. This got to be pretty irritating on devices like laptops and the iPad where a wired network option really wasn’t available. I finally decided to break down and buy a new wireless router. There are a lot of choices in this market, but I decided to take the easy route and just get an Apple Airport Extreme. I was tempted to try an ASUS model with DD-WRT or Tomato Firmware, but in the end I decided I just didn’t have the time to mess with it. So far, I’ve been pretty happy with the Airport Extreme’s 802.11n performance over the slower 802.11g.

Looking forward to finalizing the changes above. I’ll post some photos of the rack once it’s completed.

Modeling a Storm

One fairly common project for a meteorology student to participate in after taking a few years of coursework is to do a case study poster presentation for a conference. Having finished my synoptic scale course series this past spring, now is a good time for me to work on a case study. What does a case study involve? Well, typically synoptic storms are fairly short-lived, lasting for 4-10 days. With a case study, you take a closer look at what was happening dynamically in the atmosphere during that storm, usually over a smaller region.

Picking a storm to look at was easy for me. Four years ago this October, I was visiting family in upstate New York and a very strong storm came through the region. Usually storms in October drop rain, but this one was strong and cold enough to drop snow, and the results were disastrous. In Buffalo, 23 inches of snow fell in 36 hours. Buffalo is used to getting this much snow in the winter, but since the leaves hadn't fallen off the trees yet, a lot more snow collected on all the branches. Thousands of tree limbs fell due to the extra weight, knocking out power for hundreds of thousands of people. Some homes didn't have power restored for over a week. When I drove around town the next day, it was like driving through a war zone, dodging tree branches and power lines even on main roads in the city.

So it was easy for me to pick this storm (I even wrote about it back then). Next we (I'm working with my professor and friend, Marty, on this project) needed to pick something about the storm to focus upon. I can't just put a bunch of pictures up and say, "Hey, look at all the snow!" There has to be some content. For this case study, Marty thought it might be interesting to look at how different microphysical schemes would affect a forecast over that time period.

This was a really tough event to forecast. Meteorologists could tell it was going to be bad, but with the temperature just on the rain/snow boundary, it was difficult to figure out just how bad it would be and where it would hit the hardest. If temperatures had been a couple of degrees warmer and this event had resulted in rain instead of snow, it would still have been a bad storm, but there wouldn't have been the same devastation as there was with snow.

Microphysical schemes dictate how a forecast model transitions water between states. A microphysics scheme determines what physical traits have to be present in the environment for water vapor to condense into liquid and form clouds, freeze into ice, or collide with other ice/water/vapor to form snowflakes. Some schemes take more properties of the atmosphere and physics into account than others, or weight variables differently when calculating these state changes. If I can determine which scheme did the best job forecasting this event, then meteorologists could possibly run a small model with that same scheme on the next storm before it hits, to give them a better forecast tool.

To test these schemes, I have to run a model multiple times (once with each scheme). To do that, I had to get a model installed on my computer. Models take a long time to run (NOAA has a few supercomputers for this purpose). I don't have a supercomputer, but my desktop Mac Pro (8×2.26 GHz Xeons, 12 GB RAM) is a pretty hefty machine that might just let me run the model in a reasonable amount of time. I'm using the WRF-ARW model with EMS tools, which is commonly used to model synoptic scale events in academia. This model will compile on Mac OS X, but after a week of hacking away at it, I still didn't have any luck. I decided to install Linux on the Mac and run it there. First I tried Ubuntu on the bare metal. It worked, but it was surprisingly slow. Next I tried installing CentOS in VMware Fusion, and it was actually about 20% faster than Ubuntu on the bare machine. The only explanation I can think of is that the libraries the model is compiled against were built with better compiler optimizations in the CentOS distribution. So not only do I get a faster model run, but I can also use Mac OS X in the background while it's running. Perfect.

Once the model is installed, I have to set up a control run using parameters generally used in the most popular forecast models. There are several decisions that have to be made at this stage. First, a good model domain needs to be specified. My domain covers a swath of 1720×1330 kilometers over most of the Great Lakes area, centered just west of Buffalo. For a storm this large, a 4 km grid spacing is a pretty good compromise between showing enough detail and not taking years for the model to run. For comparison, the National Weather Service uses a 12 km grid spacing over the whole US to run their NAM forecast model 4 times a day. To complete the area selection, we have to decide on how many vertical levels to use in the model. Weather doesn't just happen at the earth's surface, and here I set the model to look at 45 levels from the surface up through around 50,000 feet. (I say "around" here because in meteorology we look at pressure levels, not height specifically, and with constantly changing pressure in the atmosphere the height can vary. The top surface boundary the model uses is 100 millibars.)

In case you didn't notice, this kind of domain is quite large in computing terms. There is a total of 5,676,000 grid points in 3 dimensions. When the model is running, it increments through time at 22 second intervals. The model will calculate what happens at each of those grid points in that 22 seconds, and then it starts all over again. Usually, the model only writes out data after every hour, and I think it's pretty apparent why this is the case. If I configured the model to output all the data at every time step, there would be more than 44 billion point forecasts saved for the 2 day forecast run. Each of these forecasts would tell what the weather would be like at a particular location in the domain at a particular time, and each forecast would have around 30-50 variables (like temperature, wind speed, vorticity, etc.). If those variables were simple 32 bit floats, the model would output about 6 TB of data (yes, with a T) for a single run. Obviously this is far from reasonable, so we'll stick to outputting data every hour, which results in a 520MB data file each hour. Even though we are outputting a lot less data, the computer still has to process the 6 TB (and the hundreds of equations that derive that data), which is quite incredible if you think about it.

My Mac is executing the control run as I'm writing this. To give you an idea, it will take about 12 hours for the model run to finish with VMware configured to use 8 cores (the model doesn't run as quickly when you use hyperthreading) and 6 GB of RAM. This leaves the hyperthreaded cores and 6 GB of RAM free for everything else on my Mac, and so far I don't notice much of a slowdown at all, which is great.

So what’s next? Well after getting a good control run, I have to go back and test and run the model again for each of the microphysics schemes (there are 5-7 of them) and then look through the data to see how the forecast changes with each scheme. I’m hoping that one of them will obviously result in a forecast that is very close to what happened in real life. After I have some results, I will write up the content to fill a poster and take it with me to the conference at the beginning of October. The conference is in Tucson, which is great because I will have a chance to see some old friends while I’m there.

What does this mean for Seasonality users? Well, learning how to run this model could help me improve international Seasonality forecasts down the line. I could potentially select small areas around the globe to run more precise models once or twice a day. With the current forecast using 25-50km grid spacing, running a 12 km spacing would greatly improve forecast accuracy (bringing it much closer to the forecast accuracy shown at US locations). There are a lot of obstacles to overcome first. I would need to find a reasonably sized domain that wouldn’t bring down my server while running. Something that finishes in 2-3 hours might be reasonable to run each day in the early morning hours. This would be very long term, but it’s certainly something I would like to look into.

Overall it’s been a long process, but it’s been a lot of fun and I’m looking forward to not only sifting through the data, but actually attending my first meteorology conference a couple of months from now.

The Story of Go

With Seasonality Go just released, I thought I would post an entry looking back at all that has led up to my latest application.

This story begins with a tale of developing Seasonality 2.0 for the Mac. Last year, I officially ended all my contract work to focus solely on Gaucho Software products. This has been a great change, but things have been slow going. I was working for many months on Seasonality 2.0, and by the end of last year it was very clear that I had bitten off more than I could chew. Seasonality 2 was supposed to be a fresh start with a brand new interface. I really think the app needed it, but never underestimate the time it takes to redo an application interface.

Probably the biggest amount of time has been spent on the new 3D globe view that will show radar, satellite, and other data. To say I spent months on this single view would be an understatement. Was it worth it? It is tough to say. Right now it doesn't offer much more than the original map interface in Seasonality 1.x, but if you take into account the possibility for future expansion, then I think it was worthwhile code to work on. You can judge for yourself once the software is completed and released (this code is not used in Seasonality Go; there isn't enough memory available on the iPad for all the required texture images).

In January, I was getting ready to let a designer take a look at the interface to clean up the loose ends and take a shammy cloth to the app, when it happened. “It”, in this case, is the iPad. Apple announced the product, and suddenly I had to stop and rethink my development direction. Should I keep working on finishing up Seasonality 2.0, or should I set that on the back burner and go after the iPad as a new platform for Seasonality?

If you follow me on Twitter, you know the iPad won. It was a tough decision, because Seasonality 2 was almost in beta, and it was ever-so-tempting to head to that light at the end of the tunnel as quickly as possible. But the iPad really excited me as a new platform. I was already wearing my old MBP pretty thin, and was planning on replacing it with a MacBook Air, but the iPad was even smaller and lighter than the MBA, and it did 90% of what I needed it to at less than half the cost. It wasn’t tough to decide to purchase one as my new laptop, and if the iPad was going to be my new mobile platform, I wanted Seasonality to be there too.

I started working immediately on getting my weather and foundation frameworks compiling on the iPhone SDK. It took much less time than I anticipated to do this, which was a pleasant surprise. The views took a little bit longer, because UIKit on the iPhone/iPad definitely has some distinct differences from Cocoa on the Mac. Overall though, it only took about a week to get the software running in a reasonable state on the iPad simulator.

It was at this point that I decided to talk to a UI designer. I enlisted the help of FJ de Kermadec. I had worked with FJ previously on the DynDNS Updater for the Mac, and I felt he really had what it took to knock a new Seasonality iPad interface out of the park. I was not disappointed. First, FJ worked on the Seasonality brand. We had to find a good way for the iPad version of Seasonality to fit into my product lineup. FJ came back with a complete branding document that described the future product lineup, complete with product names, icons, slogans, colors, fonts, and goals for each app. It was great, and I really got on board with the direction this project was taking. The new iPad app was to be called Seasonality Go, and it would be an iPad user's go-to guide for weather on-the-go.

Next, FJ spent some time working on a document describing the features of Seasonality Go: how the software would look and behave. The plans set out by this document were ambitious, and it would be months before I would have the time to finish all of them, but the direction was solid and it was great to have this vision of a future Seasonality Go, even before I got a true start on the first version. We decided on a starting subset of functionality, and FJ got to work on a UI mockup for the features selected.

The mockup came in the middle of March, and I hit the ground running hard after that. The Seasonality Go design was nothing like the Seasonality Mac design, so I scrapped the view code from before and just built new views from the weather and foundation frameworks I have. Good progress was made, and by the time my iPad 3G arrived at the end of April, I actually had an app that was pretty usable to test on it.

I obviously wasn't finishing the app for launch day of the iPad, but I didn't want to wait too long to release the app. I decided I had to set a deadline, and I chose to finish it absolutely no later than WWDC. So from the end of April through the end of May, I spent pretty much 24/7 working on it. I would get up early, work all day, come up for dinner when Katrina got home from the office, spend an hour or two hanging out, then go back downstairs and work until 1-2am. I made a ton of progress, and by the third week of May, I had an app that was ready to beta test. The beta testers (thanks Elliot, Mary, and Jim!) hammered on it for about a week, and I had 1.0 ready a week later. I submitted it to the App Store on May 21st.

Looking back, it has been a long road, but I think this app really turned out to be a great piece of software. Altogether, it is around 45,000 lines of code, which is 3 times more than Seasonality 1.0 for the Mac. Now I will be getting back to finishing Seasonality 2.0. There are some amazing features there that didn’t make it into Seasonality Go, so stay tuned for the rest of what I have been working on…

