Life, Technology, and Meteorology


Modeling a Storm

One fairly common project for a meteorology student to take on after a few years of coursework is a case study poster presentation for a conference. Having finished my synoptic-scale course series this past spring, now seemed like a good time to work on a case study. What does a case study involve? Well, synoptic storms are typically fairly short-lived, lasting 4-10 days. With a case study, you take a closer look at what was happening dynamically in the atmosphere during one of those storms, usually over a smaller region.

Picking a storm was easy for me. Four years ago this October, I was visiting family in upstate New York when a very strong storm came through the region. Storms in October usually drop rain, but this one was strong and cold enough to drop snow, and the results were disastrous. In Buffalo, 23 inches of snow fell in 36 hours. Buffalo is used to getting that much snow in the winter, but since the leaves hadn’t fallen off the trees yet, a lot more snow collected on the branches. Thousands of tree limbs fell under the extra weight, knocking out power for hundreds of thousands of people. Some homes didn’t have power restored for over a week. When I drove around town the next day, it was like a war zone: I had to dodge tree branches and downed power lines even on main roads in the city.

So it was easy for me to pick this storm (I even wrote about it back then). Next we (I’m working with my professor and friend, Marty, on this project) needed to pick something about the storm to focus on. I can’t just put a bunch of pictures up and say, “Hey, look at all the snow!” There has to be some content. For this case study, Marty thought it might be interesting to look at how different microphysics schemes would affect a forecast over that time period.

This was a really tough event to forecast. Meteorologists could tell it was going to be bad, but with temperatures right on the rain/snow boundary, it was difficult to figure out just how bad it would be and where it would hit hardest. If temperatures had been a couple of degrees warmer and the event had produced rain instead of snow, it still would have been a bad storm, but there wouldn’t have been the same devastation there was with snow.

Microphysics schemes dictate how a forecast model transitions water between states. A scheme determines what conditions have to be present in the environment for water vapor to condense into liquid and form clouds, freeze into ice, or collide with other ice/water/vapor to form snowflakes. Some schemes take more properties of the atmosphere and more of the physics into account than others, or weight variables differently when calculating these state changes. If I can determine which scheme did the best job forecasting this event, meteorologists could potentially run a small model with that same scheme on the next storm before it hits, giving them a better forecast tool.

To test these schemes, I have to run a model multiple times (once with each scheme). To do that, I had to get a model installed on my computer. Models take a long time to run (NOAA has a few supercomputers for this purpose). I don’t have a supercomputer, but my desktop Mac Pro (8 × 2.26 GHz Xeon cores, 12 GB RAM) is a pretty hefty machine that might just let me run the model in a reasonable amount of time. I’m using the WRF-ARW model with the EMS tools, which is commonly used to model synoptic-scale events in academia. The model is supposed to compile on Mac OS X, but after a week of hacking away at it I still didn’t have any luck, so I decided to install Linux on the Mac and run it there. First I tried Ubuntu on the bare metal. It worked, but it was surprisingly slow. Next I tried installing CentOS in VMware Fusion, and it was actually about 20% faster than Ubuntu on the bare machine. The only explanation I can think of is that the libraries the model links against were built with better compiler optimizations in the CentOS distribution. So not only do I get a faster model run, but I can also use Mac OS X in the background while it’s running. Perfect.

Once the model is installed, I have to set up a control run using parameters generally used in the most popular forecast models. There are several decisions to make at this stage. First, a good model domain needs to be specified. My domain covers a swath of 1720 × 1330 kilometers over most of the Great Lakes area, centered just west of Buffalo. For a storm this large, a 4 km grid spacing is a pretty good compromise between showing enough detail and not taking years for the model to run. For comparison, the National Weather Service uses a 12 km grid spacing over the whole US to run their NAM forecast model 4 times a day. To complete the area selection, we have to decide how many vertical levels to use in the model. Weather doesn’t just happen at the earth’s surface, so I set the model to look at 45 levels from the surface up through around 50,000 feet. (I say “around” because in meteorology we work with pressure levels rather than heights, and with constantly changing pressure in the atmosphere the corresponding height can vary. The top boundary the model uses is 100 millibars.)

In case you didn’t notice, this kind of domain is quite large in computing terms. There are a total of 5,676,000 grid points in 3 dimensions. When the model is running, it steps through time in 22 second increments: it calculates what happens at each of those grid points over those 22 seconds, and then it starts all over again. Usually the model only writes out data every hour, and I think it’s pretty apparent why. If I configured the model to output all the data at every time step, there would be more than 44 billion point forecasts saved over the 2 day forecast run. Each of those forecasts describes the weather at a particular location in the domain at a particular time, and each has around 30-50 variables (temperature, wind speed, vorticity, etc.). If those variables were simple 32 bit floats, the model would output about 6 TB of data (yes, with a T) for a single run. Obviously that is far from reasonable, so we’ll stick to outputting data every hour, which results in a 520 MB data file per hour. Even though we’re writing out far less data, the computer still has to compute all 6 TB of it (and the hundreds of equations that derive it), which is quite incredible if you think about it.
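If you want to check that math, here’s a quick back-of-the-envelope sketch in plain C using the numbers above. The per-point variable count is a range, so the result is a ballpark bracket rather than an exact figure (the ~6 TB mentioned above falls inside it):

#include <stdio.h>

// Rough sizing of the model run described above, using the numbers from this post.
int main(void)
{
   const double gridPoints   = 5676000.0;        // 3-D grid points in the domain
   const double timeStep     = 22.0;             // seconds per model time step
   const double forecastSecs = 2.0 * 24 * 3600;  // 2 day forecast run
   const double bytesPerVar  = 4.0;              // one 32 bit float per variable

   double steps          = forecastSecs / timeStep;   // about 7,855 steps
   double pointForecasts = gridPoints * steps;        // about 44.6 billion

   printf("Time steps:      %.0f\n", steps);
   printf("Point forecasts: %.1f billion\n", pointForecasts / 1e9);
   printf("Raw output (30-50 variables): %.1f - %.1f TB\n",
          pointForecasts * 30 * bytesPerVar / 1e12,
          pointForecasts * 50 * bytesPerVar / 1e12);
   return 0;
}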

My Mac is executing the control run as I’m writing this. To give you an idea, it will take about 12 hours for the model run to finish with VMware configured to use 8 cores (the model doesn’t run as quickly when you use hyperthreading) and 6 GB of RAM. This leaves the hyperthreaded cores and the other 6 GB of RAM for everything else on my Mac, and so far I don’t notice much of a slowdown at all, which is great.

So what’s next? Well, after getting a good control run, I have to go back and run the model again with each of the microphysics schemes (there are 5-7 of them), and then look through the data to see how the forecast changes with each one. I’m hoping that one of them will clearly produce a forecast that is very close to what happened in real life. After I have some results, I will write up the content to fill a poster and take it with me to the conference at the beginning of October. The conference is in Tucson, which is great because I will have a chance to see some old friends while I’m there.

What does this mean for Seasonality users? Well, learning how to run this model could help me improve international Seasonality forecasts down the line. I could potentially select small areas around the globe to run more precise models once or twice a day. With the current forecast using 25-50 km grid spacing, running at a 12 km spacing would greatly improve forecast accuracy (bringing it much closer to the accuracy shown at US locations). There are a lot of obstacles to overcome first. I would need to find a reasonably sized domain that wouldn’t bring down my server while running; something that finishes in 2-3 hours might be reasonable to run each day in the early morning hours. This is very long term, but it’s certainly something I would like to look into.

Overall it’s been a long process, but it’s been a lot of fun and I’m looking forward to not only sifting through the data, but actually attending my first meteorology conference a couple of months from now.

The Story of Go

With Seasonality Go just released, I thought I would post an entry looking back at all that has led up to my latest application.

This story begins with a tale of developing Seasonality 2.0 for the Mac. Last year, I officially ended all my contract work to focus solely on Gaucho Software products. This has been a great change, but things have been slow going. I was working for many months on Seasonality 2.0, and by the end of last year it was very clear that I had bitten off more than I could chew. Seasonality 2 was supposed to be a fresh start with a brand new interface. I really think the app needed it, but never underestimate the time it takes to redo an application interface.

The biggest chunk of time has been spent on the new 3D globe view that will show radar, satellite, and other data. To say I spent months on this single view would be an understatement. Was it worth it? It’s tough to say. Right now it doesn’t offer much more than the original map interface in Seasonality 1.x, but if you take the possibility of future expansion into account, then I think it was worthwhile code to work on. You can judge for yourself once the software is completed and released. (This code is not used in Seasonality Go; the iPad doesn’t have enough memory for all the required texture images.)

In January, I was getting ready to let a designer take a look at the interface to clean up the loose ends and take a chamois cloth to the app, when it happened. “It”, in this case, was the iPad. Apple announced the product, and suddenly I had to stop and rethink my development direction. Should I keep working on finishing Seasonality 2.0, or should I put that on the back burner and go after the iPad as a new platform for Seasonality?

If you follow me on Twitter, you know the iPad won. It was a tough decision, because Seasonality 2 was almost in beta, and it was ever-so-tempting to head for that light at the end of the tunnel as quickly as possible. But the iPad really excited me as a new platform. My old MBP was already wearing pretty thin, and I had been planning to replace it with a MacBook Air, but the iPad was even smaller and lighter than the MBA, and it did 90% of what I needed at less than half the cost. It wasn’t tough to decide to purchase one as my new laptop, and if the iPad was going to be my new mobile platform, I wanted Seasonality to be there too.

I started working immediately on getting my weather and foundation frameworks compiling against the iPhone SDK. It took much less time than I anticipated, which was a pleasant surprise. The views took a little longer, because UIKit on the iPhone/iPad definitely has some distinct differences from Cocoa on the Mac. Overall, though, it only took about a week to get the software running in a reasonable state in the iPad simulator.

It was at this point that I decided to talk to a UI designer, so I enlisted the help of FJ de Kermadec. I had worked with FJ previously on the DynDNS Updater for the Mac, and I felt he really had what it took to knock a new Seasonality iPad interface out of the park. I was not disappointed. First, FJ worked on the Seasonality brand. We had to find a good way for the iPad version of Seasonality to fit into my product lineup. FJ came back with a complete branding document that described the future product lineup, complete with product names, icons, slogans, colors, fonts, and goals for each app. It was great, and I really got on board with the direction the project was taking. The new iPad app was to be called Seasonality Go, and it would be an iPad user’s go-to guide for weather on the go.

Next, FJ spent some time working on a document describing the features of Seasonality Go–how the software would look and behave. The plans set out by this document were ambitious, and it would be months before I would have the time to finish all of them, but the direction was solid and it was great to have this vision of a future Seasonality Go, even before I got a true start on the first version. We decided on a starting subset of functionality, and FJ got to work on a UI mockup for the features selected.

The mockup came in the middle of March, and I hit the ground running hard after that. The Seasonality Go design was nothing like the Seasonality Mac design, so I scrapped the view code from before and just built new views from the weather and foundation frameworks I have. Good progress was made, and by the time my iPad 3G arrived at the end of April, I actually had an app that was pretty usable to test on it.

I obviously wasn’t going to finish the app for the iPad’s launch day, but I didn’t want to wait too long to release it either. I decided I had to set a deadline, and I chose to finish absolutely no later than WWDC. So from the end of April through the end of May, I spent pretty much 24/7 working on it. I would get up early, work all day, come up for dinner when Katrina got home from the office, spend an hour or two hanging out, then go back downstairs and work until 1-2am. I made a ton of progress, and by the third week of May, I had an app that was ready to beta test. The beta testers (thanks Elliot, Mary, and Jim!) hammered on it for about a week, and I had 1.0 ready a week later. I submitted it to the App Store on May 21st.

Looking back, it has been a long road, but I think this app really turned out to be a great piece of software. Altogether, it is around 45,000 lines of code, which is 3 times more than Seasonality 1.0 for the Mac. Now I will be getting back to finishing Seasonality 2.0. There are some amazing features there that didn’t make it into Seasonality Go, so stay tuned for the rest of what I have been working on…

A New Mac

Gaucho Desk - July 2009

I had been waiting for Nehalem Mac Pros to be released since the middle of last year, so I was pretty excited to hear that Apple released them earlier this spring while I was in Thailand. Of course, I didn’t have time to actually put in an order while abroad, so I waited until I got back and ordered one at the beginning of April with these specs…

 

  • 8 × 2.26 GHz Nehalem Xeon cores
  • 12 GB of 1066 MHz DDR3 RAM
  • Radeon 4870 graphics card with 512 MB of VRAM
  • 2 × 1 TB hard drives in a RAID 0 configuration
  • 1 × 640 GB hard drive for Time Machine

 

It was a tough decision between the 8 core 2.26 GHz model and the 4 core 2.66 GHz model. The 4 core model was a bit cheaper. It is also faster at single-threaded processes, but with the drawback of only having half as many cores. Since compiling code is the primary job of this Mac, and compiling takes advantage of as many CPUs as you can throw at it, I decided to go for the 8 core model.

I’m glad I did, because this machine is a beast for compiling. You can check out a screencast I captured showing XRG compiling Subversion below. With 16 logical CPUs (8 real + 8 hyperthreaded), I had to create a new CPU graph in XRG to show them more efficiently. The top shows a composite of all CPU activity, and the bottom shows a bar chart with the instantaneous usage of each of the 16 cores.

Screencast

All in all, this Mac is more than 6 times faster at compiling than the dual 2.5 GHz G5 it replaces, which definitely saves me quite a bit of time day in and day out.

When ordering this Mac, I also ordered a second 24″ LCD. Having two 24″ displays (an HP LP2475w and a Samsung 2493HM) makes things a lot smoother, with plenty of space to spread out all the windows I’m working with. While coding, I can have Xcode take a full display and run the app on the second display, never having to worry about covering up the debugging interface while testing something.

I posted two other photos of the machine. All in all, it’s a dream system for me. Here’s hoping this dream lasts for a very long time.

Letting Go…

Many people outside of the software development field (and some people in the field) may have the incorrect view that computer code is just cold, hard text written only to make a computer do something. While that may technically be correct, for people who genuinely enjoy coding, application code can be a warm, even living, being, constantly evolving over time to provide the user with an elegant means of accomplishing a task. When programming, I don’t think of myself as necessarily pumping out code. It’s more of a massaging of the project to get it to do something just right, and then a final smoothing over of the bugs or gaps in functionality to make it work perfectly.

Because of this almost art-like view of my career, it’s often difficult to stop working on a project. Then when you consider how many hundreds or thousands of hours you’ve invested in a project, walking away becomes next to impossible. However, I’ve reached a time in my career where I have decided to do just that.

//
//  MyWeatherAppDelegate.h
//  MyWeather
//
//  Created by Mike Piatek-Jimenez on 3/26/08.
//

Above is a copy of the code header for the first file to kick off the MyWeather Mobile project. March 26th, 2008: 4 months before the App Store opened, and only a few weeks after Apple released the iPhone SDK. After working with the team at Weather Central for almost 11 months, I’ve decided it’s time for me to let the project go. The reason for parting ways is not that I don’t enjoy working on the project. It’s more of a re-evaluation of priorities.

The thing is, I have a lot of ideas both for continuing my current Gaucho Software products and for entirely new projects I would like to bring to market. Over the past 4 years of consulting, I have kept finding myself looking back, trying to figure out why I’m not able to be productive on my own apps. Sometimes I go months without touching any Gaucho Software projects. I spent a good amount of time over the holidays reflecting on this problem, and I’ve determined that if I’m going to keep working on Gaucho Software products in any productive form, continuing my consulting work just isn’t an option. So with Gaucho Software turning 5 years old this April 1st, I’ve decided to focus entirely on in-house apps from this point forward.

So with that, I hand over the reins. Version 1.3 has already been uploaded to the App Store and is pending approval. Version 1.4 code is done, and we are just waiting for some back-end features to be finished before the release next month. The team at Weather Central has been a joy to work with. Having the graphics, code, and data all merge together in an iPhone app is not a trivial task, but with this team it worked like magic. Graphics were readily available; the data pipes were overflowing; all that was left was to write the code and bring it all together. I wish them the best of luck in continuing development of the MyWeather Mobile application, as well as any other projects they decide to bring to the iPhone platform in the future…

Updated Gaucho Network

For quite some time now, I’ve been wanting to upgrade my office network, which doubles as my home network as well. From the business standpoint, I wanted some more reliable equipment along with some added security by enabling me to connect to the office network over a VPN when I’m on the road. From the home standpoint, I wanted to add a couple of ethernet outlets upstairs, mostly to enable the quick transfer of media from the file server downstairs, as wireless can be pretty slow.

A few weeks ago, I finally took the initiative and started looking at some equipment. For networking, no one is going to blame you for ordering Cisco gear, so I started there. Their routers start at about $350-400 and move up from there pretty quickly, which is more than I was originally looking to spend, so I looked at a few other brands. Brands like ZyXEL offer less-expensive business-grade equipment at about half the price, and I also checked out the high-end equipment offered by consumer brands like Netgear and Linksys.

It didn’t take long to rule out the consumer equipment. While a lot of the features were there, I kept running into reviews complaining about reliability, and to me that was a key issue. Another common problem with consumer equipment was bandwidth capacity. A lot of them only handled around 15 Mbit/s, with some others moving up to 50-75 Mbit/s. VPN speeds were definitely slower, usually around 10 Mbit/s because of the extra processing required to encrypt the packets. Ignoring VPN, these routers were faster than my network connection (10 Mbit/s), but I was looking for something that could handle up to 100 Mbit/s so it would grow with my connection for many years to come. Despite this limitation, a lot of them had gigabit ports on the WAN side. Not sure why…

While doing my research, I kept going back to the Cisco option, looking specifically at their ASA line of products. The ASA line replaces the older PIX firewalls, and there is quite a model spread, from the 5505 for small office environments all the way up to the 5580 for the enterprise. Even at the low end, the 5505 can handle 150 Mbit/s of unencrypted throughput and an impressive 100 Mbit/s of VPN traffic. All of the reviews said the device was rock-solid and never crashed. Setup seemed to be a bit more involved, with a lot of it taking place on the command line, but I have some past experience with Cisco’s IOS and thought this would be a good time to brush up. Finally, with support for VLANs, a built-in 8-port Cisco switch with 2 Power-over-Ethernet ports, and an insane 10,000 simultaneous connections supported, it was hard not to like this device.

I ended up going for it, and a shiny new 5505 is now sitting on my desk. The device is a lot easier to configure than I originally expected. It arrives with a dynamic configuration by default, so it just worked when I plugged it into my network, and further configuration is handled through a Java application that the device serves over HTTPS. Configuring the VPN endpoint and getting the iPhone to connect to it and split-tunnel traffic through the router took all of 20 minutes. It’s taking me a little longer to configure my Mac to connect over the VPN, but I just need to spend some more time on it. I find it ironic that the iPhone is more enterprise-ready than the Mac. Overall, I couldn’t be happier with my decision.

Switching gears a little bit to the home side of the project (adding those additional outlets), I bought a 24 port patch panel to punch down all the cabling, and 500 feet of Cat 5e to wire it all up. Cat 6 was definitely a consideration, but it costs twice as much, and with Cat 5e handling gigabit just fine I saw no need to spend the extra money. If 10-gigabit ever becomes standard, I’ll just upgrade the cabling in my office.

Dropping the lines from upstairs has been a bit more difficult than I expected. I naively assumed I would be able to look up the wall from the basement and see the outlet box from below. Of course, this isn’t the case, as each wall has a bottom 2×4 completing that edge of the frame. I’m still working out the best way to send the wire through a small hole in the outlet box and target a small hole at the bottom of the wall frame.

I still have some work to do, but will try to update this with photos when the job is completed. Stay tuned.

New Disk

Having an application like Seasonality that relies upon online services requires those services to be reliable. This means any server I host has to be online as close to 100% of the time as possible. Website and email services are pretty easy to host out to a shared hosting provider for around $10-20/month. It’s inexpensive, and you can leave the server management to the hosting provider. For most software companies, this is as far as you need to go.

This also worked okay when Seasonality was simply grabbing some general data from various sources. As soon as I began supporting international locations, I stepped out of the bounds of shared hosting. The international forecasts need to be hosted on a pretty heavy-duty server. It pegs a CPU for about an hour to generate the forecasts, and the server updates the forecasts twice a day. Furthermore, the dataset is pretty large, so a fast disk subsystem is needed.

So I have a colocated server, which I’ve talked about before. It had worked out pretty well until earlier this week, when one of the 4 disks in the RAID died. Usually, when a disk in a RAID dies, the system should remain online and continue working (as long as you aren’t using RAID 0). In this case the server crashed, though, and I was a bit puzzled as to why.

After doing some research, I found that the server most likely crashed because of an additional partition on the failed disk: a swap partition. When setting up the server, I configured swap across all four disks, with the hope that if the system ever did dip into swap a little bit, it would be much faster than hammering a single disk with all of that activity. The logic seemed good at the time, but looking back it was a really bad move. In the future, I’ll stick to having swap on just a single disk (probably the same one as the / partition), which cuts the chance that a failed disk takes the whole system down by 75%.

After getting a new disk overnighted from Newegg, I replaced the failed mechanism and added it back into the RAID, so the system is back up and running again.

This brings up the question of how likely something like this is to happen again. The server is about 2 and a half years old, so disk failures at this age are reasonable, especially considering the substantial load on this server’s disks (blinky lights, all day long). At this point, I’m thinking of just replacing the other 3 disks. That way, I will have scheduled downtime instead of unexpected downtime. With the constantly dropping cost of storage, I’ll be able to replace the 300 GB disks with 750 GB models. It’s not that I actually need the extra space (the current 300s are only about half full), but I need at least 4 mechanisms to get acceptable database performance.

In the future, I will probably look toward getting hot-swappable storage. I’ve had to replace 2 disks now since I built the server, and to have the option of just sliding one disk out and replacing it with a new drive without taking the server offline is very appealing.

Catchup

Wow, I think this is the first time I’ve opened MarsEdit in months. Looks like my last post was back in February, so I figure an update here is long overdue. I don’t have any particular topic to talk about today, so this post will be a catchup of everything happening here in the past 3 months.

The biggest change has been a new consulting gig I picked up back in March. Clint posted on Twitter about a contract position for an iPhone developer on the Ars Technica Job Board. The kicker was that the job was to code a weather application. I had been curious about iPhone coding, but didn’t have room in my development schedule for another pet project. On the other hand, if I could learn iPhone development while getting paid, I could definitely shift some projects around. Being a weather app, this job was too good a match to pass up, so I sent in my resume one morning back in March. That afternoon, the company got in touch with me for an interview, and the following week I flew out to their headquarters to get up to speed on the project.

The development cycle for this app was pretty quick. With the first deadline of a working demo only 3 weeks from the day I started, I really booked it and started pumping out code. My life was pretty much coding, from the time I woke up until I went to bed. A rough but fairly good demo was completed, with 10k lines of code written in those first 3 weeks. Then I had about a week off, which incidentally was the same week as my 30th birthday. It was great to take a little time off, party with some friends, and enjoy life.

Then the second stage of the project kicked in, which needed to be completed in only 2 more weeks’ time. The second stage was definitely slower paced, so I was able to sleep a little more and see Katrina from time to time. 🙂 The resulting stage 2 app was pretty polished. The company I’m working with has a few contacts at Apple, so they arranged to demo it in Cupertino. That was a couple of weeks ago, and from what I heard, the demo went pretty well. All the work definitely paid off. You should see this product hit the market sometime this summer. I’ll definitely post more about it when the time comes.

Our Moke

After all that work, and with Katrina’s semester coming to a close, we decided to take off on a vacation. We found a great deal on airfare and a hotel down in Barbados, so we jumped on it. We spent last week on the south coast of the island soaking up the sun, learning the culture, having a blast driving around in our little moke (see photo), and just getting some good R&R. There’s not a ton of stuff to do on the island, but definitely enough to keep you occupied for a week or two. We toured one of the 14 Concorde jets in existence, visited some caves, walked through a historical museum, snorkeled with some sea turtles, and enjoyed some excellent food.

With a constant 15 mph trade wind, the surf on Barbados was better than on any other Caribbean island I’ve visited. Furthermore, our hotel room opened up onto the beach, so I was able to walk about 50 feet from our patio and paddle out to bodyboard. Needless to say, several surf sessions took place that week.

With summer finally finding its way to central Michigan, mountain biking season has begun. Bodyboarding being a fairly difficult activity in Michigan, mountain biking has become my main form of exercise. For the past 10 years, I’ve been riding a Trek hardtail. I’ve put over 3000 miles on it, and the gears are almost completely shot. So I was faced with a decision: either spend a couple hundred bucks on a new set of cogs, bearings, and a chain, or break down and purchase a whole new bike.

I had been looking at getting a full suspension bike for the past few years, so I started visiting bike shops around here to ride some different models. I had hit every bike shop in a 30 mile radius, without any luck. Finally, while we were down in Lansing for the day, I checked a few bike shops down there and found my new ride. Of course the bike shop didn’t have the right frame size, so I had to order it.

New Bike

A week later it arrived, and I picked it up the day after we got back from Barbados. So far, I love it. It’s a Trek Fuel EX 5.5, complete with disc brakes, 3-5 inches of adjustable travel in front, and 5 inches of travel in back. Clipless pedals were not included, so I swapped mine over from the old bike. I also added a seat pack (with tools to fix a flat and a few other necessities) and installed a new speedometer. My previous bike was so old that even with the full suspension and a much beefier frame, the new bike is lighter than my last one. This weekend will be the first time I take it on the trail…definitely looking forward to it.

Looking toward the summer, I’ll be headed out to WWDC in San Francisco next month. A lot of good parties are starting to fall into place, so it should be a fun week. After that, we’re heading over to camp in Yosemite for a few days before coming home and spending the rest of the summer here working.

Indie Marketing @ Macworld

Macworld Expo San Francisco is one of the largest, if not the largest, Mac user event of the year. For an indie Mac developer, if there is one conference (other than WWDC) that should be attended, this is it. So why haven’t I attended in previous years? I asked myself that same question last year after hearing about all the indie get-togethers and bar nights.

The Good Ol’ Days

The last time I attended Macworld Expo was back in 2001, just after I graduated from UCSB and before starting a job in Tucson, AZ. A lot has changed in my career in these past 7 years. For one, 7 years ago I hadn’t yet developed any software for the Mac platform. Though I was an avid Mac user, at that time I was programming mostly for Unix, and occasionally on Windows (against my will).

But that was years ago…I started programming for Mac OS X in 2002, so the question remains: why haven’t I been attending Macworld? I think it may have something to do with the conditions under which I attended Macworld in the past. You see, the first year I attended Macworld Expo was back in 1990. The Mac IIfx was the big new machine at the time, and with the cost of such a machine nearing $10,000, only a few companies had that kind of hardware at their booths. The Mac IIcx and IIci were more common, as was the Mac Portable, which was new at the time. I attended Macworld every year after that until 1997, when it didn’t make sense to take time off from classes at UCSB to do so. To me, attending the expo was a fun event, almost like going to an amusement park. Yeah…I was most definitely a Mac geek.

Perspective

The thing is, I never saw Macworld as a business event…it was strictly for fun. And now that I’m living in Michigan, it didn’t make sense to spend the money to attend a “fun” event. It wasn’t until I started talking to other developers who had attended the conference that I realized just how much I was missing by not attending.

Will I have a booth? No. How about one of those ADC developer kiosks? Nope. Why not? Well, this year I just want to re-learn the ropes of the conference. Paul Kafasis has written a nice series of articles on exhibiting at Macworld, but it’s been such a long time, I really want to get a recent perspective on what the conference is like before plunking down $10k to become an exhibitor. So this year, Gaucho Software will be at Macworld as a Quasi-Exhibitor.

What does this mean? Well, it means I’ll have a lot of the same materials an exhibiting company would, except for the actual 10×10 feet of real estate on the show floor. First, I designed a different Seasonality t-shirt for each day of the show and had Zazzle print them up. Second, I designed a flyer and ordered 1000 copies from SharpDots. Finally, I put in an order through PensXpress for 200 Seasonality pens to give away at the show. Let me elaborate a bit on my reasoning for each of these…

1. T-Shirts

I started designing and ordering the first Gaucho Software t-shirts about 18 months ago for WWDC 2006. Thanks to outfits like Zazzle and CafePress, it’s now easy to print a custom design on a t-shirt of pretty much any color and style. At the time, I just threw the words Gaucho Software across the front and a big logo across the back. It was beneficial to wear at developer conferences like WWDC and C4, because it would give people a better idea of who I was before actually meeting them. Did it increase sales? No…but that’s okay; it was still cool to have the shirts.

For WWDC 2007, I designed a t-shirt highlighting Seasonality and I wore it on a day when there was an event at the SF Apple Store in the evening. Surprisingly enough, I found a nice little spike in sales during the day or two after wearing that shirt. Hey, if that one t-shirt helped sales, wearing a different Seasonality shirt each day of Macworld should help too…

2. Flyers

The decision to design a flyer to hand out at the show was easy, but working through the details of actually designing it was much more difficult. First, I had to choose a size. I decided to go with a half-sheet, or 8.5×5.5 inches. I chose this size because I didn’t want the flyer to get lost in the shuffle; I remember getting tens, even hundreds, of flyers every day I attended in previous years. A full-page flyer would require a lot of content and would be more difficult to hand out to people. Going with a size that is as wide as a normal page but not as tall keeps it from getting lost while still making it easy to hand out.

The design was a bit tricky. I’m used to designing interfaces on-screen in the RGB colorspace, and designing for print is different. First, you have to deal with the color limitations of the CMYK colorspace. Seasonality uses a lot of blues…which CMYK wreaked havoc upon, so I had to choose a screenshot carefully to make sure it still looked good. Next, I had to deal with the print design being a fixed entity. Application (and to some extent, web) interfaces are dynamic; I needed to find a good way to portray information in a non-changing medium. Finally, I needed to make sure all the necessary information was on the flyer somewhere. I was pretty close to printing a design without any kind of URL noting where to purchase Seasonality. Incredible, yes… That would have made the flyers next to useless. I spent hours designing the flyer, and it took a second viewer only a few minutes to notice the lack of any kind of link. Moral of the story: have someone check your work before shipping it off to print.

3. Pens

The pens I ordered were a last-minute idea that I think will turn out pretty cool. Macworld exhibitors usually give away some kind of trinket, and I thought it would be fun to do the same. Most trinkets aren’t used much after the conference ends, though, and I didn’t want to give someone something they would just end up throwing out afterwards. A pen will hopefully remain useful for most attendees after the conference is over.

Another thing I didn’t want to do was skimp out, so I decided to go for a metal casing instead of plastic. Of course, some plastic pens are very nice, but you can’t tell that by looking at a picture on a website. I figured a metal pen would at least have a decent weight and feel to it. At the same time, I didn’t want a pen that was too expensive either; there’s no way I would get enough sales to cover the cost of handing out pens at $10+ apiece. I ended up finding a nice metallic pen with laser engraving for $1 each at PensXpress. Their turn-around time was pretty quick, and I’m pleased with the results.

4. Profit?

After all this work, I’m not exactly sure what to expect at this point. Obviously, I hope I make enough in sales to pay for all of these materials and my trip costs, but it’s not so much the money I’m looking for here. What I would really like is increased mind-share. Thus far, all of my marketing has been directed towards Mac users who frequent news and download websites. There are certainly a lot of users who fit into this category, but what about users who don’t spend their free time online? I’m hoping to meet a lot of these other users at Macworld, and hopefully it will give me a chance to widen Seasonality’s audience.

If you’re planning to attend Macworld, be sure to look for the guy in the Seasonality shirt and stop to say hello… 🙂

Using Compressed Textures in OpenGL

I’m not sure if it’s just me, but for some reason OpenGL coding involves a lot of trial and error before getting a feature such as lighting, blending, or texture mapping to work correctly. The past few days I have been working on adding texture compression to my OpenGL map test project. Ultimately, this code will be merged with the rest of the Seasonality source tree, and it’s going to look pretty cool.

Most OpenGL developers use regular images and possibly have them compressed as they are loaded as textures on the GPU. This is fairly straightforward, and just involves changing one line of code when loading the texture. It is also a huge win in graphics memory savings: I was using about 128 MB of VRAM with texture compression disabled and only around 30 MB with compression enabled. I wanted to accomplish something a bit more difficult though. I’m going to be using several thousand textures, so I would like to have OpenGL compress them the first time Seasonality is launched, and then save the compressed images back to disk so subsequent launches won’t require re-compressing the imagery.
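For reference, that one-line change is just passing a compressed internal format when uploading the texture, something like the snippet below (width, height, and pixels stand in for whatever your image loading code provides):

// Uncompressed upload:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

// Same call, but asking the driver to compress the texture as it is uploaded:
glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_S3TC_DXT5_EXT, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);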

The problem I ran into is that not many developers are using this technique to speed up their applications, so sample code was scarce. I found some in a book I bought awhile back called “More OpenGL Game Programming,” but the code was written for Windows and didn’t work on Mac OS X. So I dove deep into the OpenGL API reference and hacked my way through it. The code below is a simplification of the method I’m using. It should integrate with your OpenGL application, but I can’t guarantee that completely because it is excerpted from my project. If you have a problem integrating it though, post a comment or send me an email.

First, we have some code that will check for a compressed texture file on disk. If the compressed file doesn’t exist, then we are being launched for the first time and should create a compressed texture file.

- (bool) setupGLImageName:(NSString *)imageName
         toTextureNumber:(unsigned int)textureNumber
{
   GLint width, height, size;
   GLenum compressedFormat;
   GLubyte *pData = NULL;

   // Attempt to load the compressed texture data.
   if ((pData = LoadCompressedImage("/path/to/compressed/image", &width, &height,
       &compressedFormat, &size)))
   {
      // Compressed texture was found, image bytes are in pData.
      // Bind to this texture number.
      glBindTexture(GL_TEXTURE_2D, textureNumber);

      // Define how to scale the texture.
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

      // Create the texture from the compressed bytes.
      glCompressedTexImage2D(GL_TEXTURE_2D, 0, compressedFormat,
                             width, height, 0, size, pData);


      // Define your texture edge handling, here I'm clamping.
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_R, GL_CLAMP);
      glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
      glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
      // Free the buffer (allocated in LoadCompressedImage)
      free(pData);
      return YES;
   }
   else {
      // A compressed texture doesn't exist yet, run the standard texture code.
      NSImage *baseImage = [NSImage imageNamed:imageName];
      return [self setupGLImage:baseImage toTextureNumber:textureNumber];
   }
}

Next is the code to load a standard texture. Here we get the bitmap image rep and compress the texture to the GPU. Next we’ll grab the compressed texture and write it to disk.

- (bool) setupGLImage:(NSImage *)image
         toTextureNumber:(unsigned int)textureNumber
{
   NSData *imageData = [image TIFFRepresentation];
   NSBitmapImageRep *rep = [[NSBitmapImageRep alloc] initWithData:imageData];
   // Add your own error checking here.

   NSSize size = [rep size];
   // Again, more error checking.  Here we aren't using
   // MIPMAPs, so make sure your dimensions are a power of 2.

   int bpp = [rep bitsPerPixel];

   // Bind to the texture number.
   glBindTexture(GL_TEXTURE_2D, textureNumber);
   glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

   // Define how to scale the texture.
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

   // Figure out what our image format is (alpha?)
   GLenum format, internalFormat;
   if (bpp == 24) {
      format = GL_RGB;
      internalFormat = GL_COMPRESSED_RGB_S3TC_DXT1_EXT;
   }
   else if (bpp == 32) {
      format = GL_RGBA;
      internalFormat = GL_COMPRESSED_RGBA_S3TC_DXT5_EXT;
   }
   else {
      // Unsupported bit depth; bail out rather than use uninitialized formats.
      [rep release];
      return NO;
   }

   // Read in and compress the texture.
   glTexImage2D(GL_TEXTURE_2D, 0, internalFormat,
                size.width, size.height, 0,
                format, GL_UNSIGNED_BYTE, [rep bitmapData]);

   // If our compressed size is reasonable, write the compressed image to disk.
   GLint compressedSize;
   glGetTexLevelParameteriv(GL_TEXTURE_2D, 0,
                            GL_TEXTURE_COMPRESSED_IMAGE_SIZE,
                            &compressedSize);
   if ((compressedSize > 0) && (compressedSize < 100000000)) {
      // Allocate a buffer to read back the compressed texture.
      GLubyte *compressedBytes = malloc(sizeof(GLubyte) * compressedSize);

      // Read back the compressed texture.
      glGetCompressedTexImage(GL_TEXTURE_2D, 0, compressedBytes);

      // Save the texture to a file.
      SaveCompressedImage("/path/to/compressed/image", size.width, size.height,
                          internalFormat, compressedSize, compressedBytes);

      // Free our buffer.
      free(compressedBytes);
   }

   // Define your texture edge handling, again here I'm clamping.
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_R, GL_CLAMP);
   glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
   glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

   // Release the bitmap image rep.
   [rep release];

   return YES;
}

Finally, we have a few functions to write the file to disk and read it back. These functions were pulled almost verbatim from the OpenGL book. In the first code block above we called LoadCompressedImage to read the texture data from disk, and in the second block we called SaveCompressedImage to save the texture to disk. Nothing really special is going on here. We write a few parameters to the head of the file so the details are available when we read it back in. Bytes 0-3 of the file are the image width, 4-7 are the image height, 8-11 are the format (GL_COMPRESSED_RGB_S3TC_DXT1_EXT or GL_COMPRESSED_RGBA_S3TC_DXT5_EXT), 12-15 are the size of the image data in bytes, and bytes 16+ are the image data itself.
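As a sketch of that layout (the struct name here is just for illustration; the functions below read and write the four GLuints directly):

// Hypothetical picture of the 16-byte header described above.
typedef struct {
   GLuint width;              // bytes 0-3
   GLuint height;             // bytes 4-7
   GLuint compressedFormat;   // bytes 8-11
   GLuint size;               // bytes 12-15: byte count of the data that follows
} CompressedTextureHeader;    // bytes 16+ hold the compressed image data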

void SaveCompressedImage(const char *path, GLint width, GLint height,
                         GLenum compressedFormat, GLint size, GLubyte *pData)
{
   FILE *pFile = fopen(path, "wb");
   if (!pFile)
      return;

   GLuint info[4];

   info[0] = width;
   info[1] = height;
   info[2] = compressedFormat;
   info[3] = size;

   fwrite(info, 4, 4, pFile);
   fwrite(pData, size, 1, pFile);
   fclose(pFile);
}

GLubyte * LoadCompressedImage(const char *path, GLint *width, GLint *height,
                              GLenum *compressedFormat, GLint *size)
{
   FILE *pFile = fopen(path, "rb");
   if (!pFile)
      return NULL;

   GLuint info[4];

   fread(info, 4, 4, pFile);
   *width = info[0];
   *height = info[1];
   *compressedFormat = info[2];
   *size = info[3];

   GLubyte *pData = malloc(*size);
   fread(pData, *size, 1, pFile);
   fclose(pFile);

   // Caller is responsible for freeing pData when done.
   return pData;
}
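Putting the pieces together, a hypothetical call site might look something like this (the image name and texture ID handling are placeholders, not code from Seasonality):

// Generate a texture ID and load the image, compressing it on the first launch.
GLuint textureID;
glGenTextures(1, &textureID);

if ([self setupGLImageName:@"MapTile" toTextureNumber:textureID]) {
   // Later, when drawing, bind the compressed texture as usual.
   glBindTexture(GL_TEXTURE_2D, textureID);
}
else {
   NSLog(@"Failed to load texture for MapTile");
}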

Hopefully this will save someone development time in the future. If you catch any errors, let me know.

A Weather Developer's Journey Begins!

A few months ago an idea came to me for a new domain name I should pick up. I wasn’t sure what I was going to do with the domain yet, or if I would even use it at all, but I wanted to jump on it because it was available. It struck me as a good name, and I was a bit surprised that it was still available.

Zoom forward to a couple of weeks ago, when a different idea for a new website came to mind. Oftentimes I post detailed entries here specifically about the weather, or very weather-tech-related articles, when I run into issues while developing Seasonality. The thing is, I’m not sure my typical reader is interested in those postings, and occasionally I will refrain from posting about weather-related issues simply because I don’t think they would fit well on the *Coder blog.

But wait a second…I bought that domain awhile back; maybe that would work. Actually, it ended up being perfect for my new site idea. So I started working on it off and on. I’ve been happily using WordPress to host this blog for quite some time, so I decided to use the same platform for the new site. I set things up, customized a theme, and wrote a posting or two. I think it is finally ready to be revealed, and I wanted to share. The domain is weatherdeveloper.com, and the site is called “A Weather Developer’s Journey.” With full-time development of Seasonality 2 ramping up soon, I thought it would be a perfect time to start a site like this, as I’ll be spending a lot of time trying to overcome issues with data hosting, manipulation, and weather visualization.

If you’re at all interested in checking it out, please do so (Website, RSS Feed). I’ll most likely post links from here to the first few articles on the new site, just to get the ball rolling. The first article talks about finding and putting to use 90 meter elevation data.
