*Coder Blog

Life, Technology, and Meteorology

Category: OpenGL

Using IOKit to Detect Graphics Hardware

After Seasonality Core 2 was released a couple of weeks ago, I received email from a few users reporting problems with the app. The common thread in all the reports was a single graphics card (in this case, the nVidia 7300). When the application launched, there would be several graphics artifacts in the map view (which is now written in OpenGL), and even outside the Seasonality Core window. It really sounded like I was using OpenGL to do something the nVidia 7300 doesn’t support.

I’m still in the process of working around the problem, but I wanted to make sure that any work-around would not affect the other 99% of my users who don’t have this graphics card. So I set out to find a method of detecting which graphics cards are installed in a user’s Mac. You can use the system_profiler terminal command to do this:

system_profiler SPDisplaysDataType

But running an external process from within the app is slow, and its output can be difficult to parse reliably. Worse, if the system_profiler command ever changes or goes away, the application code breaks. I continued looking…

Eventually, I found that I might be able to get this information from IOKit. If you run the command ioreg -l, you’ll get a lengthy tree of hardware present in your Mac. I’ve used IOKit in my code before, so I figured I would try to do that again. Here is the solution I came up with:

// Check the PCI devices for video cards.
CFMutableDictionaryRef match_dictionary = IOServiceMatching("IOPCIDevice");

// Create an iterator to go through the found devices.
io_iterator_t entry_iterator;
if (IOServiceGetMatchingServices(kIOMasterPortDefault,
                                 match_dictionary,
                                 &entry_iterator) == kIOReturnSuccess) {
  // Actually iterate through the found devices.
  io_registry_entry_t serviceObject;
  while ((serviceObject = IOIteratorNext(entry_iterator))) {
    // Put this service object's properties into a dictionary object.
    CFMutableDictionaryRef serviceDictionary;
    if (IORegistryEntryCreateCFProperties(serviceObject,
                                          &serviceDictionary,
                                          kCFAllocatorDefault,
                                          kNilOptions) != kIOReturnSuccess) {
      // Failed to create a service dictionary, release and go on.
      IOObjectRelease(serviceObject);
      continue;
    }

    // If this is a GPU listing, it will have a "model" key
    // that points to a CFDataRef.
    const void *model = CFDictionaryGetValue(serviceDictionary, CFSTR("model"));
    if (model != nil) {
      if (CFGetTypeID(model) == CFDataGetTypeID()) {
        // Create a string from the CFDataRef.
        NSString *s = [[NSString alloc] initWithData:(NSData *)model
                                            encoding:NSASCIIStringEncoding];
        NSLog(@"Found GPU: %@", s);
        [s release];
      }
    }

    // Release the dictionary created by IORegistryEntryCreateCFProperties.
    CFRelease(serviceDictionary);

    // Release the serviceObject returned by IOIteratorNext.
    IOObjectRelease(serviceObject);
  }

  // Release the entry_iterator created by IOServiceGetMatchingServices.
  IOObjectRelease(entry_iterator);
}

Using Compressed Textures in OpenGL

I’m not sure if it’s just me, but for some reason OpenGL coding involves a lot of trial and error before getting a feature such as lighting, blending, or texture mapping to work correctly. The past few days I have been working on adding texture compression to my OpenGL map test project. Ultimately, this code will be merged with the rest of the Seasonality source tree, and it’s going to look pretty cool.

Most OpenGL developers use regular images and have the GPU compress them when loading them as a texture. This is fairly straightforward, and involves changing just one line of code when loading the texture. The graphics memory savings are huge: I was using about 128MB of VRAM with texture compression disabled and only around 30MB with it enabled. I wanted to accomplish something a bit more difficult though. I’m going to be using several thousand textures, so I would like OpenGL to compress them the first time Seasonality is launched, and then save the compressed images back to disk so subsequent launches will not require re-compressing the imagery.

The problem I ran into was that not enough developers use this technique to speed up their applications, so sample code was scarce. I found some in a book I bought a while back called “More OpenGL Game Programming,” but the code was written for Windows and didn’t work on Mac OS X. So I dove into the OpenGL API reference and hacked my way through it. The code below is a simplification of the method I’m using. It should integrate with your OpenGL application, but I can’t guarantee that completely because it is excerpted from my project. If you have problems integrating it, post a comment or send me an email.

First, we have some code that will check for a compressed texture file on disk. If the compressed file doesn’t exist, then we are being launched for the first time and should create a compressed texture file.

- (bool) setupGLImageName:(NSString *)imageName
          toTextureNumber:(unsigned int)textureNumber
{
   GLint width, height, size;
   GLenum compressedFormat;
   GLubyte *pData = NULL;

   // Attempt to load the compressed texture data.
   if ((pData = LoadCompressedImage("/path/to/compressed/image", &width, &height,
                                    &compressedFormat, &size))) {
      // Compressed texture was found, image bytes are in pData.
      // Bind to this texture number.
      glBindTexture(GL_TEXTURE_2D, textureNumber);

      // Define how to scale the texture.
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

      // Create the texture from the compressed bytes.
      glCompressedTexImage2D(GL_TEXTURE_2D, 0, compressedFormat,
                             width, height, 0, size, pData);

      // Define your texture edge handling, here I'm clamping.
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);

      // Free the buffer (allocated in LoadCompressedImage).
      free(pData);
      return YES;
   }
   else {
      // A compressed texture doesn't exist yet, run the standard texture code.
      NSImage *baseImage = [NSImage imageNamed:imageName];
      return [self setupGLImage:baseImage toTextureNumber:textureNumber];
   }
}

Next is the code to load a standard texture. Here we get the bitmap image rep and upload it to the GPU with a compressed internal format, then read the compressed texture back and write it to disk.

- (bool) setupGLImage:(NSImage *)image
      toTextureNumber:(unsigned int)textureNumber
{
   NSData *imageData = [image TIFFRepresentation];
   NSBitmapImageRep *rep = [[NSBitmapImageRep alloc] initWithData:imageData];
   // Add your own error checking here.

   NSSize size = [rep size];
   // Again, more error checking.  Here we aren't using
   // MIPMAPs, so make sure your dimensions are a power of 2.

   int bpp = [rep bitsPerPixel];

   // Bind to the texture number.
   glBindTexture(GL_TEXTURE_2D, textureNumber);
   glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

   // Define how to scale the texture.
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

   // Figure out what our image format is (alpha?)
   GLenum format, internalFormat;
   if (bpp == 24) {
      format = GL_RGB;
      internalFormat = GL_COMPRESSED_RGB_S3TC_DXT1_EXT;
   }
   else if (bpp == 32) {
      format = GL_RGBA;
      internalFormat = GL_COMPRESSED_RGBA_S3TC_DXT5_EXT;
   }
   else {
      // Unsupported pixel format.
      [rep release];
      return NO;
   }

   // Read in and compress the texture.
   glTexImage2D(GL_TEXTURE_2D, 0, internalFormat,
                size.width, size.height, 0,
                format, GL_UNSIGNED_BYTE, [rep bitmapData]);

   // If our compressed size is reasonable, write the compressed image to disk.
   GLint compressedSize;
   glGetTexLevelParameteriv(GL_TEXTURE_2D, 0,
                            GL_TEXTURE_COMPRESSED_IMAGE_SIZE,
                            &compressedSize);
   if ((compressedSize > 0) && (compressedSize < 100000000)) {
      // Allocate a buffer to read back the compressed texture.
      GLubyte *compressedBytes = malloc(sizeof(GLubyte) * compressedSize);

      // Read back the compressed texture.
      glGetCompressedTexImage(GL_TEXTURE_2D, 0, compressedBytes);

      // Save the texture to a file.
      SaveCompressedImage("/path/to/compressed/image", size.width, size.height,
                          internalFormat, compressedSize, compressedBytes);

      // Free our buffer.
      free(compressedBytes);
   }

   // Define your texture edge handling, again here I'm clamping.
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);

   // Release the bitmap image rep.
   [rep release];

   return YES;
}

Finally, we have a few functions to write the file to disk and read it back. These functions were pulled almost verbatim from the OpenGL book. In the first code block above we called LoadCompressedImage to read the texture data from disk; in the second, we called SaveCompressedImage to save the texture to disk. Nothing really special is going on here. We write some parameters at the head of the file so that we have the details when we read it back in. Bytes 0-3 of the file are the image width, 4-7 the image height, 8-11 the format (GL_COMPRESSED_RGB_S3TC_DXT1_EXT or GL_COMPRESSED_RGBA_S3TC_DXT5_EXT), 12-15 the size of the image data in bytes, and bytes 16 onward the image data itself.

void SaveCompressedImage(const char *path, GLint width, GLint height,
                         GLenum compressedFormat, GLint size, GLubyte *pData)
{
   FILE *pFile = fopen(path, "wb");
   if (!pFile)
      return;

   GLuint info[4];

   info[0] = width;
   info[1] = height;
   info[2] = compressedFormat;
   info[3] = size;

   fwrite(info, 4, 4, pFile);
   fwrite(pData, size, 1, pFile);
   fclose(pFile);
}

GLubyte * LoadCompressedImage(const char *path, GLint *width, GLint *height,
                              GLenum *compressedFormat, GLint *size)
{
   FILE *pFile = fopen(path, "rb");
   if (!pFile)
      return 0;

   GLuint info[4];

   fread(info, 4, 4, pFile);
   *width = info[0];
   *height = info[1];
   *compressedFormat = info[2];
   *size = info[3];

   GLubyte *pData = malloc(*size);
   fread(pData, *size, 1, pFile);
   fclose(pFile);

   // Free pData when done...
   return pData;
}

Hopefully this will save someone development time in the future. If you catch any errors, let me know.

© 2017 *Coder Blog
