*Coder Blog

Life, Technology, and Meteorology

Month: September 2007

MicroNet G-Force MegaDisk NAS Review

If you have been following my Twitter feed, you know that I just ordered a 1TB NAS last week for the office network here. I wanted some no-fuss storage sitting on the network so I could back up my data and store some archive information there instead of burning everything to DVD. (In reality, I’ll still probably burn archive data to DVD just to have a backup.)

Earlier this month, MicroNet released the G-Force MegaDisk NAS (MDN1000). The features were good and the price was right so I bought one. It finally arrived today and I’ve been spending some time getting to know the system and performing some benchmarks.

When opening the box, the first thing that surprised me was the size of the device. It’s really not much bigger than two 3.5″ hard drives stacked on top of each other. The case is pretty sturdy, made out of aluminum, but the stand is a joke: it’s just two metal pieces with rubber pads on them, and you’re supposed to put one on each side to support the case. The result isn’t very sturdy, and it’s a pain to set up, so I doubt I’ll use them.

I had a few problems reaching the device on my network when I plugged it in. I had to cycle the power a couple of times before I was finally able to pick it up on the network and log in to the web interface. I’m guessing future firmware updates will make the setup process easier. It’s running Linux, which is nice. The firmware version is 2.6.1, so I’m guessing that means the kernel is version 2.6 (nmap identifies it as kernel 2.6.11 – 2.6.15). Hopefully it’s only a matter of time before someone hacks it to add ssh access. MicroNet’s website claims there is an embedded dual-core processor on board, which again sounds pretty cool. The OS requires just under 61MB of space on one of the hard drives. There are two 500GB drives in this unit. Both are Hitachi (HDT725050VLA360) models, which are SATA2 drives that run at 7200 RPM with 16MB of cache. From the web interface, it looks like the disks are mounted at /dev/hdc and /dev/hdd.

Disk management is pretty straightforward. You can select a format for each disk (ext2, ext3, fat32), and there is an option to encrypt the content on the disk. The drives are monitored via the SMART interface, and you can view the reports in detail via the web. By default, the drives come in a striped RAID format, but I was able to remove the RAID and access each disk separately (contrary to the documentation’s claims). Unfortunately, for some reason I was unable to access the second disk over NFS. It looks like you might be able to mess with the web configuration page to get around this limitation though.

Moving on to the RAID configuration, you can choose between RAID 0, RAID 1, and Linear (JBOD). Ext2 and ext3 are your filesystem options. Building a RAID 1 took a very long time (~ 4 hours), which I’m guessing is because the disks require a full sync of all 500GB of data when initializing such a partition.

So let’s bust out the benchmarks! I benchmarked by performing two different copies. One copy was a single 400.7MB file (LARGE FILE), and the other was a directory with 4,222 files totaling 68.7MB (SMALL FILES). All tests were performed over a gigabit Ethernet network from my 2.5GHz G5 desktop machine. Transfers were done via the Terminal with the time command, to remove any human error from the equation.
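Each test boiled down to timing a copy to or from the NAS mount, something like this (the mount point and file names here are placeholders):

time cp LARGE_FILE /Volumes/nas/
time cp -R small_files_dir /Volumes/nas/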

A note about testing Samba with SMALL FILES: I started running a write test and let it go for around 8 minutes. At that point, it was still only done copying around a quarter of the files, and the transfer rate averaged less than 20KB/sec. This was absurdly slow, so I didn’t bother waiting for the full test to finish. It’s difficult to say if this is a limitation of the NAS, Samba, Mac OS X, or all of the above.

Striped RAID (Standard)   NFS                    Samba
Write LARGE FILE          1:13 (5,544 KB/sec)    0:42 (9,542 KB/sec)
Read LARGE FILE           0:42 (9,769 KB/sec)    0:35 (11,723 KB/sec)
Write SMALL FILES         3:46 (310 KB/sec)      DNF
Read SMALL FILES          0:39 (1,759 KB/sec)    DNF

Mirrored RAID             NFS                    Samba
Write LARGE FILE          1:17 (5,328 KB/sec)    0:47 (8,730 KB/sec)
Read LARGE FILE           0:40 (10,257 KB/sec)   0:41 (10,007 KB/sec)
Write SMALL FILES         3:44 (314 KB/sec)      DNF
Read SMALL FILES          0:43 (1,636 KB/sec)    DNF

Separate Disks            NFS                    Samba
Write LARGE FILE          1:13 (5,620 KB/sec)    0:43 (9,542 KB/sec)
Read LARGE FILE           0:46 (8,919 KB/sec)    0:35 (11,723 KB/sec)
Write SMALL FILES         3:11 (368 KB/sec)      DNF
Read SMALL FILES          0:42 (1,675 KB/sec)    DNF

All of these were using standard mounting, either through the Finder’s browse window, or mount -t nfs with no options on the console. I decided to try tweaking the NFS parameters to see if I could squeeze any more speed out of it. The following results are all using a striped RAID configuration…

                    no options             wsize=16384,rsize=16384   wsize=16384,rsize=16384,noatime,intr
Write LARGE FILE    1:13 (5,544 KB/sec)    1:00 (6,838 KB/sec)       0:59 (6,954 KB/sec)
Read LARGE FILE     0:42 (9,769 KB/sec)    0:32 (12,822 KB/sec)      0:32 (12,822 KB/sec)
Write SMALL FILES   3:46 (311 KB/sec)      3:47 (310 KB/sec)         3:09 (372 KB/sec)
Read SMALL FILES    0:39 (1,759 KB/sec)    0:42 (1,675 KB/sec)       0:40 (1,758 KB/sec)
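For reference, the tweaked columns above correspond to mount options along these lines (the server name and paths are placeholders):

mount -t nfs -o wsize=16384,rsize=16384,noatime,intr nas:/share /Volumes/nas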

In summary, while this NAS isn’t necessarily the fastest out there, it’s certainly fast enough, especially after some tweaking. A RAID configuration doesn’t necessarily improve performance on this device; all of the transfer rates were about the same regardless of format. You’ll notice slightly slower speeds for a RAID 1, but the difference is minimal. Before tweaking, Samba had a clear lead in transfer rates on large files, but it was completely unusable with smaller files. After modifying the mount parameters, NFS seems to give the best of both worlds.

Update: I researched the Samba performance (or lack thereof) and found that it is not the fault of the NAS. Using a Windows XP box, writing small files went at a reasonable pace (around the same as using NFS above). Then, testing from my MacBook Pro with an OS that shall not be named, performance was similar to the Windows XP machine. I’m going to attribute this to a bug in the Samba code that was fixed somewhere between version 3.0.10 (on the G5) and version 3.0.25 (on the MacBook Pro).

Software Announcements

Some very cool apps were released/upgraded recently. Just wanted to send out some props here.

First, Daniel Jalkut has been working on MarsEdit 2.0 for quite some time, and I have to say the results are spectacular. The blog post window has been much improved, and now includes the ability to add new blog categories without going to my WordPress admin interface. This feature alone is worth the upgrade price. Overall, the app seems a lot slicker, and blends into Tiger very nicely.

Next is Gus Mueller’s newest application, Acorn. I was fortunate enough to be shown a demo of a beta version back at C4 last month. This really is a good alternative to Photoshop Elements. It’s easy to use, has a very clean interface, and has some advanced features like Layers and Core Image filters. Definitely give Acorn a look-see, and take advantage of the introductory pricing offered.

Using Compressed Textures in OpenGL

I’m not sure if it’s just me, but for some reason OpenGL coding involves a lot of trial and error before getting a feature such as lighting, blending, or texture mapping to work correctly. For the past few days, I have been working on adding texture compression to my OpenGL map test project. Ultimately, this code will be merged with the rest of the Seasonality source tree, and it’s going to look pretty cool.

Most OpenGL developers will use regular images and possibly compress them when loading them as a texture on the GPU. This is fairly straightforward, and just involves changing one line of code when loading the texture. Note that this is a huge gain when it comes to graphics memory savings: I was using about 128MB of VRAM with texture compression disabled and only around 30MB with compression enabled. I wanted to accomplish something a bit more difficult though. I’m going to be using several thousand textures, so I would like to have OpenGL compress them the first time Seasonality is launched, and then save the compressed images back to disk so subsequent launches will not require re-compressing the imagery.
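As a sketch of that simple approach (the variable names here are illustrative, not from my project), the only difference from a normal texture upload is passing a compressed internal format, which tells the driver to compress the pixels for you (this requires the GL_EXT_texture_compression_s3tc extension):

// Normal, uncompressed upload:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

// The one-line change: request a compressed internal format on upload.
glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
             width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);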

The problem I ran into was that not enough developers are using this technique to speed up their applications, so sample code was scarce. I found some in a book I bought a while back called “More OpenGL Game Programming,” but the code was written for Windows, and it didn’t work on Mac OS X. So I dove deep into the OpenGL API reference and hacked my way through it. The code below is a simplification of the method I’m using. It should integrate with your OpenGL application, but I can’t guarantee this completely because it is excerpted from my project. If you have a problem integrating it, post a comment or send me an email.

First, we have some code that will check for a compressed texture file on disk. If the compressed file doesn’t exist, then we are being launched for the first time and should create a compressed texture file.

- (bool) setupGLImageName:(NSString *)imageName
         toTextureNumber:(unsigned int)textureNumber
{
   GLint width, height, size;
   GLenum compressedFormat;
   GLubyte *pData = NULL;

   // Attempt to load the compressed texture data.
   if ((pData = LoadCompressedImage("/path/to/compressed/image", &width, &height,
                                    &compressedFormat, &size)) != NULL)
   {
      // Compressed texture was found, image bytes are in pData.
      // Bind to this texture number.
      glBindTexture(GL_TEXTURE_2D, textureNumber);

      // Define how to scale the texture.
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

      // Create the texture from the compressed bytes.
      // Create the texture from the compressed bytes.
      glCompressedTexImage2D(GL_TEXTURE_2D, 0, compressedFormat,
                             width, height, 0, size, pData);

      // Define your texture edge handling; here I'm clamping to the edge.
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
      // Free the buffer (allocated in LoadCompressedImage)
      free(pData);
      return YES;
   }
   else {
      // A compressed texture doesn't exist yet, run the standard texture code.
      NSImage *baseImage = [NSImage imageNamed:imageName];
      return [self setupGLImage:baseImage toTextureNumber:textureNumber];
   }
}

Next is the code to load a standard texture. Here we get the bitmap image rep and upload the image to the GPU with a compressed internal format. Then we read the compressed texture back and write it to disk.

- (bool) setupGLImage:(NSImage *)image
         toTextureNumber:(unsigned int)textureNumber
{
   NSData *imageData = [image TIFFRepresentation];
   NSBitmapImageRep *rep = [[NSBitmapImageRep alloc] initWithData:imageData];
   // Add your own error checking here.

   NSSize size = [rep size];
   // Again, more error checking.  Here we aren't using
   // MIPMAPs, so make sure your dimensions are a power of 2.

   int bpp = [rep bitsPerPixel];

   // Bind to the texture number.
   glBindTexture(GL_TEXTURE_2D, textureNumber);
   glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

   // Define how to scale the texture.
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

   // Figure out what our image format is (alpha?)
   GLenum format, internalFormat;
   if (bpp == 24) {
      format = GL_RGB;
      internalFormat = GL_COMPRESSED_RGB_S3TC_DXT1_EXT;
   }
   else if (bpp == 32) {
      format = GL_RGBA;
      internalFormat = GL_COMPRESSED_RGBA_S3TC_DXT5_EXT;
   }
   else {
      // Unsupported bit depth; bail out before uploading garbage.
      [rep release];
      return NO;
   }

   // Read in and compress the texture.
   glTexImage2D(GL_TEXTURE_2D, 0, internalFormat,
                size.width, size.height, 0,
                format, GL_UNSIGNED_BYTE, [rep bitmapData]);

   // If our compressed size is reasonable, write the compressed image to disk.
   GLint compressedSize;
   glGetTexLevelParameteriv(GL_TEXTURE_2D, 0,
                            GL_TEXTURE_COMPRESSED_IMAGE_SIZE,
                            &compressedSize);
   if ((compressedSize > 0) && (compressedSize < 100000000)) {
      // Allocate a buffer to read back the compressed texture.
      GLubyte *compressedBytes = malloc(sizeof(GLubyte) * compressedSize);

      // Read back the compressed texture.
      glGetCompressedTexImage(GL_TEXTURE_2D, 0, compressedBytes);

      // Save the texture to a file.
      SaveCompressedImage("/path/to/compressed/image", size.width, size.height,
                          internalFormat, compressedSize, compressedBytes);

      // Free our buffer.
      free(compressedBytes);
   }

   // Define your texture edge handling; again, I'm clamping to the edge.
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

   // Release the bitmap image rep.
   [rep release];

   return YES;
}

Finally, we have a few functions to write the file to disk and read it back. These functions were pulled almost verbatim from the OpenGL book. In the first code block above we called LoadCompressedImage to read the texture data from disk, and in the second code block we called SaveCompressedImage to save the texture to disk. Nothing really special is going on here. We write some parameters to the head of the file so that when we read it back in we have the details: bytes 0-3 of the file are the image width, bytes 4-7 are the image height, bytes 8-11 are the format (GL_COMPRESSED_RGB_S3TC_DXT1_EXT or GL_COMPRESSED_RGBA_S3TC_DXT5_EXT), bytes 12-15 are the size of the image data in bytes, and bytes 16 onward are the image data.

void SaveCompressedImage(const char *path, GLint width, GLint height,
                         GLenum compressedFormat, GLint size, GLubyte *pData)
{
   FILE *pFile = fopen(path, "wb");
   if (!pFile)
      return;

   GLuint info[4];

   info[0] = width;
   info[1] = height;
   info[2] = compressedFormat;
   info[3] = size;

   fwrite(info, 4, 4, pFile);
   fwrite(pData, size, 1, pFile);
   fclose(pFile);
}

GLubyte * LoadCompressedImage(const char *path, GLint *width, GLint *height,
                              GLenum *compressedFormat, GLint *size)
{
   FILE *pFile = fopen(path, "rb");
   if (!pFile)
      return 0;
   GLuint info[4];

   if (fread(info, 4, 4, pFile) != 4) {
      // Truncated or corrupt file.
      fclose(pFile);
      return 0;
   }
   *width = info[0];
   *height = info[1];
   *compressedFormat = info[2];
   *size = info[3];

   GLubyte *pData = malloc(*size);
   fread(pData, *size, 1, pFile);
   fclose(pFile);

   // The caller is responsible for freeing pData when done.
   return pData;
}
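To tie it together, here is a usage sketch (the texture name and error handling are hypothetical); the entry point above gets called once per texture when the app launches:

GLuint textureID;
glGenTextures(1, &textureID);
if (![self setupGLImageName:@"map_tile" toTextureNumber:textureID])
   NSLog(@"Couldn't set up the map_tile texture");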

Hopefully this will save someone development time in the future. If you catch any errors, let me know.
