*Coder Blog

Life, Technology, and Meteorology


Setting up a Mac/iPhone VPN to a Cisco ASA Router

I bought a Cisco ASA 5505 about 6 months ago, and love it so far. While setting up a VPN between my iPod touch and the ASA was straightforward, I was less fortunate when trying to get the same thing working from my MacBook Pro. Here’s a description of how to configure the ASA VPN so both devices work.

First, let me give a brief outline of what I am trying to do. I want both my iPod touch and my MacBook Pro to be able to connect to the Cisco ASA box over a VPN interface. Once the VPN has been established, I want all of my internet traffic to go first to the ASA and then out to the rest of the internet from there (known as full tunneling in network jargon, as opposed to split-tunneling, where only internal traffic uses the VPN). With a default VPN setup on the ASA, this works fine from the iPhone, but from the Mac I was only able to access the internal network; the rest of my internet traffic just wouldn't get sent. Note that this configuration will not work with Mac OS X's L2TP VPN client; you'll need to install the Cisco VPN client instead.

The solution isn't too difficult. First, set up a fairly standard VPN configuration on the ASA. Use the VPN Wizard in the ASDM console with the following settings…

Page 1
VPN Tunnel Type: Remote Access
VPN Tunnel Interface: outside
Check the box to enable inbound IPsec sessions to bypass interface access lists.

Page 2
Select Cisco VPN Client for the client type.

Page 3
Select Pre-shared key for authentication method, typing a password into the Pre-Shared Key field.
Type in a Tunnel Group Name to use, which will be used again later. I’ll use VPNGroup as an example.

Page 4
Authenticate using the local user database.

Page 5
Make sure your ASDM username is in the list on the right side, so you are able to connect to the VPN with that account.

Page 6
If you haven't already, create an IP address pool to use for VPN connections. This is an IP range within your internal network. I use 192.168.1.128 with a subnet mask of 255.255.255.240.

Page 7
Type your primary and secondary DNS servers into the box. I also set the default domain name to my domain (gauchosoft.com).

Page 8
Leave everything default: Encryption is 3DES, Authentication is SHA, and DH Group is 2.

Page 9
Again, leave everything default. Encryption is 3DES and Authentication is SHA.

Page 10
Leave everything as-is, except check the box at the bottom to enable split tunneling.

Page 11
Click Finish and you are done.

Now, your iPhone should be working just fine. Just go into the VPN preferences and set up a new IPSec configuration with your server, user account/password, and group name/pre-shared secret. Unfortunately, the Mac will not be able to access the entire internet when connected to the VPN. To fix this issue, some additional configuration needs to take place in a terminal connection to the ASA box. If you haven't already, enable SSH access to the ASA box and log in. Then run the following commands (explanatory comments are prefixed with !):

cisco-gw> enable
Password: your password here
cisco-gw# config terminal

cisco-gw(config)# access-list outside_nat extended permit ip 192.168.1.128 255.255.255.240 any
! Use your VPN pool network and subnet mask as the source address and mask above.
cisco-gw(config)# nat (outside) 1 access-list outside_nat

cisco-gw(config)# group-policy DfltGrpPolicy attributes
cisco-gw(config-group-policy)# dns-server value 208.67.222.222
! Replace the IP above with your first DNS server.
cisco-gw(config-group-policy)# nem enable
cisco-gw(config-group-policy)# exit

cisco-gw(config)# group-policy VPNGroup attributes
! Replace VPNGroup above with the tunnel group name you chose earlier.
cisco-gw(config-group-policy)# split-tunnel-policy tunnelall
cisco-gw(config-group-policy)# split-tunnel-network-list none
cisco-gw(config-group-policy)# exit

cisco-gw(config)# write memory

That’s it! Just open the Cisco VPN Client on your Mac and add a new connection profile with the group and user settings you configured on the ASA.
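Once the client connects, it's worth verifying that traffic is really taking the tunnel rather than just reaching the internal network. A rough sanity check from the Mac's Terminal (interface names and first hops will vary with the Cisco client version, so treat this as a sketch):

netstat -rn | head -20          # the default route should now point at the VPN interface
traceroute -n www.example.com   # the first hop should be inside the ASA's network, not your local router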

Wireless Network

When upgrading to the ASA 5505 router, I was left in a situation where there would be two routers on my home office network: the ASA acting as the main wired router, and my old Linksys router acting as a host for wireless clients. The ASA was connected to the cable modem from my provider, and I set the internal network to 192.168.1.0. The wireless router was a host on that internal network with a WAN IP of 192.168.1.5 and a LAN network of 192.168.5.0. This worked fine for accessing hosts on the internet, but it was less than ideal when trying to reach the wired internal network from a wireless computer. Because of the firewall and NAT happening on the Linksys device, wireless devices were second-class citizens on the LAN.

There was this little radio button on the Linksys router that would switch the device from Gateway mode to Router mode. Hmm, that looked promising, so I tried it. This was nice, because NAT was no longer active…a host on the 192.168.1.0 network could talk to a host on the wireless 192.168.5.0 network. The drawback was that I would have to add a separate route on wired hosts to send traffic for the 192.168.5.0 network through 192.168.1.5 instead of the default ASA gateway at 192.168.1.1. With the relatively small size of my network here, that's not much of a problem, but I still felt there should be a better way.
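For what it's worth, that per-host route is a one-liner on each machine; on a Mac it would look something like this (with the equivalent route add on Linux or Windows boxes):

sudo route -n add -net 192.168.5.0/24 192.168.1.5   # send wireless-subnet traffic via the Linksys at .5 instead of the ASA at .1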

Since I wanted to stick with one default route of 192.168.1.1, I looked into adding another VLAN to the ASA box, to see if it could route packets to 192.168.5.0 down the port that connects to the wireless router. Unfortunately, my ASA is only licensed for 3 VLANs which are all in use (outside link, inside link, and DMZ). I could spend a few hundred bucks upgrading my ASA license to support more VLANs, but it just didn’t seem worth it.

Another option is to add a managed switch to the internal network and use that to setup VLANs. New hardware is always fun, but again this would cost a couple hundred bucks and there has to be another way…

Finally, the solution became immediately obvious…so obvious that it’s amazing I hadn’t thought of it before. Instead of connecting a wire from an internal port on the ASA to the WAN port on the Linksys, I tried connecting from the same internal port on the ASA to an internal LAN port on the Linksys, leaving the WAN port on the Linksys unused.

This setup works perfectly. I changed the internal network of the Linksys to the same 192.168.1.0 as the ASA internal network, and gave the Linksys an internal IP of 192.168.1.2. The ASA is already running a DHCP server on the 192.168.1.0 network, so I disabled the Linksys DHCP server. Wireless hosts are now first-class citizens on this network…

New Disk

Having an application like Seasonality that relies upon online services requires those services to be reliable. This means any server I host has to be online as close to 100% of the time as possible. Website and email services are pretty easy to host out to a shared hosting provider for around $10-20/month. It’s inexpensive, and you can leave the server management to the hosting provider. For most software companies, this is as far as you need to go.

This also worked okay when Seasonality was simply grabbing some general data from various sources. As soon as I began supporting international locations, I stepped out of the bounds of shared hosting. The international forecasts need to be hosted on a pretty heavy-duty server. It pegs a CPU for about an hour to generate the forecasts, and the server updates the forecasts twice a day. Furthermore, the dataset is pretty large, so a fast disk subsystem is needed.

So I have a colocated server, which I've talked about before. It had worked out pretty well until earlier this week, when one of the 4 disks in the RAID died. Usually, when a disk in a RAID dies, the system should remain online and continue working (as long as you aren't using RAID 0). In this case the server crashed, though, and I was a bit puzzled as to why.

After doing some research, I found that the server most likely crashed because of an additional partition on the failed disk: a swap partition. When setting up the server, I had configured swap across all four disks, hoping that if the system ever did dip into swap a little bit, spreading it out would be much faster than hammering a single disk with activity. The logic seemed good at the time, but looking back it was a really bad move. In the future, I'll stick to having swap on just a single disk (probably the same one as the / partition); that way only one of the four disks can take active swap down with it, cutting the chance that a random disk failure crashes the system by 75%.
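As a sketch of what that looks like in practice (the device names here are illustrative, not my actual layout), /etc/fstab keeps a single swap entry on the same disk as the root partition:

# /etc/fstab excerpt -- swap lives on the same disk as /, so a failure of any
# other disk in the box can't take out active swap
/dev/sda1   /      ext3   defaults   0   1
/dev/sda2   none   swap   sw         0   0

Running swapon -s afterwards confirms which swap devices are actually in use.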

After getting a new disk overnighted from Newegg, I replaced the failed mechanism and added it back into the RAID, so the system is back up and running again.

This brings up the question of how likely something like this is to happen again. The server is about two and a half years old, so disk failures at this age are reasonable, especially considering the substantial load on the disks in this server (blinky lights, all day long). At this point, I'm thinking of just replacing the other 3 disks. That way, I will have scheduled downtime instead of unexpected downtime. With the constantly dropping cost of storage, I'll be able to replace the 300GB disks with 750GB models. It's not that I actually need the extra space (the current 300s are only about half full), but I need at least 4 mechanisms to get acceptable database performance.

In the future, I will probably look toward getting hot-swappable storage. I’ve had to replace 2 disks now since I built the server, and to have the option of just sliding one disk out and replacing it with a new drive without taking the server offline is very appealing.

MicroNet G-Force MegaDisk NAS Review

If you have been following my Twitter feed, you know that I just ordered a 1TB NAS last week for the office network here. I wanted some no-fuss storage sitting on the network so I could back up my data and store some archive information there instead of burning everything to DVD. (In reality, I'll still probably burn archive data to DVD just to have a backup.)

Earlier this month, MicroNet released the G-Force MegaDisk NAS (MDN1000). The features were good and the price was right so I bought one. It finally arrived today and I’ve been spending some time getting to know the system and performing some benchmarks.

When opening the box, the first thing that surprised me was the size of the device. It's really not much bigger than two 3.5″ hard drives stacked on top of each other. The case is pretty sturdy, made out of aluminum, but the stand is a joke. Basically, it comes with two metal pieces with rubber pads on them, and you're supposed to put one on each side to support the case. It's not very sturdy, and a pain to set up like this, so I doubt I'll use them.

I had a few problems reaching the device on my network when I plugged it in. I had to cycle the power a couple of times before I was finally able to pick it up on the network and login to the web interface. I’m guessing future firmware updates will make the setup process easier. It’s running Linux, which is nice. The firmware version is 2.6.1, so I’m guessing that means the kernel is version 2.6 (nmap identifies it as kernel 2.6.11 – 2.6.15). Hopefully it’s only a matter of time before someone’s hacked it with ssh access. MicroNet’s website claims there is an embedded dual-core processor on board, which again sounds pretty cool. The OS requires just under 61MB of space on one of the hard drives. There are two 500GB drives in this unit. Both are Hitachi (HDT725050VLA360) models, which are SATA2 drives that run at 7200 RPM with 16MB of cache. From the web interface, it looks like the disks are mounted at /dev/hdc and /dev/hdd.

Disk management is pretty straightforward. You can select a format for each disk (ext2, ext3, fat32), and there is an option to encrypt the content on the disk. The drives are monitored via the SMART interface, and you can view the reports in detail via the web. By default, the drives come in a striped RAID format, but I was able to remove the RAID and access each disk separately (contrary to the documentation’s claims). Unfortunately, for some reason I was unable to access the second disk over NFS. It looks like you might be able to mess with the web configuration page to get around this limitation though.
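When debugging the NFS problem, a useful first step is to ask the device what it actually exports. Assuming the NAS answers standard NFS export queries (the IP below is a placeholder), showmount does the trick:

showmount -e 192.168.1.50    # list the NFS exports the NAS is advertising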

Moving on to the RAID configuration, you can choose between RAID 0, RAID 1, and Linear (JBOD). Ext2 and ext3 are your filesystem options. Building a RAID 1 took a very long time (~ 4 hours), which I’m guessing is because the disks require a full sync of all 500GB of data when initializing such a partition.

So let's bust out the benchmarks! I benchmarked by performing 2 different copies. One copy was a single 400.7MB file (LARGE FILE), and the other was a directory with 4,222 files totaling 68.7MB (SMALL FILES). All tests were performed over a gigabit Ethernet network from my 2.5GHz G5 desktop machine. Transfers were done via the Terminal with the time command, to remove any human error from the equation.
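Concretely, each write figure in the tables below came from a plain cp wrapped in time, roughly like this (file names and mount points are placeholders); the read tests were the same copies run in the opposite direction:

time cp large_file.bin /Volumes/nas/                 # LARGE FILE test: one 400.7MB file
time cp -R small_files/ /Volumes/nas/small_files     # SMALL FILES test: 4,222 files, 68.7MB total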

A note about testing Samba with SMALL FILES: I started running a write test and let it go for around 8 minutes. At that point, it was still only done copying around a quarter of the files, and the transfer rate averaged less than 20KB/sec. This was absurdly slow, so I didn’t bother waiting for the full test to go through. It’s difficult to say if this is a limitation of the NAS, Samba, Mac OS X or all of the above.

Striped RAID (Standard)   NFS                    Samba
Write LARGE FILE          1:13 (5,544 KB/sec)    0:42 (9,542 KB/sec)
Read LARGE FILE           0:42 (9,769 KB/sec)    0:35 (11,723 KB/sec)
Write SMALL FILES         3:46 (310 KB/sec)      DNF
Read SMALL FILES          0:39 (1,759 KB/sec)    DNF

Mirrored RAID             NFS                    Samba
Write LARGE FILE          1:17 (5,328 KB/sec)    0:47 (8,730 KB/sec)
Read LARGE FILE           0:40 (10,257 KB/sec)   0:41 (10,007 KB/sec)
Write SMALL FILES         3:44 (314 KB/sec)      DNF
Read SMALL FILES          0:43 (1,636 KB/sec)    DNF

Separate Disks            NFS                    Samba
Write LARGE FILE          1:13 (5,620 KB/sec)    0:43 (9,542 KB/sec)
Read LARGE FILE           0:46 (8,919 KB/sec)    0:35 (11,723 KB/sec)
Write SMALL FILES         3:11 (368 KB/sec)      DNF
Read SMALL FILES          0:42 (1,675 KB/sec)    DNF

All of these were using standard mounting, either through the Finder’s browse window, or mount -t nfs with no options on the console. I decided to try tweaking the NFS parameters to see if I could squeeze any more speed out of it. The following results are all using a striped RAID configuration…

                    no options             wsize=16384,           wsize=16384, rsize=16384,
                                           rsize=16384            noatime, intr
Write LARGE FILE    1:13 (5,544 KB/sec)    1:00 (6,838 KB/sec)    0:59 (6,954 KB/sec)
Read LARGE FILE     0:42 (9,769 KB/sec)    0:32 (12,822 KB/sec)   0:32 (12,822 KB/sec)
Write SMALL FILES   3:46 (311 KB/sec)      3:47 (310 KB/sec)      3:09 (372 KB/sec)
Read SMALL FILES    0:39 (1,759 KB/sec)    0:42 (1,675 KB/sec)    0:40 (1,758 KB/sec)
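For reference, mounting with the options from the last column looks something like this on the Mac (the server name and export path below are placeholders):

sudo mkdir -p /Volumes/nas
sudo mount -t nfs -o rsize=16384,wsize=16384,noatime,intr megadisk:/mnt/array /Volumes/nas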

In summary, while this NAS isn’t necessarily the fastest out there, it’s certainly fast enough, especially after some tweaking. A RAID configuration doesn’t necessarily improve performance on this device. All of the transfer rates were about the same, regardless of format. You’ll notice slightly slower speeds for a RAID 1, but the difference is minimal. Before tweaking, Samba had a clear lead in transfer rates on large files, but it was completely unusable with smaller files. After modifying the NFS mount parameters, it seems to give the best of both worlds.

Update: I researched the Samba performance (or lack thereof) and found that it is not the fault of the NAS. Using a Windows XP box, writing small files went at a reasonable pace (around the same as using NFS above). Then, testing from my MacBook Pro with an OS that shall not be named, performance was similar to the Windows XP machine. I'm going to attribute this to a bug in the older Samba client on the G5 (version 3.0.10) that appears to be fixed by the 3.0.25 release on the MacBook Pro.

DynDNS Updater 2.0 Public Beta

This just went live in the last 24 hours. Along with working on my own Gaucho Software products such as Seasonality, Dash Monitors, and XRG, I also consult for other firms. I've been working with Jeremy and the rest of the DynDNS team on their Mac update client for quite some time. Just recently, version 2.0 of the project reached a stage where it's ready for public beta consumption. The interface, developed by FJ de Kermadec and his team at Webstellung, is top notch, and does a great job hiding the complexity of everything going on behind the scenes. We still have some work to do before a final 2.0 release is ready, but I think the app is looking pretty good thus far.

If you use DynDNS services, check it out. If you aren’t using DynDNS services, give their website a look-see to determine if you should be using their services. 🙂

Server Colocation

I’m happy to say that after spending the past 5 months in the pipeline, Rio (the Gaucho Software Forecast Server) has now been moved to a colocation facility. The facility provides redundant power, cooling, and several connections to Tier 1 network providers, which should definitely increase the server’s overall reliability. Previously, this server was located at my office. I had it hooked up to a UPS battery backup that gave it about 30 minutes of runtime, but it’s a far cry from true redundant power. Also, over the past several months, it seems that my business network connection reliability has been slowly decreasing. This should fix that issue.

Rio in the rack...

Before moving the server, I thought it would be a good idea to add a bit more memory and hard drive space to the box. I bumped up the memory to the motherboard's max of 4GB, which gives some more breathing room for the virtual machines I'm running via VMware. I also added another 300GB hard drive and switched from a 3 disk RAID 5 configuration to a 4 disk RAID 10. I had been reading on PostgreSQL mailing lists that for configurations with fewer than 6-8 hard drives, RAID 10 was substantially faster than RAID 5. RAID 5 has always been infamously slow at writes, but the read speeds are pretty good in general, so I had my doubts. Well, my doubts were definitely unfounded, because this single hard drive upgrade has given a dramatic performance increase. Previously, when running the forecast generator (twice a day), the processing would take approximately 2 hours. Now, after adding the 4th disk and switching to RAID 10 using an XFS filesystem (more on this below), the same process takes only 1 hour and 10 minutes.
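For anyone curious, assembling a 4-disk RAID 10 with Linux's md is a one-liner; roughly like this (device names and partition numbers here are assumptions, not my exact layout):

mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd4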

Rio inside...

Since I was starting with a fresh RAID partition, I thought I should put some time into looking at different Linux filesystems. I used Bonnie++ to perform the disk benchmarks with a 16GB temp file size. First, here are the results on a standard ext3 filesystem:

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
rio.gauchoso 16000M 50719  83 103617  41 43095  10 51984  76 117808  13 285.4   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  2730  98 +++++ +++ +++++ +++  3024  99 +++++ +++  9129  99

Not bad…just over 100MB/sec writes and 117MB/sec reads. Notice the random seeks/sec value of 285.4/sec and file creates of 2730/sec. On a database system, the disk heads are given quite a workout, reading from indexes and data tables all over the disk. So seeks/sec performance was important to me. Memory cache helps, but my database is around 30GB on disk, so caching only goes so far. Overall, the ext3 numbers sounded pretty good, but I didn’t have anything to really compare them to.
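For reproducibility, the Bonnie++ runs were along these lines (the scratch directory is a placeholder, and 16000 MB matches the 16GB temp file mentioned above):

bonnie++ -d /rio/tmp -s 16000 -u root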

I decided to try out SGI’s XFS filesystem. After seeing several benchmarks online between filesystems like ReiserFS, JFS, Ext3, and XFS, it seemed that XFS almost always had the best performance in general, so I gave it a go. XFS has a lot of nice features, including tools to defragment a partition while the system is active, the use of B+ trees for volume indexes (resulting in a much greater efficiency when you have a ton of files in a single directory), and a pretty decent tool to repair broken filesystems as well. I reformatted the partition and ran Bonnie++ again with these results:

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
rio.gauchoso 16000M 55354  79 126942  21 32567   6 47537  70 126927  14 415.8   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  3917   2 +++++ +++  2662   3  3102   5 +++++ +++  2612   3

Write speed increased a healthy 22% to almost 127MB/sec, and read speed increased almost 8% to a similar 127MB/sec rate. But look closer at the seeks/sec and creates/sec rates… Seeks/sec increased an incredible 45% to 415.8/sec, and file creates improved 43% to 3917/sec. The drawback? Deleting files is quite a bit slower, 71% slower to be exact. To me, this tradeoff was well worth the gains, as it’s fairly rare for me to be deleting lots of files on the server. I have noticed a slight performance degradation while using rm with a lot of files, but it’s still a RAID so performance is certainly acceptable.
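For completeness, the switch to XFS itself is only a couple of commands; roughly (the mount option is my choice here, and /rio matches the dd test below):

mkfs.xfs /dev/md1                 # create the XFS filesystem on the RAID 10 array
mount -o noatime /dev/md1 /rio    # mount it; noatime skips access-time updates
xfs_fsr -v /rio                   # the online defragmenter mentioned above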

It’s not good to use just a single benchmarking tool, so I checked the validity of a couple of Bonnie++’s claims with a few simple dd commands, this time using 8GB file sizes:

root@rio:/rio# dd if=/dev/zero of=/rio/bench bs=8192 count=1000000
1000000+0 records in
1000000+0 records out
8192000000 bytes (8.2 GB) copied, 63.7116 seconds, 129 MB/s
root@rio:/rio# dd if=/rio/bench of=/dev/null bs=8192 count=1000000
1000000+0 records in
1000000+0 records out
8192000000 bytes (8.2 GB) copied, 59.2728 seconds, 138 MB/s

Those match up pretty well on the writes, and reading from the disk sequentially with dd is even faster than Bonnie++ claims.

Overall, I’m pretty pleased with the upgrade. I’m even happier to have this server at a datacenter. This should give me a lot of room to grow with hosting more weather data for upcoming Seasonality features, and also gives me a good server to run some other services on as well.

Rio Benchmarks

After installing the new Athlon X2 4600+ processor in Rio, I have to say I’m very impressed with the performance gains. Not only was I able to re-install Kubuntu server and get all the necessary packages installed again quickly, but the overall responsiveness of the machine is greatly improved while multitasking. This is especially noticeable with running 2 virtual servers using VMware Server.

So what about the benchmarks? I decided to start with the svnmark that Luis introduced on his blog, and downloaded Subversion 1.3.0 (1.3.2 is available, but my previous benchmarks used 1.3.0). After a couple of runs, I found make -j4 to be the quickest. Here are the numbers:

Mac Pro Quad 3GHz: 0:53
Dual Core Athlon 64 2.4GHz: 1:27
Quad 2.5GHz G5: 1:39
MacBook Pro Dual 1.83GHz: 2:11
Dual 2.5GHz G5: 2:35
Single Core Athlon 64 2GHz (same server before upgrade): 2:59

After running this test and seeing the Athlon X2 compile faster than even a Quad G5, I’m pretty happy. Granted, the OS is Linux and not Mac OS X, but I doubt Linux would be that much more efficient when compiling software using gcc.
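For anyone who wants to reproduce the comparison, the timed portion boils down to something like this (the tarball name and configure flags are from memory, so treat it as a sketch):

tar xzf subversion-1.3.0.tar.gz
cd subversion-1.3.0
./configure > /dev/null
time make -j4 > /dev/null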

So how does it stack up with the forecast processing I need the server to do? Well, in this case I don't have any solid benchmarks; I'm just going from memory here. The old processor was able to generate and serve a forecast in 0.4 seconds, where the new one can do the same request in just under 0.3 seconds, a solid 25% improvement. This is all sequential code, so it doesn't take into account the availability of a second processor. The forecast update is also a lot quicker, and with a second core the machine should still be able to handle connections from Seasonality to generate forecasts while updating the forecast database back-end.

Rio Upgrade

With the additional resource requirement I wrote about a couple of weeks ago, I ended up deciding it was time to upgrade the server here (Rio) with some additional CPU hardware. When building Rio late last year, I wanted to make sure the hardware was fairly upgradable. The easy choice at the time was to go with Athlon 64 processors, since I could start with a pretty basic 2GHz single core chip, and have the option to upgrade to a dual-core CPU later on. Well, that time is now, and today the new processor arrived. I ended up purchasing an Athlon 64 X2 4600+ processor, which boils down to a 2.4GHz dual core CPU. Fortunately, with AMD's price drop just a couple of months ago, I didn't pay much more for this processor than I did for the original.

One thing I was a bit surprised with was the difference between the new and old heat-sinks. I wasn’t expecting much of a difference between retail CPUs in the same processor line-up, but the new one is of a much higher quality. Here’s a picture…

So now I have to do some real-world benchmarking to find out just how much faster this CPU will go. I suspect the database importing times will improve dramatically, and the server will be much more usable while the update is taking place with the additional core. I’ll probably post again here with some benchmarks when I’ve had a chance to try things out.

With most of the forecast back-end work complete, I’m hoping to release a public beta sometime later this week or maybe next week. I just need to finish tweaking performance for the new CPU and smooth out some database replication issues. It sure will be a relief to have this new forecast system online.

Another one bites the dust…

I’m very thankful for RAID 5 at the moment…gotta love that parity thing. I checked my server status emails this morning, only to find these lines in /proc/mdstat:

md1 : active raid5 sda4[0] sdc4[2] sdb4[3](F)
      576283520 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]

Hmm, that little F doesn’t look too promising, and one of the U’s is missing. So I look into this a bit further and find:

root@rio:/rio# mdadm --detail /dev/md1
/dev/md1:
     Raid Level : raid5
     Array Size : 576283520 (549.59 GiB 590.11 GB)
    Device Size : 288141760 (274.79 GiB 295.06 GB)
   Raid Devices : 3
  Total Devices : 3
 Active Devices : 2
Working Devices : 2
 Failed Devices : 1

    Number   Major   Minor   RaidDevice State
       0       8        4        0      active sync   /dev/sda4
       1       0        0        -      removed
       2       8       36        2      active sync   /dev/sdc4
       3       8       20        -      faulty        /dev/sdb4

Sure enough, after checking /var/log/messages, last night at around 8pm a disk failed…

kernel: ata2: status=0x25 { DeviceFault CorrectedError Error }
kernel: SCSI error :  return code = 0x8000002
kernel: sdb: Current: sense key: Hardware Error
kernel:     Additional sense: No additional sense information
kernel: end_request: I/O error, dev sdb, sector 18912489
kernel: RAID5 conf printout:
kernel:  --- rd:3 wd:2 fd:1
kernel:  disk 0, o:1, dev:sda4
kernel:  disk 1, o:0, dev:sdb4
kernel:  disk 2, o:1, dev:sdc4
kernel: RAID5 conf printout:
kernel:  --- rd:3 wd:2 fd:1
kernel:  disk 0, o:1, dev:sda4
kernel:  disk 2, o:1, dev:sdc4

I'm a bit surprised because the drives I used for this RAID are manufactured by Seagate, which I've had luck with in the past. Fortunately, Seagate offers a 5-year warranty on all of its drives, so this one is going back to the manufacturer to be replaced. In the meantime, I ordered another disk with overnight shipping, since I need to take care of this before leaving for WWDC on Saturday. 🙂

Update (8/4): The replacement disk arrived yesterday afternoon, and I was able to re-add its partitions to the RAID volumes using mdadm <raid volume device> --add <disk device>. Rebuilding went pretty quickly; /usr finished rebuilding in less than a minute, and the larger volume took just over an hour and a half:

Personalities : [raid5]
md1 : active raid5 sdb4[3] sda4[0] sdc4[2]
      576283520 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]
      [>....................]  recovery =  2.7% (8006784/288141760)
      finish=98.3min speed=47479K/sec
md0 : active raid5 sdb1[1] sda1[0] sdc1[2]
      5863552 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
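For completeness, the recovery sequence boils down to something like this (partitioning the replacement disk by cloning a surviving disk's layout is my assumption of the exact steps):

sfdisk -d /dev/sda | sfdisk /dev/sdb    # copy the partition table from a surviving disk to the new one
mdadm /dev/md0 --add /dev/sdb1          # add the new partition back into the smaller array
mdadm /dev/md1 --add /dev/sdb4          # add the new partition back into the larger array
watch cat /proc/mdstat                  # keep an eye on the rebuild progress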