
{"id":270,"date":"2011-08-17T16:43:58","date_gmt":"2011-08-17T16:43:58","guid":{"rendered":"http:\/\/www.starcoder.com\/wordpress\/?p=270"},"modified":"2021-10-30T19:54:50","modified_gmt":"2021-10-30T19:54:50","slug":"shrinking-a-linux-software-raid-volume","status":"publish","type":"post","link":"https:\/\/www.starcoder.com\/wordpress\/2011\/08\/shrinking-a-linux-software-raid-volume\/","title":{"rendered":"Shrinking a Linux Software RAID Volume"},"content":{"rendered":"<p>I upgrade the disks in my servers a lot, and often times this requires replacing 3-4 drives.  Throwing the old drives out would be a huge waste, so I bring them back to my office and put them in a separate Linux file server with a ton of drive bays.  I wrote <a href=\"http:\/\/www.starcoder.com\/wordpress\/2009\/09\/building-a-soho-file-server-the-hardware\/\">about the fileserver previously<\/a>.<\/p>\n<p>In the file server, I configure the drives into multiple RAID 5 volumes.  Right now, I have 3 RAID volumes, each with four drives.  Yesterday, one of the disks in an older volume went bad.  So right now I&#8217;m running 3 out of 4 drives in a RAID 5.  No data loss yet, which is good.  Since this is an older RAID volume, I&#8217;ve decided not to replace the failed drive.  Instead, I&#8217;ll just shrink the RAID from 4 disks into 3 disks.  It was quite a hassle to figure out how to do this by researching online, so I thought I would document the entire process here, step by step, to save other people some time in the future.  It should go without saying that you should have a recent backup of everything on the volume you are about to change.<\/p>\n<ol>\n<li>Make sure the old disk really is removed from the array.  The device name shouldn&#8217;t show up in \/proc\/mdstat and mdadm &#8211;detail should say &#8220;removed&#8221;.  If not, be sure you mdadm &#8211;fail and mdadm &#8211;remove the device from the array.\n<pre># cat \/proc\/mdstat\nPersonalities : [linear] [multipath] [raid0] [raid1] [raid6]... \nmd0 : active raid5 sdh2[1] sdj2[0] sdi2[3]\n      1452572928 blocks level 5, 64k chunk, algorithm 2 [4\/3] [UU_U]\n      unused devices: &lt;none&gt;\n# mdadm --detail \/dev\/md0\n\/dev\/md0:\n        Version : 0.90\n  Creation Time : Wed Apr  8 12:24:35 2009\n     Raid Level : raid5\n     Array Size : 1452572928 (1385.28 GiB 1487.43 GB)\n  Used Dev Size : 484190976 (461.76 GiB 495.81 GB)\n   Raid Devices : 4\n  Total Devices : 3\nPreferred Minor : 0\n    Persistence : Superblock is persistent\n\n    Update Time : Tue Aug 16 13:33:25 2011\n          State : clean, degraded\n Active Devices : 3\nWorking Devices : 3\n Failed Devices : 0\n  Spare Devices : 0\n\n         Layout : left-symmetric\n     Chunk Size : 64K\n\n           UUID : 02f177d1:cb919a65:cb0d4135:3973d77d\n         Events : 0.323834\n\n    Number   Major   Minor   RaidDevice State\n       0       8      146        0      active sync   \/dev\/sdj2\n       1       8      114        1      active sync   \/dev\/sdh2\n       2       0        0        2      removed\n       3       8      130        3      active sync   \/dev\/sdi2<\/pre>\n<\/li>\n<li>Unmount the filesystem:\n<pre># umount \/dev\/md0<\/pre>\n<\/li>\n<li>Run fsck on the filesystem:\n<pre># e2fsck -f \/dev\/md0<\/pre>\n<\/li>\n<li>Shrink the filesystem, giving yourself plenty of extra space for disk removal.  Here I resized the partition to 800 GB, to give plenty of breathing room for a RAID 5 of three 500 GB drives.  
We&#8217;ll expand the filesystem to fill the gaps later.\n<pre># resize2fs \/dev\/md0 800G<\/pre>\n<\/li>\n<li>Now we need to actually reconfigure the array to use one less disk.  To do this, we&#8217;ll first query mdadm to find out how big the new array needs to be.  Then we&#8217;ll resize the array and reconfigure it for one fewer disk.  First, query mdadm for a new size (replace -n3 with the number of disks in the new array):\n<pre># mdadm --grow -n3 \/dev\/md0\nmdadm: this change will reduce the size of the array.\n       use --grow --array-size first to truncate array.\n       e.g. mdadm --grow \/dev\/md0 --array-size 968381952<\/pre>\n<\/li>\n<li>This gives our new size as being 968381952.  Use this to resize the array:\n<pre># mdadm --grow \/dev\/md0 --array-size 968381952<\/pre>\n<\/li>\n<li>Now that the array has been truncated, we set it to reside on one fewer disk:\n<pre># mdadm --grow -n3 \/dev\/md0 --backup-file \/root\/mdadm.backup<\/pre>\n<\/li>\n<li>Check to make sure the array is rebuilding.  You should see something like this:\n<pre># cat \/proc\/mdstat\nPersonalities : [linear] [multipath] [raid0] [raid1] [raid6]... \nmd0 : active raid5 sdh2[1] sdj2[0] sdi2[3]\n      968381952 blocks super 0.91 level 5, 64k chunk, algorithm 2 [3\/2] [UU_]\n      [&gt;....................]  reshape =  1.8% (9186496\/484190976) \n                                  finish=821.3min speed=9638K\/sec<\/pre>\n<\/li>\n<li>At this point, you probably want to wait until the array finishes rebuilding.  However, Linux software RAID is smart enough to figure things out if you don&#8217;t want to wait.  Run fsck again before expanding your filesystem back to it&#8217;s maximum size (resize2fs requires this).\n<pre># e2fsck -f \/dev\/md0<\/pre>\n<\/li>\n<li>Now do the actual expansion so the partition uses the complete raid volume (resize2fs will use the max size if a size isn&#8217;t specified):\n<pre># resize2fs \/dev\/md0<\/pre>\n<pre>\u00a0<\/pre>\n<\/li>\n<li>(Optional) Run fsck one last time to make sure everything is still sane:\n<pre># e2fsck -f \/dev\/md0<\/pre>\n<\/li>\n<li>Finally, remount the filesystem:\n<pre># mount \/dev\/md0<\/pre>\n<\/li>\n<\/ol>\n<p>Everything went smoothly for me while going through this process.  I could have just destroyed the entire old array and recreated a new one, but this process was easier and I didn&#8217;t have to move a bunch of data around.  Certainly if you are using a larger array, and are going from 10 disks to 9 or something along those lines, this benefits of using this process are even greater.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>I upgrade the disks in my servers a lot, and often times this requires replacing 3-4 drives. Throwing the old drives out would be a huge waste, so I bring them back to my office and put them in a separate Linux file server with a ton of drive bays. 