Softraid: add new disk to RAID array
6/10/2023

Is it possible to replace a faulty drive in a RAID 1 array? What are the steps? Here I'm explaining the detailed steps for replacing a bad drive in a software RAID 1 array.

Here I have two hard drives, /dev/sda and /dev/sdd, with partitions /dev/sda1, /dev/sda2, /dev/sda3, /dev/sda5, /dev/sda6, /dev/sda7 and /dev/sda8, as well as /dev/sdd1, /dev/sdd2, /dev/sdd3, /dev/sdd5, /dev/sdd6, /dev/sdd7 and /dev/sdd8.

This is how the RAID arrays are built:

/dev/sda1 and /dev/sdd1 make up the /dev/md0 RAID 1 array
/dev/sda2 and /dev/sdd2 make up the /dev/md3 RAID 1 array
/dev/sda3 and /dev/sdd3 make up the /dev/md5 RAID 1 array
/dev/sda5 and /dev/sdd5 make up the /dev/md4 RAID 1 array
/dev/sda6 and /dev/sdd6 make up the /dev/md2 RAID 1 array
/dev/sda7 and /dev/sdd7 make up the /dev/md1 RAID 1 array
/dev/sda8 and /dev/sdd8 make up the /dev/md6 RAID 1 array

This can be identified from the following command:

# cat /proc/mdstat

Here the failing disk is /dev/sdd and we need to replace it. The same cat /proc/mdstat output also tells us which arrays are degraded: if you see an underscore ('_') instead of [UU], one drive in that array has failed. In this example the '_' is in the second position, and there is an 'F' beside sdd2 and sdd8, so we can confirm that /dev/sdd is failing. You can also run smartctl against /dev/sdd to confirm it; check for ATA errors in the smartctl output.

Marking the hard drive as failed and removing it

Here's the command to mark the drive as failed:

# mdadm --manage /dev/md0 --fail /dev/sdd1

We need to mark the drive as failed in the other arrays as well, and then remove it from the RAID arrays. To remove the failed partitions from a RAID array, use the following command:

# mdadm --manage /dev/md0 --remove /dev/sdd1

Similarly, do it for the other arrays as well. Once the bad drive is removed from the RAID arrays, cat /proc/mdstat will show only one hard drive in each array.

Now it's time to power off the server and contact your DC for a drive replacement. Replace the defective /dev/sdd with a new one; it should be exactly the same size as the old one (that is, if the old drive is 1TB, the new one should also be 1TB). Once the defective drive is replaced, boot up the server.

Now we need to create partitions on the new drive as an exact replica of the other drive, /dev/sda, since this is RAID 1. Here, the entire partition table of /dev/sda will be copied over to the new drive, /dev/sdd, as shown in the sketch below.
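Here is a minimal sketch of one common way to clone the layout with sfdisk, assuming the disks use an MBR/DOS partition table (as the fdisk-based workflow in this post implies); the dump file name is just an example:

# sfdisk --dump /dev/sda > sda-layout.txt
# sfdisk /dev/sdd < sda-layout.txt

The first command saves the partition layout of /dev/sda to a file (which also serves as a backup of the layout), and the second applies that same layout to /dev/sdd.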
Now you can execute the following command to check whether both hard drives have the same partitions:

# fdisk -l

Next, we need to add the new partitions to the RAID arrays; for that we use the following command:

# mdadm --manage /dev/md0 --add /dev/sdd1

Similarly, do it for the other partitions and arrays as well. Once you have finished adding the drives to the RAID arrays, they'll start synchronising automatically. That's it, now you've replaced /dev/sdd! Any questions? Post a comment!!

Parted – a useful piece of information!

Recently I had to work on a "parted"-based server; we don't always get the chance to work on such servers. Parted is a command which helps you modify hard-disk partitions: using parted we can add, delete and edit partitions along with the file systems located on them. Now suppose you need to partition a 6TB hard disk on a Linux server. Most probably we'll think of the fdisk utility, and I have to say sorry, because fdisk can't partition hard drives larger than 2TB on a Linux server. fdisk will partition only up to 2TB, and on a 6TB disk around 4TB would remain as unused space.
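As a quick illustration, here is a minimal sketch of the parted workflow on such a disk; the device name /dev/sdb is hypothetical. The key step is writing a GPT label, since the 2TB ceiling comes from the MBR partition table that fdisk uses:

# parted /dev/sdb mklabel gpt
# parted -a optimal /dev/sdb mkpart primary ext4 0% 100%
# parted /dev/sdb print

mklabel gpt writes a GPT partition table to the disk, mkpart creates a single partition spanning the whole disk, and print verifies the result.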