====== Software RAID – Replacing a failed hard drive ======
[[http://www.revantine.net/?p=15|Revantine]]\\
[[http://www.thomas-krenn.com/en/wiki/Mdadm_recover_degraded_Array|Mdadm recover degraded Array]]\\
[[http://www.thomas-krenn.com/en/wiki/Mdadm_recovery_and_resync|Mdadm recovery and resync]]\\
[[http://www.linuceum.com/Server/srvRAIDTest.php|How to Test a RAID 1 Array]]\\
[[http://community.spiceworks.com/how_to/show/36066-replacing-a-failed-drive-in-a-linux-software-raid1-configuration-mdraid|Replacing a failed drive in a Linux Software RAID1 configuration (mdraid)]]\\
[[http://aplawrence.com/Linux/rebuildraid.html|Rebuilding failed Linux software RAID]]\\
[[http://conshell.net/wiki/index.php/Software_RAID_Recovery_on_Linux|Software RAID Recovery on Linux]]\\
[[http://www.howtoforge.com/replacing_hard_disks_in_a_raid1_array|Replacing A Failed Hard Drive In A Software RAID1 Array]]\\
[[http://blogger.corp.eng.br/2008/08/mais-informao-sobre-raid-1.html|More information about RAID 1 (in Portuguese)]]\\
I will make the partition table on sdb identical to sda. I will duplicate sda1 **(/boot)** as well, so that if sda fails I can get the system booting again more quickly.

**note:** md0 doesn't have a partition table. That topic is really best suited to a separate article discussing RAID and LVM, so I am not going to delve into it at this moment.
<code>
# fdisk -l

Disk /dev/sda: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14       38913   312464250   fd  Linux raid autodetect

Disk /dev/sdb: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/md0: 319.9 GB, 319963267072 bytes
2 heads, 4 sectors/track, 78116032 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md0 doesn't contain a valid partition table
</code>
<code>
# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Sun Apr 8 00:22:19 2007
     Raid Level : raid1
     Array Size : 312464128 (297.99 GiB 319.96 GB)
    Device Size : 312464128 (297.99 GiB 319.96 GB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu Aug 16 12:18:10 2007
          State : active, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : 7ffa6982:50ea5134:11c17882:91cfa617
         Events : 0.960047

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       0        0        1      removed
</code>
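In the output above the failed member already shows as ''removed''. If it had still been listed in the array, it would need to be marked failed and removed from md0 before physically pulling the disk. A minimal sketch, assuming the failed partition is /dev/sdb2:

```shell
# Mark the member as faulty, then remove it from the array
mdadm /dev/md0 --fail /dev/sdb2
mdadm /dev/md0 --remove /dev/sdb2
```

These commands touch a live array and real block devices, so double-check the device names against ''/proc/mdstat'' first.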
I replaced the failed hard drive with an identical hard drive, then created the partitions on the new disk using the same layout as sda.
<code>
# fdisk /dev/sdb
n  - new
p  - primary
1  - partition number
     Start: 1
     End:   13
n  - new
p  - primary
2  - partition number
     Start: 14
     End:   38913
t  - type
fd   (Linux raid autodetect)
w  - write and quit
</code>
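Instead of re-entering the partition boundaries by hand, the partition table can be copied from the surviving disk in one step. A sketch, assuming sfdisk is installed and /dev/sdb is the new, blank disk:

```shell
# Dump sda's partition table and replay it onto sdb
sfdisk -d /dev/sda | sfdisk /dev/sdb
```

This only works as-is when the replacement disk is at least as large as the original; verify the result with ''fdisk -l /dev/sdb'' afterwards.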
Add the new RAID partition to md0 (/dev/md0 is the mirrored RAID array device):
<code>
# mdadm /dev/md0 -a /dev/sdb2
mdadm: added /dev/sdb2

# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb2[2] sda2[0]
      312464128 blocks [2/1] [U_]
      [>....................]  recovery =  0.3% (1072128/312464128) finish=275.0min speed=18869K/sec

unused devices:
</code>
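The rebuild speed is capped by the kernel's RAID resync limits, which is one reason the estimated finish time above is so long. If the machine can spare the I/O, the floor can be raised via the standard sysctls; a sketch (the value shown is an example, not a recommendation):

```shell
# Show the current resync speed limits (KiB/s per device)
sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
# Raise the minimum so the rebuild is not throttled in favour of other I/O
sysctl -w dev.raid.speed_limit_min=50000
```

The change takes effect immediately on the running rebuild but does not persist across a reboot unless added to /etc/sysctl.conf.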
<code>
# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Sun Apr 8 00:22:19 2007
     Raid Level : raid1
     Array Size : 312464128 (297.99 GiB 319.96 GB)
    Device Size : 312464128 (297.99 GiB 319.96 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu Aug 16 12:23:25 2007
          State : active, degraded, recovering
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

 Rebuild Status : 0% complete

           UUID : 7ffa6982:50ea5134:11c17882:91cfa617
         Events : 0.960629

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       2       8       18        1      spare rebuilding   /dev/sdb2
</code>
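Since sda1 **(/boot)** is not part of the RAID array, it still has to be duplicated onto the new disk by hand to meet the goal mentioned at the top (booting from sdb if sda dies). One possible sketch, assuming a GRUB legacy setup and that /boot is quiescent while it is copied; verify the device names before running anything like this:

```shell
# Copy the /boot partition block-for-block onto the matching partition of the new disk
dd if=/dev/sda1 of=/dev/sdb1
# Install the boot loader into sdb's MBR so the BIOS can boot from it
grub-install /dev/sdb
```

A dd of a mounted filesystem can capture an inconsistent image, so it is safest to do this when nothing is writing to /boot (or after unmounting it briefly).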