Software RAID – Replacing a failed hard drive
Revantine
I will make the partition table on sdb the same as sda's. I will duplicate sda1 (/boot) as well, so that if sda fails I can get the machine booting from sdb more quickly.
Note: md0 doesn't have a partition table. That topic is really best suited to a separate article discussing RAID and LVM, so I am not going to delve into it here.
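As an alternative to keying in the partitions by hand with fdisk, the layout can be cloned in one step with sfdisk. This is a sketch only; it is destructive to the target disk, so confirm the device names before running it:

```shell
# Dump sda's partition table and replay it onto sdb.
# DESTRUCTIVE to /dev/sdb - run only after confirming device names:
#   sfdisk -d /dev/sda | sfdisk /dev/sdb
```

The `-d` flag prints the table in a format sfdisk can read back, which keeps the start/end sectors and the `fd` (Linux raid autodetect) type identical on both disks.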
# fdisk -l

Disk /dev/sda: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14       38913   312464250   fd  Linux raid autodetect

Disk /dev/sdb: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/md0: 319.9 GB, 319963267072 bytes
2 heads, 4 sectors/track, 78116032 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md0 doesn't contain a valid partition table

# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Sun Apr  8 00:22:19 2007
     Raid Level : raid1
     Array Size : 312464128 (297.99 GiB 319.96 GB)
    Device Size : 312464128 (297.99 GiB 319.96 GB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu Aug 16 12:18:10 2007
          State : active, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : 7ffa6982:50ea5134:11c17882:91cfa617
         Events : 0.960047

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       0        0        1      removed
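The degraded state above can also be spotted programmatically: in /proc/mdstat, a healthy two-disk mirror shows [UU], while an underscore ([U_]) marks a missing member. A small sketch of such a check (the `check_degraded` helper is hypothetical, not part of mdadm; the path argument exists so it can be tried against a saved copy of the file):

```shell
# Hypothetical helper: report "degraded" if any array in an mdstat-format
# file has a "_" in its member-status field (e.g. [U_]), else "healthy".
check_degraded() {
    grep -q '\[[U_]*_[U_]*\]' "${1:-/proc/mdstat}" && echo degraded || echo healthy
}
```

Dropping a check like this into cron is a simple way to get notified before the second disk fails too.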
I replaced the failed hard drive with an identical drive, then created the partitions using the same layout as the surviving disk.
# fdisk /dev/sdb
n - new
p - primary
1 - partition number
Start 1
End 13
n - new
p - primary
2 - partition number
Start 14
End 38913
t - type fd (Linux raid autodetect)
w - write and quit

Added the new raid partition to md0 (/dev/md0 is the mirrored raid array device):

# mdadm /dev/md0 -a /dev/sdb2
mdadm: added /dev/sdb2

# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb2[2] sda2[0]
      312464128 blocks [2/1] [U_]
      [>....................]  recovery =  0.3% (1072128/312464128) finish=275.0min speed=18869K/sec

unused devices: <none>

# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Sun Apr  8 00:22:19 2007
     Raid Level : raid1
     Array Size : 312464128 (297.99 GiB 319.96 GB)
    Device Size : 312464128 (297.99 GiB 319.96 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu Aug 16 12:23:25 2007
          State : active, degraded, recovering
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

 Rebuild Status : 0% complete

           UUID : 7ffa6982:50ea5134:11c17882:91cfa617
         Events : 0.960629

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       2       8       18        1      spare rebuilding   /dev/sdb2
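While the resync runs, progress can be followed with `watch -n 5 cat /proc/mdstat`. A sketch of pulling just the percentage out of that output (the `rebuild_pct` helper is hypothetical; the path argument lets it be exercised against a saved copy of the file), plus the remaining step of making the new disk bootable since sdb1 (/boot) is not part of the mirror in this layout:

```shell
# Hypothetical helper: extract the rebuild percentage (e.g. "0.3%")
# from an mdstat-format file.
rebuild_pct() {
    grep -o 'recovery = *[0-9.]*%' "${1:-/proc/mdstat}" | grep -o '[0-9.]*%'
}

# Once the resync finishes, give the new disk its own /boot and boot sector
# (DESTRUCTIVE - shown commented out; assumes an ext3 /boot and GRUB legacy):
#   mkfs.ext3 /dev/sdb1
#   mount /dev/sdb1 /mnt && cp -a /boot/. /mnt/ && umount /mnt
#   grub-install /dev/sdb
```

Without the boot-loader step the array data would survive a failure of sda, but the machine would not boot from sdb on its own.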