RAID1 on Linux with mdadm – Part 1

Due to time restrictions, I'm putting the last two tests of FakeRAID on Ubuntu using dmraid – Part 3 on hold and moving ahead with software RAID for the next testing phase:

2 HDDs x 1 TB each – WD Caviar Black

First test:

2 partitions x 493 GB
RAID1
EXT4/XFS

Second test:

2 partitions x 493 GB
LVM over RAID1
EXT4/XFS

Third test:

2 partitions x 493 GB
Encrypted LVM over RAID1
EXT4/XFS

Fourth test:

2 partitions x 493 GB
Truecrypt partition on top of LVM over RAID1
EXT4/XFS

Changing the BIOS settings:

Pressed Ctrl+F to access the BIOS RAID menu and removed the RAID set that was created for the FakeRAID testing. In the HDD section, I changed the mode from RAID to AHCI for the SATA1-4 slots.

Installed the RAID utility:

$ sudo aptitude -y install mdadm

Partitioning and changing the system type of the partitions


$ sudo fdisk /dev/sda

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').

Command (m for help): p

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xb768a91d

Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-121601, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-121601, default 121601): 60801

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (60802-121601, default 60802):
Using default value 60802
Last cylinder, +cylinders or +size{K,M,G} (60802-121601, default 121601):
Using default value 121601

Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xb768a91d

Device Boot      Start         End      Blocks   Id  System
/dev/sda1            1       60801   488384001   fd  Linux raid autodetect
/dev/sda2        60802      121601   488376000   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

The same partitioning is applied to the second HDD, /dev/sdb.
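Rather than repeating the whole fdisk dialogue by hand, the partition table can also be cloned with sfdisk. A quick sketch (my addition, not part of the original run), assuming /dev/sdb is the identical second drive:

$ sudo sfdisk -d /dev/sda | sudo sfdisk /dev/sdb    # dump sda's partition table and write it to sdb
$ sudo partprobe /dev/sdb                           # have the kernel re-read the new table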

Overwriting the superblocks on the HDDs

If the device contains a valid md superblock, the block is overwritten with zeros. With --force, the block where the superblock would be is overwritten even if it doesn't appear to be valid.
$ sudo mdadm --zero-superblock /dev/sd{a,b}{1,2} --force
mdadm: Unrecognised md component device - /dev/sda1
mdadm: Unrecognised md component device - /dev/sda2
mdadm: Unrecognised md component device - /dev/sdb1
mdadm: Unrecognised md component device - /dev/sdb2

As there was no previous RAID setup on these partitions, the warnings above are expected.
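Creating the RAID arrays

For reference, creating the two mirrors from these partitions looks like this – a minimal sketch, assuming the 0.90 metadata format that shows up in the --detail output below, not the exact commands I typed:

$ sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=0.90 /dev/sda1 /dev/sdb1
$ sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 --metadata=0.90 /dev/sda2 /dev/sdb2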

Improving rebuild/resync time


$ sudo sysctl dev.raid.speed_limit_min
dev.raid.speed_limit_min = 1000
$ sudo sysctl dev.raid.speed_limit_max
dev.raid.speed_limit_max = 2000


$ sudo sysctl -w dev.raid.speed_limit_min=100000
$ sudo sysctl -w dev.raid.speed_limit_max=200000

This is a non-production system, so limits this aggressive won't hurt.
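Note that sysctl -w changes don't survive a reboot. To make the limits stick, they can also be appended to /etc/sysctl.conf (a small sketch, assuming the stock Ubuntu file):

$ echo 'dev.raid.speed_limit_min = 100000' | sudo tee -a /etc/sysctl.conf
$ echo 'dev.raid.speed_limit_max = 200000' | sudo tee -a /etc/sysctl.conf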

RAID Array being sync’d

This is the part I hate: waiting for the initial resync.

$ sudo cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb2[1] sda2[0]
488375936 blocks [2/2] [UU]
resync=DELAYED

md0 : active raid1 sdb1[1] sda1[0]
488383936 blocks [2/2] [UU]
[>....................] resync = 3.3% (16248576/488383936) finish=59.4min speed=132424K/sec

unused devices: <none>
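To keep an eye on the resync without re-running cat, watch does the job:

$ watch -n 1 cat /proc/mdstat    # refresh the status every second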

Details of both the RAID devices


$ sudo mdadm --detail /dev/md0 /dev/md1
/dev/md0:
Version : 00.90
Creation Time : Thu Oct 6 01:43:33 2011
Raid Level : raid1
Array Size : 488383936 (465.76 GiB 500.11 GB)
Used Dev Size : 488383936 (465.76 GiB 500.11 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Thu Oct 6 02:12:25 2011
State : active, resyncing
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Rebuild Status : 43% complete

UUID : 4595044d:f3bd332f:afa333af:aa038ee0 (local to host user-desktop)
Events : 0.17

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
/dev/md1:
Version : 00.90
Creation Time : Thu Oct 6 01:43:47 2011
Raid Level : raid1
Array Size : 488375936 (465.75 GiB 500.10 GB)
Used Dev Size : 488375936 (465.75 GiB 500.10 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1
Persistence : Superblock is persistent

Update Time : Thu Oct 6 02:05:20 2011
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

UUID : 056340db:a141cb03:afa333af:aa038ee0 (local to host user-desktop)
Events : 0.3

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
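A follow-up worth doing once the resync finishes (not covered in this part) is persisting the array definitions so they assemble cleanly at boot – a sketch, assuming Ubuntu's stock /etc/mdadm/mdadm.conf:

$ sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'    # append ARRAY lines for md0/md1
$ sudo update-initramfs -u                                       # rebuild the initramfs with the new config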
