FakeRAID on Ubuntu using dmraid – Part 2

Continued from FakeRAID on Ubuntu using dmraid

This FakeRAID setup seems reliable in theory, but I’m still working on a scenario where I can actually test how things will behave if one of my HDDs fails. Adding another HDD should allow me to rebuild the RAID-SET (at least that’s what’s written in the motherboard manual), but trying to do the same using dmraid resulted in the following:

$sudo dmraid -S -M /dev/sdb
ERROR: format handler string is NULL
ERROR: unknown format type: (null)
ERROR: metadata_handler() is not supported in "pdc" format

Rebuilding is not supported by dmraid for the Promise FastTrack (pdc) format. I’m not sure whether ISW (Intel Software RAID) is supported by the RAID controller (AMD SB850) on my board, but if it is, I’ll need to recreate the RAID-SET with ISW so that I can rebuild it using dmraid.
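To double-check what this particular dmraid build knows about, its list option prints the supported metadata formats (whether isw shows up will depend on the dmraid version; check your own output):

sudo dmraid -l    # lists supported metadata formats, e.g. pdc, isw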

Still, the number one concern remains whether the data would survive if either of the drives dies. Going to test that:
I will change RAID to AHCI in the BIOS so that both HDDs can be read as normal drives, wipe the second HDD from within Linux (to kill the RAID metadata), and then change the BIOS setting back to RAID.

Current Status:

$sudo dmsetup status
pdc_gdiiehcgj: 0 1953394048 mirror 2 8:0 8:16 14904/14904 1 AA 1 core
pdc_gdiiehcgj2: 0 976703805 linear
pdc_gdiiehcgj1: 0 976687677 linear
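(In the mirror status line, 14904/14904 means all regions are in sync, and AA means both mirror legs are alive – a D in that field would indicate a dead leg.)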

Although I’m going to test that shortly, for those who plan to use only a Linux machine, software RAID (mdadm) is a much better option: it has far more community support, is much more clearly documented than dmraid, and brings a couple of other benefits that are not available when using FakeRAID:

  • I’ve read somewhere that in software RAID, data is read from both drives in a load-balanced (parallel) fashion – although I’m not sure, as I’ve only been using RAID 10 and 5 on servers till now.**
  • Re-syncs happen much faster – that’s for sure – although you need to tweak a couple of kernel parameters if you wish to maximize the speed (a rough sketch follows this list).
  • Spare drive[s] can be added later.
  • The mainboard can be changed if required, without having to think about the controllers on the new mainboard.
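As a rough sketch of the resync-speed and spare-drive points above – the device names and speed values here are made up for illustration, not taken from my box:

# Raise the kernel's resync speed floor/ceiling (KB/s per device);
# the defaults are fairly conservative:
sudo sysctl -w dev.raid.speed_limit_min=50000
sudo sysctl -w dev.raid.speed_limit_max=200000

# Add a spare to an existing md array; it is pulled in automatically
# if a member drive fails:
sudo mdadm --add /dev/md0 /dev/sdc1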

I went into the BIOS, switched the controller mode from RAID to AHCI for both drives, rebooted the machine, and from within Linux wiped all the partitions using fdisk.
From there, I rebooted again, went into the BIOS, switched the controller mode back from AHCI to RAID, and rebooted the machine. Pressing Ctrl+F took me to the RAID menu, where the Adaptec RAID utility showed the RAID-SET as healthy and consistent. I rebooted the machine once more and logged into Linux to check on the partitions of the wiped drive.
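The post-reboot checks looked roughly like this (assuming the wiped drive is still /dev/sdb):

sudo fdisk -l /dev/sdb    # partition table on the wiped drive
ls /dev/mapper/           # see whether the pdc_* nodes are back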

The partitions were still not present on the second drive, and dmraid -ay still gave:

$sudo dmraid -ay
RAID set "pdc_gdiiehcgj" already active
RAID set "pdc_gdiiehcgj1" already active
RAID set "pdc_gdiiehcgj2" was activated

and -s gave:

$sudo dmraid -s -vvv
NOTICE: /dev/sdb: pdc metadata discovered
NOTICE: /dev/sda: pdc metadata discovered
*** Active Set
name : pdc_gdiiehcgj
size : 1953394048
stride : 128
type : mirror
status : ok
subsets: 0
devs : 2
spares : 0

So the metadata was still being discovered on both drives, which seemed weird to me at first – but fdisk only rewrites the partition table at the start of the disk, while the pdc metadata typically lives in the last sectors of the drive, so wiping the partitions leaves it intact. Perhaps using dd would be helpful.

and the status was still being shown as:

$sudo dmsetup status
pdc_gdiiehcgj: 0 1953394048 mirror 2 8:0 8:16 14904/14904 1 AA 1 core
pdc_gdiiehcgj2: 0 976703805 linear
pdc_gdiiehcgj1: 0 976687677 linear

From here, I had a couple of ideas in my mind to test:
1. Use dd to get rid of the first 512 sectors of the drive (a hedged sketch follows below).
2. Before #1, I should probably switch to Windows and try to rebuild the RAID-SET from there, or see if it happens by itself.
3. Use -E to erase the metadata and then try to rebuild the array:

$sudo dmraid -r -E /dev/sdb
Do you really want to erase "pdc" ondisk metadata on /dev/sdb ? [y/n]

There will be a problem with the third one, though: as the error at the top of this post showed, dmraid doesn’t support rebuilding with the pdc metadata format.
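For idea #1, what I have in mind with dd is roughly the following – with the caveat from above that the pdc metadata usually sits near the end of the drive, so zeroing only the first sectors is probably not enough. The device name and sector counts are assumptions:

# Deactivate the RAID set first so device-mapper releases the disks:
sudo dmraid -an

# Zero the first 512 sectors (partition table and friends):
sudo dd if=/dev/zero of=/dev/sdb bs=512 count=512

# Then zero the last ~1 MB, where the pdc metadata lives:
SECTORS=$(sudo blockdev --getsz /dev/sdb)    # size in 512-byte sectors
sudo dd if=/dev/zero of=/dev/sdb bs=512 count=2048 seek=$((SECTORS - 2048))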

**Taken from http://tldp.org/HOWTO/Software-RAID-0.4x-HOWTO-8.html

Linux MD RAID-1 (mirroring) read performance:

MD implements read balancing. That is, the RAID-1 code will alternate between each of the (two or more) disks in the mirror, making alternate reads to each. In a low-I/O situation, this won’t change performance at all: you will have to wait for one disk to complete the read. But, with two disks in a high-I/O environment, this could as much as double the read performance, since reads can be issued to each of the disks in parallel. For N disks in the mirror, this could improve performance N-fold.

Linux MD RAID-1 (mirroring) write performance:

Must wait for the write to occur to all of the disks in the mirror. This is because a copy of the data must be written to each of the disks in the mirror. Thus, performance will be roughly equal to the write performance to a single disk.
