
Wednesday, September 23, 2015

How to configure Software RAID on Linux?

Software RAID is one of the greatest features in Linux for protecting data from disk failure. Linux also offers LVM for configuring mirrored volumes, but recovery after a disk failure is much easier with software RAID than with LVM mirroring. I have seen environments configured with both software RAID and LVM, where the volume groups are built on top of RAID devices.
Using simple mdadm commands, we can easily add and remove disks from a RAID array.

Supported Software RAID Configurations on Linux

RAID level   Description                               Linux option
RAID 0       Striping                                  --level=0 --raid-devices=3
RAID 1       Mirroring                                 --level=mirror --raid-devices=2
RAID 5       Striping with distributed parity          --level=5 --raid-devices=3
RAID 6       Striping with distributed double parity   --level=6 --raid-devices=4
RAID 10      Mirrored stripe                           --level=10 --raid-devices=4

Available disks for configuring software RAID:
/dev/sdb
/dev/sdc

1. Label the disks with the software RAID partition type:
Before configuring software RAID, label the disks properly using the fdisk command, so that you can easily identify which disks belong to the RAID and the kernel recognizes them correctly.
[root@Test ~]# fdisk /dev/sdb

Create a new partition (n), choose primary (p), and accept or set the desired size. Then set the partition type code to fd (Linux raid autodetect) and save the configuration.
Perform the same steps for /dev/sdc as well.
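For reference, a typical fdisk dialogue looks roughly like this (the exact prompts differ between fdisk versions; the annotations in parentheses are only explanatory):
[root@Test ~]# fdisk /dev/sdb
Command (m for help): n                 (create a new partition)
Select (default p): p                   (primary)
Partition number / first / last sector: accept the defaults or enter a size
Command (m for help): t                 (change the partition type)
Hex code (type L to list codes): fd     (Linux raid autodetect)
Command (m for help): w                 (write the table and exit)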

Run the partprobe command so the kernel re-reads the new partition tables.
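For example (you can also pass specific devices such as /dev/sdb and /dev/sdc instead of re-reading everything):
[root@Test ~]# partprobe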

2. Verify the partition type flag on both disks using the fdisk command.
[root@Test ~]# fdisk -l /dev/sdb /dev/sdc
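The output should show the Id column as fd (Linux raid autodetect) for both partitions; the sizes below are only illustrative:
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         261     2096451   fd  Linux raid autodetect
/dev/sdc1               1         261     2096451   fd  Linux raid autodetect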

3. Configure the desired RAID level. Here I am configuring RAID 1.
[root@Test ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
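Only the --level and --raid-devices options change for other RAID levels. As a sketch, if you were instead creating a RAID 5 array across three partitions (assuming a hypothetical third disk /dev/sdd prepared the same way), the command would look like this:
[root@Test ~]# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1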

4. Create a filesystem on the md device and mount it. Do not forget to add the device details to
/etc/fstab so the volume is mounted across system reboots.
[root@Test ~]# mkfs.ext4 /dev/md0
[root@Test ~]# mkdir /Test1
[root@Test ~]# mount /dev/md0 /Test1/
[root@Test ~]# df -h /Test1/
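A minimal /etc/fstab entry for this mount could look like the following (the mount options are generic defaults, adjust as needed):
/dev/md0    /Test1    ext4    defaults    0 0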

5. If you want to use the md device with LVM, skip step 4 and continue from here.
Add the logical volume details to /etc/fstab to mount the volume across reboots.
[root@Test ~]# pvcreate /dev/md0
[root@Test ~]# pvs
[root@Test ~]# vgcreate raidvg /dev/md0
[root@Test ~]# lvcreate -L 200M -n lvdata raidvg
[root@Test ~]# lvs |grep lvdata

[root@Test ~]# mkfs.ext4 /dev/raidvg/lvdata
[root@Test ~]# mount /dev/raidvg/lvdata /Test1
[root@Test ~]# df -h /Test1
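Similarly, a sample /etc/fstab entry for the logical volume (again with generic defaults):
/dev/raidvg/lvdata    /Test1    ext4    defaults    0 0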


6. Check the RAID status.
[root@Test ~]# cat /proc/mdstat
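On a healthy two-disk RAID 1, the output looks roughly like this (the block count is illustrative; [UU] means both members are up):
Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
      2096064 blocks super 1.2 [2/2] [UU]

unused devices: <none>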

7. The mdadm configuration file is /etc/mdadm.conf.


8. Update /etc/mdadm.conf with the newly configured RAID information as shown below.
[root@Test ~]# mdadm --examine --scan
[root@Test ~]# mdadm --examine --scan >> /etc/mdadm.conf
[root@Test ~]# cat /etc/mdadm.conf
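The scan output is a single ARRAY line per md device, roughly of this form (the UUID shown here is just a placeholder; yours will differ):
ARRAY /dev/md0 metadata=1.2 name=Test:0 UUID=<array-uuid>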

9. To see the configured RAID information, use the commands below.
[root@Test ~]# mdadm --query /dev/md0
[root@Test ~]# mdadm --detail /dev/md0

10. If you want to remove the software RAID, use the method below. First unmount any filesystem on the array, then stop the RAID using the mdadm command. Once it is stopped, you can zero the superblocks to destroy the entire RAID configuration on the member disks.

[root@Test ~]# mdadm --stop /dev/md0
[root@Test ~]# mdadm --query /dev/md0
[root@Test ~]# mdadm --zero-superblock /dev/sdb1
[root@Test ~]# mdadm --zero-superblock /dev/sdc1
[root@Test ~]# watch cat /proc/mdstat
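Note that /Test1 must be unmounted (and any LVM on the array deactivated) before mdadm --stop will succeed, and any related entries should be removed from /etc/fstab and /etc/mdadm.conf. A rough cleanup sketch, assuming the setup from step 4:
[root@Test ~]# umount /Test1
[root@Test ~]# vi /etc/fstab        (remove the /dev/md0 entry, if added)
[root@Test ~]# vi /etc/mdadm.conf   (remove the ARRAY line for /dev/md0)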
