In this tutorial, you will learn how to configure Linux RAID 1 (software disk mirroring with mdadm). #centlinux #linux
What is RAID Storage?
RAID stands for Redundant Array of Inexpensive (or Independent) Disks. There are different RAID levels, each with its own purpose and application. In this article we focus on RAID Level 1. With RAID 1, data is mirrored onto another disk in real time, which is why this RAID level is frequently called disk mirroring.
What is RAID 1?
The main advantage of RAID 1 is that if one disk in the array fails, the other continues to function. When the failed disk is replaced, the new disk is automatically synchronized with the surviving disk. RAID 1 also offers the possibility of using a hot spare disk that is automatically brought into the mirror if any of the primary RAID devices fails.
RAID 1 offers data redundancy without the speed advantages of RAID 0. A limitation of RAID 1 is that the usable capacity of the array equals that of the smallest disk in the set; for example, mirroring a 2 GB disk with a 4 GB disk yields a 2 GB array.
Read Also: How to configure Virtual Data Optimizer in Linux
Problem Statement:
The objective of this write-up is to understand how to configure software RAID Level 1 on a Linux-based OS to provide data redundancy. This tutorial covers configuration, management, and recovery options for RAID 1.
System Specification:
We have configured a CentOS virtual machine with the following specifications:
Operating System | CentOS 6
RAID Device Name | /dev/md0
RAID Level | 1 (Mirroring)
RAID Disks:
Device | Size
/dev/sdb | 2 GB |
/dev/sdc | 2 GB |
/dev/sdd | 2 GB |
/dev/sde | 2 GB |
Configure Linux RAID 1:
To check the available disks, execute the following command to get a list of the disks connected to the system.
# fdisk -l

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        2610    20860402+  8e  Linux LVM

Disk /dev/sdb: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdd: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdd doesn't contain a valid partition table

Disk /dev/sde: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sde doesn't contain a valid partition table
The above output shows that five hard disks are connected to the system. Disk /dev/sda is in use by the operating system, while /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde (2 GB each) have not yet been initialized. We will use them to create our RAID array.
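If your system provides the util-linux lsblk utility (available on most modern distributions), it gives a more compact overview of the same disks; a quick sketch, with output varying by system:

# lsblk -d -o NAME,SIZE,TYPE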
Let’s initialize two disks /dev/sdb and /dev/sdc to be used by our Linux RAID 1 array.
# fdisk /dev/sdb

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-261, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-261, default 261):
Using default value 261

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): L

 0  Empty           1e  Hidden W95 FAT1 80  Old Minix       be  Solaris boot
 1  FAT12           24  NEC DOS         81  Minix / old Lin bf  Solaris
 2  XENIX root      39  Plan 9          82  Linux swap / So c1  DRDOS/sec (FAT-
 3  XENIX usr       3c  PartitionMagic  83  Linux           c4  DRDOS/sec (FAT-
 4  FAT16 <32M      40  Venix 80286     84  OS/2 hidden C:  c6  DRDOS/sec (FAT-
 5  Extended        41  PPC PReP Boot   85  Linux extended  c7  Syrinx
 6  FAT16           42  SFS             86  NTFS volume set da  Non-FS data
 7  HPFS/NTFS       4d  QNX4.x          87  NTFS volume set db  CP/M / CTOS / .
 8  AIX             4e  QNX4.x 2nd part 88  Linux plaintext de  Dell Utility
 9  AIX bootable    4f  QNX4.x 3rd part 8e  Linux LVM       df  BootIt
 a  OS/2 Boot Manag 50  OnTrack DM      93  Amoeba          e1  DOS access
 b  W95 FAT32       51  OnTrack DM6 Aux 94  Amoeba BBT      e3  DOS R/O
 c  W95 FAT32 (LBA) 52  CP/M            9f  BSD/OS          e4  SpeedStor
 e  W95 FAT16 (LBA) 53  OnTrack DM6 Aux a0  IBM Thinkpad hi eb  BeOS fs
 f  W95 Ext'd (LBA) 54  OnTrackDM6      a5  FreeBSD         ee  EFI GPT
10  OPUS            55  EZ-Drive        a6  OpenBSD         ef  EFI (FAT-12/16/
11  Hidden FAT12    56  Golden Bow      a7  NeXTSTEP        f0  Linux/PA-RISC b
12  Compaq diagnost 5c  Priam Edisk     a8  Darwin UFS      f1  SpeedStor
14  Hidden FAT16 <3 61  SpeedStor       a9  NetBSD          f4  SpeedStor
16  Hidden FAT16    63  GNU HURD or Sys ab  Darwin boot     f2  DOS secondary
17  Hidden HPFS/NTF 64  Novell Netware  b7  BSDI fs         fd  Linux raid auto
18  AST SmartSleep  65  Novell Netware  b8  BSDI swap       fe  LANstep
1b  Hidden W95 FAT3 70  DiskSecure Mult bb  Boot Wizard hid ff  BBT
1c  Hidden W95 FAT3 75  PC/IX
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/sdb: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         261     2096451   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

# partprobe /dev/sdb
Repeat the same steps for initializing disk /dev/sdc.
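If you prefer to avoid the interactive fdisk prompts, the same partitioning can usually be scripted with parted; a minimal sketch as an alternative for /dev/sdc, assuming parted is installed and that wiping the disk is acceptable:

# parted -s /dev/sdc mklabel msdos
# parted -s /dev/sdc mkpart primary 1MiB 100%
# parted -s /dev/sdc set 1 raid on
# partprobe /dev/sdc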
Now create a RAID Level 1 array and add the disks /dev/sdb and /dev/sdc to it.
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mdadm: array /dev/md0 started.
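Note that the array above is built on the whole disks. Since we created type fd partitions in the previous step, you could equally build the mirror on those partitions instead; a sketch:

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1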
To check the RAID configuration, execute the following command.
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc[1] sdb[0]
      2097088 blocks [2/2] [UU]

unused devices: <none>
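For a more verbose view of the array state than /proc/mdstat provides, mdadm can report the details directly (the same command is used again later in this article):

# mdadm --detail /dev/md0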
Our RAID configuration is not yet permanent and will be lost when the machine reboots. To make it persistent, we have to create a configuration file and record the array information in it. A single command is sufficient to accomplish the task.
# mdadm --detail --scan > /etc/mdadm.conf
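Note that the location of this file can differ between distributions; on Debian and Ubuntu it usually lives at /etc/mdadm/mdadm.conf. A sketch for such systems, appending rather than overwriting:

# mdadm --detail --scan >> /etc/mdadm/mdadm.conf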
To create an ext3 file system on the RAID device /dev/md0, use the following command.
# mke2fs -j /dev/md0
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
262144 inodes, 524272 blocks
26213 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 32 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
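On newer systems you may prefer ext4 instead of ext3; a sketch of the equivalent command, assuming e2fsprogs with ext4 support is installed:

# mkfs.ext4 /dev/md0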
Now our Linux RAID 1 array is ready to use. Let's create a mount point and mount the RAID device persistently.
# mkdir /u01
# vi /etc/fstab

/dev/VolGroup00/LogVol00  /         ext3    defaults        1 1
LABEL=/boot               /boot     ext3    defaults        1 2
tmpfs                     /dev/shm  tmpfs   defaults        0 0
devpts                    /dev/pts  devpts  gid=5,mode=620  0 0
sysfs                     /sys      sysfs   defaults        0 0
proc                      /proc     proc    defaults        0 0
/dev/VolGroup00/LogVol01  swap      swap    defaults        0 0
/dev/md0                  /u01      ext3    defaults        0 0

# mount -a
# df -m
Filesystem                      1M-blocks  Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00     18723  2718     15039  16% /
/dev/sda1                              99    12        83  13% /boot
tmpfs                                 252     0       252   0% /dev/shm
/dev/md0                             2016    36      1879   2% /u01
From the last line of the above output it is clear that the storage capacity of our RAID array is 2016 MB, i.e. roughly the size of the smallest disk in the array.
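If you prefer not to rely on the /dev/md0 device name in /etc/fstab, you can also mount the file system by UUID; a sketch, where the UUID reported by blkid is a placeholder you would substitute with your own value:

# blkid /dev/md0

UUID=<your-filesystem-uuid>  /u01  ext3  defaults  0 0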
To test our RAID array, copy a large file to /u01. (I have copied a 626 MB file.)
# cp 10201_database_win32.zip /u01
# cd /u01
# du -m *
626     10201_database_win32.zip
1       lost+found
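To verify that the copy is intact, you can compare checksums of the source and the copy; a sketch, assuming the original zip file still sits in root's home directory (adjust the path to wherever your source file actually is):

# md5sum ~/10201_database_win32.zip /u01/10201_database_win32.zip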
Testing Linux RAID 1:
As we already know, in a RAID 1 architecture the files are mirrored on all disks. Test it now by stopping the RAID array, mounting the member disks at different mount points, and listing their contents.
# cd /
# umount /u01
# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
# mkdir d0{1,2}
# mount -t ext3 /dev/sdb /d01
# mount -t ext3 /dev/sdc /d02
# ls /d01
10201_database_win32.zip  lost+found
# ls /d02
10201_database_win32.zip  lost+found
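While the array is stopped, you can also inspect the RAID superblock that mdadm wrote on each member disk; a sketch:

# mdadm --examine /dev/sdb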
It is clear from the above test that our Linux RAID 1 array is working fine. Now let's start the RAID array again.
# umount /d01
# umount /d02
# mdadm --assemble /dev/md0
mdadm: /dev/md0 has been started with 2 drives.
# mount -a
The mdadm --assemble command will only work if you have saved your RAID configuration to the /etc/mdadm.conf file.
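If /etc/mdadm.conf is in place, you can also let mdadm assemble every array listed in it in one go; a sketch:

# mdadm --assemble --scan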
Add a Disk to RAID Array:
Now let's add one more disk, /dev/sdd, to our existing array. Initialize it according to the steps above, then execute the following command to add it.
# mdadm --manage /dev/md0 --add /dev/sdd
mdadm: added /dev/sdd
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdd[2](S) sdc[1] sdb[0]
      2097088 blocks [2/2] [UU]

unused devices: <none>
Although /dev/sdd has been added, it is not yet used by the RAID array, because the array is configured to use only two devices and it already has two, i.e. /dev/sdb and /dev/sdc. Therefore /dev/sdd is added as a spare disk (indicated by the (S) after sdd[2] in the output above) that will become active automatically if an active disk fails. (This is the hot-spare feature of RAID 1 that we discussed above.)
We have two ways to make use of /dev/sdd: either increase the number of raid-devices, or replace an existing disk with /dev/sdd. The latter option is discussed in the next section; for now we increase the number of raid-devices as follows:
# mdadm --grow /dev/md0 --raid-devices=3
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdd[3] sdc[1] sdb[0]
      2097088 blocks [3/2] [UU_]
      [=======>.............]  recovery = 38.8% (815808/2097088) finish=0.9min speed=21559K/sec

unused devices: <none>
Observe the output of the cat command: the array is performing a recovery. This is the resynchronization activity that creates an exact mirror of the data on /dev/sdd. It will take some time to complete.
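To follow the rebuild progress live instead of re-running cat by hand, the watch utility can refresh the status every few seconds; a sketch:

# watch -n 2 cat /proc/mdstat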
Remove a Disk from RAID Array:
Our RAID array now has three disks and is still running at level 1. Let's remove the disk /dev/sdd and replace it with a new one, /dev/sde. To do so, we first have to mark the device as failed.
# mdadm --manage /dev/md0 --fail /dev/sdd
mdadm: set /dev/sdd faulty in /dev/md0
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdd[3](F) sdc[1] sdb[0]
      2097088 blocks [3/2] [UU_]

unused devices: <none>
Observe the output of the cat command: the disk /dev/sdd is now marked as faulty (indicated by the (F) after sdd[3] in the output above). To remove this disk from the array, use the following command.
# mdadm --manage /dev/md0 --remove /dev/sdd
mdadm: hot removed /dev/sdd
# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Sun Jan 15 09:33:01 2012
     Raid Level : raid1
     Array Size : 2097088 (2048.28 MiB 2147.42 MB)
    Device Size : 2097088 (2048.28 MiB 2147.42 MB)
   Raid Devices : 3
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sun Jan 15 10:49:44 2012
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 3c18230e:40c11f0f:fdcec7f4:d575f031
         Events : 0.12

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       0        0        2      removed
The last line of the above output shows that RaidDevice 2 has been removed.
To add the new device /dev/sde, initialize it as before and then add it to the RAID array, as shown below.
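Assuming /dev/sde has been partitioned in the same way as the earlier disks, the add command follows the same pattern used for /dev/sdd:

# mdadm --manage /dev/md0 --add /dev/sde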
Don’t forget to update the /etc/mdadm.conf file, or your changes will be lost after a reboot.
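The same command used earlier regenerates the file with the current array layout:

# mdadm --detail --scan > /etc/mdadm.conf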
Remove Linux RAID 1 Configurations:
Finally, we will show you how to remove the RAID configuration from your system. The steps below simply reverse the configuration we have done so far and should not need any further clarification.
# cd /
# umount /u01
# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
# rm -rf /etc/mdadm.conf
# rmdir /u01
Also remove the /dev/md0 entry from /etc/fstab.
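If the member disks will be reused for other purposes, you may also want to clear the RAID superblock from each disk that was part of the array, so the kernel does not try to auto-assemble them again; a sketch (adjust the device list to the disks you actually used):

# mdadm --zero-superblock /dev/sdb /dev/sdc /dev/sdd /dev/sde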
In the above write-up, we used RAID 1 as the example because its architecture is relatively simple to understand and experiment with compared to other levels. We hope that after going through this write-up you will be able to configure more complex RAID levels such as 5 and 6.
Conclusion:
In this tutorial, you have learned how to configure Linux RAID 1.