

Setup Software Raid 1 with LVM on Linux


Written by Jon Berg <jon.berg|a|turtlemeat.com>

Created: May 2010

Introduction

The following text describes how to set up software RAID 1 with LVM on Linux. Software means that the RAID (redundant array of independent disks, or redundant array of inexpensive disks) is done in software instead of on a hardware disk controller. RAID 1 basically means that the data is mirrored on multiple disks. So it is a tool to get fault tolerance in case of a hard drive crash, and some better availability, since you have the option of continuing to run the system with just the remaining disk. LVM (Logical Volume Manager) is a tool to dynamically manage partitions. The particular Linux distribution used is Arch Linux. This is made as a personal reference that could be useful for others. So RAID 1 is good to protect you from some of the problems with disk failures, but you still need to make off-site backups. It does not protect you from fire, floods, burglars or American hellfire missiles coming in through your windows.

Please feel free to email me if you find errors or omissions in the text. But please don't send emails asking about stuff that does not work on your system and how I can fix it. I am just a casual Linux user, and people in Linux forums are more likely to be able to help you with particular problems.

Partition disks (deletes data on the disks)

Insert the hard drives in the machine.
Create partitions. Here /dev/sdb and /dev/sdc are the disks
we want to make into the raid 1 array; all
the data on them will be lost:

fdisk /dev/sdc

 -delete partitions
  (if this is an existing partition with data, the data on it will be deleted):
  d 1

 -create new partition (type in the sizes etc. you want):
  n

 -set type to Linux raid autodetect (type t, then the hex code fd):
  t , fd

 -write the changes to disk:
  w

do the same to the other disk:
fdisk /dev/sdb

Create the raid 1 device

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

it has to sync; to see the progress:
cat /proc/mdstat
you can continue with the steps below before the process is complete, but don't reboot before it is done.

create file system on raid device:
mkfs.ext3 /dev/md0

add to /etc/mdadm.conf (copy the original file first, then append the output of mdadm --detail --scan):
cp /etc/mdadm.conf /etc/mdadm.conf.org
mdadm --detail --scan >> /etc/mdadm.conf

- add email monitoring, to /etc/mdadm.conf:
MAILADDR youremail@gmail.com 

-to /etc/rc.conf, add mdadm inside the daemons array (inside the parentheses of DAEMONS=() ). This will start the
daemon that checks the raid array periodically and emails you if something happens.
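For example, the daemons line could then look like this (syslog-ng, network and crond are just placeholders; keep whatever your rc.conf already lists and append mdadm):

```shell
# /etc/rc.conf -- example DAEMONS array with mdadm appended
DAEMONS=(syslog-ng network crond mdadm)
echo "${DAEMONS[@]}"   # prints: syslog-ng network crond mdadm
```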

make a directory to mount it in:
mkdir /raid

edit /etc/fstab:
     /dev/md0  /raid ext3 defaults 0 2

reboot and see if everything loads and mounts correctly.


Mark one partition in the raid array as faulty to see if you get the notification.
(adjust /dev/md0 /dev/sdc1 to match your system!)
mdadm --manage --set-faulty /dev/md0 /dev/sdc1

See that one has failed [F] with:
cat /proc/mdstat

Try to read and write some files that are stored on the array.
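The [F] check can also be done from a script, for example from cron. A small sketch; it reads a saved sample here so it runs anywhere, but on a real system you would grep /proc/mdstat directly:

```shell
# Check mdstat output for a failed member "(F)" or a missing mirror
# half (e.g. "[U_]"). A saved sample is used so the sketch is
# self-contained; point grep at /proc/mdstat for real.
cat > mdstat.sample <<'EOF'
Personalities : [raid1]
md0 : active raid1 sdc1[1](F) sdb1[0]
      976630336 blocks [2/1] [U_]
EOF
if grep -Eq '\(F\)|\[U*_+U*\]' mdstat.sample; then
    status="RAID degraded"
else
    status="RAID healthy"
fi
echo "$status"   # prints: RAID degraded
```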

Put it back together:
 - it must first be removed: remove /dev/sdc1 from the array:
  mdadm /dev/md0 -r /dev/sdc1
 - then it can be added back in:
  mdadm /dev/md0 -a /dev/sdc1

See that it gets added back with:
cat /proc/mdstat


When a disk in the RAID 1 array stops working for real:
I _believe_ this is the way to do it:
- first remove the failed disk from the array (where /dev/sdc1 is the failed partition):
mdadm /dev/md0 -r /dev/sdc1

Turn off the machine and insert a new disk. Start the machine again.
Create a new partition that is equal in size to (or bigger than, though the extra
space will be unused) the partition currently in the raid array.

This is done with (where /dev/sdd is the new disk you inserted into the machine):
fdisk /dev/sdd

 -delete partitions (if you have old partitions you want to delete; this deletes any data you might
  have on them):
  d 1

 -create new partition (n, then press enter a couple of times if you want to use the default
  values for the sizes, otherwise fill in the values you want):
  n

 -set type to Linux raid autodetect (type t, then the hex code fd):
  t , fd

 -write the changes to disk:
  w

- then add the new partition into the array (where /dev/sdd1 is the new partition you created):
mdadm /dev/md0 -a /dev/sdd1

See that it gets reconstructed with:
cat /proc/mdstat

LVM on top of RAID 1

By now we have a functioning raid 1, and we could stop there.

But we want to use LVM to create a partition on top of the raid 1 that we can resize etc. with LVM.
So let's create a 40 GB partition on top of the raid 1 array.

unmount the raid1 array to modify it:
umount /raid

create a physical volume on the raid device:
pvcreate /dev/md0

create a volume group:
vgcreate volumegroup1 /dev/md0

create logical volume:
lvcreate --name logicalvolume1 --size 40G volumegroup1

so now you should have the device node: /dev/volumegroup1/logicalvolume1

You should make a habit of backing up /etc/lvm/backup each time you make changes to the LVM volumes; send
the files in /etc/lvm/backup to your gmail account
or somewhere else that is off your system. This will make it safer when it is time to restore
the volumes, see the section below on restoring.
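A sketch of such a backup: archive /etc/lvm/backup with a date stamp so it is ready to mail or scp off the machine. Here backuphost is a placeholder, and the /tmp fallback directory is only there so the sketch also runs on a machine without LVM metadata:

```shell
SRC=/etc/lvm/backup
# Fallback so the sketch runs even on a machine without LVM metadata.
[ -d "$SRC" ] || { SRC=/tmp/lvm-backup-demo; mkdir -p "$SRC"; echo demo > "$SRC/volumegroup1"; }
OUT=/tmp/lvm-metadata-$(date +%F).tar.gz
# Archive the whole backup directory with a date-stamped name.
tar czf "$OUT" -C "$(dirname "$SRC")" "$(basename "$SRC")"
echo "wrote $OUT"
# scp "$OUT" you@backuphost:lvm-backups/   # backuphost is a placeholder
```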

format the logical volume (-m 0 means 0 percent reserved for super user):
mkfs.ext3 -m 0 /dev/volumegroup1/logicalvolume1

make a directory to mount it in:
mkdir /mnt/logicalvolume1

mount it:
mount /dev/mapper/volumegroup1-logicalvolume1 /mnt/logicalvolume1/

have it mount on boot, put in /etc/fstab :
/dev/mapper/volumegroup1-logicalvolume1 /mnt/logicalvolume1 ext3 rw,noatime 0 0

if you followed the raid part of this text you put "/dev/md0 /raid ext3 defaults 0 2" into /etc/fstab,
but you need to remove that line now that you use LVM.

I also have USELVM="yes" in /etc/rc.conf so Arch activates the LVM volumes at boot.

see if it mounts correctly at boot, and use the test steps above to verify that it works when you take one partition down.

Moving the disks with RAID 1 and LVM to another machine

How do you restore this LVM stuff say after plugging the disks into another machine?
You need to have a safe copy of the stuff in /etc/lvm/backup or you are in a bit of trouble.
There is a way to try to get it from the disk, but the disk can contain multiple versions and you 
need to figure out which one is the correct one.

-plug in the disks into the machine, and boot it, become root.

-load raid module
modprobe md-mod 

-get the raid array running
mdadm --assemble --scan
this gives me: mdadm: /dev/md/myhostname:0 has been started with 2 drives.

-check that it is running smoothly with:
cat /proc/mdstat

I suppose the device "/dev/md/myhostname:0", or whatever "fdisk -l" shows the raid device to be, can now be
mounted if you just have a plain filesystem on top of the raid and no LVM.

to have it load on boot:
mdadm --detail --scan >> /etc/mdadm.conf 

- add email monitoring, to /etc/mdadm.conf:
MAILADDR youremail@gmail.com 

-to /etc/rc.conf, add mdadm inside the daemons array (inside the parentheses of DAEMONS=() ). This will start the
daemon that checks the raid array periodically and emails you if something happens.

-load lvm module:
modprobe dm-mod

So the LVM configuration can not be retrieved as easily as the raid config with "mdadm". The best thing is to
have the config file from your original setup: /etc/lvm/backup/volumegroup1 (where volumegroup1 was the name
chosen in this example; you may have named it something else).
So this file must be backed up every time you change anything related to LVM.

If you don't have this file, there is a bit of a hacky way to get it:
LVM stores one or more copies of the configuration file content at the beginning of the partition. Use the
command dd to extract the first part of the partition and write it to a text file:
dd if=/dev/md127 bs=512 count=255 skip=1 of=/tmp/md127.txt
(my raid device is now /dev/md127 for some reason...)
Open /tmp/md127.txt and find the last configuration of volumegroup1. There are some dates in there,
but you have to make educated guesses, and you need to extract everything that belongs to volumegroup1,
that is the text "volumegroup1" and everything inside the {}.
Then put that in a config file: /etc/lvm/config/volumegroup1
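The dd trick can be tried out safely on a plain file first. A sketch where fake.img stands in for the raid device; only the skip mechanics and the text you would look for are demonstrated:

```shell
# 512 bytes of padding first, like the block that skip=1 jumps over
# on a real device.
dd if=/dev/zero of=fake.img bs=512 count=1 2>/dev/null
# Then something resembling LVM metadata text.
printf 'volumegroup1 {\nid = "demo-uuid"\n}\n' >> fake.img
# Same shape as the real command above: skip the first 512-byte block.
dd if=fake.img bs=512 skip=1 of=meta.txt 2>/dev/null
grep -c 'volumegroup1 {' meta.txt   # prints: 1
```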

To get the LVM stuff going, scan for the volume groups and activate them:

vgscan
vgchange -a y

now you should be able to mount it:
mkdir /mnt/logicalvolume1
mount /dev/volumegroup1/logicalvolume1 /mnt/logicalvolume1

-have in /etc/rc.conf (so Arch activates LVM at boot):
USELVM="yes"

-edit /etc/fstab to have it load at boot and add:
/dev/mapper/volumegroup1-logicalvolume1 /mnt/logicalvolume1 ext3 rw,noatime 0 0

Now it should be working... try to reboot and see that it comes up with the stuff mounted.

Turtlemeat.com 2004-2011 ©