Proxmox VE 2.x With Software RAID


Proxmox Virtual Environment is an easy-to-use, open-source virtualization platform for running virtual appliances and virtual machines. Proxmox does not officially support software RAID, but I have found software RAID to be very stable, and in some cases I have had better luck with it than with hardware RAID.

I do not issue any guarantee that this will work for you!

 

Overview

First, install Proxmox VE 2.x the normal way with the CD downloaded from Proxmox. Next we create a RAID 1 array on the second hard drive and move the Proxmox install to it.

Then we adjust the GRUB settings so it will boot with the new setup.

 

Credits

The following tutorials are what I used:

https://www.howtoforge.com/how-to-set-up-software-raid1-on-a-running-system-incl-grub2-configuration-debian-squeeze

A special thank you to Falko from HowtoForge, as a lot of this material is re-used from his how-to. https://www.howtoforge.com/linux_lvm

 

Installing Proxmox

Install Proxmox from the latest CD image downloaded from Proxmox: http://www.proxmox.com/downloads/proxmox-ve/17-iso-images

If you want an ext4 install, type this in at the boot prompt:

linux ext4

Installation instructions here: http://pve.proxmox.com/wiki/Quick_installation

Next, log in with SSH and run:

apt-get update
apt-get upgrade

 

Installing RAID

Note: this tutorial assumes that Proxmox is installed on /dev/sda and the spare disk is /dev/sdb. Use the following command to list the current partitioning:

fdisk -l

The output should look as follows:

root@server:/# fdisk -l

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0009f7a7

Device Boot Start End Blocks Id System
/dev/sda1 * 1 66 523264 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 66 121602 976237568 8e Linux LVM

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00078af8

Device Boot Start End Blocks Id System

There is more output here, but we are only concerned with the first two disks for now. We can see that /dev/sda holds the Proxmox install and /dev/sdb has no partitions.

First we install the software RAID tools (mdadm):

apt-get install mdadm

In the package configuration window choose OK, then all. Next we load the kernel modules with modprobe:

modprobe linear
modprobe raid0
modprobe raid1
modprobe raid5
modprobe raid6
modprobe raid10
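
If you also want these modules loaded automatically at every boot (optional; once mdadm is installed, its initramfs hook normally takes care of the modules needed for boot), you can append them to /etc/modules, for example:

echo raid1 >> /etc/modules

Repeat for any of the other modules you need.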

Now run:

cat /proc/mdstat

The output should look as follows:

root@server:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
unused devices: <none>
root@server:~#

Now we need to copy the partition table from sda to sdb:

sfdisk -d /dev/sda | sfdisk --force /dev/sdb

The output should be:

root@server:/# sfdisk -d /dev/sda | sfdisk --force /dev/sdb
Checking that no-one is using this disk right now ...
OK

Disk /dev/sdb: 121601 cylinders, 255 heads, 63 sectors/track
Old situation:
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

Device Boot Start End #cyls #blocks Id System
/dev/sdb1 0 - 0 0 0 Empty
/dev/sdb2 0 - 0 0 0 Empty
/dev/sdb3 0 - 0 0 0 Empty
/dev/sdb4 0 - 0 0 0 Empty
New situation:
Units = sectors of 512 bytes, counting from 0

Device Boot Start End #sectors Id System
/dev/sdb1 * 2048 1048575 1046528 83 Linux
/dev/sdb2 1048576 1953523711 1952475136 8e Linux LVM
/dev/sdb3 0 - 0 0 Empty
/dev/sdb4 0 - 0 0 Empty
Warning: partition 1 does not end at a cylinder boundary
Successfully wrote the new partition table

Re-reading the partition table ...

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes: dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
root@server:/#
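
To double-check that the copy worked, you can list the new partition table on /dev/sdb; the two partitions should now mirror those on /dev/sda:

fdisk -l /dev/sdb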

Now we need to change the partition types to Linux raid autodetect:

fdisk /dev/sdb

root@server:/# fdisk /dev/sdb

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').

Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help): p
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00078af8

Device Boot Start End Blocks Id System
/dev/sdb1 * 1 66 523264 fd Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sdb2 66 121602 976237568 fd Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

As we can see, we now have two Linux raid autodetect partitions on /dev/sdb.

To make sure that there are no remains from previous RAID installations on /dev/sdb, we run the following commands:

mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb2

If there are no remains from previous RAID installations, each of the above commands will throw an error like this one (which is nothing to worry about):

root@server:~# mdadm --zero-superblock /dev/sdb1
mdadm: Unrecognised md component device - /dev/sdb1
root@server:~#

Otherwise the commands will not display anything at all.
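
If you want to be certain that a partition carries no md superblock before re-using it, you can also examine it directly (this should report that no md superblock was detected):

mdadm --examine /dev/sdb1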

Now we need to create our new raid arrays:

mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb2

This will show the following (answer yes):

root@server:/# mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
root@server:/#
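
This how-to proceeds with the default v1.2 metadata. If your boot loader does not understand md/v1.x metadata, you could instead create the arrays with the option mdadm itself suggests in the warning above, for example:

mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-disks=2 missing /dev/sdb1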

The command

cat /proc/mdstat

should now show that you have two degraded RAID arrays ([_U] or [U_] means that an array is degraded, while [UU] means that the array is ok):

root@server:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active (auto-read-only) raid1 sdb1[1]
523252 blocks super 1.2 [2/1] [_U]

md1 : active (auto-read-only) raid1 sdb2[1]
976236408 blocks super 1.2 [2/1] [_U]

unused devices: <none>
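
For a more detailed view of an array's state, including which slot is still missing, you can also run:

mdadm --detail /dev/md0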

Next we must adjust /etc/mdadm/mdadm.conf (which doesn't contain any information about our new RAID arrays yet) to the new situation:

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
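
Because the initramfs keeps its own copy of mdadm.conf, it is usually a good idea to regenerate it after changing the file (a reminder here rather than part of the original how-to at this point; this uses the standard Debian tooling):

update-initramfs -u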

The standard Proxmox install uses /dev/sda1 for the boot partition and LVM on /dev/sda2 for the root, swap, and data volumes.

If you are new to LVM, I recommend you check out the link under Credits at the top of this how-to. To see the logical volumes, use the command:

lvscan

That should output:

root@server:~# lvscan
ACTIVE '/dev/pve/swap' [15.00 GiB] inherit
ACTIVE '/dev/pve/root' [96.00 GiB] inherit
ACTIVE '/dev/pve/data' [804.02 GiB] inherit

Now we will create a new volume group named pve1 and matching logical volumes for swap, root, and data.

First the physical volume:

pvcreate /dev/md1

This outputs:

Writing physical volume data to disk "/dev/md1"
Physical volume "/dev/md1" successfully created

This command:

pvscan

shows our new physical volume:

PV /dev/sda2 VG pve lvm2 [931.01 GiB / 16.00 GiB free]
PV /dev/md1 lvm2 [931.01 GiB]
Total: 2 [1.82 TiB] / in use: 1 [931.01 GiB] / in no VG: 1 [931.01 GiB]
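
As a preview of the next steps, creating the new pve1 volume group and its logical volumes will look roughly like this (a sketch only; the sizes come from the lvscan output above and must match your own setup):

vgcreate pve1 /dev/md1
lvcreate --name swap --size 15G pve1
lvcreate --name root --size 96G pve1
lvcreate --name data --size 804G pve1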
