Installation And Setup Guide For DRBD, OpenAIS, Pacemaker + Xen On OpenSUSE 11.1

Written by Adam Gandelman

The following will install and configure DRBD, OpenAIS, Pacemaker and Xen on OpenSUSE 11.1 to provide highly-available virtual machines. This setup does not utilize Xen's live migration capabilities. Instead, VMs will be started on the secondary node as soon as failure of the primary is detected. Xen virtual disk images are replicated between nodes using DRBD, and all services on the cluster are managed by OpenAIS and Pacemaker. The following setup utilizes DRBD 8.3.2 and Pacemaker 1.0.4. It is important to note that DRBD 8.3.2 has come a long way since previous versions in terms of compatibility with Pacemaker; in particular, it adds a new DRBD OCF resource agent script and new DRBD-level resource fencing features. This configuration will not work with older releases of DRBD.

This document does not cover the configuration of Xen virtual machines. Instead, it is assumed you have a working virtual machine configured locally with a file-based disk image. As an example, our domU resource will manage a Debian virtual machine configured in debian.cfg.
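
As a point of reference, a minimal file-backed domU configuration for such a guest might look like the following sketch. The memory size, bootloader path, and device names are illustrative assumptions, not values from the original setup:

# /etc/xen/vm/debian.cfg -- illustrative sketch, adjust to your guest
name       = "debian"
memory     = 512                                        # MB, assumed
bootloader = "/usr/bin/pygrub"                          # assumes a PV guest booted via pygrub
disk       = [ 'file:/etc/xen/vm/debian.img,xvda,w' ]
vif        = [ 'bridge=br0' ]                           # br0 is created during Xen setup below
on_crash   = 'restart'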

Visit these links for more information on any of these components as well as additional documentation:

DRBD - http://www.drbd.org
Pacemaker - http://www.clusterlabs.org
OpenAIS - http://www.openais.org

Contents:

1. Install Xen
2. Install and Configure DRBD
3. Install and Configure OpenAIS + Pacemaker
4. Configure DRBD Master/Slave Resource
5. Configure File System Resource
6. Configure domU Resource
7. Additional Information

 

1. Install Xen

The easiest way to install Xen and its prerequisites is through the yast command line tool:

# yast

Choose 'Virtualization' -> 'Install Hypervisor and tools'. If you're working on a remote server you may need to answer 'No' when asked about installing graphical components. Select 'Yes' when prompted about Xen Network Bridge.

Select 'System' -> 'Boot Loader' and set the Xen kernel as the default kernel.

Reboot.

At this point, the Xen kernel should be booted and a network interface br0 should be configured as a bridge to eth0.
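
You can verify both with, for example:

# xm list
# brctl show

xm list should report Domain-0 running, and brctl show should list br0 with eth0 as an attached interface.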

 

2. Install and Configure DRBD

Compile and install on both nodes:

# cd /usr/src
# wget http://oss.linbit.com/drbd/8.3/drbd-8.3.2.tar.gz
# tar -zxf drbd-8.3.2.tar.gz
# cd drbd-8.3.2/
# make clean all
# make install
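
The init script will load the kernel module automatically, but you can load it by hand and confirm that the same build is present on both nodes:

# modprobe drbd
# cat /proc/drbd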

Edit /etc/drbd.conf:

global {
        usage-count no;
}
common {
        protocol C;
}
resource r0 {
  disk {
        fencing resource-only;
  }
  handlers {
        # these handlers are necessary for drbd 8.3 + pacemaker compatibility
        fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
  }
  syncer {
        rate 40M;
  }
  on alpha {
        device  /dev/drbd0;
        disk    /dev/sdb1;
        address 192.168.10.22:7789;
        meta-disk       internal;
  }
  on bravo {
        device  /dev/drbd0;
        disk    /dev/sdb1;
        address 192.168.10.23:7789;
        meta-disk       internal;
  }
}

Copy to other node:

alpha:~ # scp /etc/drbd.conf root@192.168.10.23:/etc/drbd.conf
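
Optionally, verify that the configuration parses cleanly on both nodes before continuing; drbdadm dump prints the parsed configuration and reports syntax errors:

alpha:~ # drbdadm dump r0
bravo:~ # drbdadm dump r0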

Create meta-data:

alpha:~ # drbdadm create-md r0
bravo:~ # drbdadm create-md r0

Start DRBD:

alpha:~ # /etc/init.d/drbd start
Starting DRBD resources: [ d(r0) s(r0) n(r0) ]..
bravo:~ # /etc/init.d/drbd start
Starting DRBD resources: [ d(r0) s(r0) n(r0) ].

After DRBD has started and connected, look at /proc/drbd on either node to get the status of the resource. At this stage both nodes should show cs:Connected, but remain Secondary and Inconsistent because the initial sync has not yet taken place:

# cat /proc/drbd

GIT-hash: dd7985327f146f33b86d4bff5ca8c94234ce840e build by [email protected], 2009-07-31 14:27:05
0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r----
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:3156604

Sync resource:

alpha:~ # drbdadm -- --overwrite-data-of-peer primary r0

As of DRBD 8.3.2, a new feature has been added to skip the initial sync if desired:

NOTE: This is only intended for disks that are either blank or have the exact same data.

alpha:~ # drbdadm -- --clear-bitmap new-current-uuid r0
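
If you perform the full initial sync instead, its progress can be followed in /proc/drbd, for example:

alpha:~ # watch -n1 cat /proc/drbd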

 

3. Install and Configure OpenAIS + Pacemaker

Prerequisites:

zypper install tcl-devel ncurses-devel tcl

Obtain and install the latest versions of the HA utilities:

wget http://download.opensuse.org/repositories/openSUSE:/11.1/standard/i586/OpenIPMI-2.0.14-1.35.i586.rpm
wget http://download.opensuse.org/repositories/server:/ha-clustering/openSUSE_11.1/i586/heartbeat-common-2.99.2-8.1.i586.rpm
wget http://download.opensuse.org/repositories/server:/ha-clustering/openSUSE_11.1/i586/heartbeat-2.99.2-8.1.i586.rpm
wget http://download.opensuse.org/repositories/server:/ha-clustering/openSUSE_11.1/i586/heartbeat-resources-2.99.2-8.1.i586.rpm
wget http://download.opensuse.org/repositories/server:/ha-clustering/openSUSE_11.1/i586/libheartbeat2-2.99.2-8.1.i586.rpm
wget http://download.opensuse.org/repositories/server:/ha-clustering/openSUSE_11.1/i586/libopenais2-0.80.5-13.1.i586.rpm
wget http://download.opensuse.org/repositories/server:/ha-clustering/openSUSE_11.1/i586/libpacemaker3-1.0.4-24.1.i586.rpm
wget http://download.opensuse.org/repositories/server:/ha-clustering/openSUSE_11.1/i586/openais-0.80.5-13.1.i586.rpm
wget http://download.opensuse.org/repositories/server:/ha-clustering/openSUSE_11.1/i586/pacemaker-1.0.4-24.1.i586.rpm
rpm -ivh *.rpm
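
Confirm that the expected versions were installed:

# rpm -q pacemaker openais heartbeat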

Create AIS key:

alpha:~ # ais-keygen
bravo:~ # ais-keygen

Edit /etc/ais/openais.conf. Note that bindnetaddr must be the network address of the interface the cluster communicates on; here 192.168.10.0 matches the nodes' 192.168.10.x addresses:

aisexec {
        user:   root
        group:  root
}
service {
        name: pacemaker
        ver:  0
}
totem {
        version: 2
        token:          1000
        hold: 180
        token_retransmits_before_loss_const: 20
        join:           60
        consensus:      4800
        vsftype:        none
        max_messages:   20
        clear_node_high_bit: yes
        secauth: off
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.10.0
                mcastaddr: 226.94.1.2
                mcastport: 5406
        }
}
logging {
        debug: off
        fileline: off
        to_syslog: yes
        to_stderr: no
        syslog_facility: daemon
        timestamp: on
}
amf {
        mode: disabled
}

Copy to other node:

alpha:~ # scp /etc/ais/openais.conf root@192.168.10.23:/etc/ais/openais.conf

Start OpenAIS:

alpha:~ # /etc/init.d/openais start
bravo:~ # /etc/init.d/openais start
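
Give the nodes a moment to form the cluster, then run crm_mon in one-shot mode; both nodes should be reported online:

alpha:~ # crm_mon -1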

Configure the default cluster options:

alpha:~ # crm
crm(live)# configure
crm(live)configure# property no-quorum-policy=ignore
crm(live)configure# property stonith-enabled=false
crm(live)configure# property default-resource-stickiness=1000
crm(live)configure# commit
crm(live)configure# bye

A two-node cluster should not be concerned with quorum. STONITH is disabled in this configuration, though it is highly recommended in any production environment to eliminate the risk of divergent data. A default resource stickiness of 1000 will keep resources where they are after a fail-over and prevent them from returning to a failed node once it comes back online.
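
The committed options can be reviewed at any time:

alpha:~ # crm configure show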

 

4. Configure DRBD Master/Slave Resource

alpha:~ # crm configure
crm(live)configure# primitive drbd_xen ocf:linbit:drbd \
params drbd_resource="r0" \
op monitor interval="15s"
crm(live)configure# ms ms_drbd_xen drbd_xen \
meta master-max="1" master-node-max="1" \
clone-max="2" clone-node-max="1" \
notify="true"
crm(live)configure# commit
crm(live)configure# bye

At this point, Pacemaker is handling the DRBD resource r0. Check crm_mon to make sure; one node should now hold the Master role, and /proc/drbd on that node should report ro:Primary/Secondary.

 

5. Configure File System Resource

For this setup there will be one file system resource that runs on the DRBD master, and virtual machine resources that run on whichever node the file system is mounted on.

Create file system and mount points:

[root@alpha ~]# mkfs.ext3 /dev/drbd0
[root@alpha ~]# mkdir /xen
[root@bravo ~]# mkdir /xen

Note: you must run the mkfs command on whichever node is currently the Master/Primary.

Copy the existing virtual machine configuration file and disk image to the shared storage:

[root@alpha ~]# mount /dev/drbd0 /xen
[root@alpha ~]# cp /etc/xen/vm/debian.cfg /xen
[root@alpha ~]# cp /etc/xen/vm/debian.img /xen
[root@alpha ~]# umount /xen

Note: Do not forget to update debian.cfg to point to the new location of the disk image.
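
Assuming a disk line like the one in the sketch shown in the introduction, the updated line would read:

disk = [ 'file:/xen/debian.img,xvda,w' ]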

Configure file system resource, constrain it to run with and after DRBD:

[root@alpha ~]# crm configure
crm(live)configure# primitive xen_fs ocf:heartbeat:Filesystem \
params device="/dev/drbd0" directory="/xen"
crm(live)configure# colocation fs_on_drbd inf: xen_fs ms_drbd_xen:Master
crm(live)configure# order fs_after_drbd inf: ms_drbd_xen:promote xen_fs:start
crm(live)configure# commit
crm(live)configure# bye
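
Once committed, the file system should be mounted on the DRBD master, which you can confirm with, for example:

[root@alpha ~]# mount | grep /xen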

 

6. Configure domU Resource

domUs will be configured to use virtual disk images that are stored on the DRBD resource mounted at /xen. It is not required, but it is a good idea to also store domU configuration files on the shared resource.

Configure Xen domU resource and constrain it to run with and after xen_fs:

[root@alpha ~]# crm configure
crm(live)configure# primitive debian ocf:heartbeat:Xen \
params xmfile="/xen/debian.cfg" \
op monitor interval="10s" \
op start interval="0s" timeout="30s" \
op stop interval="0s" timeout="300s"
crm(live)configure# colocation debian-with-xen_fs inf: debian xen_fs
crm(live)configure# order debian-after-xen_fs inf: xen_fs:start debian:start
crm(live)configure# commit

The file system and domU resource should now be running on whichever node is the DRBD primary:

Online: [ alpha bravo ]
Master/Slave Set: ms_drbd_xen
       Masters: [ alpha ]
       Slaves: [ bravo ]
xen_fs  (ocf::heartbeat:Filesystem):    Started alpha
debian  (ocf::heartbeat:Xen):   Started alpha                

The Debian virtual machine as well as its backing storage are now configured for full redundancy and high availability. Should host alpha fail, services will automatically fail over to bravo.

This configuration can be expanded to include any number of virtual machines, provided they adhere to the storage and memory constraints of the environment. To do so, simply repeat section 6 (Configure domU Resource) for each domU, as sketched below.
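
As a sketch, a second, hypothetical guest whose configuration is stored at /xen/ubuntu.cfg would be added with exactly the same pattern; the resource name and file are illustrative:

[root@alpha ~]# crm configure
crm(live)configure# primitive ubuntu ocf:heartbeat:Xen \
params xmfile="/xen/ubuntu.cfg" \
op monitor interval="10s" \
op start interval="0s" timeout="30s" \
op stop interval="0s" timeout="300s"
crm(live)configure# colocation ubuntu-with-xen_fs inf: ubuntu xen_fs
crm(live)configure# order ubuntu-after-xen_fs inf: xen_fs:start ubuntu:start
crm(live)configure# commit
crm(live)configure# bye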

 

7. Additional Information

LINBIT has led the way in high availability since 2001, and continues to be the market leader in business uptime, disaster recovery, and continuity solutions. Built on a solid base of Austrian software engineering and open-source technology, DRBD is the industry standard for high availability and data redundancy for mission-critical systems.

For more information on how LINBIT can evolve your IT infrastructure, call 1-877-4-LINBIT, visit http://www.linbit.com, or join us on irc.freenode.net in #DRBD.
