Configure NFS Server Clustering with Pacemaker on CentOS 7 / RHEL 7

Channel: Linux
Abstract: In this article we will discuss how to configure NFS server high availability clustering (active-passive) with Pacemaker on CentOS 7 / RHEL 7, using two NFS nodes, a floating VIP and a shared fencing disk.

NFS (Network File System) is the most widely used solution for serving files over a network. With an NFS server we can share folders over the network, and the allowed clients or systems can access those shared folders and use them in their applications. When it comes to a production environment, we should configure the NFS server in high availability to rule out any single point of failure.

In this article we will discuss how we can configure NFS server high availability clustering (active-passive) with Pacemaker on CentOS 7 or RHEL 7.

Following are the lab details that I have used for this article,

  • NFS Server 1 (nfs1.example.com) – 192.168.1.40 – Minimal CentOS 7 / RHEL 7
  • NFS Server 2 (nfs2.example.com) – 192.168.1.50 – Minimal CentOS 7 / RHEL 7
  • NFS Server VIP – 192.168.1.51
  • Firewall enabled
  • SELinux enabled

Refer to the below steps to configure NFS Server active-passive clustering on CentOS 7 / RHEL 7.

Step 1) Set Host name on both nfs servers and update /etc/hosts file

Login to both nfs servers and set the hostnames as "nfs1.example.com" and "nfs2.example.com" respectively using the hostnamectl command. An example is shown below:

~]# hostnamectl set-hostname "nfs1.example.com"
~]# exec bash

Update the /etc/hosts file on both nfs servers,

192.168.1.40  nfs1.example.com
192.168.1.50  nfs2.example.com
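
Once the entries are in place, you can quickly confirm that each node resolves the other. This is just a sanity check; getent reads the entries straight from /etc/hosts:

~]# getent hosts nfs1.example.com nfs2.example.com
~]# ping -c 2 nfs2.example.com
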
Step 2) Update both nfs servers and install pcs packages

Use the below ‘yum update’ command to apply all the updates on both nfs servers and then reboot once.

~]# yum update && reboot

Install the pcs and fence-agents packages on both nfs servers,

[root@nfs1 ~]# yum install -y pcs fence-agents-all
[root@nfs2 ~]# yum install -y pcs fence-agents-all

Once the pcs and fencing agent packages are installed, allow the pcs related ports in the OS firewall on both nfs servers,

~]# firewall-cmd --permanent --add-service=high-availability
~]# firewall-cmd --reload
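
If you want to double-check that the rule is active, you can list the allowed services; "high-availability" should appear in the output on both nodes:

~]# firewall-cmd --list-services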

Now start and enable the pcsd service on both nfs nodes using the below commands,

~]# systemctl enable pcsd
~]# systemctl start pcsd

Step 3) Authenticate nfs nodes and form a cluster

Set the password for the hacluster user; the pcsd service will use this user to get the cluster nodes authenticated. So let's first set the password for the hacluster user on both nodes,

[root@nfs1 ~]# echo "enter_password" | passwd --stdin hacluster
[root@nfs2 ~]# echo "enter_password" | passwd --stdin hacluster

Now authenticate the cluster nodes. In our case, nfs2.example.com will be authenticated from nfs1.example.com; run the below pcs cluster command on "nfs1",

[root@nfs1 ~]# pcs cluster auth nfs1.example.com nfs2.example.com
Username: hacluster
Password:
nfs1.example.com: Authorized
nfs2.example.com: Authorized
[root@nfs1 ~]#

Now it's time to form a cluster with the name "nfs_cluster" and add both nfs nodes to it. Run the below "pcs cluster setup" command from any nfs node,

[root@nfs1 ~]# pcs cluster setup --start --name nfs_cluster nfs1.example.com \
 nfs2.example.com

Enable the pcs cluster service on both nodes so that the nodes will join the cluster automatically after a reboot. Execute the below command from either of the nfs nodes,

[root@nfs1 ~]# pcs cluster enable --all
nfs1.example.com: Cluster Enabled
nfs2.example.com: Cluster Enabled
[root@nfs1 ~]#

Step 4) Define Fencing device for each cluster node

Fencing is the most important part of a cluster; if any of the nodes goes faulty, the fencing device will remove that node from the cluster. In Pacemaker, fencing is defined using the STONITH (Shoot The Other Node In The Head) resource.

In this tutorial we are using a shared disk of size 1 GB (/dev/sdc) as the fencing device. Let's first find out the id of the /dev/sdc disk,

[root@nfs1 ~]# ls -l /dev/disk/by-id/

Note down the id of disk /dev/sdc, as we will use it in the "pcs stonith" command.
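
For example, the listing can be narrowed down to just the fencing disk like this (the wwn id used in the stonith command below is from my lab setup; yours will differ):

[root@nfs1 ~]# ls -l /dev/disk/by-id/ | grep -i sdc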

Now run the below "pcs stonith" command from either of the nodes to create the fencing device (disk_fencing),

[root@nfs1 ~]# pcs stonith create disk_fencing fence_scsi \
 pcmk_host_list="nfs1.example.com nfs2.example.com" \
 pcmk_monitor_action="metadata" pcmk_reboot_action="off" \
 devices="/dev/disk/by-id/wwn-0x6001405e49919dad5824dc2af5fb3ca0" \
 meta provides="unfencing"
[root@nfs1 ~]#

Verify the status of stonith using the below command,

[root@nfs1 ~]# pcs stonith show
 disk_fencing   (stonith:fence_scsi):   Started nfs1.example.com
[root@nfs1 ~]#

Run the "pcs status" command to view the status of the cluster,

[root@nfs1 ~]# pcs status
Cluster name: nfs_cluster
Stack: corosync
Current DC: nfs2.example.com (version 1.1.16-12.el7_4.7-94ff4df) - partition with quorum
Last updated: Sun Mar  4 03:18:47 2018
Last change: Sun Mar  4 03:16:09 2018 by root via cibadmin on nfs1.example.com

2 nodes configured
1 resource configured
Online: [ nfs1.example.com nfs2.example.com ]
Full list of resources:
 disk_fencing   (stonith:fence_scsi):   Started nfs1.example.com
Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@nfs1 ~]#
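
Optionally, you can verify that fencing really works by fencing the passive node from the active one; note that with fence_scsi this revokes the fenced node's access to the shared disk, so only do this in a test window. A minimal check, assuming nfs2.example.com is currently the passive node:

[root@nfs1 ~]# pcs stonith fence nfs2.example.com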

Note: If your cluster nodes are virtual machines hosted on VMware, then you can use the "fence_vmware_soap" fencing agent. To configure "fence_vmware_soap" as the fencing agent, refer to the below logical steps:

1) Verify whether your cluster nodes can reach the VMware hypervisor or vCenter,

# fence_vmware_soap -a <vCenter_IP_address> -l <user_name> -p <password> \
 --ssl -z -v -o list |egrep "(nfs1.example.com|nfs2.example.com)"
or
# fence_vmware_soap -a <vCenter_IP_address> -l <user_name> -p <password> \
 --ssl -z -o list |egrep "(nfs1.example.com|nfs2.example.com)"

If you are able to see the VM names in the output then it is fine; otherwise you need to check why the cluster nodes are not able to make a connection to the ESXi host or vCenter.

2) Define the fencing device using the below command,

# pcs stonith create vmware_fence fence_vmware_soap \
 pcmk_host_map="node1:nfs1.example.com;node2:nfs2.example.com" \
 ipaddr=<vCenter_IP_address> ssl=1 login=<user_name> passwd=<password>

3) Check the stonith status using the below command,

# pcs stonith show
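
You can also test the agent against a single VM before relying on it; the -n (plug) value below is the VM name as it appears in vCenter and is only an illustrative placeholder:

# fence_vmware_soap -a <vCenter_IP_address> -l <user_name> -p <password> \
 --ssl -z -o status -n <VM_name>
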
Step 5) Install nfs and format nfs shared disk

Install the ‘nfs-utils’ package on both nfs servers,

[root@nfs1 ~]# yum install nfs-utils -y
[root@nfs2 ~]# yum install nfs-utils -y

Stop and disable the local "nfs-lock" service on both nodes, as this service will be controlled by pacemaker,

[root@nfs1 ~]# systemctl stop nfs-lock && systemctl disable nfs-lock
[root@nfs2 ~]# systemctl stop nfs-lock && systemctl disable nfs-lock

Let's assume we have a shared disk "/dev/sdb" of size 10 GB between the two cluster nodes. Create a partition on it and format it as an xfs file system,

[root@nfs1 ~]# fdisk /dev/sdb
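
If you prefer a non-interactive alternative to fdisk, the same single partition can also be created with parted; this is only a sketch and assumes the whole 10 GB disk is dedicated to the NFS share:

[root@nfs1 ~]# parted -s /dev/sdb mklabel msdos
[root@nfs1 ~]# parted -s /dev/sdb mkpart primary xfs 1MiB 100%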

Run the partprobe command on both nodes and reboot once.

~]# partprobe

Now format "/dev/sdb1" as an xfs file system,

[root@nfs1 ~]# mkfs.xfs /dev/sdb1
meta-data=/dev/sdb1              isize=256    agcount=4, agsize=655296 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=2621184, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@nfs1 ~]#

Create a mount point for this file system on both the nodes,

[root@nfs1 ~]# mkdir /nfsshare
[root@nfs2 ~]# mkdir /nfsshare
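
Optionally, you can test-mount the new file system on one node and then unmount it again; from this point onwards the mount will be managed by pacemaker, so do not add it to /etc/fstab:

[root@nfs1 ~]# mount /dev/sdb1 /nfsshare
[root@nfs1 ~]# df -h /nfsshare
[root@nfs1 ~]# umount /nfsshare
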
Step 6) Configure all required NFS resources on Cluster Nodes

Following are the required NFS resources:

  • Filesystem resource
  • nfsserver resource
  • exportfs resource
  • IPaddr2 floating IP address resource

For the Filesystem resource, we need shared storage among the cluster nodes. We have already created a partition on the shared disk (/dev/sdb1) in the above steps, so we will use that partition. Use the below "pcs resource create" command to define the Filesystem resource from any of the nodes,

[root@nfs1 ~]# pcs resource create nfsshare Filesystem device=/dev/sdb1 \
  directory=/nfsshare fstype=xfs --group nfsgrp
[root@nfs1 ~]#

In the above command we have defined the NFS filesystem resource as "nfsshare" under the group "nfsgrp". From now onwards, all nfs resources will be created under the group nfsgrp.

Create the nfsserver resource with the name ‘nfsd’ using the below command,

[root@nfs1 ~]# pcs resource create nfsd nfsserver \
 nfs_shared_infodir=/nfsshare/nfsinfo --group nfsgrp
[root@nfs1 ~]#

Create the exportfs resource with the name "nfsroot",

[root@nfs1 ~]# pcs resource create nfsroot exportfs clientspec="192.168.1.0/24" \
 options=rw,sync,no_root_squash directory=/nfsshare fsid=0 --group nfsgrp
[root@nfs1 ~]#

In the above command, clientspec indicates the clients that are allowed to access the nfsshare.

Create the NFS IPaddr2 (floating IP) resource using the below command,

[root@nfs1 ~]# pcs resource create nfsip IPaddr2 ip=192.168.1.51 \
 cidr_netmask=24 --group nfsgrp
[root@nfs1 ~]#

Now view and verify the cluster using the pcs status command,

[root@nfs1 ~]# pcs status
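
You can also list just the resource group to confirm that all four resources are part of "nfsgrp" and are started on the same node; a quick check:

[root@nfs1 ~]# pcs resource show nfsgrp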

Once you are done with the NFS resources, allow the nfs server ports in the OS firewall on both nfs servers,

~]# firewall-cmd --permanent --add-service=nfs
~]# firewall-cmd --permanent --add-service=mountd
~]# firewall-cmd --permanent --add-service=rpc-bind
~]# firewall-cmd --reload

Step 7) Try Mounting NFS share on Clients

Now try mounting the nfs share on a client using the mount command; an example is shown below,

[root@client ~]# mkdir /mnt/nfsshare
[root@client ~]# mount 192.168.1.51:/ /mnt/nfsshare/
[root@client ~]# df -Th /mnt/nfsshare
Filesystem     Type  Size  Used Avail Use% Mounted on
192.168.1.51:/ nfs4   10G   32M   10G   1% /mnt/nfsshare
[root@client ~]#
[root@client ~]# cd /mnt/nfsshare/
[root@client nfsshare]# ls
nfsinfo
[root@client nfsshare]#
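
If the client should mount the share automatically at boot, an /etc/fstab entry along the following lines can be used; this is only a sketch, so adjust the mount options to your environment:

192.168.1.51:/   /mnt/nfsshare   nfs   defaults,_netdev   0 0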

For cluster testing, stop the cluster service on one of the nodes and see whether the nfsshare is still accessible or not. Let's assume I am going to stop the cluster service on "nfs1.example.com",

[root@nfs1 ~]# pcs cluster stop
Stopping Cluster (pacemaker)...
Stopping Cluster (corosync)...
[root@nfs1 ~]#
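
Before checking the client, you can confirm from the surviving node that the whole resource group has moved over; in my setup all the resources should now show as started on nfs2.example.com:

[root@nfs2 ~]# pcs status resources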

Now go to the client machine and see whether the nfsshare is still accessible. In my case I am still able to access it and able to create files on it,

[root@client nfsshare]# touch test
[root@client nfsshare]#

Now start the cluster service on "nfs1.example.com" again using the below command,

[root@nfs1 ~]# pcs cluster start
Starting Cluster...
[root@nfs1 ~]#

That's all from this article; it confirms that we have successfully configured NFS active-passive clustering using pacemaker. Please do share your feedback and comments in the comments section below.

Read Also: Configure Two Node Squid Cluster using Pacemaker on CentOS 7 / RHEL 7

Ref From: linuxtechi
