How to Set Up GlusterFS Storage on CentOS 7 / RHEL 7


GlusterFS is a free and open-source file and object storage solution that can be used across physical, virtual, and cloud servers over the network. The main benefit of GlusterFS is that the storage can be scaled up or scaled out to multiple petabytes without any downtime; it also provides redundancy and high availability of the storage.

Where to use GlusterFS?

GlusterFS-based storage can be used on physical, virtual, and cloud servers over the network.

It is also a good fit for firms that serve multimedia or other content to Internet users and have to deal with hundreds of terabytes of files.

GlusterFS can also be used as object storage in private and public clouds.

Different terminology used in GlusterFS storage :
  • Trusted Storage Pool : A group of multiple servers that trust each other and form a storage cluster.
  • Node : A node is a storage server that participates in the trusted storage pool.
  • Brick : A brick is an LVM-based XFS (512-byte inodes) file system mounted on a folder or directory.
  • Volume : A volume is a file system that is presented or shared to clients over the network. A volume can be mounted using the glusterfs, nfs, and smb methods.
Different types of volumes that can be configured using GlusterFS :
  • Distribute Volumes : This is the default volume type, created when no option is specified while creating a volume. In this type of volume, files are distributed across the bricks using an elastic hash algorithm.
  • Replicate Volumes : As the name suggests, files in this type of volume are replicated or mirrored across the bricks; in other words, a file written to one brick is also replicated to the other bricks.
  • Striped Volumes : In this type of volume, larger files are cut or split into chunks and then distributed across the bricks.
  • Distribute Replicate Volumes : As the name suggests, files in this type of volume are first distributed across the replica sets of bricks, and each file is then replicated within its set.

Other combinations, such as striped-replicated, can also be tried to form different volume types.

In this article I will demonstrate how to set up GlusterFS storage on RHEL 7.x and CentOS 7.x. In my case I am taking four RHEL 7 / CentOS 7 servers with minimal installation and assuming an additional disk is attached to each of these servers for the GlusterFS setup.

  • server1.example.com (192.168.43.10)
  • server2.example.com (192.168.43.20)
  • server3.example.com (192.168.43.30)
  • server4.example.com (192.168.43.40)

Add the following lines to the /etc/hosts file on each server in case you don't have your own DNS server.

192.168.43.10  server1.example.com server1
192.168.43.20  server2.example.com server2
192.168.43.30  server3.example.com server3
192.168.43.40  server4.example.com server4
Install GlusterFS server packages on all servers.

GlusterFS packages are not included in the default CentOS and RHEL repositories, so we will set up the Gluster repo and the EPEL repo. Run the following commands one after another on all 4 servers.

~]# yum install wget
~]# yum install centos-release-gluster -y
~]# yum install epel-release -y
~]# yum install glusterfs-server -y
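Once the packages are installed, you can verify the installed version on each server (the exact version will depend on the gluster repo release):

~]# glusterfs --version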

Start and enable the GlusterFS service on all four servers.

~]# systemctl start glusterd
~]# systemctl enable glusterd
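To confirm that the daemon is running on each server:

~]# systemctl status glusterd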

Allow the ports in the firewall so that the servers can communicate and form the storage cluster (trusted pool). Run the beneath commands on all 4 servers.

~]# firewall-cmd --zone=public --add-port=24007-24008/tcp --permanent
~]# firewall-cmd --zone=public --add-port=24009/tcp --permanent
~]# firewall-cmd --zone=public --add-service=nfs --add-service=samba --add-service=samba-client --permanent
~]# firewall-cmd --zone=public --add-port=111/tcp --add-port=139/tcp --add-port=445/tcp --add-port=965/tcp --add-port=2049/tcp --add-port=38465-38469/tcp --add-port=631/tcp --add-port=111/udp --add-port=963/udp --add-port=49152-49251/tcp --permanent
~]# firewall-cmd --reload
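You can double-check that the rules were applied by listing the active ports and services:

~]# firewall-cmd --zone=public --list-ports
~]# firewall-cmd --zone=public --list-services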
Distribute Volume Setup :

I am going to form a trusted storage pool consisting of server 1 and server 2, create bricks on them, and then create a distributed volume. I am also assuming that a raw disk of 16 GB (/dev/sdb) is allocated to both servers.

Run the below command from server 1 console to form a trusted storage pool with server 2.

[root@server1 ~]# gluster peer probe server2.example.com
peer probe: success.
[root@server1 ~]#

We can check the peer status using below command :

[root@server1 ~]# gluster peer status
Number of Peers: 1

Hostname: server2.example.com
Uuid: 9ef0eff2-3d96-4b30-8cf7-708c15b9c9d0
State: Peer in Cluster (Connected)
[root@server1 ~]#
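You can also list all the nodes of the trusted pool, including the local node, with:

[root@server1 ~]# gluster pool list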

Create Brick on Server 1

To create a brick, first we have to set up a thin provisioned logical volume on the raw disk (/dev/sdb).

Run the following commands on Server 1

[root@server1 ~]# pvcreate /dev/sdb
[root@server1 ~]# vgcreate vg_bricks /dev/sdb
[root@server1 ~]# lvcreate -L 14G -T vg_bricks/brickpool1

In the above command, brickpool1 is the name of the thin pool.
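You can inspect the thin pool with the standard LVM tools, for example:

[root@server1 ~]# lvs vg_bricks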

Now create a logical volume of 3 GB :

[root@server1 ~]# lvcreate -V 3G -T vg_bricks/brickpool1 -n dist_brick1

Now format the logical volume using the XFS file system :

[root@server1 ~]# mkfs.xfs -i size=512 /dev/vg_bricks/dist_brick1
[root@server1 ~]# mkdir -p /bricks/dist_brick1

Mount the brick using mount command

[root@server1 ~]# mount /dev/vg_bricks/dist_brick1 /bricks/dist_brick1/

To mount it permanently, add the following line to /etc/fstab :

/dev/vg_bricks/dist_brick1 /bricks/dist_brick1 xfs rw,noatime,inode64,nouuid 1 2
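A quick way to validate the new fstab entry without rebooting is to run mount -a (already-mounted file systems are skipped) and then check the mount:

[root@server1 ~]# mount -a
[root@server1 ~]# df -h /bricks/dist_brick1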

Create a directory named brick under the mount point :

[root@server1 ~]# mkdir /bricks/dist_brick1/brick

Similarly perform the following set of commands on server 2 :

[root@server2 ~]# pvcreate /dev/sdb ; vgcreate vg_bricks /dev/sdb
[root@server2 ~]# lvcreate -L 14G -T vg_bricks/brickpool2
[root@server2 ~]# lvcreate -V 3G -T vg_bricks/brickpool2 -n dist_brick2
[root@server2 ~]# mkfs.xfs -i size=512 /dev/vg_bricks/dist_brick2
[root@server2 ~]# mkdir -p /bricks/dist_brick2
[root@server2 ~]# mount /dev/vg_bricks/dist_brick2 /bricks/dist_brick2/
[root@server2 ~]# mkdir /bricks/dist_brick2/brick

Create distributed volume using below gluster command :

[root@server1 ~]# gluster volume create distvol server1.example.com:/bricks/dist_brick1/brick server2.example.com:/bricks/dist_brick2/brick
[root@server1 ~]# gluster volume start distvol
volume start: distvol: success
[root@server1 ~]#

Verify the Volume status using following command :

[root@server1 ~]# gluster volume info distvol
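Besides the volume info, you can also check that the brick processes are online:

[root@server1 ~]# gluster volume status distvol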

Mount Distribute volume on the Client :

Before mounting the volume using glusterfs, first we have to make sure that the glusterfs-fuse package is installed on the client. Also make sure to add the gluster storage server entries to the /etc/hosts file in case you don't have a local DNS server.

Login to the client and run the below command from the console to install glusterfs-fuse

[root@client ~]# yum install glusterfs-fuse -y

Create a mount point for the distribute volume :

[root@client ~]# mkdir /mnt/distvol

Now mount the "distvol" volume using the below mount command :

[root@client ~]# mount -t glusterfs -o acl server1.example.com:/distvol /mnt/distvol/

For permanent mount add the below entry in the /etc/fstab file

server1.example.com:/distvol   /mnt/distvol    glusterfs     _netdev    0 0

Run the df command to verify the mounting status of the volume.
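For example (the reported size is the combined size of both bricks, roughly 6 GB in this setup):

[root@client ~]# df -Th /mnt/distvol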

Now start accessing the volume "distvol".
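As a quick sanity check, you can create a few test files on the client and look at the brick directories on server 1 and server 2; the elastic hash algorithm places each file on exactly one of the two bricks (the file names here are just illustrative):

[root@client ~]# touch /mnt/distvol/file{1..10}
[root@server1 ~]# ls /bricks/dist_brick1/brick/
[root@server2 ~]# ls /bricks/dist_brick2/brick/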

Replicate Volume Setup :

For the replicate volume setup I am going to use server 3 and server 4, and I am assuming an additional disk (/dev/sdb) for GlusterFS is already assigned to these servers. Refer to the following steps :

Add server 3 and server 4 to the trusted storage pool :

[root@server1 ~]# gluster peer probe server3.example.com
peer probe: success.
[root@server1 ~]# gluster peer probe server4.example.com
peer probe: success.
[root@server1 ~]#

Create and mount the brick on Server 3. Run the beneath commands one after another.

[root@server3 ~]# pvcreate /dev/sdb ; vgcreate vg_bricks /dev/sdb
[root@server3 ~]# lvcreate -L 14G -T vg_bricks/brickpool3
[root@server3 ~]# lvcreate -V 3G -T vg_bricks/brickpool3 -n shadow_brick1
[root@server3 ~]# mkfs.xfs -i size=512 /dev/vg_bricks/shadow_brick1
[root@server3 ~]# mkdir -p /bricks/shadow_brick1
[root@server3 ~]# mount /dev/vg_bricks/shadow_brick1 /bricks/shadow_brick1/
[root@server3 ~]# mkdir /bricks/shadow_brick1/brick

Make the below entry in the /etc/fstab file to mount the brick permanently :

/dev/vg_bricks/shadow_brick1  /bricks/shadow_brick1/  xfs  rw,noatime,inode64,nouuid 1 2

Similarly perform the same steps on server 4 for creating and mounting the brick :

[root@server4 ~]# pvcreate /dev/sdb ; vgcreate vg_bricks /dev/sdb
[root@server4 ~]# lvcreate -L 14G -T vg_bricks/brickpool4
[root@server4 ~]# lvcreate -V 3G -T vg_bricks/brickpool4 -n shadow_brick2
[root@server4 ~]# mkfs.xfs -i size=512 /dev/vg_bricks/shadow_brick2
[root@server4 ~]# mkdir -p /bricks/shadow_brick2
[root@server4 ~]# mount /dev/vg_bricks/shadow_brick2 /bricks/shadow_brick2/
[root@server4 ~]# mkdir /bricks/shadow_brick2/brick

For permanent mounting of the brick, make the corresponding /etc/fstab entry.

Create Replicated Volume using below gluster command.

[root@server3 ~]# gluster volume create shadowvol replica 2 server3.example.com:/bricks/shadow_brick1/brick server4.example.com:/bricks/shadow_brick2/brick
volume create: shadowvol: success: please start the volume to access data
[root@server3 ~]# gluster volume start shadowvol
volume start: shadowvol: success
[root@server3 ~]#

Verify the Volume info using below gluster command :

[root@server3 ~]# gluster volume info shadowvol

If you want to access this volume "shadowvol" via NFS, set the following :

[root@server3 ~]# gluster volume set shadowvol nfs.disable off
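You can verify that the option took effect with the volume get command (available in GlusterFS 3.7 and later):

[root@server3 ~]# gluster volume get shadowvol nfs.disable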
Mount the Replicate volume on the client via nfs

Before mounting create a mount point first.

[root@client ~]# mkdir /mnt/shadowvol

Note : One of the limitations of Gluster storage is that the GlusterFS server only supports version 3 of the NFS protocol.

Add the below entry in the file "/etc/nfsmount.conf" on both the storage servers (Server 3 & Server 4) :

Defaultvers=3

After making the above entry, reboot both servers once. Then use the below mount command to mount the volume "shadowvol" :

[root@client ~]# mount -t nfs -o vers=3 server4.example.com:/shadowvol /mnt/shadowvol/

For permanent mount add the following entry in /etc/fstab file

server4.example.com:/shadowvol  /mnt/shadowvol/  nfs vers=3  0 0

Verify the size and mounting status of the volume :

[root@client ~]# df -Th
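To see the replication in action, create a test file on the client and confirm that it shows up on the bricks of both server 3 and server 4 (the file name is just illustrative):

[root@client ~]# touch /mnt/shadowvol/testfile
[root@server3 ~]# ls /bricks/shadow_brick1/brick/
[root@server4 ~]# ls /bricks/shadow_brick2/brick/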

Distribute-Replicate Volume Setup :

For setting up the Distribute-Replicate volume I will be using one brick from each server to form the volume. I will create the logical volumes from the existing thin pools on the respective servers.

Create a brick on all 4 servers using beneath commands

Server 1

[root@server1 ~]# lvcreate -V 3G -T vg_bricks/brickpool1 -n prod_brick1
[root@server1 ~]# mkfs.xfs -i size=512 /dev/vg_bricks/prod_brick1
[root@server1 ~]# mkdir -p /bricks/prod_brick1
[root@server1 ~]# mount /dev/vg_bricks/prod_brick1 /bricks/prod_brick1/
[root@server1 ~]# mkdir /bricks/prod_brick1/brick

Server 2

[root@server2 ~]# lvcreate -V 3G -T vg_bricks/brickpool2 -n prod_brick2
[root@server2 ~]# mkfs.xfs -i size=512 /dev/vg_bricks/prod_brick2
[root@server2 ~]# mkdir -p /bricks/prod_brick2
[root@server2 ~]# mount /dev/vg_bricks/prod_brick2 /bricks/prod_brick2/
[root@server2 ~]# mkdir /bricks/prod_brick2/brick

Server 3

[root@server3 ~]# lvcreate -V 3G -T vg_bricks/brickpool3 -n prod_brick3
[root@server3 ~]# mkfs.xfs -i size=512 /dev/vg_bricks/prod_brick3
[root@server3 ~]# mkdir -p /bricks/prod_brick3
[root@server3 ~]# mount /dev/vg_bricks/prod_brick3 /bricks/prod_brick3/
[root@server3 ~]# mkdir /bricks/prod_brick3/brick

Server 4

[root@server4 ~]# lvcreate -V 3G -T vg_bricks/brickpool4 -n prod_brick4
[root@server4 ~]# mkfs.xfs -i size=512 /dev/vg_bricks/prod_brick4
[root@server4 ~]# mkdir -p /bricks/prod_brick4
[root@server4 ~]# mount /dev/vg_bricks/prod_brick4 /bricks/prod_brick4/
[root@server4 ~]# mkdir /bricks/prod_brick4/brick

Now create the volume with the name "dist-rep-vol" using the below gluster command :

[root@server1 ~]# gluster volume create dist-rep-vol replica 2 server1.example.com:/bricks/prod_brick1/brick server2.example.com:/bricks/prod_brick2/brick server3.example.com:/bricks/prod_brick3/brick server4.example.com:/bricks/prod_brick4/brick force
[root@server1 ~]# gluster volume start dist-rep-vol

Verify volume info using below command :

[root@server1 ~]# gluster volume info dist-rep-vol

In this volume the four bricks form two replica pairs (brick1+brick2 and brick3+brick4). Each file is first distributed to one of the two pairs and is then replicated to both bricks of that pair.

Now mount this volume on the client machine via gluster.

Let's first create the mount point for this volume :

[root@client ~]# mkdir /mnt/dist-rep-vol
[root@client ~]# mount.glusterfs server1.example.com:/dist-rep-vol /mnt/dist-rep-vol/

Add the below entry in /etc/fstab for a permanent mount :

server1.example.com:/dist-rep-vol /mnt/dist-rep-vol/  glusterfs   _netdev 0 0
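As before, you can create a handful of test files on the client; each file should land on exactly one replica pair and appear on both bricks of that pair (the file names are illustrative):

[root@client ~]# touch /mnt/dist-rep-vol/demo{1..6}
[root@server1 ~]# ls /bricks/prod_brick1/brick/
[root@server3 ~]# ls /bricks/prod_brick3/brick/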

Verify the size and mount status of the volume using the df command :
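[root@client ~]# df -Th /mnt/dist-rep-vol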

That’s it. Hope you have enjoyed the gluster storage configuration steps.

Ref From: linuxtechi
