
How to install GlusterFS + NFS on CentOS 7 for Virtual Machine Storage

GlusterFS is one of the fastest growing open source storage platforms in existence. It's very simple to install, scale, and manage. What makes Gluster so amazing is its ability to scale and replicate; it really sets the bar for software-defined storage systems, and it runs on whitebox hardware or virtual machines. Lately, I've come across quite a few people who seem to be scared of Gluster and don't know where to begin. I am here to help! Today, we're going to install and configure GlusterFS on CentOS 7 virtual machines, and we're going to make it NFS accessible for VM storage. Virtually every hypervisor supports NFS storage for virtual machines, including VMware ESXi / vSphere, Proxmox, Xen, KVM, oVirt, OpenStack, and all the others.

Installing GlusterFS Server and Client on CentOS 7 (two nodes)

I am using two virtual machines, each running CentOS 7. Their hostnames are gfs1 and gfs2. I have added a 40GB second disk to each VM that will be dedicated to GlusterFS. I suggest you have an identically sized second partition or drive on each of your systems as well.

As always, after connecting via SSH or console, go ahead and make sure everything is updated and upgraded on both nodes.

yum -y update

And, let’s go ahead and install a few useful packages (both nodes).

yum -y install nano net-tools wget

Edit the hosts file on both nodes. Make sure both nodes can resolve each other by hostname.

nano /etc/hosts

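For reference, the entries might look something like this. I'm assuming gfs1 is 192.168.1.40 (the address used for the NFS mount later) and gfs2 is 192.168.1.41; these are placeholders, so use your own addresses.

# example addresses - replace with your own
192.168.1.40    gfs1
192.168.1.41    gfs2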

Now we can download and install Gluster (both nodes).

wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo

yum -y install glusterfs glusterfs-fuse glusterfs-server

systemctl start glusterd
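You'll probably also want glusterd to start automatically at boot (both nodes):

systemctl enable glusterd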

When I tried to install the gluster packages with the yum command above, I encountered an error / bug. Installation failed with this dependency error.


--> Finished Dependency Resolution
Error: Package: glusterfs-server-3.7.1-1.el7.x86_64 (glusterfs-epel)
           Requires: liburcu-cds.so.1()(64bit)
Error: Package: glusterfs-server-3.7.1-1.el7.x86_64 (glusterfs-epel)
           Requires: liburcu-bp.so.1()(64bit)
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

Those libraries are provided by the userspace-rcu (userspace read-copy-update) package. So, I installed the RPM manually, reran the yum install command, and all was well. If you hit the same problem, follow the steps below; if you don't, just skip this step.

wget http://dl.fedoraproject.org/pub/epel/7/x86_64/u/userspace-rcu-0.7.9-1.el7.x86_64.rpm

rpm -Uvh userspace-rcu-0.7.9-1.el7.x86_64.rpm
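Alternatively, since userspace-rcu is an EPEL package (note the URL above), enabling the EPEL repository first and rerunning the install should pull the dependency in for you:

yum -y install epel-release
yum -y install glusterfs glusterfs-fuse glusterfs-server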

That’s an annoyance, but not a show stopper. Let’s continue. We need to create a single partition on the second drive on each host (both nodes).

fdisk /dev/sdb

Select “n” to create a new partition, then select “p” for primary. Go through the wizard and accept the defaults, they are fine. When you’re back at the fdisk prompt, type “w” to write the changes to the disk and exit.

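If you'd rather not walk through fdisk interactively, something like parted can create the same single partition in one shot. This is just a sketch assuming /dev/sdb is the dedicated Gluster disk:

parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary 0% 100%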

Next, we need to create a filesystem and mount it. You can use ext4 or xfs, your choice. I'm going with xfs (both nodes).

mkfs.xfs /dev/sdb1

mkdir /gluster

mount /dev/sdb1 /gluster

We need to add an entry to /etc/fstab to mount the drive at boot time (both nodes).

nano /etc/fstab

/dev/sdb1    /gluster    xfs    defaults    1 2


Each gluster node needs to have unrestricted network access to the other gluster nodes. For the sake of this lab exercise, we're going to disable firewalld to make things easy. If you are setting this up in a production environment, I would suggest taking the time to create proper firewall rules. Gluster uses TCP ports 111, 24007, 24008, and 24009 through (24009 + number of bricks across all volumes).

systemctl mask firewalld

systemctl stop firewalld
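If you'd rather keep firewalld running (recommended for production), the rules would look roughly like this. The brick port range below assumes one volume with a handful of bricks, so widen it to match your own layout, and remember that the NFS side also needs port 2049:

firewall-cmd --permanent --add-port=111/tcp
firewall-cmd --permanent --add-port=2049/tcp
firewall-cmd --permanent --add-port=24007-24008/tcp
firewall-cmd --permanent --add-port=24009-24012/tcp
firewall-cmd --reload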

Now we need to make the gluster nodes aware of each other. To do this, we will tell gluster to probe gfs2 from gfs1.

On gfs1 node:

[root@gfs1 /]# gluster peer probe gfs2
peer probe: success.


If you didn’t get a success, make sure gluster was started using the systemctl command from earlier in this guide. Now the nodes are aware of each other. Let’s list the storage pool to make sure both nodes are showing up.

[root@gfs1 /]# gluster pool list

UUID                                    Hostname    State
e0795e00-2cec-4476-8f10-4d51b2204fea    gfs2        Connected
2e5e20e6-2461-4df6-b3a8-b8d0d093cc49    localhost   Connected


If only one node shows up, re-run the peer probe and confirm glusterd is running on both nodes. Now we are ready to create a volume. There are many different types of volumes that can be created with gluster. You can stripe a volume across multiple nodes, replicate (copy) it, or even stripe and replicate at the same time. The process is pretty much the same for each. I'm going to set up a simple replicated volume between my two nodes, so any file created in the shared gluster volume will be located on both nodes.

On either node:

[root@gfs1 /]# gluster
gluster> volume create vol1 replica 2 transport tcp gfs1:/gluster/brick gfs2:/gluster/brick force


You should see “volume create: vol1: success: please start the volume to access data.” This means the volume was created successfully and just needs to be started. To start the volume, use the volume start command.

On either node:

gluster> volume start vol1


You can look at the volume information by running the volume info command.

gluster> volume info


When you started the volume, you should have gotten a "volume start: vol1: success" response. Now, you can mount the gluster volume (vol1) on any Linux system with the GlusterFS client (glusterfs-fuse) installed, but we want to make it accessible via NFS.

First, tell Gluster which NFS clients are allowed to connect to the volume:

gluster> volume set vol1 nfs.rpc-auth-allow <IP addresses of nfs clients separated by commas>

Gluster has an NFS server built in, so you must make sure there is no other NFS server running on your node. Let's make sure every service that could potentially conflict is stopped and disabled. (Some of these commands might fail if the service isn't present, which is fine; just run them all.)

systemctl stop nfs.target
systemctl disable nfs.target
systemctl stop nfs-lock.service
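As a concrete example, assuming two hypervisor hosts at 192.168.1.50 and 192.168.1.51 (placeholder addresses, substitute your own), the allow rule from above would look like the first line below. On newer Gluster releases the built-in NFS server may be disabled by default on new volumes, in which case the second line switches it on:

# placeholder client addresses - substitute your hypervisors' IPs
gluster volume set vol1 nfs.rpc-auth-allow 192.168.1.50,192.168.1.51
gluster volume set vol1 nfs.disable off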

Let’s make sure rpcbind is started.

[root@gfs1 glusterfs]# service rpcbind start

Restart glusterd.

service glusterd restart

Make sure the NFS server is running.

[root@gfs1 glusterfs]# gluster volume status vol1

In the status output, you should see an "NFS Server on localhost" entry with the Online column showing "Y".
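As an extra sanity check, you can also ask the node what it is exporting over NFS. This assumes the showmount utility (from the nfs-utils package) is installed; /vol1 should show up in the export list:

showmount -e localhost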

Now you can mount the gluster volume on your client or hypervisor of choice. My mount path looks like this:

192.168.1.40:/vol1
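On a plain Linux client, the mount would look something like this. Gluster's built-in NFS server only speaks NFSv3, so it's worth forcing version 3; the /mnt/vmstore mount point is just an example:

mkdir -p /mnt/vmstore
mount -t nfs -o vers=3 192.168.1.40:/vol1 /mnt/vmstore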

If you have any questions, feel free to ask in the comments below. I hope this helps you get on your way with gluster! Thanks.