Installing a GlusterFS Cluster on Debian 8

Introduction

GlusterFS is a scalable network filesystem. Using common off-the-shelf hardware, you can create large, distributed storage solutions for media streaming, data analysis, and other data- and bandwidth-intensive tasks. GlusterFS is free and open source software.

Preliminary Note

In this tutorial, I will use three systems, two servers and a client:

  • gfsnode01.lplinux.com.ar: IP address 192.168.1.100 (server)
  • gfsnode02.lplinux.com.ar: IP address 192.168.1.101 (server)
  • proxmox01.lplinux.com.ar: IP address 192.168.1.102 (client)

All three systems should be able to resolve the other systems’ hostnames. If this cannot be done through DNS, you should edit the /etc/hosts file so that it looks as follows on all three systems:

vim /etc/hosts
127.0.0.1 localhost
192.168.1.100 gfsnode01.lplinux.com.ar gfsnode01
192.168.1.101 gfsnode02.lplinux.com.ar gfsnode02
192.168.1.102 proxmox01.lplinux.com.ar proxmox01


# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

(It is also possible to use IP addresses instead of hostnames in the following setup. If you prefer to use IP addresses, you don’t need to worry about whether the hostnames can be resolved.)

Installing GlusterFS Server (i386 – 3.5.2)

Update package list:

apt-get update

Install packages:

apt-get install glusterfs-server

Installing GlusterFS Server (amd64 – Latest)

Add the GPG key to apt:

wget -O - http://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub | apt-key add -

Add the source:

echo deb http://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/jessie/apt jessie main > /etc/apt/sources.list.d/gluster.list

Update package list:

apt-get update

Install packages:

apt-get install glusterfs-server

Configuring GlusterFS Server

Note: If you use a firewall, ensure that TCP ports 111, 24007, 24008, and 24009 through (24009 + number of bricks across all volumes) are open on the server side.
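
If iptables is in use, rules along the following lines would open those ports for the storage network. This is only a sketch: the brick port range (24009:24010 for the two bricks in this setup) and the source network 192.168.1.0/24 are assumptions you should adapt to your environment.

# portmapper, GlusterFS management ports and brick ports
iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 111 -j ACCEPT
iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 24007:24008 -j ACCEPT
iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 24009:24010 -j ACCEPT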

GlusterFS will store its data in the /data directory on the servers. For a small installation this can be a normal directory; for a larger setup you can use a separate hard disk partition and mount it as /data (a sketch of the partition approach follows below).

Run on both servers:

mkdir /data

to create the data directory.
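
If you would rather keep the GlusterFS data on a dedicated partition, a minimal sketch could look like this (the device /dev/sdb1 is purely an example; use whatever spare partition you actually have):

mkfs.ext4 /dev/sdb1
echo "/dev/sdb1 /data ext4 defaults 0 2" >> /etc/fstab
mount /data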

Next, we must add gfsnode02.lplinux.com.ar to the trusted storage pool (please note that I’m running all GlusterFS configuration commands from gfsnode01.lplinux.com.ar, but you can just as well run them from gfsnode02.lplinux.com.ar because the configuration is replicated between the GlusterFS nodes – just make sure you use the correct hostnames or IP addresses):

gfsnode01.lplinux.com.ar:

On gfsnode01.lplinux.com.ar, run

gluster peer probe gfsnode02.lplinux.com.ar
root@gfsnode01:/# gluster peer probe gfsnode02.lplinux.com.ar
peer probe: success.
root@gfsnode01:/#

The status of the trusted storage pool should now be similar to this:

gluster peer status
root@gfsnode01:/# gluster peer status
Number of Peers: 1

Hostname: gfsnode02.lplinux.com.ar
Uuid: 0f7ee46c-6a71-4a31-91d9-6076707eff95
State: Peer in Cluster (Connected)
root@gfsnode01:/#

Next we create the share named testvol with two replicas (please note that the number of replicas is equal to the number of servers in this case because we want to set up mirroring) on gfsnode01.lplinux.com.ar and gfsnode02.lplinux.com.ar in the /data/testvol directory (this will be created if it doesn’t exist):

gluster volume create testvol replica 2 transport tcp gfsnode01.lplinux.com.ar:/data/testvol gfsnode02.lplinux.com.ar:/data/testvol force
root@gfsnode01:/# gluster volume create testvol replica 2 transport tcp gfsnode01.lplinux.com.ar:/data/testvol gfsnode02.lplinux.com.ar:/data/testvol force
volume create: testvol: success: please start the volume to access data
root@gfsnode01:/#

Start the volume:

gluster volume start testvol
root@gfsnode01:/# gluster volume start testvol
volume start: testvol: success
root@gfsnode01:/#

Our test volume has been started successfully.

It is possible that the above command tells you that the action was not successful:

root@gfsnode01:~# gluster volume start testvol
Starting volume testvol has been unsuccessful
root@gfsnode01:~#

In this case you should check the output of…

gfsnode01.lplinux.com.ar/gfsnode02.lplinux.com.ar:

netstat -tap | grep glusterfsd

on both servers.

If you get output like this…

root@gfsnode01:/# netstat -tap | grep glusterfsd
tcp 0 0 *:49152 *:* LISTEN 8007/glusterfsd
tcp 0 0 gfsnode01.lplinux.c:65533 gfsnode01.lplinux.c:24007 ESTABLISHED 8007/glusterfsd
tcp 0 0 gfsnode01.lplinux.c:49152 gfsnode02.lplinux.c:65531 ESTABLISHED 8007/glusterfsd
tcp 0 0 gfsnode01.lplinux.c:49152 gfsnode01.lplinux.c:65532 ESTABLISHED 8007/glusterfsd
tcp 0 0 gfsnode01.lplinux.c:49152 gfsnode01.lplinux.c:65531 ESTABLISHED 8007/glusterfsd
tcp 0 0 gfsnode01.lplinux.c:49152 gfsnode02.lplinux.c:65526 ESTABLISHED 8007/glusterfsd
root@gfsnode01:/#

… everything is fine, but if you don’t get any output…

root@gfsnode02:~# netstat -tap | grep glusterfsd
root@gfsnode02:~#

… restart the GlusterFS daemon on the corresponding server (gfsnode02.lplinux.com.ar in this case):

gfsnode02.lplinux.com.ar:

service glusterfs-server restart

Then check the output of…

netstat -tap | grep glusterfsd

… again on that server – it should now look like this:

root@gfsnode02:/# netstat -tap | grep glusterfsd
tcp 0 0 *:49152 *:* LISTEN 7852/glusterfsd
tcp 0 0 gfsnode02.lplinux.c:49152 gfsnode02.lplinux.c:65532 ESTABLISHED 7852/glusterfsd
tcp 0 0 gfsnode02.lplinux.c:49152 gfsnode01.lplinux.c:65526 ESTABLISHED 7852/glusterfsd
tcp 0 0 gfsnode02.lplinux.c:49152 gfsnode02.lplinux.c:65525 ESTABLISHED 7852/glusterfsd
tcp 0 0 gfsnode02.lplinux.c:65533 gfsnode02.lplinux.c:24007 ESTABLISHED 7852/glusterfsd
tcp 0 0 gfsnode02.lplinux.c:49152 gfsnode01.lplinux.c:65524 ESTABLISHED 7852/glusterfsd
root@gfsnode02:/#

Now back to gfsnode01.lplinux.com.ar:

gfsnode01.lplinux.com.ar:

You can check the status of the volume with the command

gluster volume info
root@gfsnode01:/# gluster volume info

Volume Name: testvol
Type: Replicate
Volume ID: 3fc9af57-ca56-4a72-ad54-3d2ea03e5883
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gfsnode01.lplinux.com.ar:/data/testvol
Brick2: gfsnode02.lplinux.com.ar:/data/testvol
Options Reconfigured:
performance.readdir-ahead: on
root@gfsnode01:/#

By default, all clients can connect to the volume. If you want to grant access to proxmox01.lplinux.com.ar (= 192.168.1.102) only, run:

gluster volume set testvol auth.allow 192.168.1.102
root@gfsnode01:/# gluster volume set testvol auth.allow 192.168.1.102
volume set: success
root@gfsnode01:/#

Please note that it is possible to use wildcards for the IP addresses (like 192.168.*) and that you can specify multiple IP addresses separated by commas (e.g. 192.168.1.102,192.168.1.103).
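
For example, allowing a second (hypothetical) client at 192.168.1.103 as well, or the whole 192.168.1.x network, would look like this; these are illustrations only, and in this tutorial auth.allow stays set to 192.168.1.102:

gluster volume set testvol auth.allow 192.168.1.102,192.168.1.103
gluster volume set testvol auth.allow 192.168.*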

The volume info should now show the updated status:

gluster volume info
root@gfsnode01:/# gluster volume info

Volume Name: testvol
Type: Replicate
Volume ID: 3fc9af57-ca56-4a72-ad54-3d2ea03e5883
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gfsnode01.lplinux.com.ar:/data/testvol
Brick2: gfsnode02.lplinux.com.ar:/data/testvol
Options Reconfigured:
auth.allow: 192.168.1.102
performance.readdir-ahead: on
root@gfsnode01:/#

Installing GlusterFS Client (i386 – 3.5.2)

Update package list:

apt-get update

Install packages:

apt-get install glusterfs-client

Installing GlusterFS Client (amd64 – Latest)

Add the GPG key to apt:

wget -O - http://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub | apt-key add -

Add the source:

echo deb http://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/jessie/apt jessie main > /etc/apt/sources.list.d/gluster.list

Update package list:

apt-get update

Install packages:

apt-get install glusterfs-client

Mounting GlusterFS device

We create the following directory:

mkdir /mnt/glusterfs

And mount it for test purposes against any of the servers:

mount.glusterfs gfsnode01:/testvol /mnt/glusterfs
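
Alternatively, if you want the client to be able to fetch the volume information even while gfsnode01 is down, the mount helper accepts a backup volfile server option. This is only a sketch; check man mount.glusterfs to confirm that your glusterfs-client version supports the option:

mount -t glusterfs -o backupvolfile-server=gfsnode02 gfsnode01:/testvol /mnt/glusterfs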

You should now see the new share in the outputs of…

mount
root@proxmox01:/# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=125556,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,relatime,size=204220k,mode=755)
/dev/sda1 on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=23,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
mqueue on /dev/mqueue type mqueue (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
rpc_pipefs on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
gfsnode01.lplinux.com.ar:/testvol on /mnt/glusterfs type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
root@proxmox01:/#

… and…

df -h
root@proxmox01:/# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 57G 1.1G 53G 2% /
udev 10M 0 10M 0% /dev
tmpfs 200M 4.6M 195M 3% /run
tmpfs 499M 0 499M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 499M 0 499M 0% /sys/fs/cgroup
gfsnode01.lplinux.com.ar:/testvol 57G 21G 34G 39% /mnt/glusterfs
root@proxmox01:/#
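
As a quick sanity check of the replication, create a file on the client and confirm that it appears in the brick directory on both servers (these are ordinary shell commands, nothing GlusterFS-specific):

touch /mnt/glusterfs/replication-test

The file should then show up in /data/testvol on both gfsnode01.lplinux.com.ar and gfsnode02.lplinux.com.ar:

ls /data/testvol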

Instead of mounting the GlusterFS share manually on the client, you could modify /etc/fstab so that the share gets mounted automatically when the client boots.

To do that, we need to create a .vol file that describes the cluster. One .vol file is needed for each volume.

vim /root/testvol.vol

And we put in the following block:

volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host gfsnode01
  option remote-subvolume /data/testvol
end-volume
 
volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host gfsnode02
  option remote-subvolume /data/testvol
end-volume
 
volume replicate
  type cluster/replicate
  subvolumes remote1 remote2
end-volume
 
volume writebehind
  type performance/write-behind
  option window-size 1MB
  subvolumes replicate
end-volume
 
volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume

That block points to the specific remote nodes of the GlusterFS cluster and to a particular volume. If one node fails, the mount point fails over to the other node.

This is the line I put in the /etc/fstab file:

/root/testvol.vol /mnt/glusterfs glusterfs rw,allow_other,default_permissions,max_read=131072 0 0
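
Before relying on a reboot, you can test the new entry by hand. Assuming the share is currently unmounted (run umount /mnt/glusterfs first if necessary), mount it through fstab and check that it comes up:

mount /mnt/glusterfs
df -h /mnt/glusterfs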

We could also use the Proxmox GUI to add it:

(Screenshots glusterfs_on_proxmox01 and glusterfs_on_proxmox02 show the GlusterFS storage being added through the Proxmox GUI.)
