Installing and Configuring an OCFS2 Clustered File System

 


Last year we had a project which required us to build out a KVM environment using shared storage. Most often that would mean NFS, and very occasionally Ceph. This time, however, the client already had a Fibre Channel over Ethernet (FCoE) SAN which had to be used, and the hosts were HP blades using shared converged adapters in the chassis, just to add a bit more fun.

A small crowbar and a large hammer later, the LUNs from the SAN were being presented to the hosts. So far so good.  But…

Clustered File Systems

If you need a volume shared between two or more hosts, you can provision the disk to all the machines and everything might appear to work. However, each host maintains its own inode table and so is unaware of changes other hosts are making to the file system; if writes ever hit the same areas of the disk at the same time, you will end up with data corruption. The key is a way to coordinate locks across multiple nodes. This is called a Distributed Lock Manager (DLM), and for this you need a clustered file system.

Options

There are dozens of clustered file systems out there, proprietary and open source.
For this project we needed a file system which:

  • Is supported on CentOS 6.7
  • Is open source
  • Supports multipath storage
  • Is easy to configure, rather than a complex set of distributed parallel file systems
  • Supports concurrent file access with strong performance
  • Has no management node overhead, leaving more of the cluster for usable storage

So we opted for OCFS2 (Oracle Cluster File System 2).

Once you have the ‘knack’, installation isn’t that arduous, and it goes like this…

These steps should be repeated on each node.

1. Installing the OCFS2 file system binaries

In order to use OCFS2, we need to install the kernel modules and OCFS2-tools.

First we need to download and install the OCFS2 kernel modules for CentOS 6.  Oracle now bundles the OCFS2 kernel modules in its Unbreakable Kernel, but they also used to be shipped with CloudStack 3.x so we used those.

rpm -i ocfs2-kmod-1.5.0-1.el6.x86_64.rpm
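If you want to confirm what the package installed, rpm can list its contents (the package name is taken from the rpm file above; this is purely an optional sanity check):

rpm -ql ocfs2-kmod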

Next we copy the OCFS2 kernel modules into the directory for the currently running CentOS 6.7 kernel.

cp -Rpv /lib/modules/2.6.32-71.el6.x86_64/extra/ocfs2/ /lib/modules/2.6.32-573.3.1.el6.x86_64/extra/ocfs2

Next we regenerate the module dependency map so the running kernel picks up the newly installed modules.

depmod -a
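At this point it is worth checking that the module actually loads. The -f mirrors what the init script modification further down does, since the module was built against an older kernel:

modprobe -f ocfs2
lsmod | grep ocfs2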

Add the Oracle yum repo for el6 (CentOS 6.7), which provides the ocfs2-tools packages.
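The exact command was not captured here, but the repo is normally added by dropping Oracle's public yum definition into /etc/yum.repos.d. The URL below is an assumption based on where Oracle published it for OL6 at the time; Oracle has since moved its public repos to yum.oracle.com, so adjust as needed:

cd /etc/yum.repos.d/
wget http://public-yum.oracle.com/public-yum-ol6.repo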

And import the GPG key for the Oracle el6 yum repo:

cd /etc/pki/rpm-gpg/
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle-ol6
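If the key file is not already present under /etc/pki/rpm-gpg, it has to be fetched before running the import above. The URL is an assumption based on Oracle's public yum server of the era; adjust if it has moved:

cd /etc/pki/rpm-gpg/
wget http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6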

Now we can install the OCFS2 tools used to administer the OCFS2 cluster.

yum install -y ocfs2-tools

Finally, we edit the o2cb init script so that the OCFS2 module is loaded (and the cluster volumes mounted) at boot.

sed -i "/online \"\$1\"/a\/sbin\/modprobe \-f ocfs2\nmount -a" /etc/init.d/o2cb
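A quick way to check that the edit landed where intended is to grep for the line the sed command matches on, together with the two lines it should have appended:

grep -A 2 'online "$1"' /etc/init.d/o2cb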

2. Configure the OCFS2 Cluster.

OCFS2 cluster nodes are configured through a single file, /etc/ocfs2/cluster.conf, which holds all the settings for the OCFS2 cluster. The file must be identical on every node, and the key = value entries under each stanza must be indented. An example configuration file might look like this:

cd /etc/ocfs2/
vim cluster.conf

node:
	ip_port = 7777
	ip_address = 192.168.100.1
	number = 0
	name = host1.domain.com
	cluster = ocfs2

node:
	ip_port = 7777
	ip_address = 192.168.100.2
	number = 1
	name = host2.domain.com
	cluster = ocfs2

node:
	ip_port = 7777
	ip_address = 192.168.100.3
	number = 2
	name = host3.domain.com
	cluster = ocfs2

cluster:
	node_count = 3
	name = ocfs2
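The same cluster.conf has to be present on every node; one simple way is to copy it out from the node where it was written (the host names here are the example ones from the config above):

scp /etc/ocfs2/cluster.conf host2.domain.com:/etc/ocfs2/
scp /etc/ocfs2/cluster.conf host3.domain.com:/etc/ocfs2/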

We then run the o2cb init script from the /etc/init.d/ directory with the configure option to set up the O2CB cluster stack.

/etc/init.d/o2cb configure
Load O2CB driver on boot (y/n) [y]: y

Cluster stack backing O2CB [o2cb]: ENTER
Cluster to start on boot (Enter "none" to clear) [ocfs2]: ENTER
Specify heartbeat dead threshold (>=7) [31]: ENTER
Specify network idle timeout in ms (>=5000) [30000]: ENTER
Specify network keepalive delay in ms (>=1000) [2000]: ENTER
Specify network reconnect delay in ms (>=2000) [2000]: ENTER
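Once configured, the state of the cluster stack can be checked with the same init script (the exact output varies slightly between versions):

/etc/init.d/o2cb status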

Update the iptables rules on each node to allow the OCFS2 cluster traffic on port 7777:

iptables -I INPUT -p udp -m udp --dport 7777 -j ACCEPT
iptables -I INPUT -p tcp -m state --state NEW -m tcp --dport 7777 -j ACCEPT
iptables-save > /etc/sysconfig/iptables

Restart the iptables service:

service iptables restart
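A quick check that the rules survived the restart (this just lists the INPUT rules mentioning port 7777):

iptables -L INPUT -n | grep 7777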

3. Setting up the Linux file system

First we create a directory where the OCFS2 volume will be mounted.

mkdir -p /san/primary/

Next we format the shared volume as OCFS2. This only needs to be run on ONE of the nodes in the cluster.

mkfs.ocfs2 -L OCFS2_label -T vmstore --fs-feature-level=max-compat -N <number of nodes + 1> /dev/sdd

The options work like this:
-L  the volume label
-T  the type of data the volume will store (vmstore tunes the cluster size for virtual machine images)
--fs-feature-level=max-compat  keeps the file system compatible with older OCFS2 versions
-N  the maximum number of node slots, i.e. how many nodes may mount the volume concurrently; a concrete example follows below
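As a concrete example for the three-node cluster configured above (the label and device are just the placeholders used in this article):

mkfs.ocfs2 -L OCFS2_label -T vmstore --fs-feature-level=max-compat -N 4 /dev/sdd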

4. Update the Linux fstab with the OCFS2 volume settings.

Next we add the following line to /etc/fstab to mount the volume at every boot.

/dev/sdd /san/primary ocfs2 _netdev,nointr 0 0

5. Mount the OCFS2 volume.

Once the fstab has been updated, we mount the volume:

mount -a

This will give us a mount point on each node in this cluster of /san/primary. This mount point is backed by the same LUN in the SAN, but most importantly the filesystem is aware that there are multiple hosts connected to it and will lock files accordingly.
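A quick way to confirm on each node that the volume is mounted with the OCFS2 file system (paths as used in this example):

mount | grep ocfs2
df -h /san/primary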

Each cluster of hosts would have a specific LUN (or LUNs) which it would connect to. It makes life a lot simpler if you can mask the LUNs on the SAN so that only the hosts which should connect to a specific LUN can see it, as this helps to avoid any mix-ups.

Adding this storage into CloudStack

In order for the KVM hosts to utilise this storage in a CloudStack context, we must add the shared LUNs as primary storage in CloudStack. This is done by setting the storage type to ‘presetup – SharedMountPoint’ when adding the primary storage pools for these clusters. The mountpoint path should be specified as it is seen locally by the KVM hosts; in this case, /san/primary.

Summary

In this article we looked at the requirement for a clustered file system when connecting multiple KVM hosts to shared SAN storage, and at how to configure OCFS2 on CentOS 6.7.
