CONFIGURE OCFS2

17. Install & Configure Oracle Cluster File System (OCFS2)

Most of the installation and configuration procedures in this section should be performed on both Oracle RAC nodes in the cluster! Creating the OCFS2 file system, however, should only be executed on one of the nodes in the RAC cluster.

It is now time to install and configure the Oracle Cluster File System, Release 2 (OCFS2) software. Developed by Oracle Corporation, OCFS2 is a Cluster File System which allows all nodes in a cluster to concurrently access a device via the standard file system interface. This allows for easy management of applications that need to run across a cluster.

OCFS Release 1 was released in December 2002 to enable Oracle Real Application Clusters (RAC) users to run the clustered database without having to deal with raw devices. The file system was designed to store database-related files, such as data files, control files, redo logs, archive logs, etc. OCFS2 is the next generation of the Oracle Cluster File System. It has been designed to be a general-purpose cluster file system. With it, users can store not only database-related files on a shared disk, but also Oracle binaries and configuration files (a shared Oracle Home, for example), making management of RAC even easier.

In this guide, you will be using the release of OCFS2 included with Oracle Enterprise Linux Release 5.3 (OCFS2 Release 1.2.9-1) to store the two files that are required to be shared by the Oracle Clusterware software. Along with these two files, you will also be using this space to store the shared SPFILE for all Oracle ASM instances.

See the OCFS2 project pages at http://oss.oracle.com/ for more information on OCFS2 (including installation notes) for Linux.

Install OCFS2

In previous editions of this article, this would be the time where you would need to download the OCFS2 software from http://oss.oracle.com/. This is no longer necessary since the OCFS2 software is included with Oracle Enterprise Linux. The OCFS2 software stack includes the following packages:

32-bit (x86) Installations

  • OCFS2 Kernel Driver
    • ocfs2-x.x.x-x.el5-x.x.x-x.el5.i686.rpm - (for default kernel)
    • ocfs2-x.x.x-x.el5PAE-x.x.x-x.el5.i686.rpm - (for PAE kernel)
    • ocfs2-x.x.x-x.el5xen-x.x.x-x.el5.i686.rpm - (for xen kernel)
  • OCFS2 Tools
    • ocfs2-tools-x.x.x-x.el5.i386.rpm
  • OCFS2 Tools Development
    • ocfs2-tools-devel-x.x.x-x.el5.i386.rpm
  • OCFS2 Console
    • ocfs2console-x.x.x-x.el5.i386.rpm

64-bit (x86_64) Installations

  • OCFS2 Kernel Driver
    • ocfs2-x.x.x-x.el5-x.x.x-x.el5.x86_64.rpm - (for default kernel)
    • ocfs2-x.x.x-x.el5xen-x.x.x-x.el5.x86_64.rpm - (for xen kernel)
  • OCFS2 Tools
    • ocfs2-tools-x.x.x-x.el5.x86_64.rpm
  • OCFS2 Tools Development
    • ocfs2-tools-devel-x.x.x-x.el5.x86_64.rpm
  • OCFS2 Console
    • ocfs2console-x.x.x-x.el5.x86_64.rpm

With Oracle Enterprise Linux 5.3, the OCFS2 software packages do not get installed by default. The OCFS2 software packages can be found on CD #3. To determine if the OCFS2 packages are installed (which in most cases, they will not be), perform the following on both Oracle RAC nodes:

# rpm -qa | grep ocfs2 | sort

If the OCFS2 packages are not installed, load the Oracle Enterprise Linux CD #3 into each of the Oracle RAC nodes and perform the following:

From Oracle Enterprise Linux 5 - [CD #3]
# mount -r /dev/cdrom /media/cdrom
# cd /media/cdrom/Server
# rpm -Uvh ocfs2-tools-1.2.7-1.el5.i386.rpm
# rpm -Uvh ocfs2-2.6.18-128.el5-1.2.9-1.el5.i686.rpm
# rpm -Uvh ocfs2console-1.2.7-1.el5.i386.rpm
# cd /
# eject

After installing the OCFS2 packages, verify from both Oracle RAC nodes that the software is installed:

# rpm -qa | grep ocfs2 | sort
ocfs2-2.6.18-128.el5-1.2.9-1.el5
ocfs2console-1.2.7-1.el5
ocfs2-tools-1.2.7-1.el5
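
Note that the OCFS2 kernel driver package is built for a specific kernel release. As an optional sanity check, you can confirm that the installed driver matches the running kernel; on the systems used in this article the kernel is 2.6.18-128.el5, which is why the ocfs2-2.6.18-128.el5 package was installed:

# uname -r
2.6.18-128.el5
# rpm -q ocfs2-$(uname -r)
ocfs2-2.6.18-128.el5-1.2.9-1.el5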

Disable SELinux (RHEL4 U2 and higher)

Users of RHEL4 U2 and higher (Oracle Enterprise Linux 5.3 is based on RHEL 5.3) are advised that OCFS2 currently does not work with SELinux enabled. Since this article uses Oracle Enterprise Linux 5.3, you will need to verify that SELinux is disabled in order for the O2CB service to execute.

During the installation of Oracle Enterprise Linux, we set SELinux to "Disabled" on the SELinux screen. If, however, you did not disable SELinux during the installation phase, you can use the tool system-config-securitylevel to disable it.

To disable SELinux (or verify SELinux is disabled), run the "Security Level Configuration" GUI utility:

# /usr/bin/system-config-securitylevel &

This will bring up the following screen:


Figure 16: Security Level Configuration Opening Screen / Firewall Disabled

Now, click the SELinux tab and select the "Disabled" option. After clicking [OK], you will be presented with a warning dialog. Simply acknowledge this warning by clicking "Yes". Your screen should now look like the following after disabling the SELinux option:


Figure 17: SELinux Disabled

If you needed to disable SELinux in this section on any of the nodes, those nodes will need to be rebooted to implement the change. SELinux must be disabled before you can continue with configuring OCFS2!
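
If you prefer the command line over the GUI, SELinux can also be verified without system-config-securitylevel. This is simply an alternative way to perform the same check; the persistent setting lives in /etc/selinux/config and takes effect after a reboot. With SELinux disabled, the output should resemble the following:

# /usr/sbin/getenforce
Disabled
# grep ^SELINUX= /etc/selinux/config
SELINUX=disabled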

Configure OCFS2

OCFS2 will be configured to use the private network (192.168.2.0) for all of its network traffic, as recommended by Oracle. While OCFS2 does not consume much bandwidth, it does require the nodes to be alive on the network, and it sends regular keepalive packets to ensure that they are. To avoid a network delay being interpreted as a node disappearing from the network (which could lead to node self-fencing), a private interconnect is recommended. It is safe to use the same private interconnect for both Oracle RAC and OCFS2.

A popular question then is what node name should be used and should it be related to the IP address? The node name needs to match the hostname of the machine. The IP address need not be the one associated with that hostname. In other words, any valid IP address on that node can be used. OCFS2 will not attempt to match the node name (hostname) with the specified IP address.

The next step is to generate and configure the /etc/ocfs2/cluster.conf file on both Oracle RAC nodes in the cluster. The easiest way to accomplish this is to run the GUI tool ocfs2console. In this section, we will not only create and configure the /etc/ocfs2/cluster.conf file using ocfs2console, but will also create and start the cluster stack O2CB. When the /etc/ocfs2/cluster.conf file is not present (as will be the case in our example), the ocfs2console tool will create this file along with a new cluster stack service (O2CB) with a default cluster name of ocfs2. This will need to be done on both Oracle RAC nodes in the cluster as the root user account:

$ su -
# ocfs2console &

This will bring up the GUI as shown below:


Figure 18: ocfs2console GUI

Using the ocfs2console GUI tool, perform the following steps:

 

  1. Select [Cluster] -> [Configure Nodes...]. This will start the OCFS2 Cluster Stack (Figure 19) and bring up the "Node Configuration" dialog.
  2. On the "Node Configuration" dialog, click the [Add] button.
    • This will bring up the "Add Node" dialog.
    • In the "Add Node" dialog, enter the Host name and IP address for the first node in the cluster. Leave the IP Port set to its default value of 7777. In my example, I added both nodes using linux1 / 192.168.2.100 for the first node and linux2 / 192.168.2.101 for the second node.
      Note: The node name you enter "must" match the hostname of the machine and the IP addresses will use the private interconnect.
    • Click [Apply] on the "Node Configuration" dialog - All nodes should now be "Active" as shown in Figure 20.
    • Click [Close] on the "Node Configuration" dialog.
  3. After verifying all values are correct, exit the application using [File] -> [Quit]. This needs to be performed on both Oracle RAC nodes in the cluster.


Figure 19: Starting the OCFS2 Cluster Stack

The following dialog shows the OCFS2 settings I used for the nodes linux1 and linux2:


Figure 20: Configuring Nodes for OCFS2

Note: See the Troubleshooting section if you get the error:

    o2cb_ctl: Unable to access cluster service while creating node

After exiting the ocfs2console, you will have a /etc/ocfs2/cluster.conf similar to the following. This process needs to be completed on both Oracle RAC nodes in the cluster and the OCFS2 configuration file should be exactly the same for all of the nodes:

node:
        ip_port = 7777
        ip_address = 192.168.2.100
        number = 0
        name = linux1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.2.101
        number = 1
        name = linux2
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2
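
Because the configuration file must match on every node, it can be worth comparing the two copies once both nodes have been configured through ocfs2console. The check below is just one way to do it and assumes root can ssh from linux1 to linux2; no output from diff means the files are identical:

# ssh linux2 cat /etc/ocfs2/cluster.conf | diff /etc/ocfs2/cluster.conf -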

O2CB Cluster Service

Before we can do anything with OCFS2 like formatting or mounting the file system, we need to first have OCFS2's cluster stack, O2CB, running (which it will be as a result of the configuration process performed above). The stack includes the following services:

 

  • NM: Node Manager that keeps track of all the nodes in cluster.conf
  • HB: Heartbeat service that issues up/down notifications when nodes join or leave the cluster
  • TCP: Handles communication between the nodes
  • DLM: Distributed lock manager that keeps track of all locks, their owners, and their status
  • CONFIGFS: User space driven configuration file system mounted at /sys/kernel/config
  • DLMFS: User space interface to the kernel space DLM

All of the above cluster services have been packaged in the o2cb system service (/etc/init.d/o2cb). Here is a short listing of some of the more useful commands and options for the o2cb system service.

Note: The following commands are for documentation purposes only and do not need to be run when installing and configuring OCFS2 for this article!

 

  • /etc/init.d/o2cb status
    Module "configfs": Loaded
    Filesystem "configfs": Mounted
    Module "ocfs2_nodemanager": Loaded
    Module "ocfs2_dlm": Loaded
    Module "ocfs2_dlmfs": Loaded
    Filesystem "ocfs2_dlmfs": Mounted
    Checking O2CB cluster ocfs2: Online
    Heartbeat dead threshold: 31
    Network idle timeout: 30000
    Network keepalive delay: 2000
    Network reconnect delay: 2000
    Checking O2CB heartbeat: Not active

     

  • /etc/init.d/o2cb offline ocfs2
    Stopping O2CB cluster ocfs2: OK
    The above command will offline the cluster we created, ocfs2.

     

  • /etc/init.d/o2cb unload
    Unmounting ocfs2_dlmfs filesystem: OK
    Unloading module "ocfs2_dlmfs": OK
    Unmounting configfs filesystem: OK
    Unloading module "configfs": OK
    The above command will unload all OCFS2 modules.

     

  • /etc/init.d/o2cb load
    Loading module "configfs": OK
    Mounting configfs filesystem at /sys/kernel/config: OK
    Loading module "ocfs2_nodemanager": OK
    Loading module "ocfs2_dlm": OK
    Loading module "ocfs2_dlmfs": OK
    Mounting ocfs2_dlmfs filesystem at /dlm: OK
    The above command will load all OCFS2 modules.

     

  • /etc/init.d/o2cb online ocfs2
    Starting O2CB cluster ocfs2: OK
    The above command will online the cluster we created, ocfs2.

     

Configure O2CB to Start on Boot and Adjust O2CB Heartbeat Threshold

You now need to configure the on-boot properties of the O2CB driver so that the cluster stack services will start on each boot. You will also be adjusting the OCFS2 Heartbeat Threshold from its default setting of 31 to 61. Perform the following on both Oracle RAC nodes in the cluster:

# /etc/init.d/o2cb offline ocfs2
# /etc/init.d/o2cb unload
# /etc/init.d/o2cb configure
Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot. The current values will be shown in brackets ('[]'). Hitting
<ENTER> without typing an answer will keep that current value. Ctrl-C
will abort.

Load O2CB driver on boot (y/n) [n]: y
Cluster to start on boot (Enter "none" to clear) [ocfs2]: ocfs2
Specify heartbeat dead threshold (>=7) [31]: 61
Specify network idle timeout in ms (>=5000) [30000]: 30000
Specify network keepalive delay in ms (>=1000) [2000]: 2000
Specify network reconnect delay in ms (>=2000) [2000]: 2000
Writing O2CB configuration: OK
Loading module "configfs": OK
Mounting configfs filesystem at /sys/kernel/config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK
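
The answers supplied to the configure script are persisted for subsequent boots. On this release they are typically written to /etc/sysconfig/o2cb (the exact variable names can vary between ocfs2-tools releases), so a quick way to confirm that the new heartbeat threshold was saved is:

# grep HEARTBEAT_THRESHOLD /etc/sysconfig/o2cb
O2CB_HEARTBEAT_THRESHOLD=61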

Format the OCFS2 Filesystem

Note: Unlike the other tasks in this section, creating the OCFS2 file system should only be executed on one of the nodes in the RAC cluster. I will be executing all commands in this section from linux1 only.

We can now start to make use of the iSCSI volume we partitioned for OCFS2 in the section "Create Partitions on iSCSI Volumes".

If the O2CB cluster is offline, start it. The format operation needs the cluster to be online, as it needs to ensure that the volume is not mounted on some other node in the cluster.

Earlier in this document, we created the directory /u02 under the section Create Mount Point for OCFS2 / Clusterware which will be used as the mount point for the OCFS2 cluster file system. This section contains the commands to create and mount the file system to be used for the Cluster Manager.

Note that it is possible to create and mount the OCFS2 file system using either the GUI tool ocfs2console or the command-line tool mkfs.ocfs2. From the ocfs2console utility, use the menu [Tasks] -> [Format].

The instructions below demonstrate how to create the OCFS2 file system using the command-line tool mkfs.ocfs2.

To create the file system, we can use the Oracle executable mkfs.ocfs2. For the purpose of this example, I run the following command only from linux1 as the root user account, using the local SCSI device name mapped to the iSCSI volume for crs (/dev/iscsi/crs/part1). Also note that I specified a label named "oracrsfiles", which will be referred to when mounting or unmounting the volume:

$ su -
# mkfs.ocfs2 -b 4K -C 32K -N 4 -L oracrsfiles /dev/iscsi/crs/part1

mkfs.ocfs2 1.2.7
Filesystem label=oracrsfiles
Block size=4096 (bits=12)
Cluster size=32768 (bits=15)
Volume size=2145943552 (65489 clusters) (523912 blocks)
3 cluster groups (tail covers 977 clusters, rest cover 32256 clusters)
Journal size=67108864
Initial number of node slots: 4
Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Writing backup superblock: 1 block(s)
Formatting Journals: done
Writing lost+found: done
mkfs.ocfs2 successful
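
Although not required, you can confirm that the volume was formatted with the expected label before mounting it by using the mounted.ocfs2 utility that ships with ocfs2-tools. The device backing /dev/iscsi/crs/part1 should be listed with the label oracrsfiles (the exact columns displayed vary by release):

# mounted.ocfs2 -d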

Mount the OCFS2 Filesystem

Now that the file system is created, we can mount it. Let's first do it using the command-line, then I'll show how to include it in the /etc/fstab to have it mount on each boot.

Note: Mounting the cluster file system will need to be performed on both Oracle RAC nodes in the cluster as the root user account using the OCFS2 label oracrsfiles!

First, here is how to manually mount the OCFS2 file system from the command-line. Remember that this needs to be performed as the root user account:

$ su -
# mount -t ocfs2 -o datavolume,nointr -L "oracrsfiles" /u02

If the mount was successful, you will simply get your prompt back. We should, however, run the following checks to ensure the file system is mounted correctly.

Use the mount command to ensure that the new file system is really mounted. This should be performed on both nodes in the RAC cluster:

# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/hda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
domo:Public on /domo type nfs (rw,addr=192.168.1.121)
configfs on /sys/kernel/config type configfs (rw)
ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
/dev/sde1 on /u02 type ocfs2 (rw,_netdev,datavolume,nointr,heartbeat=local)

Please take note of the datavolume option I am using to mount the new file system. Oracle database users must mount any volume that will contain the Voting Disk file, Cluster Registry (OCR), data files, redo logs, archive logs, and control files with the datavolume mount option so as to ensure that the Oracle processes open the files with the O_DIRECT flag. The nointr option ensures that the I/Os are not interrupted by signals.

Any other type of volume, including an Oracle home (which I will not be using for this article), should not be mounted with this mount option.

Why does it take so much time to mount the volume? It takes around 5 seconds for a volume to mount. It does so as to let the heartbeat thread stabilize. In a later release, Oracle plans to add support for a global heartbeat, which will make most mounts instant.
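
Once the volume has been mounted on both nodes, mounted.ocfs2 can also be used as a cross-check. With the -f option it performs a full detect and reports which cluster nodes currently have each OCFS2 volume mounted; at this point both linux1 and linux2 should be listed for the oracrsfiles volume:

# mounted.ocfs2 -f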

Configure OCFS2 to Mount Automatically at Startup

Let's take a look at what you have done so far. You installed the OCFS2 software packages which will be used to store the shared files needed by Cluster Manager. After going through the install, you loaded the OCFS2 module into the kernel and then formatted the clustered file system. Finally, you mounted the newly created file system using the OCFS2 label "oracrsfiles". This section walks through the steps responsible for mounting the new OCFS2 file system by its label each time the machines are booted.

Start by adding the following line to the /etc/fstab file on both Oracle RAC nodes in the cluster:

LABEL=oracrsfiles     /u02           ocfs2   _netdev,datavolume,nointr     0 0

Notice the "_netdev" option for mounting this file system. The _netdev mount option is a must for OCFS2 volumes. This mount option indicates that the volume is to be mounted after the network is started and dismounted before the network is shutdown.

Now, let's make sure that the ocfs2.ko kernel module is being loaded and that the file system will be mounted during the boot process.

If you have been following along with the examples in this article, the actions to load the kernel module and mount the OCFS2 file system should already be enabled. However, you should still check those options by running the following on both Oracle RAC nodes in the cluster as the root user account:

$ su -
# chkconfig --list o2cb
o2cb            0:off   1:off   2:on    3:on    4:on    5:on    6:off

The flags for runlevels 2, 3, 4, and 5 should be set to "on".
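
In addition to o2cb, the ocfs2-tools package installs an ocfs2 init script that mounts any OCFS2 volumes listed in /etc/fstab when the system boots. If it is present on your nodes, it should be enabled for the same runlevels (the output below is what you would expect to see; runlevels may vary slightly by release):

# chkconfig --list ocfs2
ocfs2           0:off   1:off   2:on    3:on    4:on    5:on    6:off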

Check Permissions on New OCFS2 Filesystem

Use the ls command to check ownership. The permissions should be set to 0775 with owner "oracle" and group "oinstall".

The following tasks only need to be executed on one of the nodes in the RAC cluster. I will be executing all commands in this section from linux1 only.

Let's first check the permissions:

# ls -ld /u02
drwxr-xr-x 3 root root 4096 Jul 31 17:21 /u02

As you can see from the listing above, the oracle user account (and the oinstall group) will not be able to write to this directory. Let's fix that:

# chown oracle:oinstall /u02
# chmod 775 /u02

Let's now go back and re-check that the permissions are correct from both Oracle RAC nodes in the cluster:

# ls -ld /u02
drwxrwxr-x 3 oracle oinstall 4096 Jul 31 17:21 /u02

Create Directory for Oracle Clusterware Files

The last mandatory task is to create the appropriate directory on the new OCFS2 file system that will be used for the Oracle Clusterware shared files. We will also modify the permissions of this new directory to allow the "oracle" owner and group "oinstall" read/write access.

The following tasks only need to be executed on one of the nodes in the RAC cluster. I will be executing all commands in this section from linux1 only.

# mkdir -p /u02/oradata/racdb
# chown -R oracle:oinstall /u02/oradata
# chmod -R 775 /u02/oradata
# ls -l /u02/oradata
total 4
drwxrwxr-x 2 oracle oinstall 4096 Jul 31 17:31 racdb

Reboot Both Nodes

Before starting the next section, this would be a good place to reboot both of the nodes in the RAC cluster. When the machines come up, ensure that the cluster stack services are being loaded and the new OCFS2 file system is being mounted:

# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/hda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
configfs on /sys/kernel/config type configfs (rw)
ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
cartman:SHARE2 on /cartman type nfs (rw,addr=192.168.1.120)
/dev/sdc1 on /u02 type ocfs2 (rw,_netdev,datavolume,nointr,heartbeat=local)

If you modified the O2CB heartbeat threshold, you should verify that it is set correctly:

# cat /proc/fs/ocfs2_nodemanager/hb_dead_threshold
61
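
As a final post-reboot check, query the cluster stack status on both nodes. With /u02 mounted, the O2CB heartbeat should now be reported as active (compare this with the "Not active" state shown earlier, before any OCFS2 volume was mounted):

# /etc/init.d/o2cb status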

How to Determine OCFS2 Version

To determine which version of OCFS2 is running, use:

# cat /proc/fs/ocfs2/version
OCFS2 1.2.9 Wed Jan 21 21:32:59 EST 2009 (build 5e8325ec7f66b5189c65c7a8710fe8cb)