Expand or grow a file system on a Linux VMware VM without downtime

When a VM’s file system is running out of space and you can’t afford any downtime, you basically have two options besides deleting files: expand the file system by growing an existing virtual disk, or by adding a new virtual disk. Both solutions work fine, and I’ll explain how to apply each of them without rebooting the system or any other downtime.

 

Available free space should be monitored at all times, since a system behaves really unpredictably when an (important) file system runs out of free space. And when you see a file system filling up, you can’t keep deleting files forever, since there’s a reason that storage is being used.

To have the flexibility to dynamically grow or shrink file systems, you need LVM. The Logical Volume Manager adds an extra “virtual” layer on top of your real block devices, which adds a lot of flexibility. When using virtual machines, that “real” block device is in fact also virtual. For the rest of this post, I assume you know what LVM is and how to use it; if not, read an introduction to LVM first.

Option 1: Add a new virtual disk to the VM

The first possible way to expand your storage online is to add a new virtual disk to the running system. This is the only option when using VMware Player. While this option is easier, since it doesn’t run into the MBR partition limits, it’s less clean if you add space on a regular basis, since your VM ends up with a lot of virtual disks and VMDK files.

Our example machine has a single virtual disk of only 5GB, and before expanding the file system we can see that the /home file system is running out of space:
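The original output wasn’t preserved here, but the check looks like this (device and volume names match the rest of this post; your sizes will differ):

```shell
# Show the usage of the /home file system in human-readable units;
# the Use% column tells you how full it is
df -h /home
# The LVM device backing it typically shows up as
# /dev/mapper/vg_sys-lv_home in the first column
```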

The /home file system is on a logical volume, so let’s check whether the volume group behind that volume still has some free space:

As you can see, there are no more free extents in the volume group so we can’t simply expand the logical volume for /home.
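A sketch of that check: vgs summarizes unallocated space in the VFree column, and vgdisplay reports it as free physical extents (Free PE):

```shell
# Summary view: the VFree column shows unallocated space in vg_sys
vgs vg_sys
# Detailed view: filter out the free-extent line
vgdisplay vg_sys | grep -i 'free'
```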

Add a new virtual disk with the vSphere Client

The first step of option 1, to resolve our problem, is to add a new virtual disk to the guest that is running out of space. This can be done via the vSphere Client, the vSphere Web Client, the API,…

In the vSphere client, right click on the VM that’s running out of space and click “Edit Settings…”:

[screenshot: the VM’s context menu with “Edit Settings…”]

In the “Virtual Machine Properties”, click on “Add…” and select “Hard Disk” from the list.

[screenshot: the Add Hardware wizard with “Hard Disk” selected]

Click “Next >” and choose the size for the new disk, then continue.

[screenshot: choosing the size of the new disk]

As you can see in the above screenshot, a new virtual disk of 5GB was added to the system.

The same process can be done with VMware Player in a similar way.

Use the newly added disk to extend an existing Volume Group

After adding the new disk to the VM, we can start using it. The first step is to check which device name our newly added disk received.

In the last messages of the syslog and in the output of lsblk, we can see that a device named sdb, with a size of 5GB, was added.
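Those two checks, sketched out:

```shell
# The kernel log should mention the attach of the new SCSI disk
dmesg | tail
# List all block devices; the new, still unpartitioned disk
# shows up as sdb with a size of 5G
lsblk
```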

In case the newly added device isn’t automatically detected, which tends to happen mostly with VMware Player, you can execute the following to scan for new devices:
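The usual way to trigger that scan is through sysfs (run as root):

```shell
# Ask every virtual SCSI host adapter to rescan its bus;
# "- - -" means: all channels, all targets, all LUNs
for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "$host/scan"
done
```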

Now that we know the name of our new virtual disk, we need to create a partition on it before we can use it to expand an existing file system with LVM. This can be done with fdisk:
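A non-interactive sketch of the fdisk session (the interactive session asks the same questions in the same order):

```shell
# Create one primary partition (n, p, 1) spanning the whole disk
# (the two empty lines accept the default start and end sectors),
# set its type to 8e / Linux LVM (t), and write the table (w)
fdisk /dev/sdb <<EOF
n
p
1


t
8e
w
EOF
```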

After adding the partition, which is named sdb1, we can start using it by creating an LVM physical volume on it:
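That step is a single command:

```shell
# Initialise /dev/sdb1 as an LVM physical volume
pvcreate /dev/sdb1
# Verify: the new PV is listed, not yet assigned to any VG
pvs
```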

 

After creating the physical volume, we can extend the existing volume group, named vg_sys, by adding the physical volume to it.
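In commands:

```shell
# Add the new physical volume to volume group vg_sys
vgextend vg_sys /dev/sdb1
# vg_sys should now list 2 PVs and roughly 5G of VFree
vgs vg_sys
```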

 

After adding the physical volume /dev/sdb1 to volume group vg_sys, you can see that our volume group now has about 5GB of free space and consists of two physical volumes.

Time to finally extend the logical volume containing /home and grow the file system residing on it to the new size of the volume.

The command to expand the file system depends on the type of file system on it. If you don’t know which file system you’re using, you can find the type with the command df -T.
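Put together, it can look like this (the +4G figure is just an example; pick what you need, and leave some extents free if, as below, you want to grow another volume too):

```shell
# What file system type is on /home?
df -T /home
# Grow the logical volume by 4GB (example figure)
lvextend -L +4G /dev/vg_sys/lv_home
# Grow the file system to fill the volume:
resize2fs /dev/vg_sys/lv_home   # ext2/ext3/ext4
# xfs_growfs /home              # XFS takes the mount point instead
```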

Now that we expanded logical volume lv_home and the filesystem on it, we’re back in a more comfortable state with some free space on /home.

To use the rest of the space in the volume group vg_sys, let’s give it to another logical volume and expand it:
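Assuming that other volume is named lv_tmp (check the real name with lvs), this hands over all remaining free extents and resizes the file system in one step:

```shell
# -l +100%FREE: take all remaining free extents in the VG
# -r: also grow the file system on the volume (via fsadm)
lvextend -l +100%FREE -r /dev/vg_sys/lv_tmp
```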

 

Option 2: Expand an existing virtual disk and use it

While the previous option works well, it can be more interesting to expand an existing virtual disk instead of adding a new one. This approach has some limitations when working with MBR, since an MBR partition table can hold a maximum of 4 primary partitions.

As with option 1, we ran out of space in the /home file system again, and there’s no more free space in the volume group (we gave the rest to /tmp):
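The same checks as before confirm the starting situation:

```shell
df -h /home        # Use% close to 100 again
vgs vg_sys         # VFree back at 0
```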

 

Expand an existing virtual disk with the vSphere Client

Similar to adding a new disk by changing the VM’s settings in the vSphere Client, we can enlarge the existing first disk.

In the vSphere client, right click on the VM that’s running out of space and click “Edit Settings…”:

[screenshot: the VM’s context menu with “Edit Settings…”]

Select the first disk (known on the system as /dev/sda):

[screenshot: selecting the VM’s first hard disk]

Simply increase the size in the right side of the window:

[screenshot: increasing the disk size]

Click “OK” to execute the changes on the VM.

Use the newly added space to extend an existing Volume Group

After enlarging the disk with VMware, the changes aren’t immediately visible on the system:

To refresh the devices, we need to ask the virtual SCSI adapter to rescan and update its connected devices:
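For a single, already known device, this is done per-device through sysfs (run as root):

```shell
# Ask the kernel to re-read the capacity of the resized disk
echo 1 > /sys/class/block/sda/device/rescan
# sda should now show its new 10G size
lsblk /dev/sda
```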

As you see above, the disk is now seen as a 10GB disk, which is what we want.

The next step in the process is to start using the space that was added to the existing virtual disk /dev/sda. This is very similar to option 1: we start by creating a new partition that uses the new space:
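The fdisk session is almost the same as before, except that fdisk now asks which partition the type change applies to, since the disk already has partitions:

```shell
# Create a third primary partition (n, p, 3) in the new space,
# accept the default start/end sectors, set the type of
# partition 3 to 8e / Linux LVM (t, 3, 8e), and write (w)
fdisk /dev/sda <<EOF
n
p
3


t
3
8e
w
EOF
```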

After writing the new partition table, containing the new partition /dev/sda3, we see that the partition isn’t added to the system, since the disk is busy.

Fortunately, partx can override this limitation and update the partition table of a disk that is in use. You need to execute the command twice for it to work as expected:
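In commands (partx is part of util-linux):

```shell
# Add the new partition entry to the running kernel; the first
# run may complain about the already-known partitions
partx -a /dev/sda
partx -a /dev/sda
# /dev/sda3 should now exist
ls -l /dev/sda3
```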

As you can see, our system now sees /dev/sda3, and we can start using the device to expand our volume group and grow the logical volume that is running out of space:
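The LVM side is identical to option 1, only with /dev/sda3 instead of /dev/sdb1:

```shell
pvcreate /dev/sda3              # new physical volume
vgextend vg_sys /dev/sda3       # add it to the volume group
# Grow lv_home and its file system in one go (-r resizes the fs)
lvextend -l +100%FREE -r /dev/vg_sys/lv_home
```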

 

Limits of growing devices

Enlarging a device and creating a new partition is limited when you’re using the MBR partition scheme, since MBR limits every disk to 4 primary partitions. To overcome this limitation, you can use a GPT partition layout or, if that isn’t possible, create an extended partition as the last of the four possible primary partitions. In the extended partition, we can have a virtually unlimited number of logical partitions.

A cleaner solution is to extend an existing partition in place, which can be done with parted’s resizepart command, but this could possibly bring down an online file system, so I wouldn’t advise it for very critical file systems.
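For completeness, a sketch of that in-place approach (the partition number 2 here is an assumption; as said above, test this on non-critical systems first):

```shell
# Grow partition 2 to the end of the disk, in place
parted /dev/sda resizepart 2 100%
# If the partition is an LVM PV, let LVM pick up the new size
pvresize /dev/sda2
```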

Using one of the above methods could possibly save you from downtime, so use it wisely :)

 