Learn how to align an SSD on Linux

I’ve got a small home server with a software RAID-5 for storing my files. It also runs a few virtual machines and acts as a NAT router for internet access. Nothing expensive, just some Frankensteinian patchwork built from old hardware left over when I upgraded my workstation. Nevertheless, I granted it a brand new Intel X25-M SSD last week.

Did I mention that this server is running Gentoo Linux? I thought this would be a good time to do a fresh install and get everything right that might have gone wrong the first time. Besides, installing Linux is always an interesting (and masochistic) experience, especially when your chosen distribution has no installer.

Because getting my partitions and file systems aligned also proved to be a difficult task, I thought: why not make a small article out of this!

Erase Block Size

SSDs always operate on entire blocks of memory. This is because, before flash memory can be written, it needs to be erased, which requires applying a large voltage to the memory cells, and that can only happen to an entire memory cell block at once (probably because this kind of power would affect the cells around the one being erased, at least that’s my guess.)

Anyway, this means that if you write 1 KB of data to an SSD with an erase block size of 128 KB, the SSD needs to read 127 KB from the target block, erase the block and write the old data plus the new data back into the block. That’s something one just has to accept when using an SSD. Modern SSD firmware will do its best to pre-erase blocks when it’s idle and try to write new data into these pre-erased blocks (by mapping data to other locations on the drive without the knowledge of the OS.)

Still, watch what happens if a file system just sees the SSD as a brick of memory and writes data at a random position:

[Figure: ssd-unaligned-write — an unaligned write straddling two erase blocks]

The SSD now has to erase and write two blocks, even though one would have sufficed for the amount of data being written. To fix this, the drive’s firmware would have to do data mapping on the byte level, which likely isn’t going to happen (in the worst case, you would need more memory for the remapping table than the drive’s capacity!)

If the file system’s write was aligned to a multiple of the SSD’s erase block size, the result would be this:

[Figure: ssd-aligned-write — the same write fitting into a single erase block]

Thus, it’s generally a good idea to make sure your file system’s writes are aligned to multiples of your SSD’s erase block size. As I found out, this isn’t quite as easy as it sounds. You hit the first roadblock as soon as you partition the drive:

Partition Alignment

If the partitions of a hard drive aren’t aligned to begin at multiples of 128 KiB, 256 KiB or 512 KiB (depending on the SSD used), aligning the file system is useless because everything is skewed by the start offset of the partition. Thus, the first thing you have to take care of is aligning the partitions you create.

Traditionally, hard drives were addressed by indicating the cylinder, head and sector at which data was to be read or written. These represented the radial position, the drive head (= platter and side) and the angular position of the data, respectively. With LBA (logical block addressing), this is no longer the case. Instead, the entire hard drive is addressed as one continuous stream of data.

Linux’ fdisk, however, still uses a virtual C-H-S system where you can define any number of heads and sectors yourself (the cylinders are calculated automatically from the drive’s capacity), with partitions always starting and ending at cylinder boundaries (one cylinder = heads × sectors sectors). Thus, you need to choose a number of heads and sectors so that the resulting cylinder size is a multiple of the SSD’s erase block size.

I found two posts which detail this process: Aligning Filesystems to an SSD’s Erase Block Size and Partition alignment for OCZ Vertex in Linux. The first one recommends 224 heads and 56 sectors, but I can’t quite understand where those numbers come from, so I used the advice from the post on the OCZ forums with 32 heads and 32 sectors, which means fdisk uses a cylinder size of 32 × 32 = 1024 sectors. And because fdisk aligns partitions to cylinder boundaries (1024 sectors × 512 bytes = 512 KiB), fdisk’s unit size now happens to be an SSD’s maximum erase block size. Nice!
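A quick sanity check on the arithmetic, assuming the usual 512-byte sectors:

  # one fdisk cylinder = heads × sectors × 512 bytes
  echo $((32 * 32 * 512))   # prints 524288, i.e. 512 KiB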

To make fdisk use 32 heads and 32 sectors, remove all partitions from a hard drive and then launch fdisk with the following command line when you create the first partition:
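  # -H sets the number of heads, -S the sectors per track
  # (the device name is just an example)
  fdisk -H 32 -S 32 /dev/sda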

The OCZ post also recommends starting at the second 512 KiB unit because the first partition is otherwise shifted by one track. Don’t ask me why :)

Here’s how I partitioned my SSD in the end:

[Figure: fdisk-32-heads-32-sectors — fdisk partition listing with 32 heads and 32 sectors]

For a normal hard drive, I’d probably use 128 heads and 32 sectors now to achieve 4 KiB boundaries for my partitions.
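In sketch form, with a hypothetical second drive:

  # 128 heads × 32 sectors × 512 bytes = 2 MiB cylinders, a multiple of 4 KiB
  fdisk -H 128 -S 32 /dev/sdb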

RAID Chunk Size

If you plan on running a software RAID array, I’ve seen chunk sizes of 64 KiB and 128 KiB being recommended. This can be specified using the --chunk parameter for mdadm, e.g.:
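  # a hypothetical four-drive RAID-5; --chunk is given in KiB
  mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=128 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1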

The larger chunk size is probably more useful if you are storing large files on the RAID partition, but I haven’t found any advice that included benchmarks or at least a solid explanation yet.

File System Alignment

Now that the partitions have been taken care of, the file systems need to use proper alignment as well. Generally, all file systems use some kind of allocation block, usually 4 KiB in size. Increasing this size to 128 KiB (or even 512 KiB) would waste a lot of space, since every file would then occupy a multiple of that size.

Luckily, Linux file systems can be tweaked a lot. I’m using ext4, where the -E stride=…,stripe-width=… options control the alignment. The HowTos/Disk Optimization page in the CentOS wiki gives this advice:

The drive calculation works like this: you divide the chunk size by the block size for one spindle/drive only. This gives you your stride size. Then you take the stride size and multiply it by the number of data-bearing disks in the RAID array. This gives you the stripe width to use when formatting the volume. This can be a little complex, so an example is listed below.

For example, if you have 4 drives in RAID 5 using 64 KiB chunks and a 4 KiB file system block size: the stride is calculated for one disk by (chunk size / block size), i.e. 64 KiB / 4 KiB = 16. A RAID 5 has one disk less for data, so we have 3 data-bearing disks out of the 4 in this RAID 5 group, which gives (number of data-bearing drives × stride size), 3 × 16 = a stripe width of 48.

The Linux Kernel RAID wiki offers further insight:

Calculation

  • chunk size = 128 KiB (set by the mdadm command, see the chunk size advice above)
  • block size = 4 KiB (recommended for large files, and most of the time)
  • stride = chunk / block = 128 KiB / 4 KiB = 32
  • stripe-width = stride × ((n disks in RAID 5) − 1) = 32 × ((3) − 1) = 32 × 2 = 64

If the chunk size is 128 KiB, it means that 128 KiB of consecutive data will reside on one disk. If we want to build an ext2 filesystem with a 4 KiB block size, we realize that there will be 32 filesystem blocks in one array chunk.

stripe-width=64 is calculated by multiplying the stride=32 value with the number of data disks in the array.

A raid5 with n disks has n-1 data disks, one being reserved for parity. (Note: the mke2fs man page incorrectly states n+1; this is a known bug in the man-page docs that is now fixed.) A raid10 (1+0) with n disks is actually a raid 0 of n/2 raid1 subarrays with 2 disks each.

So these are the stride and stripe-width parameters I’d use:

  • Intel SSDs with an erase block size of 128 KiB (or 512 KiB — Intel isn’t quite straightforward about this, see the comments section for a discussion on the subject – if anyone from Intel is reading this, help us out! ;-)) that are not part of a software RAID:
    -E stride=32,stripe-width=32
  • OCZ Vertex SSDs with an erase block size of 512 KiB that are not part of a software RAID:
    -E stride=128,stripe-width=128
  • Normal hard drives that are not part of a software RAID:
    trust the defaults
  • Any software RAID (see the sketch after this list):
    -E stride=(RAID chunk size / file system block size),stripe-width=(stride × number of data-bearing disks)
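For instance, sticking with the three-disk RAID 5 from the calculation above (128 KiB chunks, 4 KiB blocks, 2 data-bearing disks), the stride is 128 / 4 = 32 and the stripe-width is 32 × 2 = 64. A minimal sketch, the md device name being just an example:

  # three-disk RAID 5, 128 KiB chunks, 4 KiB blocks: stride = 32, stripe-width = 64
  mkfs.ext4 -b 4096 -E stride=32,stripe-width=64 /dev/md0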

Thus, I set up the file systems on the Intel SSD like this:
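  # main partition, 4 KiB blocks: stride and stripe-width of 32 match a 128 KiB erase block
  # (the partition name is just an example)
  mkfs.ext4 -b 4096 -E stride=32,stripe-width=32 /dev/sda2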

mkfs.ext4 defaulted to 1024-byte allocation units on my boot partition, so I adjusted the stride accordingly (128 KiB / 1 KiB = 128), following the advice from the CentOS wiki. The alignment of my boot partition is probably not of any relevance because the system will read maybe 10 files from it and not modify anything, but I wanted to stay consistent.
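For the boot partition, that would look something like this (again, the partition name is hypothetical):

  # 1 KiB blocks: stride = 128 KiB / 1 KiB = 128
  mkfs.ext4 -b 1024 -E stride=128,stripe-width=128 /dev/sda1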
