Advanced Storage Configuration for Bare Metal Servers

Introduction

AKLWEB HOST Bare Metal Servers offer access to high-performance physical servers with no virtualization layer between the user and the server. Bare-metal servers have no resource limits and enable the hardware’s full potential to handle intensive workloads. The wide range of bare-metal servers offered by AKLWEB HOST features configurations with different processors, memory, storage, and more.

The Redundant Array of Independent Disks (RAID) is a storage virtualization method that combines multiple disks to provide data redundancy and high performance. Using RAID, you can configure multiple disks attached to a server based on your workload and the trade-off between performance and data protection.

This guide explains the disk configuration options available for various AKLWEB HOST Bare Metal Servers. It also provides an overview of the different RAID levels (0, 1, 5, 10). It walks you through creating, mounting, and deleting a RAID array using the mdadm utility on extra disks available on the bare-metal server.

Prerequisites

Before following the steps in this guide:

Deploy an AKLWEB HOST Bare Metal Server with extra unmounted disks, such as an AMD-based server that includes additional NVMe SSDs.
Access the server over SSH as root.

Disk Configuration Options

AKLWEB HOST Bare Metal Servers with an Intel processor have two SSDs, while servers with an AMD processor have the same two-SSD setup along with additional high-speed NVMe SSDs. This section explains the options for configuring disks attached to the server during bare-metal server provisioning.

The following are the available disk configuration options.

The RAID1 – Software RAID option combines the two main SSDs connected to the server into a RAID 1 array. The RAID array mounts as a single storage device with the capacity of a single disk. It leaves the additional NVMe disks (if any) unmounted and unformatted.

The No RAID – Disks formatted/mounted option formats the two main SSDs connected to the server, uses one as the boot volume, and mounts the other as an empty disk. It leaves the additional NVMe disks (if any) unmounted and unformatted.

The No RAID – Extra disks only option formats one of the two main SSDs connected to the server to use as the boot volume and leaves all other disks, including additional NVMe disks (if any), unmounted and unformatted.

Overview of RAID

The Redundant Array of Independent Disks (RAID) is a storage virtualization method that combines multiple disks into an array to achieve higher performance, greater redundancy, or both. A RAID array appears as a single disk to the Operating System (OS), and you can interact with it just like a normal storage disk. It works by placing data on multiple disks that allow input and output operations to overlap. Different RAID levels use different methods to distribute data across the disks in the array.

The methods used to distribute data in a RAID array are disk mirroring and disk striping. The disk mirroring method stores the same data on multiple disks so that, if one disk fails, the data remains intact, thereby achieving data redundancy. The disk striping method distributes data blocks across multiple disks, improving capacity and performance. However, striping alone provides no redundancy, so a single disk failure destroys the data unless parity is also used.

The following are the levels of a RAID array.

RAID 0 arrays use data striping to combine all attached disks, providing the combined capacity of every disk and high-speed read and write performance. It focuses on performance and capacity but offers no redundancy. It requires a minimum of two disks.

RAID 1 arrays use data mirroring to combine all attached disks while providing the capacity of a single disk. Every disk in the array contains the same data as the first disk. It focuses on data redundancy and requires at least two disks.

RAID 5 arrays use data striping with distributed parity. The parity information, spread across all disks in the array, allows the array to withstand the failure of a single disk. It provides the combined capacity of all disks minus one, along with high-speed read operations. It balances performance, capacity, and redundancy. It requires at least three disks.

RAID 10 arrays use a combination of data striping and mirroring. The attached disks are grouped into RAID 1 mirrored pairs, which are then striped together as RAID 0, providing high-speed read performance but only half the combined capacity of the attached disks. It focuses on performance and redundancy. It requires at least four disks.

A single bare metal server can have multiple RAID arrays, given that it has enough available disks. You can select the RAID level based on the trade-off between performance and data protection. If your use case requires high redundancy, use RAID 1 or RAID 10. If it requires high performance and capacity, use RAID 0 or RAID 5.

Create a RAID Array

As explained in the previous section, a RAID array is a logical storage device that makes multiple disks appear as a single disk. A controller handles the underlying work: hardware RAID uses a dedicated controller card, while software RAID relies on the operating system. Linux supports software RAID through the mdadm utility, which you can use to create, manage, and monitor RAID arrays.

Note: You cannot follow the steps to set up a RAID array on an AKLWEB HOST Bare Metal Server with an Intel processor, as those servers do not have enough unused disks available to configure a RAID array.

You can create a RAID array using the mdadm --create command. The following example command creates a RAID1 array with two disks.

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

The above command uses the --create flag to define the operation type, followed by the path of the logical storage device to create. The --level flag sets the RAID level, and the --raid-devices flag specifies the number of disks, followed by the paths of the member disks.

You can use the same command to configure any RAID level, including RAID 0, RAID 1, RAID 5, and RAID 10.

Fetch the disks attached to the server.

# lsblk

The above command shows all the block devices connected to the server. You can identify and select the disks for forming a RAID array from the output.

Output.

sda       8:0    0 447.1G  0 disk 
└─sda1    8:1    0 447.1G  0 part /
sdb       8:16   0 447.1G  0 disk 
nvme1n1 259:0    0   1.5T  0 disk 
nvme3n1 259:1    0   1.5T  0 disk 
nvme0n1 259:2    0   1.5T  0 disk 
nvme4n1 259:3    0   1.5T  0 disk 
nvme5n1 259:4    0   1.5T  0 disk 
nvme2n1 259:5    0   1.5T  0 disk
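When many disks are attached, you can filter the output for whole disks that have neither partitions nor a mount point, which makes them candidates for a new array. The snippet below is an illustrative sketch (not an AKLWEB HOST tool) that runs the filter against a shortened, saved copy of the output above; on the server itself you would pipe lsblk directly into the awk program instead.

```shell
#!/bin/sh
# Reproduce a shortened copy of the lsblk output in a file.
cat <<'EOF' > /tmp/lsblk.out
sda       8:0    0 447.1G  0 disk
└─sda1    8:1    0 447.1G  0 part /
sdb       8:16   0 447.1G  0 disk
nvme1n1 259:0    0   1.5T  0 disk
nvme0n1 259:2    0   1.5T  0 disk
EOF

# List whole disks ("disk" type) that have no partitions:
# a "part" line disqualifies its parent disk.
candidates=$(awk '
    $6 == "disk" { disks[$1] = 1 }
    $6 == "part" { name = $1; gsub(/[^a-z0-9]/, "", name)
                   for (d in disks) if (index(name, d) == 1) delete disks[d] }
    END { for (d in disks) print d }
' /tmp/lsblk.out | sort)
echo "$candidates"
```

Here sda is excluded because it carries the mounted boot partition, leaving sdb and the NVMe disks as candidates.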

Create a RAID array.

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

The above command creates a RAID 1 array using two of the additional NVMe disks.

Fetch RAID array details.

# mdadm --detail /dev/md0

The above command displays details of the RAID array associated with the specified logical storage device.

Output.

/dev/md0:
           Version : 1.2
     Creation Time : Thu Oct  6 23:00:21 2022
        Raid Level : raid1
        Array Size : 1562681664 (1490.29 GiB 1600.19 GB)
     Used Dev Size : 1562681664 (1490.29 GiB 1600.19 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Thu Oct  6 23:06:07 2022
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : guest:1  (local to host guest)
              UUID : b1285d12:87b7edf9:4528868a:d754a8db
            Events : 1

    Number   Major   Minor   RaidDevice   State
       0      259       3        0        active sync   /dev/nvme0n1
       1      259       4        1        active sync   /dev/nvme1n1

Fetch the logical block device.

# lsblk /dev/md0

Output.

NAME MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
md0    9:0    0  1.5T  0 raid1 

The output confirms the RAID array size and level and indicates that it has not yet been mounted on the server. The next section demonstrates the steps to mount the RAID array to the server.

Mount a RAID Array

Creating a RAID array forms a logical storage device. This device is not persistent by default and does not include a filesystem. This section explains how to make the RAID array persistent, format it with a filesystem, and mount it on the server.

Update the mdadm configuration file.

# /usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf

The above command executes the mkconf script and overwrites the mdadm configuration file with its output. The script examines the system state and generates a configuration that includes all RAID arrays. If you do not wish to regenerate the whole configuration, you can manually edit /etc/mdadm/mdadm.conf with a text editor to add or remove RAID array entries.
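If you edit the file by hand, each array is declared with an ARRAY line. A minimal entry for the array created above could look like the following, using the UUID reported by mdadm --detail earlier:

```
ARRAY /dev/md0 metadata=1.2 UUID=b1285d12:87b7edf9:4528868a:d754a8db
```

You can also generate such lines with the mdadm --detail --scan command and append them to the configuration file.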

Update the initial RAM filesystem.

# update-initramfs -u

The above command updates the initial RAM filesystem. It ensures the availability of the RAID arrays listed in mdadm.conf during the early boot process.

Format the RAID array.

# mkfs.xfs /dev/md0

The above command formats the logical storage device with the xfs filesystem.

Create a new directory.

# mkdir /mnt/md0

The above command creates a new directory in /mnt named md0, which is a mount point for the logical storage device.

Edit the filesystem table file.

# nano /etc/fstab

Add the following content to the file, then save and exit using CTRL + X, then Y, then ENTER.

/dev/md0  /mnt/md0  xfs  defaults  0  0

The above configuration maps the /dev/md0 logical storage device to the /mnt/md0 mount point. It ensures that the array gets mounted during the boot process.
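Note that the /dev/md0 device name is not guaranteed to be stable across reboots; the kernel may assemble the array under another name, such as /dev/md127. A more robust fstab entry references the filesystem UUID reported by blkid. The UUID below is a hypothetical placeholder:

```
# blkid -s UUID -o value /dev/md0
UUID=<filesystem-uuid>  /mnt/md0  xfs  defaults  0  0
```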

Mount the RAID array.

# mount -a

The above command reads the filesystem table and mounts all filesystems listed in it that are not already mounted.

Verify the mounted RAID array.

# lsblk /dev/md0

Output.

NAME MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
md0    9:0    0  1.5T  0 raid1 /mnt/md0

Replace a Failed Disk in a RAID Array

Individual disks in a RAID array may fail, causing a loss of redundancy. It is best to replace a failed disk as soon as possible to keep the RAID array healthy. This section explains the steps to mark a disk as failed, remove it, and add a replacement to the array. You cannot replace a failed disk in a RAID 0 array, because the array is not redundant and fails as soon as any disk fails.

Information: You can manually trigger a disk failure to test your RAID configuration. Refer to the RAID documentation overview for more information.

Mark the disk as failed.

# mdadm /dev/md0 --fail /dev/nvme1n1

The above command marks the specified disk as failed. Marking a failed disk allows the array to rebuild itself using existing spare disks or a new disk added to the array, restoring the array to a healthy state.

Remove the disk from the array.

# mdadm /dev/md0 --remove /dev/nvme1n1

The above command removes the specified disk from the array. If you do not have any additional disks attached to the server, shut down the server and replace the failed disk with a new one before proceeding with the next steps.

Add the new disk to the array.

# mdadm /dev/md0 --add /dev/nvme2n1

The above command adds the specified disk to the array. The RAID array rebuilds itself using the new disk and restores the array to a healthy state.

Verify the update.

# mdadm --detail /dev/md0

You can also monitor the rebuilding process using the watch mdadm --detail /dev/md0 command or by reading the /proc/mdstat file. The total time to complete the rebuilding process depends on the disk size and speed.

Add a Spare Disk to a RAID Array

You can add spare disks to redundant arrays such as RAID 1, RAID 5, and RAID 10. A spare disk automatically replaces a failed disk as soon as a failure occurs, keeping the array in a healthy state. This section explains the steps to add a spare disk to a RAID array. You cannot add a spare disk to a RAID 0 array, as the array is not redundant and fails as soon as any disk fails.

Information: You can also increase the size of your RAID array with the --grow flag. Refer to the RAID documentation overview for more information.

Add a spare disk to the RAID array.

# mdadm /dev/md0 --add /dev/nvme3n1

The above command adds the specified disk to the array. Because the array already has its full set of active disks, mdadm registers the new disk as a spare.

Verify the update.

# mdadm --detail /dev/md0

Delete a RAID Array

You can repurpose the individual disks in a RAID array when the array is no longer required by deleting it and removing the superblock from all associated disks.

Stop and remove the RAID array.

# mdadm --stop /dev/md0
# mdadm --remove /dev/md0

The above commands stop and remove the specified RAID array from the server.

Update the mdadm configuration file.

# /usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf

The above command regenerates the mdadm configuration, excluding the deleted RAID array.

Update the initial RAM filesystem.

# update-initramfs -u

Using a text editor, remove the entry from the filesystem table file.

# nano /etc/fstab

Remove the superblock from the associated disks.

# mdadm --zero-superblock /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1

The mdadm utility uses the superblock header to assemble and manage the disk as part of an array. The above command removes the superblock header from the specified storage devices.

Conclusion

You learned about the different disk configuration options available for various AKLWEB HOST Bare Metal Servers, along with an overview of RAID arrays. You also performed the steps to create, mount, and delete a RAID array. This information helps you implement RAID on your bare-metal server. Refer to the mdadm documentation for more information about configuring software RAID arrays. If you plan to use a RAID array in a production environment, you should configure email notifications to be alerted in the event of a disk failure. Refer to the RAID documentation overview for more information.
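As one example of such a configuration, mdadm's monitoring mode sends alerts to the address defined in /etc/mdadm/mdadm.conf, and on many distributions a monitoring service runs by default. The address below is a placeholder:

```
MAILADDR alerts@example.com
```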