
Software RAID – Linux


Managing your server with software RAID is generally not too difficult. We typically recommend hardware RAID (generally with a MegaCLI-managed controller); however, if you are utilizing software RAID, it's important to understand some fundamentals. If you are here because your server or your myvelocity panel is showing failed or missing drive(s), feel free to skip to section 4, Checking / Recovering Software RAID.

Sections Covered:
1) RAID Basics
2) RAID Types
3) Setup Software RAID
4) Checking / Recovering Software RAID

1) RAID Basics

In brief, RAID is technology that allows you to build a virtual drive from two or more physical disks, whether for added performance, added redundancy, or both.

Inevitably, server hard drives fail. Regardless of how well built or how highly rated they are, every drive will fail at some point in time. The problem is that we never know when a drive will fail; sometimes it happens instantly, or faster than SMART reporting/monitoring can detect. What we do know is that a failure will come, and to help combat this, it is common practice to run mirrored drives, referred to as RAID1, or another RAID variety (5, 6, 10) that offers redundancy in some form.

For Software RAID, our most common setup will be a mirror of two drives (RAID1) so that if one drive fails, the system remains online with the other until it can be scheduled for replacement. We will use RAID1 of two drives for our example.

2) RAID Types

RAID0 – Data is striped across the included drives for increased performance. There is no redundancy with RAID0; if one drive fails in a RAID0 array, your data is likely unrecoverable.

RAID1 – Most common implementation for redundancy, with a minor read performance increase. Typically this is run with 2x drives; each drive is a mirror of the other.

RAID5 – Less common in our experience but can be a good solution; requires 3 or more drives to implement. Data is striped across all drives with an added parity block for redundancy. For example, in a 3x drive RAID5, you can lose one drive without data loss.

RAID6 – Similar to RAID5 but with a minimum of 4 disks, as two parity blocks are used for redundancy, giving the ability to recover from 2 drive failures in a 4 drive array.

RAID10 – If running 4 or more drives, this is typically our recommended configuration. RAID10 is a combination of striping and mirroring across a minimum of 4 drives. Because it is a stripe of mirrors, you can lose up to one drive from each mirrored pair; in a 4 drive array, that means up to two drives, depending on which drives fail.
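As a quick sanity check on the capacity trade-offs above, here is a minimal shell sketch (the drive count and size are hypothetical example values) computing usable capacity for an array of 4x 480 GB drives under each level:

```shell
# Hypothetical example: 4 drives of 480 GB each
DRIVES=4; SIZE_GB=480

raid0=$(( DRIVES * SIZE_GB ))        # striping only: full capacity, no redundancy
raid1=$(( SIZE_GB ))                 # 2-drive mirror: capacity of a single drive
raid5=$(( (DRIVES - 1) * SIZE_GB ))  # one drive's worth of parity
raid6=$(( (DRIVES - 2) * SIZE_GB ))  # two drives' worth of parity
raid10=$(( DRIVES / 2 * SIZE_GB ))   # half the drives mirror the other half

echo "RAID0=${raid0}G RAID1=${raid1}G RAID5=${raid5}G RAID6=${raid6}G RAID10=${raid10}G"
```

With these numbers, RAID6 and RAID10 tie on usable capacity (960G) at 4 drives; they differ in which combinations of failures they can survive.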

3) Setup Software RAID

Ideally, software RAID would be configured during server provisioning or upon requesting a server reload. However, if your server was provisioned with two identical secondary drives that you want to dedicate to additional mirrored (RAID1) or striped (RAID0) storage, you may follow this section during your setup process to accomplish this.

This is written for a 2 drive mirrored setup, referred to as RAID1.

Install mdadm (yum is shown below; on Debian/Ubuntu systems, use apt install mdadm)
yum install mdadm

Locate your additional unused disks and note the device names (they will resemble /dev/sdX)
lsblk
# or
fdisk -l
Examine the disks for any existing raid data (Example drives: /dev/sdb and /dev/sdc)
mdadm -E /dev/sdb
mdadm -E /dev/sdc
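For a disk that has never been part of an array, mdadm -E typically reports that no md superblock was detected. As a minimal sketch of branching on that message – the output string here is a hard-coded sample for illustration, not read from a live disk:

```shell
# Sample mdadm -E output for a clean disk (hard-coded for illustration)
out='mdadm: No md superblock detected on /dev/sdb.'

case "$out" in
  *"No md superblock"*) echo "no existing RAID metadata on /dev/sdb" ;;
  *)                    echo "existing RAID metadata found - examine before reusing" ;;
esac
```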

Create partition(s) on each drive (ensure the partition layouts are identical)
fdisk /dev/sdb
fdisk /dev/sdc

Recheck the drives with the above tools (lsblk, fdisk) to ensure they are set up properly
lsblk
# or
fdisk -l

Create RAID1 device
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

Check the RAID device status
mdadm --detail /dev/md0

Create file system on the RAID Device
mkfs.xfs /dev/md0

Mount the device
mkdir /raidstorage
mount /dev/md0 /raidstorage

Test the mount point
cd /raidstorage
touch newfile.txt
echo "Test" > /raidstorage/newfile.txt
cat /raidstorage/newfile.txt

Auto-mount (If you wish for the additional raid device to mount on startup)

Make a backup of your fstab file
cp /etc/fstab /root/fstab_backup

Check to ensure the backup and current fstab are identical
cat /etc/fstab
cat /root/fstab_backup
diff /etc/fstab /root/fstab_backup

Add the below line to your /etc/fstab file (you can use the echo command below, or edit the file directly with a text editor such as nano or vi)
echo '/dev/md0 /raidstorage xfs defaults 0 0' >> /etc/fstab

Check to ensure it looks good
cat /etc/fstab
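A malformed fstab entry can leave a server unbootable, so it is worth sanity-checking the line you appended before rebooting. A minimal sketch, checking only that our example entry has the expected six whitespace-separated fields:

```shell
# The entry we appended for our example RAID device
line='/dev/md0 /raidstorage xfs defaults 0 0'

# fstab rows have six fields: device, mount point, fs type, options, dump, pass
fields=$(echo "$line" | awk '{print NF}')
if [ "$fields" -eq 6 ]; then
  echo "fstab entry looks well-formed: $line"
else
  echo "unexpected field count ($fields) - review before rebooting"
fi
```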

If everything looks good, try a reboot test when convenient
reboot

4) Checking / Recovering Software RAID

Suppose you have two drives and you find that one drive or a single partition is missing. This could be a total drive failure, or perhaps the drive or partition has fallen out of sync. Falling out of sync can happen, particularly under heavy sustained load or if the drive is developing hardware problems.

To get a better idea of what disks are physically attached to the system, a great tool you can utilize is lsblk:

[root@host ~]# lsblk
NAME              MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                 8:0    0 447.1G  0 disk  
├─sda1              8:1    0     1G  0 part  
│ └─md0             9:0    0  1022M  0 raid1 /boot
├─sda2              8:2    0     4G  0 part  
│ └─md1             9:1    0     4G  0 raid1 [SWAP]
└─sda3              8:3    0 442.1G  0 part  
  └─md2             9:2    0   442G  0 raid1 
    └─vg0-lv_root 253:0    0   442G  0 lvm   /
sdb                 8:16   0 447.1G  0 disk  
├─sdb1              8:17   0     1G  0 part  
│ └─md0             9:0    0  1022M  0 raid1 /boot
├─sdb2              8:18   0     4G  0 part  
│ └─md1             9:1    0     4G  0 raid1 [SWAP]
└─sdb3              8:19   0 442.1G  0 part  

From the example above, we do appear to have both disks (sda and sdb) for our 2 drive RAID1 example, but we would expect sdb3 to show the md2 raid1 device beneath it, just as sda3 does (with the vg mounted on /); instead, all we see is the bare partition.

NOTE – Issuing a support ticket: If you are seeing a missing physical drive here, it is possible the drive has failed or perhaps the connection with the drive is loose. At this point we would likely recommend issuing a support ticket. We recommend ensuring that you have up-to-date backups and providing the best timeframe in which you can afford a 1-4 hour maintenance window for further inspection.

Proceeding with our example, to check this RAID further, we can view the status of the md virtual drives using:

[root@host ~]# cat /proc/mdstat
Personalities : [raid1] 
md0 : active raid1 sdb1[1] sda1[0]
      1046528 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md2 : active raid1 sda3[0]
      463474688 blocks super 1.2 [2/1] [U_]
      bitmap: 3/4 pages [12KB], 65536KB chunk

md1 : active raid1 sda2[0] sdb2[1]
      4189184 blocks super 1.2 [2/2] [UU]
unused devices: <none>

Here again, for md2, we only find sda3; sdb3 is missing.
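Degraded arrays can also be spotted programmatically: in /proc/mdstat, a healthy two-member mirror shows [2/2] [UU], while a degraded one shows [2/1] [U_]. The sketch below runs awk over a hard-coded sample of that output; on a live system you would read /proc/mdstat itself:

```shell
# Hard-coded sample of /proc/mdstat content (on a live system: cat /proc/mdstat)
mdstat='md0 : active raid1 sdb1[1] sda1[0]
      1046528 blocks super 1.2 [2/2] [UU]
md2 : active raid1 sda3[0]
      463474688 blocks super 1.2 [2/1] [U_]'

# Remember each md device name; flag status lines whose [..] block shows a missing member (_)
degraded=$(echo "$mdstat" | awk '
  /^md/                        { dev = $1 }
  /\[[0-9]+\/[0-9]+\]/ && /_/  { print dev " is degraded" }')
echo "$degraded"
```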

For additional insight, we can use mdadm --detail

[root@host ~]# mdadm --detail /dev/md2
           Version : 1.2
     Creation Time : Thu Apr  1 11:28:15 2021
        Raid Level : raid1
        Array Size : 463474688 (442.00 GiB 474.60 GB)
     Used Dev Size : 463474688 (442.00 GiB 474.60 GB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Mon Apr 19 15:04:40 2021
             State : clean, degraded 
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name :  (local to host
              UUID : 957b2d7f:0dc2cda2:d55bc5e1:ae4abc41
            Events : 309108

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       -       0        0        1      removed

We can see that the array is in a clean but degraded state and that only /dev/sda3 is attached (no /dev/sdb3).

In this case, since the drive is still attached to the system – just removed from the array – we can re-add the partition using the command below and recheck the --detail output periodically to watch for updates.

NOTE: Before applying a re-add, or if you find yourself having to re-add more often than is reasonable, we recommend checking the drive's SMART health. If the drive is reporting bad health or other issues, we recommend scheduling a 1-4 hour maintenance window with support (details above).

Check SMART health using smartctl -a /dev/{device}

[root@host ~]# smartctl -a /dev/sdb
smartctl 7.0 2018-12-30 r4883 [x86_64-linux-3.10.0-1160.24.1.el7.x86_64] (local build)
Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org

Device Model:     KINGSTON SEDC500R480G
Serial Number:    500xxxxxxxxx60CE
LU WWN Device Id: 5 0026b7 683ff60ce
Firmware Version: SCEKJ2.7
User Capacity:    480,103,981,056 bytes [480 GB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    Solid State Device
Form Factor:      2.5 inches
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   ACS-3 (minor revision not indicated)
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Thu Apr 22 09:37:28 2021 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

SMART overall-health self-assessment test result: PASSED
SMART Error Log Version: 1
No Errors Logged

This is a large output with a lot of detail about the drive, but the items above are what we usually look for: a passing SMART overall assessment and whether or not errors are logged. Having some errors does not necessarily mean the drive is failing, but a failing SMART overall assessment is a strong indicator. Diving further into the details can tell you things like power-on hours, power cycle counts, the wearout indicator, and much more.
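The attribute table in the full smartctl output can also be parsed directly. Here is a minimal sketch pulling the raw reallocated-sector count out of a hard-coded two-line sample; attribute names and columns vary by drive vendor, so treat this as illustrative only:

```shell
# Hard-coded sample rows from a smartctl -a attribute table (illustrative)
smart='  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
  9 Power_On_Hours          0x0032   099   099   000    Old_age   Always       -       9345'

# The raw value is the last column; a climbing reallocated-sector count is a bad sign
realloc=$(echo "$smart" | awk '$2 == "Reallocated_Sector_Ct" {print $NF}')
echo "Reallocated sectors: $realloc"
```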

If all looks well, you can try to re-add the drive to the array:

mdadm --manage /dev/md2 --re-add /dev/sdb3

If the drive was replaced by our team, we can usually assist you with re-adding the replaced drive. If you prefer to do this yourself, you will want to partition the replacement disk to exactly match your existing one. You can use utilities such as fdisk or parted to print the partition layout of the existing disk and then create matching partitions on the replacement.

You may first need to remove the failed members from each array they belonged to:

mdadm /dev/md0 -r /dev/sdb1
mdadm /dev/md1 -r /dev/sdb2
mdadm /dev/md2 -r /dev/sdb3

Partition the new drive with matching partitions, then add each partition to its corresponding array (a brand-new drive has no event history for the array to recognize, so use --add rather than --re-add):

mdadm --manage /dev/md0 --add /dev/sdb1
mdadm --manage /dev/md1 --add /dev/sdb2
mdadm --manage /dev/md2 --add /dev/sdb3
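Once a member is added back, the array resyncs, and /proc/mdstat shows a recovery line with a completion percentage. You can watch it live with watch cat /proc/mdstat, or pull the figure out with grep; the sketch below does so against a hard-coded sample recovery line:

```shell
# Hard-coded sample recovery line from /proc/mdstat (illustrative)
line='      [=>...................]  recovery =  8.5% (39616512/463474688) finish=35.5min speed=198082K/sec'

# Extract the completion percentage
pct=$(echo "$line" | grep -o '[0-9.]*%')
echo "resync progress: $pct"
```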

Thank you for taking the time to check out this article! We hope it has helped you, whether with RAID fundamentals, process understanding, setup, or checking/recovering.



Need More Personalized Help?

If you have any further issues, questions, or would like some assistance checking on this or anything else, please reach out to us from your account and provide your server credentials within the encrypted field for the best possible security and support.

If you are unable to reach your account or if you are on the go, please reach out from your valid account email to us here at: [email protected]. We are also available through our phone and live chat system 24/7/365.
