Linux - Summary of the example design
Physical disks:
- SSDs: /dev/sda, /dev/sdb, /dev/sdc, /dev/sdd (1TB each)
- HDDs: /dev/sde, /dev/sdf, /dev/sdg, /dev/sdh (4TB each)
RAID:
- /dev/md0 — RAID1 for /boot (mirrored, safe boot)
- /dev/md1 — RAID10 (SSDs) for OS/DB performance
- /dev/md2 — RAID6 (HDDs) for /data (bulk storage)
LVM:
- VG vg_ssd on /dev/md1 → LVs: lv_root, lv_var, lv_db, lv_swap
- VG vg_hdd on /dev/md2 → LVs: lv_data
Filesystems:
- / and /var → ext4
- /srv/db → xfs (high I/O, large-file optimized)
- /data → xfs
Snapshots & Backups:
- LVM snapshots for short-term consistent backups
- rsync/Borg/CIFS backup to remote cluster or object storage
Monitoring:
- mdadm --detail, /proc/mdstat, smartctl, Prometheus node exporter
Design rationale (short)
- RAID10 on SSDs → best balance of performance + redundancy for OS and DB.
- RAID6 on HDDs → survives 2 disk failures for large capacity (see the capacity note after this list).
- Keep /boot on a small RAID1 (bootloaders don’t always like complex arrays).
- LVM on top of RAID gives flexible resizing & snapshots.
- Use xfs for DB/data (scales well with big files & parallel I/O), ext4 for root/var.
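For reference, the usable capacities above: RAID10 across 4×1TB SSDs yields roughly 2TB (half the raw space), and RAID6 across 4×4TB HDDs yields roughly 8TB ((n-2) × disk size), while tolerating the loss of any two HDDs.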
Step-by-step setup (commands — adapt device names & sizes)
Warning: these commands will erase disks. Run only on a system where these specific devices are the correct targets.
1. Partitioning (simple approach)
Create a small /boot partition (1G) on each disk and use the remainder for RAID. Here’s an example using parted for /dev/sda — repeat for every disk:
# example for /dev/sda — repeat for sdb, sdc, sdd, sde..sdh
sudo parted /dev/sda --script mklabel gpt \
mkpart primary 1MiB 1025MiB name 1 boot \
mkpart primary 1025MiB 100% name 2 raid
Now each disk has partition 1 (1 GiB boot) and partition 2 (rest for RAID).
Device names become e.g. /dev/sda1, /dev/sda2.
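One way to double-check the layout before building arrays is to list the partitions:
lsblk -o NAME,SIZE,TYPE /dev/sd[a-h]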
2. Create RAID arrays with mdadm
RAID1 for /boot (use the small partition /dev/sd?1)
sudo mdadm --create /dev/md0 --level=1 --raid-devices=4 \
/dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
(Using 4-way RAID1 is overkill but ensures /boot is mirrored across all SSDs; you could choose only two disks instead.)
RAID10 for SSD storage (use /dev/sd?2 on SSDs)
sudo mdadm --create /dev/md1 --level=10 --raid-devices=4 \
/dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
RAID6 for HDDs
sudo mdadm --create /dev/md2 --level=6 --raid-devices=4 \
/dev/sde2 /dev/sdf2 /dev/sdg2 /dev/sdh2
Check status:
cat /proc/mdstat
sudo mdadm --detail /dev/md1
Save mdadm config so arrays assemble at boot:
sudo mdadm --detail --scan | sudo tee /etc/mdadm/mdadm.conf
sudo update-initramfs -u # Debian/Ubuntu
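On RHEL/CentOS-family systems the rough equivalent is /etc/mdadm.conf plus a dracut rebuild:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf
sudo dracut -f # RHEL/CentOS/Fedora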
3. Create LVM on top of RAID
Install LVM tools if needed
sudo apt install lvm2 # Debian/Ubuntu
# or sudo yum install lvm2
PV / VG / LV (SSD VG: vg_ssd)
sudo pvcreate /dev/md1
sudo vgcreate vg_ssd /dev/md1
# create logical volumes (adjust sizes as needed)
sudo lvcreate -L 40G -n lv_root vg_ssd
sudo lvcreate -L 20G -n lv_var vg_ssd
sudo lvcreate -L 200G -n lv_db vg_ssd
sudo lvcreate -L 16G -n lv_swap vg_ssd
(Hint: use -l 100%FREE to consume remaining space if you want one LV for everything)
HDD VG
sudo pvcreate /dev/md2
sudo vgcreate vg_hdd /dev/md2
sudo lvcreate -l 100%FREE -n lv_data vg_hdd # RAID6 on 4×4TB yields ~8TB usable (not 16TB); take all of it
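Verify the layout before formatting:
sudo pvs
sudo vgs
sudo lvs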
4. Format filesystems
# /boot on md0 — use ext4
sudo mkfs.ext4 /dev/md0
# root and var on ext4
sudo mkfs.ext4 /dev/vg_ssd/lv_root
sudo mkfs.ext4 /dev/vg_ssd/lv_var
# DB data: XFS
sudo mkfs.xfs /dev/vg_ssd/lv_db
# Data store: XFS
sudo mkfs.xfs /dev/vg_hdd/lv_data
# swap
sudo mkswap /dev/vg_ssd/lv_swap
sudo swapon /dev/vg_ssd/lv_swap
5. Mount points and fstab (use UUIDs)
Find UUIDs:
sudo blkid
Add entries to /etc/fstab (example lines — replace UUID=... with actual values from blkid):
# /boot
UUID=<uuid-md0> /boot ext4 defaults 0 2
# root
UUID=<uuid-lv_root> / ext4 defaults,relatime 0 1
# /var
UUID=<uuid-lv_var> /var ext4 defaults 0 2
# db mount
UUID=<uuid-lv_db> /srv/db xfs defaults,noatime,allocsize=512m 0 2
# data
UUID=<uuid-lv_data> /data xfs defaults,noatime 0 2
# swap
UUID=<uuid-swap> none swap sw 0 0
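Create any mount points that don't exist yet:
sudo mkdir -p /srv/db /data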
Mount:
sudo mount -a
6. Bootloader
- If using GRUB, install it on each SSD so boot works even if one disk dies:
sudo grub-install /dev/sda
sudo grub-install /dev/sdb
sudo grub-install /dev/sdc
sudo grub-install /dev/sdd
sudo update-grub
- If /boot is on RAID, ensure the initramfs includes mdadm & the md arrays.
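One way to confirm on Debian/Ubuntu (lsinitramfs ships with initramfs-tools):
lsinitramfs /boot/initrd.img-$(uname -r) | grep -i mdadm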
7. LVM snapshots for backups (example)
Create a snapshot of the DB LV before backup:
sudo lvcreate -L 10G -s -n lv_db_snap /dev/vg_ssd/lv_db
# mount snapshot read-only
sudo mkdir /mnt/db_snap
sudo mount -o ro,nouuid /dev/vg_ssd/lv_db_snap /mnt/db_snap # nouuid is needed: the XFS snapshot shares the origin's UUID
# run backup from /mnt/db_snap (rsync, xtrabackup, etc)
After backup, remove snapshot:
sudo umount /mnt/db_snap
sudo lvremove /dev/vg_ssd/lv_db_snap
8. Monitoring & health checks
- RAID status:
cat /proc/mdstat
sudo mdadm --detail /dev/md1
- SMART (install smartmontools):
sudo smartctl -a /dev/sda
- Schedule periodic SMART tests and set up alerting if FAILED or PRE-FAIL attributes appear.
- Use Prometheus node exporter + alertmanager to track disk usage, mdadm status, and LV sizes.
- For mdadm event notifications (see the example below):
sudo apt install mdadm
# mdadm --monitor can be configured to send email on failure (mdadm.conf)
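A minimal sketch, assuming a working MTA for outgoing mail; the address is a placeholder:
# /etc/mdadm/mdadm.conf (Debian/Ubuntu) or /etc/mdadm.conf (RHEL)
MAILADDR admin@example.com
# send a test notification for each array, then exit
sudo mdadm --monitor --scan --test --oneshot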
9. Backup plan (example)
- Short-term (daily): LVM snapshots + backup to remote host via rsync or borg (see the sketch after this list).
- Long-term (weekly/monthly): replicate to remote object store or cloud (S3/Backblaze) with lifecycle rules.
- Keep at least 3 copies: onsite, offsite, and versioned snapshots.
- Test restores regularly.
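A rough daily sketch built on the snapshot from step 7; the host, paths, and repo name are placeholders, and a Borg repository is assumed to already exist (created with borg init):
# rsync the mounted snapshot to a remote host
sudo rsync -aHAX --delete /mnt/db_snap/ backup@backuphost:/backups/db/
# or archive it with Borg
sudo borg create --stats backup@backuphost:/srv/borg/db::db-{now:%Y-%m-%d} /mnt/db_snap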
Sizing & tuning suggestions
- Swap: on systems with lots of RAM, keep swap small (e.g., 8–16GB) or use zram. For hibernation, swap >= RAM.
- DB LV: allocate based on DB size + growth; give the DB a dedicated LV for easier backups.
- Filesystem options: use noatime on xfs/ext4 for better write performance.
- Mount options for XFS: noatime,allocsize=1m (tune to workload).
- Keep /var/log on its own LV if logs may grow uncontrolled.
Alternative (small server with 2 disks)
If you have only 2 disks, the layout simplifies to:
- /boot → RAID1 on sda1+sdb1
- RAID1 for everything else (sda2+sdb2) → LVM → root, var, swap, data
The commands are the same, but create the md arrays with 2 devices and use RAID1 throughout (sketch below).
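A sketch, assuming the same partitioning as step 1; vg_main is an illustrative VG name, not one used above:
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
sudo pvcreate /dev/md1
sudo vgcreate vg_main /dev/md1
# then carve lv_root, lv_var, lv_swap and lv_data out of vg_main as in step 3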
Recovery notes & best practices
- Keep mdadm.conf current and run update-initramfs so arrays assemble at boot.
- Test disk failure/rebuild in a lab before production: mdadm --fail, then mdadm --remove, then replace the disk and mdadm --add (full sequence below).
- Always have remote backups — RAID is not a substitute for backups.
- Maintain spare drives (hot spares) if your setup supports automatic rebuilds.
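A rehearsal sequence for a simulated failure of /dev/sdb2 in the RAID10 array (lab only):
# mark the member failed and remove it from the array
sudo mdadm --fail /dev/md1 /dev/sdb2
sudo mdadm --remove /dev/md1 /dev/sdb2
# after replacing the disk and re-creating the partition (step 1), add it back
sudo mdadm --add /dev/md1 /dev/sdb2
# watch the rebuild
cat /proc/mdstat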
Quick checklist you can copy
- Partition disks (parted) — create boot partition + raid partitions
- Create md arrays (mdadm) — md0 (boot), md1 (RAID10), md2 (RAID6)
- Save mdadm config + update initramfs
- Create PVs → VGs → LVs (pvcreate/vgcreate/lvcreate)
- Format filesystems (mkfs.ext4, mkfs.xfs)
- Add to /etc/fstab by UUID
- Install GRUB on all boot devices
- Configure monitoring + backups + test restores