SSD Configuration and Resizing


This document explains how to configure local SSD disks for the on-premises StorReduce virtual appliances (StorReduce Single or StorReduce Cluster), and how to resize them if running short on space.

SSD Disk Storage Preparation

The on-premises StorReduce virtual appliances come pre-configured with two disks. The default disk sizes are relatively small but are suitable for evaluation of StorReduce.


Disk          VMware Name      Unix Device Name   Mount Point

Boot          ‘Hard Disk’      /dev/sda1          /
Index Disk    ‘Hard Disk 2’    /dev/sdb1          /mnt/database

For production servers the size of the SSD index disk should be expanded to match your expected data volumes. Within the StorReduce virtual appliance you will need to expand the file system on these disks to cover the full allocated capacity - see the ‘Expanding SSD Disks’ section below.
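To decide whether the index disk needs expanding, check how full its filesystem currently is. The following is a minimal sketch that assumes the default mount point /mnt/database; it falls back to / so the commands still run on a host where that mount point is absent:

```shell
# Report how full the index filesystem is.
# Assumes the default mount point /mnt/database; falls back to /
# so the check still runs if that mount point is absent.
MP=/mnt/database
mountpoint -q "$MP" 2>/dev/null || MP=/
USED=$(df -P "$MP" | awk 'NR==2 { gsub("%", "", $5); print $5 }')
echo "filesystem at $MP is ${USED}% full"
```

If the reported usage is approaching capacity, follow the resize procedure in the next section.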

Expanding SSD Disks

Whenever you wish to use an SSD disk larger than the default size, or when an existing SSD disk is close to being full, you will need to expand the SSD disk and the filesystem it contains. Here are the steps involved:

  1. Shut down the VM

  2. Expand the disk in VMware. For example, in the VMware vSphere client, edit the VM’s settings and increase the size of the SSD Index disk (‘Hard Disk 2’, Disk 2) from 100GB to 200GB.

  3. Start the VM and SSH in.

  4. Run lsblk - you should notice the disk size for sdb has increased but the partition size for sdb1 has remained the same:

    [root@localhost ~]# lsblk
    sda               8:0    0   20G  0 disk
    |-sda1            8:1    0  500M  0 part /boot
    |-sda2            8:2    0 19.5G  0 part
    sdb               8:16   0  200G  0 disk
    |-sdb1            8:17   0  100G  0 part /mnt/database
  5. Stop the StorReduce server by typing storreducectl server stop

  6. Unmount the partition you want to resize, in this case by typing umount /dev/sdb1

  7. Run lsblk, which should now show that the partition is no longer mounted:

    [root@localhost ~]# lsblk
    sda               8:0    0   20G  0 disk
    |-sda1            8:1    0  500M  0 part /boot
    |-sda2            8:2    0 19.5G  0 part
    sdb               8:16   0  200G  0 disk
    |-sdb1            8:17   0  100G  0 part
  8. Resize the partition with fdisk

    fdisk /dev/sdb - specify the device to resize here, e.g. sdb

    use d to delete the existing partition (note that this will NOT delete your data, as long as the new partition starts at the same first sector)

    use n to create a new partition - by default this will start at the same first sector and use all available space

    use w to write the new partition table

    [root@localhost ~]# fdisk /dev/sdb
    Welcome to fdisk (util-linux 2.23.2).
    Command (m for help): d
    Selected partition 1
    Partition 1 is deleted
    Command (m for help): n
    Partition type:
       p   primary (0 primary, 0 extended, 4 free)
       e   extended
    Select (default p): p
    Partition number (1-4, default 1):
    First sector (2048-419430399, default 2048):
    Using default value 2048
    Last sector, +sectors or +size{K,M,G} (2048-419430399, default 419430399):
    Using default value 419430399
    Partition 1 of type Linux and of size 200 GiB is set
    Command (m for help): p
    Disk /dev/sdb: 214.7 GB, 214748364800 bytes, 419430400 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk label type: dos
    Disk identifier: 0xd959dac1
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1            2048   419430399   209714176   83  Linux
    Command (m for help): w
    The partition table has been altered!
    Calling ioctl() to re-read partition table.
    Syncing disks.
  9. Ensure that the filesystem is still unmounted by typing umount /dev/sdb1

  10. Verify the existing filesystem by typing e2fsck -f /dev/sdb1

  11. Resize the filesystem by typing resize2fs /dev/sdb1
    (note that this will default to using all available space)

    [root@localhost ~]# resize2fs /dev/sdb1
    resize2fs 1.42.9 (28-Dec-2013)
    Resizing the filesystem on /dev/sdb1 to 52428544 (4k) blocks.
    The filesystem on /dev/sdb1 is now 52428544 blocks long.
  12. Re-mount the filesystem by typing mount /dev/sdb1 /mnt/database

  13. Run lsblk to verify the new partition size:

    [root@localhost ~]# lsblk
    sda               8:0    0   20G  0 disk
    |-sda1            8:1    0  500M  0 part /boot
    |-sda2            8:2    0 19.5G  0 part
    sdb               8:16   0  200G  0 disk
    |-sdb1            8:17   0  200G  0 part /mnt/database
  14. Start the StorReduce server by typing storreducectl server start
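The steps above can be collected into a single script. The following is a hedged sketch, not a supported StorReduce tool: it assumes the index disk is /dev/sdb with a single partition /dev/sdb1 mounted at /mnt/database, and by default it only prints the commands (a dry run) so you can review them before setting RUN=1 to execute:

```shell
#!/bin/sh
# Dry-run sketch of the SSD index disk resize procedure.
# Assumptions (adjust as needed): one partition (/dev/sdb1) on
# /dev/sdb, mounted at /mnt/database. Commands are only printed
# unless RUN=1 is set in the environment.
DEV=/dev/sdb
PART=${DEV}1
MNT=/mnt/database
run() { if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi; }

run storreducectl server stop
run umount "$PART"
# Recreate the partition over the whole disk; data is preserved
# because the new partition starts at the same first sector.
# Keystrokes: d (delete), n p 1 (new primary partition 1),
# two empty lines (accept default first/last sectors), w (write).
run sh -c "printf 'd\nn\np\n1\n\n\nw\n' | fdisk $DEV"
run e2fsck -f "$PART"
run resize2fs "$PART"
run mount "$PART" "$MNT"
run storreducectl server start
```

Running it without RUN=1 prints each command so the device and mount point assumptions can be checked against your VM first.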

RAID for Multiple SSDs

If more than one SSD is available to the StorReduce server, they should be combined into a single RAID 0 device and mounted at the Index disk mount point (/mnt/database).

One way to do this is to use the Linux software RAID module (mdadm), with commands like these:

# mdadm --create --verbose /dev/md0 --level=0 -c256 --raid-devices=2 /dev/AAA /dev/BBB
# echo DEVICE /dev/AAA /dev/BBB > /etc/mdadm.conf
# mdadm --detail --scan >> /etc/mdadm.conf

where /dev/AAA and /dev/BBB are the SSD devices, --raid-devices=2 gives the number of SSDs, and /dev/md0 is the RAID device to be created. The resulting device should be formatted with the EXT2 filesystem (it offers the best performance) and mounted at /mnt/database. E.g.,

# mkfs -t ext2 /dev/XXX
# mkdir -p /mnt/database
# e2label /dev/XXX SRDB
# echo "LABEL=SRDB /mnt/database ext2 noatime 0 0" >> /etc/fstab

Where /dev/XXX is the SSD device (e.g., /dev/sdb) or RAID device (e.g., /dev/md0).
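After the array is assembled and mounted, it is worth confirming its health. The following is a minimal check assuming the names used above (/dev/md0, the SRDB label, /mnt/database); the fallbacks let each command degrade gracefully on hosts where mdadm or the mount is not present:

```shell
# Sanity-check the RAID device and its mount (assumes /dev/md0,
# label SRDB, and mount point /mnt/database from the steps above).
STATUS=$(cat /proc/mdstat 2>/dev/null || echo "md driver not loaded")
echo "$STATUS"
mdadm --detail /dev/md0 2>/dev/null || echo "mdadm or /dev/md0 not available"
df -h /mnt/database 2>/dev/null || echo "/mnt/database not mounted"
```

A healthy array shows as active raid0 in /proc/mdstat, with both member devices listed by mdadm --detail.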