SOFTRAID(4)                  Device Drivers Manual                 SOFTRAID(4)
NAME
softraid — software RAID

SYNOPSIS
softraid0 at root

DESCRIPTION
The softraid device emulates a Host Bus Adapter (HBA) that provides RAID and
other I/O related services. The softraid device provides a scaffold to
implement more complex I/O transformation disciplines. For example, one can
tie chunks together into a mirroring discipline. There really is no limit on
what type of discipline one can write as long as it fits the SCSI model.

softraid supports a number of disciplines. A discipline is a collection of
functions that provides specific I/O functionality. This includes I/O path,
bring-up, failure recovery, and statistical information gathering.
Essentially, a discipline is a lower-level driver that provides the I/O
transformation for the softraid device.

A volume is a virtual disk device that is made up of a collection of chunks.
A chunk is a partition or storage area of fstype “RAID”. disklabel(8) is used to alter the fstype.
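
Whether a partition is usable as a chunk can be checked by viewing its label;
the partition to be used should show an fstype of RAID. For example (the
device name below is illustrative):

# disklabel wd1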

Currently softraid supports the following disciplines:

softraid supports the use of more than two chunks in a RAID 1 setup.

installboot(8) may be used to install boot(8) in the boot storage area of the
softraid volume.
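
Since a RAID 1 setup may use more than two chunks, a four-chunk mirror could,
for example, be assembled with bioctl(8) as follows (the device names are
illustrative):

# bioctl -c 1 -l /dev/wd1a,/dev/wd2a,/dev/wd3a,/dev/wd4a softraid0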

Boot support is currently limited to the CRYPTO and RAID 1 disciplines on the
amd64, i386, and sparc64 platforms. On sparc64, bootable chunks must be RAID
partitions using the letter 'a'. At the boot(8) prompt, softraid volumes have
names beginning with 'sr' and can be booted from like a normal disk device.
CRYPTO volumes will require a decryption passphrase or keydisk at boot time.
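
For example, a CRYPTO volume could be created on a single chunk with
bioctl(8), which will prompt for an encryption passphrase (the chunk name is
illustrative):

# bioctl -c C -l /dev/wd1a softraid0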

EXAMPLES
An example to create a 3 chunk RAID 1 from scratch is as follows:
Initialize the partition tables of all disks:
# fdisk -iy wd1
# fdisk -iy wd2
# fdisk -iy wd3
Now create RAID partitions on all disks:
# printf "a\n\n\n\nRAID\nw\nq\n" | disklabel -E wd1
# printf "a\n\n\n\nRAID\nw\nq\n" | disklabel -E wd2
# printf "a\n\n\n\nRAID\nw\nq\n" | disklabel -E wd3
Assemble the RAID volume:
# bioctl -c 1 -l /dev/wd1a,/dev/wd2a,/dev/wd3a softraid0
The console will show what device was added to the system:
scsibus0 at softraid0: 1 targets
sd0 at scsibus0 targ 0 lun 0: <OPENBSD, SR RAID 1, 001> SCSI2
sd0: 1MB, 0 cyl, 255 head, 63 sec, 512 bytes/sec, 3714 sec total
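
The status of the volume and its chunks can be inspected at any time with
bioctl(8), for example:

# bioctl sd0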
It is good practice to wipe the front of the disk before using it:
# dd if=/dev/zero of=/dev/rsd0c bs=1m count=1
Initialize the partition table and create a filesystem on the new RAID volume:
# fdisk -iy sd0
# printf "a\n\n\n\n4.2BSD\nw\nq\n" | disklabel -E sd0
# newfs /dev/rsd0a
The RAID volume is now ready to be used as a normal disk device. See bioctl(8) for more information on configuration of RAID sets.
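
For instance, the new filesystem could now be mounted like any other
filesystem (the mount point is illustrative):

# mount /dev/sd0a /mnt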
Install boot(8) on the RAID volume:
# installboot sd0
At the boot(8) prompt, load the /bsd kernel from the RAID volume:
boot> boot sr0a:/bsd

SEE ALSO
bio(4), bioctl(8), boot_sparc64(8), disklabel(8), fdisk(8), installboot(8),
newfs(8)

HISTORY
The softraid driver first appeared in OpenBSD 4.2.

AUTHORS
Marco Peereboom.

CAVEATS
The driver relies on underlying hardware to properly fail chunks.
The RAID 1 discipline does not initialize the mirror upon creation. This is by design because all sectors that are read are written first. There is no point in wasting a lot of time syncing random data.
The RAID 5 discipline does not initialize parity upon creation; instead, parity is only updated upon write.
Currently there is no automated mechanism to recover from failed disks.
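
A replacement chunk can, however, be brought in manually using the rebuild
option of bioctl(8). A sketch, assuming the failed chunk of the sd0 volume is
being replaced by the RAID partition wd3a (names are illustrative):

# bioctl -R /dev/wd3a sd0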
Certain RAID levels can protect against some data loss due to component failure. RAID is not a substitute for good backup practices.

June 27, 2017                                                      OpenBSD-6.8