RAID (redundant array of independent disks; originally redundant array of inexpensive disks) is a way of storing the same data in different places on multiple hard disks to protect data in the case of a drive failure. However, not all RAID levels provide redundancy.
History of RAID
The term RAID was coined in 1987 by David Patterson, Randy Katz and Garth A. Gibson. In their 1988 technical report, “A Case for Redundant Arrays of Inexpensive Disks (RAID),” the three argued that an array of inexpensive drives could beat the performance of the top disk drives of the time. By utilizing redundancy, a RAID array could be more reliable than any one disk drive.
While this report was the first to put a name to the concept, the use of redundant disks was already being discussed by others. Geac Computer Corp.’s Gus German and Ted Grunau first referred to this idea as MF-100. IBM’s Norman Ken Ouchi filed a patent in 1977 for the technology that was later named RAID 4. In 1983, Digital Equipment Corp. shipped the drives that would become RAID 1, and in 1986, another IBM patent was filed for what would become RAID 5. Patterson, Katz and Gibson also looked at what was being done by companies such as Tandem Computers, Thinking Machines and Maxtor to define their RAID taxonomies.
While the levels of RAID listed in the 1988 report essentially put names to technologies that were already in use, creating common terminology for the concept helped stimulate the data storage market to develop more RAID array products.
According to Katz, the term inexpensive in the acronym was soon replaced with independent by industry vendors due to the implications of low costs.
How RAID works
RAID works by placing data on multiple disks and allowing input/output (I/O) operations to overlap in a balanced way, improving performance. Because using multiple disks lowers the mean time between failures (MTBF) of the array as a whole, storing data redundantly is what increases fault tolerance.
RAID arrays appear to the operating system (OS) as a single logical hard disk. RAID employs the techniques of disk mirroring or disk striping. Mirroring copies identical data onto more than one drive. Striping partitions each drive’s storage space into units ranging from a sector (512 bytes) up to several megabytes. The stripes of all the disks are interleaved and addressed in order.
In a single-user system where large records, such as medical or other scientific images, are stored, the stripes are typically set up to be small (perhaps 512 bytes) so that a single record spans all the disks and can be accessed quickly by reading all the disks at the same time.
In a multiuser system, better performance requires that you establish a stripe wide enough to hold the typical or maximum size record. This allows overlapped disk I/O across drives.
Disk mirroring and disk striping can be combined on a RAID array. Mirroring and striping are used together in RAID 01 and RAID 10.
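The striping and mirroring techniques described above can be sketched in a few lines of Python. This is purely an illustration, not a real RAID driver: in-memory bytearrays stand in for physical disks, and the stripe unit and disk count are arbitrary values chosen for readability.

```python
# Minimal sketch of disk striping (RAID 0 style) with in-memory "disks".
STRIPE = 4       # stripe unit in bytes (real arrays use far larger units)
NUM_DISKS = 2

def stripe(data: bytes, num_disks: int = NUM_DISKS, unit: int = STRIPE):
    """Distribute fixed-size stripe units round-robin across the disks."""
    disks = [bytearray() for _ in range(num_disks)]
    for i in range(0, len(data), unit):
        disks[(i // unit) % num_disks] += data[i:i + unit]
    return disks

def reassemble(disks, total_len: int, unit: int = STRIPE) -> bytes:
    """Read stripe units back in interleaved order to rebuild the data."""
    out = bytearray()
    offsets = [0] * len(disks)
    i = 0
    while len(out) < total_len:
        d = i % len(disks)
        out += disks[d][offsets[d]:offsets[d] + unit]
        offsets[d] += unit
        i += 1
    return bytes(out[:total_len])

data = b"The quick brown fox jumps over the lazy dog."
disks = stripe(data)
mirrors = [bytes(d) for d in disks]   # mirroring: identical copy of each disk
assert reassemble(disks, len(data)) == data
```

Striping spreads the load so reads and writes can hit several disks at once; mirroring trades capacity for a surviving copy if one disk fails.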
A RAID controller can be used as a level of abstraction between the OS and the physical disks, presenting groups of disks as logical units. Using a RAID controller can improve performance and help protect data in case of a crash.
A RAID controller can be used in both hardware- and software-based RAID arrays. In a hardware-based RAID product, a physical controller manages the array. When in the form of a Peripheral Component Interconnect or PCI Express card, the controller can be designed to support drive formats such as SATA and SCSI. A physical RAID controller can also be part of the motherboard.
With software-based RAID, the controller uses the resources of the hardware system. While it performs the same functions as a hardware-based RAID controller, software-based RAID controllers may not enable as much of a performance boost.
If a software-based RAID implementation isn’t compatible with a system’s boot-up process, and hardware-based RAID controllers are too costly, firmware- or driver-based RAID is another implementation option.
A firmware-based RAID controller chip is located on the motherboard, and all operations are performed by the CPU, similar to software-based RAID. However, with firmware, the RAID system is only implemented at the beginning of the boot process. Once the OS has loaded, the controller driver takes over RAID functionality. A firmware RAID controller isn’t as pricey as a hardware option, but it puts more strain on the computer’s CPU. Firmware-based RAID is also called hardware-assisted software RAID, hybrid model RAID and fake RAID.
In the 1988 paper that coined the term and cemented the concept, the authors distinguished six levels of RAID, 0 through 5. This numbered system allowed them to differentiate the versions and how they used redundancy and spread data across the array. The number of levels has since expanded and has been broken into three categories: standard, nested and nonstandard RAID levels.
Standard RAID levels
RAID 0: This configuration has striping, but no redundancy of data. It offers the best performance, but no fault tolerance.
RAID 1: Also known as disk mirroring, this configuration consists of at least two drives that duplicate the storage of data. There is no striping. Read performance is improved because either disk can service a read, so reads can be spread across the pair. Write performance is about the same as for single-disk storage, since every write must go to both drives.
RAID 2: This configuration uses striping across disks, with some disks storing error checking and correcting (ECC) information. It has no advantage over RAID 3 and is no longer used.
RAID 3: This technique uses striping and dedicates one drive to storing parity information. The embedded ECC information is used to detect errors. Data recovery is accomplished by calculating the exclusive OR (XOR) of the information recorded on the other drives. Since an I/O operation addresses all the drives at the same time, RAID 3 cannot overlap I/O. For this reason, RAID 3 is best for single-user systems with long record applications.
RAID 4: This level uses large stripes, which means you can read records from any single drive. This allows you to use overlapped I/O for read operations. However, since all write operations have to update the single dedicated parity drive, writes cannot overlap. RAID 4 offers no advantage over RAID 5.
RAID 5: This level is based on block-level striping with parity. The parity information is distributed across all the drives, allowing the array to keep functioning if one drive fails. The array’s architecture allows read and write operations to span multiple drives. This results in performance that is usually better than that of a single drive, but not as high as that of a RAID 0 array. RAID 5 requires at least three disks, but it is often recommended to use at least five disks for performance reasons.
RAID 5 arrays are generally considered to be a poor choice for use on write-intensive systems because of the performance impact associated with writing parity information. When a disk does fail, it can take a long time to rebuild a RAID 5 array. Performance is usually degraded during the rebuild time, and the array is vulnerable to an additional disk failure until the rebuild is complete.
RAID 6: This technique is similar to RAID 5, but includes a second parity scheme that is distributed across the drives in the array. The use of additional parity allows the array to continue to function even if two disks fail simultaneously. However, this extra protection comes at a cost. RAID 6 arrays have a higher cost per gigabyte (GB) and often have slower write performance than RAID 5 arrays.
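The parity mechanism behind RAID 3 and RAID 5 is a bytewise exclusive OR (XOR): the parity unit is the XOR of the data units, so any single missing unit can be recomputed by XOR-ing the survivors with the parity. A minimal sketch, with short byte strings standing in for stripe units on separate disks:

```python
from functools import reduce

def xor_parity(blocks):
    """Bytewise XOR of equal-length blocks, as used for RAID 3/5 parity."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# Three data units on three disks, parity on a fourth (a RAID 3 layout):
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_parity([d0, d1, d2])

# Simulate losing the disk holding d1: XOR the surviving data units
# with the parity unit to rebuild the lost one.
rebuilt = xor_parity([d0, d2, parity])
assert rebuilt == d1
```

This is why the array survives exactly one failure per stripe: XOR-ing all remaining units recovers the missing one, but two losses leave the equation unsolvable, which is what RAID 6’s second, independent parity scheme addresses.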
Nested RAID levels
Some RAID levels are referred to as nested RAID because they are based on a combination of RAID levels. Here are some examples of nested RAID levels.
RAID 10 (RAID 1+0): Combining RAID 1 and RAID 0, this level offers higher performance than RAID 1, but at a much higher cost. In RAID 1+0, the data is mirrored and the mirrors are striped.
RAID 01 (RAID 0+1): RAID 0+1 is similar to RAID 1+0, except the data organization method is slightly different. Rather than creating a mirror and then striping the mirror, RAID 0+1 creates a stripe set and then mirrors the stripe set.
RAID 03 (RAID 0+3, also known as RAID 53 or RAID 5+3): This level uses striping (in RAID 0 style) for RAID 3’s virtual disk blocks. This offers higher performance than RAID 3, but at a much higher cost.
RAID 50 (RAID 5+0): This configuration combines RAID 5 distributed parity with RAID 0 striping to improve RAID 5 performance without reducing data protection.
Nonstandard RAID levels
RAID 7: This RAID level is based on RAID 3 and RAID 4, but adds caching to the mix. It includes a real-time embedded OS as a controller, caching via a high-speed bus and other characteristics of a stand-alone computer. It is a nonstandard, trademarked RAID level owned by the now defunct Storage Computer Corp.
Adaptive RAID: Adaptive RAID lets the RAID controller decide how to store the parity on the disks. It will choose between RAID 3 and RAID 5, depending on which RAID set type will perform better with the type of data being written to the disks.
RAID S (also known as parity RAID): This is an alternate, proprietary method for striped parity RAID from EMC Symmetrix that is no longer in use on current equipment. It appears to be similar to RAID 5 with some performance enhancements, as well as the enhancements that come from having a high-speed disk cache on the disk array.
Linux MD RAID 10: This level, provided by the Linux kernel, supports the creation of nested and nonstandard RAID arrays. Linux software RAID can also support the creation of standard RAID 0, RAID 1, RAID 4, RAID 5 and RAID 6 configurations.
Benefits of RAID
Performance, resiliency and cost are among the major benefits of RAID. By putting multiple hard drives together, RAID can improve on the work of a single hard drive and, depending on how it is configured, can increase computer speed and reliability after a crash.
With RAID 0, files are split up and distributed across drives that work together on the same file. As such, reads and writes can be performed faster than with a single drive. RAID 5 arrays break data into sections, but also devote another drive to parity. This parity drive can see what is working when one nonparity drive fails, and can figure out what was on that failed drive. This function allows RAID to provide increased availability. With mirroring, RAID arrays can have two drives containing the same data, ensuring one will continue to work if the other fails.
Although the term inexpensive was removed from the acronym, RAID can still result in lower costs by using lower-priced disks in large numbers.
Downsides of using RAID
Nested RAID levels are more expensive to implement than traditional RAID levels because they require a greater number of disks. The cost per GB of storage is also higher for nested RAID because so many of the drives are used for redundancy. Nested RAID has become popular in spite of its cost because it helps to overcome some of the reliability problems associated with standard RAID levels.
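The cost-per-GB difference between levels comes straight from how many disks each level spends on redundancy. A rough sketch, assuming equal-size disks (real arrays may reserve extra space for spares or metadata):

```python
def usable_capacity(level: str, disks: int, size_gb: float) -> float:
    """Usable capacity for common RAID levels with equal-size disks.
    RAID 5 gives up one disk's worth to parity, RAID 6 two,
    and RAID 1/10 half the raw capacity to mirroring."""
    if level == "0":
        return disks * size_gb
    if level in ("1", "10"):
        return disks * size_gb / 2
    if level == "5":
        return (disks - 1) * size_gb
    if level == "6":
        return (disks - 2) * size_gb
    raise ValueError(f"unsupported level: {level}")

# Six 4,000 GB drives (24,000 GB raw):
assert usable_capacity("0", 6, 4000) == 24000   # no redundancy
assert usable_capacity("5", 6, 4000) == 20000   # one disk of parity
assert usable_capacity("6", 6, 4000) == 16000   # two disks of parity
assert usable_capacity("10", 6, 4000) == 12000  # half lost to mirrors
```

The mirrored levels pay the steepest capacity tax, which is why nested RAID’s reliability comes at a higher cost per GB than parity-based levels on the same hardware.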
Initially, all the drives in a RAID array are installed at the same time. This makes the drives the same age and subject to the same operating conditions and amount of wear. As a result, when one drive fails, there is a high probability that another drive in the array will soon fail too.
Some RAID levels (such as RAID 5 and RAID 1) can only sustain a single drive failure, although some RAID 1 implementations consist of multiple mirrors, and can therefore sustain multiple failures. The problem is that the RAID array and the data it contains are left in a vulnerable state until a failed drive is replaced and the new disk is populated with data. Because drives have much greater capacity now than when RAID was first implemented, it takes a lot longer to rebuild failed drives. Longer rebuild times increase the chance that a second drive will fail before the first drive is rebuilt.
Even if a second disk failure does not occur while the failed disk is being replaced, there is a chance the remaining disks may contain bad sectors or unreadable data. These types of conditions may make it impossible to fully rebuild the array.
Nested RAID levels address these problems by providing a greater degree of redundancy, greatly decreasing the chances of an array-level failure due to simultaneous disk failures.
The future of RAID
RAID is not quite dead, but many analysts say the technology has become obsolete in recent years. Alternatives such as erasure coding offer better data protection (albeit at a higher price), and have been developed with the intention of addressing the weaknesses of RAID. As drive capacity increases, so does the chance for error with a RAID array, and capacities are consistently increasing.
The rise of solid-state drives (SSDs) is also seen as alleviating the need for RAID. SSDs have no moving parts and do not fail as often as hard disk drives. SSD arrays often use techniques such as wear leveling instead of relying on RAID for data protection. Hyperscale computing also removes the need for RAID by using redundant servers instead of redundant drives.
Still, RAID remains an ingrained part of data storage for now, and major technology vendors still release RAID products. IBM has released IBM Distributed RAID with its Spectrum Virtualize V7.6, which promises to boost RAID performance. The latest version of Intel Rapid Storage Technology supports RAID 0, RAID 1, RAID 5 and RAID 10, and NetApp ONTAP management software uses RAID to protect against up to three simultaneous drive failures. The Dell EMC Unity platform also supports RAID 1/0, RAID 5 and RAID 6.