Saturday November 28th, 2015



Although all RAID implementations differ from the idealized specification to some extent, some companies have developed non-standard RAID implementations that differ substantially from the standard levels. Most of these are proprietary. Below is a detailed description of the most common specialized arrays offered by various organizations.

Double parity
One common addition to the existing RAID levels is double parity, sometimes implemented and known as diagonal parity[1]. As in traditional RAID 6, there are two sets of parity check information. Unlike traditional RAID 6, however, the second set is not another set of points in the overdefined polynomial which characterizes the data; rather, double parity calculates the extra parity against a different group of blocks. SNIA[2] has since updated the RAID 6 definition, and double parity can now be considered RAID 6. For example, in the diagrams below, both RAID 5 and RAID 6 calculate parity against all A-lettered blocks to produce one or more parity blocks. However, since it is fairly easy to calculate parity against multiple groups of blocks, one can calculate parity against all A-lettered blocks and also against a permuted group of blocks.

This is more easily illustrated using RAID 4, Twin Syndrome RAID 4 (RAID 6 with a RAID 4 layout which is not actually implemented), and double parity RAID 4.

Note: A1, B1, et cetera each represent one data block; each column represents one disk.

The n blocks are the double parity blocks. The block 2n is calculated as A2 xor B3 xor Cp, while 3n is calculated as A3 xor Bp xor C1, and 1n as A1 xor B2 xor C3. Because the double parity blocks are correctly distributed, it is possible to reconstruct two lost data disks through iterative recovery. For example, if the first and second disks both fail, B2 can be recovered without any blocks from those disks: first compute A2 = B3 xor Cp xor 2n, then A1 = A2 xor A3 xor Ap, and finally B2 = A1 xor C3 xor 1n. Running in degraded mode with a double parity system is not advised.

RAID-DP
- Diagram under construction -

RAID-DP is Network Appliance's implementation of RAID double parity for data protection, and it falls within SNIA's definition of RAID 6. Unlike many RAID 6 implementations, which can suffer a performance hit of over 30%, the performance impact of RAID-DP is typically under 2% due to the behavior of the storage controller software. All file system requests are first written to battery-backed NVRAM to ensure there is no data loss should the system lose power. Blocks are never updated in place; instead, incoming writes are aggregated and the storage controller tries to write only complete stripes, including both parity blocks. RAID-DP provides better protection than RAID 1/0 and even enables disk firmware updates to occur in real time without any outage.

RAID 1.5

- Diagram under construction -

RAID 1.5 is a proprietary RAID level from HighPoint and is sometimes incorrectly called RAID 15. From the limited information available, it appears to be simply a correct implementation of RAID 1: when reading, data is read from both disks simultaneously, and most of the work is done in hardware rather than in the driver.

RAID 5E, 5EE and 6E

- Diagram under construction -

RAID 5E, RAID 5EE and RAID 6E (with the added E standing for Enhanced) generally refer to variants of RAID 5 or RAID 6 with an integrated hot-spare drive, where the spare drive is an active part of the block rotation scheme. This allows the I/O to be spread across all drives, including the spare, thus reducing the I/O bandwidth per drive, allowing for higher performance. It does, however, mean that a spare drive cannot be shared among multiple arrays, which is occasionally desirable. The scheme was introduced by IBM ServeRAID around 2001.

In RAID 5E, RAID 5EE and RAID 6E, there is actually no dedicated "spare drive", just as there is no dedicated "parity drive" in RAID 5 or RAID 6. Instead, the spare blocks are distributed across all the drives, so that in a 10-disk RAID 5E with one spare, each disk is 80% data, 10% parity, and 10% spare. The spare blocks in RAID 5E and RAID 6E are at the end of the array, while in RAID 5EE the spare blocks are integrated into the array. RAID 5EE can sustain a single drive failure; it requires at least four disks and can expand up to 16 disks. If one drive fails in a RAID 5E/5EE array, the array is 'compressed' and rebuilt (re-striped) into a standard RAID 5 array. This process can be very intensive on drive I/O and may take several hours or days depending on the speed, size and number of the drives. Only after the compression completes can a second drive fail without data loss. The replacement and rebuild of the second failed drive work the same as in a standard RAID 5.
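The space accounting in the 10-disk example above can be made explicit with a small sketch. The function name and the assumption of one distributed parity drive's worth of space are illustrative, not from any vendor's tooling:

```python
def raid5e_fractions(num_disks: int, spares: int = 1):
    """Per-disk space split for a RAID 5E/5EE-style array in which one
    disk's worth of parity and `spares` disks' worth of spare space are
    distributed evenly across all members."""
    data = (num_disks - 1 - spares) / num_disks
    parity = 1 / num_disks
    spare = spares / num_disks
    return data, parity, spare

# The 10-disk example from the text: 80% data, 10% parity, 10% spare.
print(raid5e_fractions(10))  # (0.8, 0.1, 0.1)
```

As the drive count grows, the parity and spare overhead per disk shrinks, but so does the performance benefit of spreading the spare, which is one reason for the 4-to-8-drive practical limit mentioned below.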

Once the original failed drive is replaced and the compression has completed, the array is 'decompressed' and rebuilt (re-striped) back into a RAID 5E/5EE array. This process may also take several hours or days depending on the speed, size and number of the drives. During the 'compressing' and 'decompressing' stages the array is at risk, as a second disk failure during those stages causes data loss: the array is not protected by redundancy while they run. Due to the length of time and intense I/O activity of both the compression and decompression, a practical limit of 4 to 8 drives is recommended. The performance boost of incorporating the hot-spare drive into the data array diminishes beyond 8 drives, and the benefit is severely reduced by the length of the rebuild of a failed drive and the risk of data loss incurred by a second drive failure during the compression.

RAID 7
- Diagram under construction -

RAID 7 is a trademark of Storage Computer Corporation. It adds caching to a derivative of RAID 3 and RAID 4 to improve performance.

RAID S
RAID S is EMC Corporation's proprietary striped-parity RAID system, used in its Symmetrix storage systems. Each volume exists on a single physical disk, and multiple volumes are arbitrarily combined for parity purposes. EMC originally referred to this capability as RAID S and later renamed it Parity RAID for the Symmetrix DMX platform. EMC now also offers standard striped RAID 5 on the Symmetrix DMX. RAID S is no longer used in new EMC products, though it remains in products already sold.

Note: A1, B1, et cetera each represent one data block; each column represents one disk. A, B, et cetera are entire volumes.

Matrix RAID
Matrix RAID is a feature that first appeared in the Intel ICH6R RAID BIOS. It is not a new RAID level. Matrix RAID utilizes two physical disks: part of each disk is assigned to a level 0 array, and the other part to a level 1 array. Most, if not all, other inexpensive RAID BIOS products allow a disk to participate in only a single array. Intel's recommended setup is to put the operating system, critical application programs and data on the RAID 1 volume, the reasoning being that protection from losing the configured OS, programs and data is more important than a performance increase. The RAID 0 volume in Matrix RAID is mostly promoted for working with large files, such as videos during editing, and for non-critical files where fast storage will increase performance (swap files, for example).

Linux MD RAID 10
The Linux kernel software RAID driver (called md, for "multiple devices") can be used to build a classic RAID 1+0 array, but also (since version 2.6.9) provides a single RAID 10 level[3] with some interesting extensions.

The standard "near" layout, where each chunk is repeated n times in a k-way stripe array, is equivalent to the standard RAID-10 arrangement, but it does not require that n divide k. For example an n2 layout on 3 drives and 4 drives would look like:

- Diagram under construction -

The 4-drive example is identical to a standard RAID-10 array, but the 3-drive one is novel.

The driver also supports a "far" layout where all the drives are divided into f sections. All the chunks are repeated in each section but offset by one device. For example an f2 layout on 3 drives would look like:

- Diagram under construction -

This is designed for rarely-written data; writes are slower because they are scattered, but the first 1/f of each drive is a standard RAID-0 array.

The near and far options can both be used at the same time. The chunks in each section are offset by n device(s). For example n2 f2 layout stores 2×2 = 4 copies of each sector, so requires at least 4 drives:

- Diagram under construction -

As of Linux 2.6.18 the driver also supports an offset layout where each stripe is repeated o times. For example an o2 layout on 3 drives:

- Diagram under construction -

Note: k is the number of drives, n#, f# and o# are parameters in the mdadm --layout option.
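The placement rule for the "near" layout can be sketched as follows: each chunk's n copies are written to consecutive positions, and positions simply fill the drives row by row. This is a simplified model (it ignores chunk size and the far/offset variants), but it reproduces the two n2 examples discussed above:

```python
def near_layout(num_drives: int, copies: int, num_chunks: int):
    """Grid of chunk indices for the md RAID 10 "near" layout:
    grid[row][drive] = chunk stored there.  Copy j of chunk i goes to
    linear position i*copies + j, filling the drives row by row."""
    positions = num_chunks * copies
    rows = positions // num_drives
    grid = [[None] * num_drives for _ in range(rows)]
    for chunk in range(num_chunks):
        for copy in range(copies):
            pos = chunk * copies + copy
            grid[pos // num_drives][pos % num_drives] = chunk
    return grid

# n2 on 3 drives: chunk 1 straddles the last drive of one row and the
# first drive of the next -- the novel arrangement, where the number of
# copies (2) does not divide the number of drives (3).
print(near_layout(3, 2, 3))  # [[0, 0, 1], [1, 2, 2]]

# n2 on 4 drives reduces to a standard RAID 1+0 arrangement:
# adjacent drive pairs hold identical data.
print(near_layout(4, 2, 4))  # [[0, 0, 1, 1], [2, 2, 3, 3]]
```

Because the two copies of a chunk always land on different drives, any single drive failure leaves at least one copy of every chunk intact, just as in classic RAID 1+0.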

Linux can also create other standard RAID configurations using the md driver (0, 1, 4, 5, 6), as well as non-RAID uses like multipath and LVM2. The md driver should not be confused with the dm driver, which is used for IDE/ATA chipset-based software RAID (i.e., fakeraid).

RAID 1E
The IBM ServeRAID adapter series supports 2-way mirroring on an arbitrary number of drives.

This configuration is tolerant of non-adjacent drives failing. Other storage systems, including Sun's StorEdge T3, support this mode as well.

RAID-K
- Diagram under construction -

Kaleidescape's KSERVER-5000 and KSERVER-1500 use a proprietary RAID-K in their media storage units. RAID-K is similar to RAID 4 in its use of parity, but RAID-K also uses another, proprietary method of maintaining fault tolerance. The system is easily modified: users can expand the array simply by inserting additional hard disks. Additionally, if a hard disk with data already on it is inserted, that data is automatically added to the array rather than deleted, as many other RAID schemes would require.

RAID-Z
- Diagram under construction -

Sun's ZFS implements an integrated redundancy scheme similar to RAID 5, which it calls RAID-Z. RAID-Z avoids the RAID 5 "write hole"[4] through its copy-on-write policy: rather than overwriting old data with new data, it writes new data to a new location and then updates the pointer to reference it. It avoids the need for read-modify-write operations on small writes by only ever performing full-stripe writes; small blocks are mirrored instead of parity-protected, which is possible because the file system is aware of the underlying storage structure and can allocate extra space when necessary. There is also RAID-Z2, which uses two forms of parity to achieve results similar to RAID 6: the ability to lose up to two drives without losing data.
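The copy-on-write idea can be illustrated with a toy block store. This is a conceptual sketch of the general technique, not ZFS's actual on-disk format; the class and method names are invented for illustration:

```python
class CowStore:
    """Toy copy-on-write block store: data is never overwritten in place."""

    def __init__(self):
        self.blocks = {}     # block address -> data
        self.pointers = {}   # logical name -> block address
        self.next_addr = 0

    def _alloc(self, data):
        addr = self.next_addr
        self.next_addr += 1
        self.blocks[addr] = data
        return addr

    def write(self, name, data):
        # Write the new data to a fresh block first...
        addr = self._alloc(data)
        # ...then switch the pointer in a single step.  If a crash
        # happened before this line, the old version would still be
        # fully intact -- the failure mode the RAID 5 "write hole"
        # (a stripe half-updated at power loss) cannot avoid.
        self.pointers[name] = addr

    def read(self, name):
        return self.blocks[self.pointers[name]]

store = CowStore()
store.write("file", b"version 1")
store.write("file", b"version 2")
print(store.read("file"))  # b'version 2'
```

Note that the old block (`b"version 1"`) still exists after the second write; a real system would garbage-collect it once no pointer references it.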

unRAID
- Diagram under construction -

unRAID (sometimes written UNRAID) is a RAID scheme developed by Lime Technology LLC. Unlike many other schemes, it has no requirement that the drives in the array be of matching size or speed. Another unusual feature is support for mixing PATA and SATA drives in the same RAID set, although this is also supported by most software RAID implementations. Both elements represent a deviation from the norm. In addition, unRAID uses a dedicated parity drive and does not stripe data across the other drives in the array. Parity provides recovery when a single drive fails; in the event of multiple simultaneous drive failures, only the data on the failed drives is lost. unRAID is implemented as an add-on to the Linux MD layer.

Drive Extender

- Diagram under construction -

Windows Home Server Drive Extender is a specialized case of JBOD RAID 1 implemented at the file system level, separate from what is offered by Windows' Logical Disk Manager. When a file that is to be duplicated is stored, a special pointer called a tombstone is created on the main storage drive's NTFS partition, pointing to the data residing on other disk(s). When the system is idle, the OS rebalances the storage to provide the required redundancy while maximizing the storage capacity of each drive. Although not as robust as true RAID, it provides many of the benefits RAID offers, including a single hierarchical view of the file system regardless of which physical disk the data is stored on, the ability to swap out a failed disk without losing redundant data, and seamless duplication of the data onto the replacement disk in the background.

It is also possible to tell Windows Home Server not to duplicate data on a per-share basis. In this case, Drive Extender still stores files on different disks and uses tombstones to point to them, providing faster read access when the end user requests multiple files located on different disks, similar to the speed benefit provided by RAID 0.
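The tombstone mechanism can be sketched in a few lines. Everything here is a hypothetical model for illustration: the dictionary "disks", the tombstone's JSON shape, and the function names are invented, not Drive Extender's actual on-disk format.

```python
import json

disks = {"D": {}, "E": {}}   # secondary disks: path -> file contents
primary = {}                 # primary volume: path -> tombstone record

def store_duplicated(path, data):
    # Write the data to two different disks...
    for disk in ("D", "E"):
        disks[disk][path] = data
    # ...and leave a tombstone on the primary volume pointing at both
    # copies.  The tombstone itself holds no file data, only pointers.
    primary[path] = json.dumps({"copies": [["D", path], ["E", path]]})

def read_file(path):
    # Follow the tombstone to the first reachable copy.
    for disk, p in json.loads(primary[path])["copies"]:
        if disk in disks and p in disks[disk]:
            return disks[disk][p]
    raise FileNotFoundError(path)

store_duplicated("photos/cat.jpg", b"...jpeg bytes...")
del disks["D"]["photos/cat.jpg"]   # simulate losing one copy
print(read_file("photos/cat.jpg"))  # still readable from disk E
```

With duplication disabled for a share, the same tombstone structure would simply list a single copy, which is why reads can still be spread across disks without any redundancy.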

© 2007 - 2013 RAID RECOVERY LABS, Inc. All Rights Reserved.