RAID rebuild using most vulnerable data redundancy scheme first

Error detection/correction and fault detection/recovery – Data processing system error or fault handling – Reliability and availability

Reexamination Certificate


Details

U.S. Class: C714S015000
Status: active
Patent Number: 06516425


FIELD OF THE INVENTION
This invention relates in general to computer mass storage systems and, more particularly, to prioritizing data rebuild in the event of a disk failure in a hierarchical system utilizing a Redundant Array of Independent Disks (RAID).
BACKGROUND OF THE INVENTION
Conventional disk array data storage systems have multiple disk storage devices that are arranged and coordinated to form a single mass storage system. A Redundant Array of Independent Disks (RAID) system is an organization of data in an array of mass data storage devices, such as hard disk drives, to achieve varying levels of data availability and system performance. Data availability refers to the ability of the RAID system to read and write data in the array of data storage devices even in the event of a failure of one or more of the individual data storage devices or components in the array. A measurement of system performance is the rate at which data can be sent to or received from the RAID system.
Data availability is often provided through the use of redundancy schemes where data, or relationships among data, are stored in multiple locations on the storage system. In the event of a disk or component failure, redundant data is retrieved from the operable portion of the system and used to regenerate the original data that is lost due to the failure. There are two common methods for storing redundant data: mirror and parity. In mirror redundancy, data is duplicated and stored in two or more separate areas of the storage system. In parity redundancy, redundant data is stored in one or more areas of the storage system, but the size of the redundant storage area is less than the storage space used to store the original data.
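By way of illustration, the following sketch (hypothetical helper names, not taken from the patent) stores a block under mirror redundancy, computes an Exclusive OR parity block under parity redundancy, and regenerates a lost block from the parity and the surviving blocks:

    # Hypothetical sketch contrasting mirror and parity redundancy.
    # Blocks are modeled as bytes objects; names are illustrative only.

    def mirror_write(block: bytes) -> tuple[bytes, bytes]:
        """Mirror redundancy: store two full copies of the data."""
        return block, block  # 100% storage overhead

    def parity_block(blocks: list[bytes]) -> bytes:
        """Parity redundancy: one XOR block protects N data blocks."""
        parity = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                parity[i] ^= b
        return bytes(parity)

    # Recovery: a lost data block is the XOR of the parity block with
    # the surviving data blocks.
    data = [b"AAAA", b"BBBB", b"CCCC"]
    p = parity_block(data)
    recovered = parity_block([p, data[1], data[2]])  # rebuild data[0]
    assert recovered == data[0]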
RAID systems typically designate part of the physical storage capacity in the array to store redundant data, either mirror or parity. The redundant information enables regeneration of user data in the event that one or more of the array's member disks, components, or the access paths to the disk(s) fail. Typically, the disks are divided into equally sized address areas referred to as “blocks.” A set of blocks that has the same unit address ranges from each disk is referred to as a “stripe” or “stripe set.” A set (or subset) of disks in the array over which a stripe or stripe set spans is referred to as a redundancy group. Traditionally, RAID arrays employ one or more redundancy groups and a single redundancy scheme for each redundancy group, although the schemes may vary among the redundancy groups. However, as will be discussed subsequently herein, hierarchical RAID arrays employ one or more redundancy schemes (i.e., RAID levels) for each redundancy group in an array.
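To make the block and stripe terminology concrete, a minimal addressing sketch (illustrative only; parity placement is ignored) maps a logical block number to its member disk and stripe within a redundancy group:

    # Minimal sketch: map a logical block number to (disk, stripe) in a
    # redundancy group of N member disks. A "stripe" is the set of blocks
    # with the same unit address range on every member disk.

    def locate(logical_block: int, num_disks: int) -> tuple[int, int]:
        disk = logical_block % num_disks     # which member disk
        stripe = logical_block // num_disks  # which stripe (address range)
        return disk, stripe

    # Blocks 0..3 of a 4-disk group form stripe 0; block 4 starts stripe 1.
    assert [locate(b, 4) for b in range(5)] == [(0, 0), (1, 0), (2, 0), (3, 0), (0, 1)]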
From a data management and data redundancy perspective, RAID levels are typically characterized as one of six architectures, or redundancy schemes, enumerated as RAID levels 1-6. Although other RAID levels exist, levels 1-6 are the most commonly used and will be discussed herein with respect to the present invention. However, it should be noted that the present invention is applicable to any RAID level or data redundancy scheme.
The use of disk mirroring is referred to as RAID Level 1, where original data is stored on one set of disks and a duplicate copy of the data is kept on separate disks. The use of parity checking is referred to as RAID Levels 2, 3, 4, 5, and 6. In general, although RAID 1 provides higher data reliability and may provide better small-write input/output (I/O) performance than RAID Levels 2, 3, 4, and 5, it uses the most storage space because all data is duplicated. In contrast, RAID Levels 2-5 provide a lesser amount of data reliability (relative to RAID 1) and, typically, reduced small-write performance. However, they do not consume as much disk space as a RAID 1 technique because data is not duplicated but rather interleaved and parity checked across the disk array in a stripe set. A parity stripe set interleaves data and redundant (parity) data on multiple member disks. The parity stripe set presents a single virtual disk whose user data capacity is approximately the sum of the capacities of its members, less the storage used for holding the parity (redundant) data of the user data. For RAID Levels 3-5, parity is commonly calculated using a bit-by-bit Exclusive OR function of corresponding data chunks in a stripe set from all of the data disks; this corresponds to a one-equation, one-unknown, sum-of-products calculation. The mirror set in a RAID 1 architecture presents a single virtual disk whose user data capacity is the sum of the capacities of one-half of its members, the other half holding the mirrored (redundant) data of the user data.
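The capacity trade-off described above can be made concrete with simple arithmetic; the sketch below assumes equal-sized member disks and uses hypothetical function names:

    # Usable capacity sketch, assuming N equal-sized member disks.

    def raid1_capacity(n_disks: int, disk_size: int) -> int:
        """Mirror set: half the members hold duplicate data."""
        return (n_disks // 2) * disk_size

    def parity_capacity(n_disks: int, disk_size: int, parity_disks: int = 1) -> int:
        """Parity stripe set: sum of the members less the parity space."""
        return (n_disks - parity_disks) * disk_size

    # Six 100-GB disks: RAID 1 exposes 300 GB, RAID 5 exposes 500 GB.
    assert raid1_capacity(6, 100) == 300
    assert parity_capacity(6, 100) == 500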
In addition to RAID mirror level 1, RAID parity levels 4, 5, and 6 are of particular interest for the present discussion. Specifically, for example, a RAID 4 system uses a stripe set or redundancy group and a single dedicated parity disk to store redundant information about the data existing on the other data disks in the stripe set or redundancy group. Segments of data from each virtual disk sector are distributed across corresponding sectors of all but one of the stripe set members (the excluded member being the parity disk), and the parity of the distributed segments is written in the corresponding sector of the parity disk.
Because a RAID 4 system stores all parity blocks on a single unit in the stripe set, that unit is accessed disproportionately relative to the other data storage devices in the stripe set or redundancy group. To eliminate the resulting constriction of data flow in a RAID 4 system, a RAID 5 architecture distributes the parity blocks across all of the data storage devices in the stripe set or redundancy group. Typically in a RAID 5 system, a set of N+1 data storage devices forms the stripe set or redundancy group. Each stripe has N blocks of data and one block of parity data, the parity block being stored in one of the N+1 data storage devices. The parity blocks corresponding to the remaining stripes of the stripe set or redundancy group are distributed across the data storage devices within the stripe set or redundancy group. For example, in a RAID 5 system using five data storage devices in a given stripe set or redundancy group, the parity block for the first stripe of blocks may be written to the fifth device; the parity block for the second stripe of blocks may be written to the fourth device; the parity block for the third stripe of blocks may be written to the third device; and so on. Typically, the location of the parity block shifts to the succeeding logical device with each succeeding stripe in the stripe set or redundancy group, although other patterns may be used.
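The placement difference between the two schemes can be sketched as follows (hypothetical functions; the rotation shown matches the five-device example above, and, as noted, other patterns may be used):

    # Sketch of parity placement, assuming devices numbered 0..N-1.

    def raid4_parity_disk(stripe: int, n_disks: int) -> int:
        """RAID 4: a single dedicated parity disk (here, the last one),
        regardless of the stripe number."""
        return n_disks - 1

    def raid5_parity_disk(stripe: int, n_disks: int) -> int:
        """RAID 5: parity rotates to the preceding device each stripe."""
        return (n_disks - 1 - stripe) % n_disks

    # Five devices: parity lands on the 5th, then 4th, then 3rd device...
    assert [raid5_parity_disk(s, 5) for s in range(5)] == [4, 3, 2, 1, 0]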
A RAID 6 architecture is similar to RAID 4 and 5 in that data is striped, but is dissimilar in that it utilizes two independent and distinct parity values for the original data, referred to here as P and Q. The P parity is commonly calculated using a bit-by-bit Exclusive OR function of corresponding data chunks in a stripe from all of the data disks; this corresponds to a one-equation, one-unknown, sum-of-products calculation. The Q parity, on the other hand, is calculated to be linearly independent of P, using a different sum-of-products algorithm. Because each parity value is calculated using an independent algorithm and each is stored on a separate disk in the stripe set or redundancy group, a RAID 6 system can rebuild data (assuming rebuild space is available) even in the event of a failure of two separate disks within a stripe set or redundancy group, whereas a RAID 5 system can rebuild data only in the event of no more than a single disk failure within a stripe set or redundancy group.
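One common way to realize two linearly independent parity values is a Reed-Solomon style construction over GF(2^8), as used by several RAID 6 implementations; the sketch below illustrates that construction and is not necessarily the algorithm contemplated by the patent:

    # Sketch of RAID 6 P/Q parity using the common Reed-Solomon style
    # construction over GF(2^8): P = d0 ^ d1 ^ ... and Q = sum(g^i * d_i),
    # with generator g = 2 and field polynomial x^8+x^4+x^3+x^2+1 (0x11D).
    # Illustrative only; not necessarily the patent's algorithm.

    def gf_mul(a: int, b: int) -> int:
        """Multiply two bytes in GF(2^8)."""
        product = 0
        while b:
            if b & 1:
                product ^= a
            a <<= 1
            if a & 0x100:
                a ^= 0x11D
            b >>= 1
        return product

    def pq_parity(chunks: list[bytes]) -> tuple[bytes, bytes]:
        p = bytearray(len(chunks[0]))
        q = bytearray(len(chunks[0]))
        for i, chunk in enumerate(chunks):
            coeff = 1
            for _ in range(i):           # coeff = g^i with g = 2
                coeff = gf_mul(coeff, 2)
            for j, byte in enumerate(chunk):
                p[j] ^= byte                 # P: plain XOR parity
                q[j] ^= gf_mul(coeff, byte)  # Q: independent combination
        return bytes(p), bytes(q)

    # With any two stripe members lost, P and Q supply two independent
    # equations in two unknowns, enabling the double-failure rebuild.
    p, q = pq_parity([b"\x01\x02", b"\x03\x04", b"\x05\x06"])
    assert p == b"\x07\x00"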
Similar to RAID 5, a RAID 6 architecture distributes the two parity blocks across all of the data storage devices in the stripe set or redundancy group. Thus, in a stripe set or redundancy group of N+2 data storage devices, each stripe has N blocks of data and two blocks of parity data.
