Distributed storage array system having plurality of storage...
Patent number: 06289398
Type: Reexamination Certificate (status: active)
Filed: 1998-07-27
Issued: 2001-09-11
Examiner: Lee, Thomas (Department: 2182)
Classification: Electrical computers and digital data processing systems: input/output – Input/output data processing – Input/output command process
U.S. Classes: 710/5; 711/112; 711/113; 711/114
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to data storage systems, and more particularly to a method and apparatus for storing data on multiple redundant data storage devices.
2. Description of Related Art
As computer use increases, data storage needs have increased even more. In an attempt to provide large amounts of data storage that is both inexpensive and reliable, it is becoming increasingly common to use large numbers of small, inexpensive data storage devices which work in unison to make available a reliable, large data storage capacity. In a paper entitled “A Case for Redundant Arrays of Inexpensive Disks (RAID)”, Patterson et al., Proc. ACM SIGMOD, June 1988, the University of California at Berkeley catalogued a set of concepts to address the problems of pooling multiple small data storage devices. The Patterson reference characterizes arrays of disk drives in one of five architectures under the acronym “RAID”.
A RAID 1 architecture involves providing a duplicate set of “mirror” storage units and keeping a duplicate copy of all data on each pair of storage units. While such a solution solves the reliability problem, it doubles the cost of storage. A number of implementations of RAID 1 architectures have been made, in particular by Tandem Corporation.
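For illustration only (not part of the patent text), a RAID 1 write can be sketched as follows in Python; the drive objects and their write_block method are hypothetical:

def mirrored_write(block_no, data, primary, mirror):
    # RAID 1: write the same block to both members of the mirrored pair,
    # so either drive alone can satisfy later reads if the other fails.
    # 'primary' and 'mirror' are hypothetical drive objects exposing
    # write_block(block_no, data).
    primary.write_block(block_no, data)
    mirror.write_block(block_no, data)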
A RAID 2 architecture stores each bit of each word of data, plus Error Detection and Correction (EDC) bits for each word, on separate disk drives. For example, U.S. Pat. No. 4,722,085 to Flora et al. discloses a disk drive memory using a plurality of relatively small, independently operating disk subsystems to function as a large, high capacity disk drive having an unusually high fault tolerance and a very high data transfer bandwidth. A data organizer adds 7 EDC bits (determined using the well-known Hamming code) to each 32-bit data word to provide error detection and error correction capability. The resultant 39-bit word is written, one bit per disk drive, on to 39 disk drives. If one of the 39 disk drives fails, the remaining 38 bits of each stored 39-bit word can be used to reconstruct each 32-bit data word on a word-by-word basis as each data word is read from the disk drives, thereby obtaining fault tolerance.
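As an informal sketch of the encoding step just described (this Python fragment is not drawn from the Flora et al. patent), a 32-bit word can be extended to 39 bits with a standard single-error-correcting Hamming code (6 check bits) plus one overall parity bit, the resulting bits being written one per drive:

def hamming_encode_32(word):
    # Encode a 32-bit word into 39 bits: 6 Hamming check bits at the
    # power-of-two positions 1, 2, 4, 8, 16 and 32 of a 38-bit codeword,
    # plus one overall parity bit appended at the end (7 EDC bits total).
    assert 0 <= word < 2**32
    n = 38
    code = [0] * (n + 1)                 # index 0 unused; positions 1..38
    data_bits = [(word >> i) & 1 for i in range(32)]
    d = 0
    for pos in range(1, n + 1):
        if pos & (pos - 1):              # not a power of two: data position
            code[pos] = data_bits[d]
            d += 1
    for p in (1, 2, 4, 8, 16, 32):
        parity = 0                       # check bit p covers every position
        for pos in range(1, n + 1):      # whose index has bit p set
            if (pos & p) and pos != p:
                parity ^= code[pos]
        code[p] = parity
    overall = 0
    for pos in range(1, n + 1):
        overall ^= code[pos]
    return code[1:] + [overall]          # 39 bits, one per disk drive

bits = hamming_encode_32(0xDEADBEEF)     # len(bits) == 39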
An obvious drawback of such a system is the large number of disk drives required for a minimum system (since most large computers use a 32-bit word), and the relatively high ratio of drives required to store the EDC bits (7 drives out of 39). A further limitation of a RAID 2 disk drive memory system is that the individual disk actuators are operated in unison to write each data block, the bits of which are distributed over all of the disk drives. This arrangement has a high data transfer bandwidth, since each individual disk transfers part of a block of data, the net effect being that the entire block is available to the computer system much faster than if a single drive were accessing the block. This is advantageous for large data blocks. However, this arrangement effectively provides only a single read/write head actuator for the entire storage unit. This adversely affects the random access performance of the drive array when data files are small, since only one data file at a time can be accessed by the “single” actuator. Thus, RAID 2 systems are generally not considered to be suitable for computer systems designed for On-Line Transaction Processing (OLTP), such as in banking, financial, and reservation systems, where a large number of random accesses to many small data files comprises the bulk of data storage and transfer operations.
A RAID 3 architecture is based on the concept that each disk drive storage unit has internal means for detecting a fault or data error. Therefore, it is not necessary to store extra information to detect the location of an error; a simpler form of parity-based error correction can thus be used. In this approach, the contents of all storage units subject to failure are “Exclusive OR'd” (XOR'd) to generate parity information. The resulting parity information is stored in a single redundant storage unit. If a storage unit fails, the data on that unit can be reconstructed onto a replacement storage unit by XOR'ing the data from the remaining storage units with the parity information. Such an arrangement has the advantage over the mirrored disk RAID 1 architecture in that only one additional storage unit is required for “N” storage units. A further aspect of the RAID 3 architecture is that the disk drives are operated in a coupled manner, similar to a RAID 2 system, and a single disk drive is designated as the parity unit.
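The XOR parity and reconstruction steps described above can be sketched in Python as follows (illustrative only, not drawn from any particular RAID 3 product):

from functools import reduce

def xor_blocks(blocks):
    # Bytewise XOR of a list of equal-length blocks.
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

def parity_of(data_blocks):
    # Parity block written to the single redundant unit.
    return xor_blocks(data_blocks)

def reconstruct_lost(surviving_blocks, parity):
    # Rebuild the block of a failed unit by XOR'ing the surviving data
    # blocks with the stored parity block.
    return xor_blocks(list(surviving_blocks) + [parity])

# Example: with three data units, the loss of the middle one is recoverable.
data = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
parity = parity_of(data)
assert reconstruct_lost([data[0], data[2]], parity) == data[1]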
One implementation of a RAID 3 architecture is the Micropolis Corporation Parallel Drive Array, Model 1804 SCSI, that uses four parallel, synchronized disk drives and one redundant parity drive. The failure of one of the four data disk drives can be remedied by the use of the parity bits stored on the parity disk drive. Another example of a RAID 3 system is described in U.S. Pat. No. 4,092,732 to Ouchi.
A RAID 3 disk drive memory system has a much lower ratio of redundancy units to data units than a RAID 2 system. However, a RAID 3 system has the same performance limitation as a RAID 2 system in that the individual disk actuators are coupled, operating in unison. This adversely affects the random access performance of the drive array when data files are small, since only one data file at a time can be accessed by the “single” actuator. Thus, RAID 3 systems are generally not considered to be suitable for computer systems designed for OLTP purposes.
A RAID 4 architecture uses the same parity error correction concept of the RAID 3 architecture, but improves on the performance of a RAID 3 system with respect to random reading of small files by “uncoupling” the operation of the individual disk drive actuators, and reading and writing a larger minimum amount of data (typically, a disk sector) to each disk (this is also known as block striping). A further aspect of the RAID 4 architecture is that a single storage unit is designated as the parity unit.
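A minimal sketch of the block-striping address map implied above, assuming consecutive logical blocks are simply placed on consecutive data drives (the exact mapping is an assumption; the text only requires that whole blocks, rather than individual bits, go to each drive):

def raid4_map(logical_block, num_data_drives):
    # Returns (data_drive_index, block_number_on_that_drive).
    # The parity unit is a fixed, separate drive in RAID 4 and does not
    # appear in this data mapping.
    drive = logical_block % num_data_drives
    block_on_drive = logical_block // num_data_drives
    return drive, block_on_drive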
A limitation of a RAID 4 system is that writing a data block on any of the independently operating storage units also requires writing a new parity block on the parity unit. The parity information stored on the parity unit must be read and XOR'd with the old data (to “remove” the information content of the old data), and the resulting sum must then be XOR'd with the new data (to provide new parity information). Both the data and the parity records then must be rewritten to the disk drives. This process is commonly referred to as a “Read-Modify-Write” (RMW) operation.
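In code form, the Read-Modify-Write parity update amounts to new_parity = old_parity XOR old_data XOR new_data, for example (illustrative sketch):

def rmw_parity(old_parity, old_data, new_data):
    # Remove the old data's contribution from the parity, then fold in the
    # new data's contribution; all three buffers must be the same length.
    return bytes(p ^ od ^ nd for p, od, nd in zip(old_parity, old_data, new_data))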
Thus, a read and a write operation on the single parity unit occurs each time a record is changed on any of the storage units covered by a parity record on the parity unit. The parity unit becomes a bottleneck to data writing operations since the number of changes to records which can be made per unit of time is a function of the access rate of the parity unit, as opposed to the faster access rate provided by parallel operation of the multiple storage units. Because of this limitation, a RAID 4 system is generally not considered to be suitable for computer systems designed for OLTP purposes. Indeed, it appears that a RAID 4 system has not been implemented for any commercial purpose.
A RAID 5 architecture uses the same parity error correction concept of the RAID 4 architecture and independent actuators, but improves on the writing performance of a RAID 4 system by distributing the data and parity information across all of the available disk drives. Typically, “N+1” storage units in a set (also known as a “redundancy group”) are divided into a plurality of equally sized address areas referred to as blocks. Each storage unit generally contains the same number of blocks. Blocks from each storage unit in a redundancy group having the same unit address ranges are referred to as “stripes”. Each stripe has N blocks of data, plus one parity block.
Inventors: Brant, William A.; Hall, Randy; Stallmo, David C.; Cao, Chun
Assignee: EMC Corporation
Attorney: Howrey Simon Arnold & White, LLP