Reexamination Certificate
1995-12-27
2001-09-11
Le, Dieu-Minh (Department: 2184)
Error detection/correction and fault detection/recovery
Data processing system error or fault handling
Reliability and availability
C714S718000
Reexamination Certificate
active
06289471
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to computer system data storage, and more particularly to a fault-tolerant storage device array using a solid-state storage unit for storage of redundancy information.
2. Description of Related Art
A typical data processing system generally includes one or more storage units which are connected to a Central Processor Unit (CPU) either directly or through a control unit and a channel. The function of the storage units is to store data and programs which the CPU uses in performing particular data processing tasks.
Various types of storage units are used in current data processing systems. A typical system may include one or more large capacity tape units and/or disk drives (magnetic, optical, or semiconductor) connected to the system through respective control units for storing data.
However, a problem exists if one of the large capacity storage units fails such that information contained in that unit is no longer available to the system. Generally, such a failure will shut down the entire computer system.
The prior art has suggested several ways of solving the problem of providing reliable data storage. In systems where records are relatively small, it is possible to use error correcting codes which generate ECC syndrome bits that are appended to each data record within a storage unit. With such codes, it is possible to correct a small amount of data that may be read erroneously. However, such codes are generally not suitable for correcting or recreating long records which are in error, and provide no remedy at all if a complete storage unit fails. Therefore, a need exists for providing data reliability external to individual storage units.
Other approaches to such “external” reliability have been described in the art. A research group at the University of California, Berkeley, in a paper entitled “A Case for Redundant Arrays of Inexpensive Disks (RAID)”, Patterson, et al., Proc. ACM SIGMOD, June 1988, has catalogued a number of different approaches for providing such reliability when using disk drives as storage units. Arrays of disk drives are characterized in one of five architectures, under the acronym “RAID” (for Redundant Arrays of Inexpensive Disks).
A RAID 1 architecture involves providing a duplicate set of “mirror” storage units and keeping a duplicate copy of all data on each pair of storage units. While such a solution solves the reliability problem, it doubles the cost of storage. A number of implementations of RAID 1 architectures have been made, in particular by Tandem Corporation.
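To make the mirroring idea concrete, the following Python sketch (hypothetical class and method names, not taken from any cited implementation) writes every block to both members of a mirrored pair and falls back to the surviving copy when a read fails; the underlying unit objects are assumed to expose read/write methods that raise IOError on failure:

    class MirroredPair:
        """Toy RAID 1 pair: every write goes to both units, reads fall back to the mirror."""
        def __init__(self, primary, mirror):
            self.primary, self.mirror = primary, mirror

        def write(self, block_no, data):
            self.primary.write(block_no, data)      # duplicate copy kept on each unit
            self.mirror.write(block_no, data)

        def read(self, block_no):
            try:
                return self.primary.read(block_no)
            except IOError:                         # primary failed: use the surviving copy
                return self.mirror.read(block_no)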
A RAID 2 architecture stores each bit of each word of data, plus Error Detection and Correction (EDC) bits for each word, on separate disk drives (this is also known as “bit striping”). For example, U.S. Pat. No. 4,722,085 to Flora et al. discloses a disk drive memory using a plurality of relatively small, independently operating disk subsystems to function as a large, high capacity disk drive having an unusually high fault tolerance and a very high data transfer bandwidth. A data organizer adds 7 EDC bits (determined using the well-known Hamming code) to each 32-bit data word to provide error detection and error correction capability. The resultant 39-bit word is written, one bit per disk drive, on to 39 disk drives. If one of the 39 disk drives fails, the remaining 38 bits of each stored 39-bit word can be used to reconstruct each 32-bit data word on a word-by-word basis as each data word is read from the disk drives, thereby obtaining fault tolerance.
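The word-level EDC scheme can be illustrated with the Python sketch below, which spreads a 32-bit word plus 7 check bits (6 Hamming check bits and one overall parity bit) across 39 one-bit positions, one per “drive”, and corrects a single bad bit. The bit layout and helper names are assumptions for illustration only, not details taken from the Flora et al. patent.

    PARITY_POS = [1, 2, 4, 8, 16, 32]                            # Hamming check-bit positions (1-indexed)
    DATA_POS = [p for p in range(1, 39) if p not in PARITY_POS]  # the 32 data-bit positions

    def encode(word):
        """Build a 39-bit SECDED codeword: bits[1..38] form the Hamming code,
        bits[0] is an overall parity bit; each bit would go to its own drive."""
        bits = [0] * 39
        for i, pos in enumerate(DATA_POS):
            bits[pos] = (word >> i) & 1
        for p in PARITY_POS:                        # check bit p covers positions whose index has bit p set
            bits[p] = 0
            for pos in range(1, 39):
                if pos != p and (pos & p):
                    bits[p] ^= bits[pos]
        bits[0] = 0
        for pos in range(1, 39):                    # overall parity enables double-error detection
            bits[0] ^= bits[pos]
        return bits

    def decode(bits):
        """Locate and fix a single erroneous bit, then reassemble the 32-bit word."""
        syndrome = 0
        for pos in range(1, 39):
            if bits[pos]:
                syndrome ^= pos                     # XOR of set positions points at the error
        if syndrome:
            bits[syndrome] ^= 1
        word = 0
        for i, pos in enumerate(DATA_POS):
            word |= bits[pos] << i
        return word

    codeword = encode(0xDEADBEEF)                   # 39 bits, one per drive
    codeword[21] ^= 1                               # one drive returns a bad bit
    assert decode(codeword) == 0xDEADBEEF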
An obvious drawback of such a system is the large number of disk drives required for a minimum system (since most large computers use a 32-bit word), and the relatively high ratio of drives required to store the EDC bits (7 drives out of 39). A further limitation of a RAID 2 disk drive memory system is that the individual disk actuators are operated in unison to write each data block, the bits of which are distributed over all of the disk drives. This arrangement has a high data transfer bandwidth, since each individual disk transfers part of a block of data, the net effect being that the entire block is available to the computer system much faster than if a single drive were accessing the block. This is advantageous for large data blocks. However, this arrangement also effectively provides only a single read/write head actuator for the entire storage unit. This adversely affects the random access performance of the drive array when data files are small, since only one data file at a time can be accessed by the “single” actuator. Thus, RAID 2 systems are generally not considered to be suitable for computer systems designed for On-Line Transaction Processing (OLTP), such as in banking, financial, and reservation systems, where a large number of random accesses to many small data files comprises the bulk of data storage and transfer operations.
A RAID
3
architecture is based on the concept that each disk drive storage unit has internal means for detecting a fault or data error. Therefore, it is not necessary to store extra information to detect the location of an error; a simpler form of parity-based error correction can thus be used. In this approach, the contents of all storage units subject to failure are “Exclusive ORed” (XOR'd) to generate parity information. The resulting parity information is stored in a single redundant storage unit. If a storage unit fails, the data on that unit can be reconstructed on to a replacement storage unit by XOR'ing the data from the remaining storage units with the parity information. Such an arrangement has the advantage over the mirrored disk RAID
1
architecture in that only one additional storage unit is required for “N” storage units. A further aspect of the RAID
3
architecture is that the disk drives are operated in a coupled manner, similar to a RAID
2
system, and a single disk drive is designated as the parity unit.
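A minimal Python sketch of the parity scheme just described, assuming equal-sized blocks and using illustrative function names:

    from functools import reduce

    def parity_block(blocks):
        """XOR the corresponding bytes of the given blocks to form a parity block."""
        return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

    def rebuild_lost_block(surviving_blocks, parity):
        """Reconstruct the block from a failed unit by XOR'ing the survivors with parity."""
        return parity_block(list(surviving_blocks) + [parity])

    # Three data units plus one redundant parity unit.
    data = [b"\x11\x22\x33\x44", b"\xa0\xb0\xc0\xd0", b"\x0f\x1e\x2d\x3c"]
    parity = parity_block(data)
    assert rebuild_lost_block([data[0], data[2]], parity) == data[1]   # unit 1 failed

Because XOR is its own inverse, the same routine that generates the parity block also regenerates any single lost data block.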
One implementation of a RAID 3 architecture is the Micropolis Corporation Parallel Drive Array, Model 1804 SCSI, that uses four parallel, synchronized disk drives and one redundant parity drive. The failure of one of the four data disk drives can be remedied by the use of the parity bits stored on the parity disk drive. Another example of a RAID 3 system is described in U.S. Pat. No. 4,092,732 to Ouchi.
A RAID 3 disk drive memory system has a much lower ratio of redundancy units to data units than a RAID 2 system. However, a RAID 3 system has the same performance limitation as a RAID 2 system, in that the individual disk actuators are coupled, operating in unison. This adversely affects the random access performance of the drive array when data files are small, since only one data file at a time can be accessed by the “single” actuator. Thus, RAID 3 systems are generally not considered to be suitable for computer systems designed for OLTP purposes.
A RAID 4 architecture uses the same parity error correction concept of the RAID 3 architecture, but improves on the performance of a RAID 3 system with respect to random reading of small files by “uncoupling” the operation of the individual disk drive actuators, and reading and writing a larger minimum amount of data (typically, a disk sector) to each disk (this is also known as block striping). A further aspect of the RAID 4 architecture is that a single storage unit is designated as the parity unit.
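One plausible way to express the block striping just described is the following Python sketch; the round-robin layout, block size, and function name are assumptions chosen for illustration, since the text does not mandate a particular mapping:

    def locate(logical_block, n_data_units, block_size=512):
        """Map a logical block to (data unit index, byte offset) in a RAID 4 layout where
        one fixed unit holds parity and data blocks rotate round-robin across the rest."""
        unit = logical_block % n_data_units             # independently addressable data unit
        offset = (logical_block // n_data_units) * block_size
        return unit, offset

    # Blocks 0..3 of a four-data-unit array land on units 0,1,2,3 at offset 0;
    # block 4 wraps to unit 0 at offset 512. The dedicated parity unit holds,
    # at each offset, the XOR of the four data blocks stored at that offset.
    assert locate(5, 4) == (1, 512)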
A limitation of a RAID 4 system is that writing a data block on any of the independently operating data storage units also requires writing a new parity block on the parity unit. The parity information stored on the parity unit must be read and XOR'd with the old data (to “remove” the information content of the old data), and the resulting sum must then be XOR'd with the new data (to provide new parity information). Both the data and the parity records then must be rewritten to the disk drives. This process is commonly referred to as a “Read-Modify-Write” sequence.
Thus, a Read and a Write on the single parity unit occur each time a record is changed on any of the data storage units it covers, so the parity unit becomes a bottleneck for write operations.
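The Read-Modify-Write sequence can be sketched in Python as follows; the in-memory Unit class and its method names are hypothetical stand-ins for real storage units, used only to show the two reads, two XORs, and two writes involved:

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    class Unit:
        """Toy in-memory stand-in for one storage unit."""
        def __init__(self, blocks):
            self.blocks = list(blocks)
        def read(self, i):
            return self.blocks[i]
        def write(self, i, data):
            self.blocks[i] = data

    def read_modify_write(data_unit, parity_unit, block_no, new_data):
        """Small-write update: two Reads, two XORs, two Writes."""
        old_data = data_unit.read(block_no)             # Read old data
        old_parity = parity_unit.read(block_no)         # Read old parity
        # Remove the old data's contribution, then add the new data's.
        new_parity = xor_bytes(xor_bytes(old_parity, old_data), new_data)
        data_unit.write(block_no, new_data)             # Write new data
        parity_unit.write(block_no, new_parity)         # Write new parity

    d0 = Unit([b"\x01\x02"]); d1 = Unit([b"\x30\x40"])
    p = Unit([xor_bytes(d0.read(0), d1.read(0))])
    read_modify_write(d0, p, 0, b"\xff\xff")
    assert p.read(0) == xor_bytes(d0.read(0), d1.read(0))   # parity still covers both units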
EMC Corporation
Howrey Simon Arnold & White , LLP