Disk-array controller

Error detection/correction and fault detection/recovery – Data processing system error or fault handling – Reliability and availability


Details

C711S114000, C710S021000

Reexamination Certificate

active

06185697

ABSTRACT:

BACKGROUND OF THE INVENTION
The present invention relates to a disk-array controller for mass magnetic disk storage, mass optical disk storage, or the like, and more particularly to a disk-array controller having an array input/output control unit for a plurality of disk units in a computer system.
Heretofore, a disk-array controller of this nature has been used to improve system performance and cost performance: a plurality of inexpensive small disk units are substituted for a single expensive large disk unit, and the set of small disk units appears to the host as a single large, high-speed disk unit.
Several methods for designing a disk array have been proposed; see, for example, David A. Patterson, Garth Gibson and Randy H. Katz, “A Case for Redundant Arrays of Inexpensive Disks (RAID)”, University of California, Berkeley, Report No. UCB/CSD 87/391 (December 1987).
In this article, methods of constructing disk arrays that include redundancy are classified into five groups, referred to as different levels of RAID (Redundant Arrays of Inexpensive Disks) systems. As an example of these levels, an array of five disk units is described in the above article.
A first-level RAID (RAID 1) system uses N disk units for storing data and N mirror disk units for mirroring that data. The RAID 1 system maintains duplicated copies of information by writing the information both to the data disk units and to the corresponding mirror disk units.
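Such mirroring can be summarized in a few lines. The sketch below is a hypothetical illustration only; write_unit is an assumed per-unit write primitive, not something disclosed in the patent:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical per-unit write primitive; assumed for illustration. */
    extern void write_unit(int unit, const uint8_t *buf, size_t len);

    /* RAID-1 write: every block is written both to a data unit and to
     * its mirror unit, so either copy can satisfy a later read. */
    void raid1_write(int data_unit, int mirror_unit,
                     const uint8_t *buf, size_t len)
    {
        write_unit(data_unit, buf, len);
        write_unit(mirror_unit, buf, len);
    }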
A second-level RAID (RAID 2) system provides a configuration of redundant disks using a Hamming code, which is one of the ECCs (error-correcting codes), for example including four data disk units and three ECC disk units. However, the RAID 2 system has rarely been implemented because of its high level of redundancy.
A third-level RAID (RAID 3) system comprises a group (rank) of N+1 disk units. In this case, each data block is divided into N chunks, and the divided chunks are distributed across the N different data disk units and stored therein. So that the data can be read without any loss when one of the disk units fails, parity information corresponding to each set of divided chunks is also stored in a dedicated parity disk unit.
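The parity here is simply the bitwise XOR of the N chunks of a stripe; if any one chunk is lost, it can be recovered by XOR-ing the parity with the surviving chunks. A minimal sketch of this parity generation, assuming a byte-buffer interface that is illustrative and not taken from the patent:

    #include <stddef.h>
    #include <stdint.h>

    /* RAID-3 style parity: the bitwise XOR of n data chunks of `len`
     * bytes each. If one chunk is later lost, XOR-ing the parity with
     * the n-1 surviving chunks reproduces the lost chunk, because
     * a ^ a == 0 and XOR is associative. */
    void compute_parity(const uint8_t *const chunks[], size_t n,
                        size_t len, uint8_t *parity)
    {
        for (size_t b = 0; b < len; b++) {
            uint8_t p = 0;
            for (size_t i = 0; i < n; i++)
                p ^= chunks[i][b];
            parity[b] = p;
        }
    }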
A fourth-level RAID (RAID 4) system also comprises a group of N+1 disk units, but it differs from the RAID 3 system in that the RAID 4 system divides the data into blocks and stores each block whole on a single data disk unit, without spreading it across several disks. Thus, only one disk unit is used when reading a block, so the total throughput can be increased by accessing the disk units independently when the data-transfer rate required by each request is low. The RAID 4 system further comprises a parity disk unit, as in the RAID 3 system. At the time of writing, however, four steps are required for updating the parity information: the old data and old parity are read, and the new data and new parity are written. Because the parity disk unit must be accessed whenever any of the data disk units is updated, it tends to become a bottleneck in writing. Accordingly, there have been few reported cases of the RAID 4 system being used.
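In those four steps the new parity can be derived from the target block alone, since new_parity = old_parity XOR old_data XOR new_data; the other N-1 data disk units need not be read. This is the same rule the controller of FIG. 20 applies in the write example described later. A sketch of the update rule, with an assumed buffer interface:

    #include <stddef.h>
    #include <stdint.h>

    /* RAID-4/5 small-write parity update:
     *   new_parity = old_parity ^ old_data ^ new_data
     * Only the target data disk and the parity disk are read and
     * written; the remaining N-1 data disk units are untouched. */
    void update_parity(const uint8_t *old_data, const uint8_t *new_data,
                       const uint8_t *old_parity, uint8_t *new_parity,
                       size_t len)
    {
        for (size_t b = 0; b < len; b++)
            new_parity[b] = old_parity[b] ^ old_data[b] ^ new_data[b];
    }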
A fifth-level RAID (RAID 5) system has almost the same configuration as the RAID 4 system, except for the handling of the parity information. In the RAID 5 system, the parity information is not concentrated on one disk unit but is distributed across all N+1 disk units to correspond with the distribution of the data. The above article discloses an example of the RAID 5 system, but the disadvantage remains that a parity update requires four steps. In that example, a write operation is first performed on a nonvolatile memory unit so that completion of the write can be reported to the host, and the parity is then actually updated later, during spare time.
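For illustration, one common way to distribute the parity is to rotate the parity block across the N+1 disk units by stripe number. The particular mapping below is an assumption made for this sketch; neither the article nor the patent fixes a specific placement:

    #include <stddef.h>

    /* One possible RAID-5 parity placement ("left-symmetric" style):
     * stripe s keeps its parity on disk (n_disks - 1 - s % n_disks),
     * so consecutive stripes put parity on different disk units and
     * no single unit becomes a write bottleneck. */
    size_t parity_disk_for_stripe(size_t stripe, size_t n_disks)
    {
        return n_disks - 1 - (stripe % n_disks);
    }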
Regarding the above RAID 1-5 systems, Japanese Patent Application Laid-Open No. 6-180623 (1994), for example, discloses a multiplexed data-bus architecture for a disk-array controller.
FIG. 20 is a block diagram that shows an example of the conventional disk-array controller. The disk-array controller comprises a data-bus architecture that can be configured to execute a data-transfer operation between a host device and a plurality of disk units. The disk units are arranged as a disk array with RAID 1, 3, 4, or 5.
By way of multiplexers 135-140, an exclusive-OR gate circuit 134 (hereinafter also referred to as an XOR circuit), which is provided as a circuit for generating parity, can receive: data from the host device through double registers 110-114 and a host SCSI adapter 143, which is provided as a DMA interface for communicating between a host-system data path 144 connected to the host device and SCSI data paths 141, 142; data from the disk units through SCSI bus interface chips 128-132 connected to their respective disk units; and data from a static RAM (SRAM) 133.
Furthermore, outputs from the XOR circuit 134 can be transferred to: the host device through the double registers 110-114 and the host SCSI adapter 143; the disk units through three-state buffers 115-119 and the SCSI bus interface chips 128-132; and the SRAM 133 through a three-state buffer 120. Therefore, a series of data paths from the host device to the disk units can be provided by setting independently whether each of those paths is to be used or not.
A writing operation with the RAID 5 system will be described as an example of using the disk-array controller shown in FIG. 20. A RAID 5 write involves both read and write procedures: the old data and old parity are read out, and the new data and new parity are written. The operation will be described on the assumption that the data is written to the disk unit of channel 2 while the parity information is updated on the disk unit of channel 1. Initially, information is read out from each of the disk units of channels 1 and 2 and then provided to the XOR circuit 134 (i.e., the parity-generating circuit) through the multiplexers 135, 136. An output from the XOR circuit 134 is stored in the external SRAM 133 through a bus 126, an available three-state buffer 120, and a bus 127. Subsequently, the double register 110 receives new data from the host device, and the new data is written to the disk unit of channel 2 through a bus 122 and the SCSI bus interface. The new data is also provided to the XOR circuit 134, as is the information written in the SRAM 133, through the multiplexer 140. The output of the XOR circuit 134 (i.e., the parity-generating circuit) is the new parity. The parity can be provided to the disk unit through the available three-state buffer 115, a bus 121, and the available SCSI bus interface chip 128.
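Expressed in software terms, the hardware sequence above corresponds to the following sketch. The read_block and write_block routines are hypothetical stand-ins for the per-channel SCSI transfers and are not part of the patent:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical channel I/O primitives standing in for the SCSI
     * bus interface chips 128-132; assumed for illustration only. */
    extern void read_block(int channel, uint8_t *buf, size_t len);
    extern void write_block(int channel, const uint8_t *buf, size_t len);

    /* RAID-5 small write as performed by the controller of FIG. 20:
     * the data block lives on channel 2, the parity block on channel 1.
     * Assumes len <= 512 for the stack buffers. */
    void raid5_write(const uint8_t *new_data, size_t len)
    {
        uint8_t old_data[512], old_parity[512], sram[512];

        read_block(2, old_data, len);              /* read old data      */
        read_block(1, old_parity, len);            /* read old parity    */
        for (size_t b = 0; b < len; b++)           /* XOR circuit 134    */
            sram[b] = old_data[b] ^ old_parity[b]; /* result to SRAM 133 */

        write_block(2, new_data, len);             /* write new data     */
        for (size_t b = 0; b < len; b++)           /* XOR new data with  */
            sram[b] ^= new_data[b];                /* SRAM: new parity   */
        write_block(1, sram, len);                 /* write new parity   */
    }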
However, the process described above has the following problems. The first problem is that multiple different transmissions cannot be carried out in parallel. A data transmission to the host device must be carried out without any break, so the settings of each of the double registers, the three-state buffers, the multiplexers, and so on must be held fixed during the data transmission. If any of the settings is changed in the course of the data transmission, the data does not flow correctly. Notably, only one XOR circuit (i.e., one parity-generating circuit) is provided in the system, so if several data transmissions that use the XOR circuit are started concurrently, they must be executed one after another.
In addition, the second problem is that an XOR calculation cannot be performed on the data in one operation if the number of data inputs exceeds the number of inputs of the XOR circuit (the parity-generating circuit). The XOR operation is a necessity for data restoration and the like when the group includes many disk units, so if the XOR calculations are divided and executed in sequence, it takes much time to complete the XOR operation due to the following facts:
the number of the
