Method and apparatus for scalable error correction code...

Electrical computers and digital processing systems: memory – Storage accessing and control – Specific memory composition

Reexamination Certificate


Details

Patent class: C714S006130
Type: Reexamination Certificate
Status: active
Patent number: 06513098

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates to memory controllers. In particular, the present invention relates to memory controllers for use in connection with the generation of error correction code having scalable performance.
BACKGROUND OF THE INVENTION
Computer systems require reliable storage for large amounts of data. Often, redundant arrays of independent (or inexpensive) disks (RAID) devices are used to provide such storage. In general, RAID devices involve storing data on a plurality of hard disk drives. The use of RAID techniques increases the reliability and/or speed of data storage and retrieval. There are various schemes, or RAID levels, according to which a number of hard disk drives or other storage devices may be used in connection with the storage of data. One such scheme is known as RAID level 5 (or RAID 5).
RAID 5 is also known as distributed data guarding. In a RAID 5 array, parity data is spread across all of the drives in the array. RAID 5 systems are tolerant of single drive failures and provide high read performance because read operations can be serviced by multiple drives simultaneously. A RAID 5 array requires at least three individual drives.
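The distributed parity on which RAID 5 relies is a byte-wise XOR across the data blocks of a stripe. The following minimal Python sketch is illustrative only and is not taken from the patent; the function name xor_blocks and the sample data are assumptions. It shows how the parity block is computed and how any single lost block can be rebuilt from the survivors.

# Minimal illustration of RAID 5 style XOR parity (not the patent's implementation).
# Parity is the byte-wise XOR of the data blocks in a stripe; any single missing
# block can be rebuilt by XOR-ing the parity with the surviving blocks.

def xor_blocks(blocks):
    """Byte-wise XOR of equally sized byte strings."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# A stripe spread over three data drives plus one parity drive.
d0 = b"\x10\x20\x30\x40"
d1 = b"\x01\x02\x03\x04"
d2 = b"\xaa\xbb\xcc\xdd"
parity = xor_blocks([d0, d1, d2])

# Simulate losing drive 1 and rebuilding its block from the survivors and the parity.
rebuilt_d1 = xor_blocks([d0, d2, parity])
assert rebuilt_d1 == d1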
In connection with RAID 5 arrays, write operations have typically had significantly lower performance than read operations. In particular, because data taken from the array during read operations does not need to be processed, it can be passed through the system controller quickly. However, write data must be processed to produce parity information before that data is stored in the array.
In a typical RAID 5 controller, a parity engine, such as a hardware XOR engine, is provided to calculate the required parity information. During a write operation, a block of data is stored in cache memory, and a parity syndrome value is calculated by the XOR engine. The newly calculated syndrome value is also stored in the cache memory. The data and associated syndrome value are then available for storage in the array. Assuming no overhead, a write operation in connection with a RAID 5 array requires a bandwidth of 3n+2(n/d), where n is the number of bytes in a block of data and d is the number of data drives in the array. For example, assuming four data disks, a memory bandwidth of 800 MB/s can support a maximum full stripe write bandwidth of approximately 228 MB/s. If a partial stripe is written, additional steps are required, as the new syndrome value must be calculated from a combination of new data, old data, and old parity data. In a worst case scenario, where the number of bytes to be written is smaller than the full stripe size divided by the number of data disks, the operation requires a bandwidth of 9n, assuming no overhead. For example, a memory bandwidth of 800 MB/s can support a maximum partial stripe write bandwidth of 87 MB/s.
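As a sanity check on the figures above, the short Python sketch below evaluates the two bandwidth expressions from the text with no overhead assumed; the variable names are illustrative, and the small difference from the cited 87 MB/s presumably reflects rounding or overhead not captured by the bare formula.

# Worked example of the bandwidth figures cited above (no overhead assumed).
memory_bw = 800.0   # controller cache memory bandwidth, MB/s
d = 4               # number of data drives in the array

# Full stripe write: memory traffic is 3n + 2(n/d) per n bytes of host data,
# so the sustainable write bandwidth is memory_bw / (3 + 2/d).
full_stripe_bw = memory_bw / (3 + 2 / d)
print(f"full stripe write:    {full_stripe_bw:.0f} MB/s")    # ~229 MB/s (text: ~228 MB/s)

# Worst-case partial stripe write: memory traffic is 9n per n bytes of host data.
partial_stripe_bw = memory_bw / 9
print(f"partial stripe write: {partial_stripe_bw:.0f} MB/s") # ~89 MB/s (text cites 87 MB/s)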
A RAID controller having a single parity engine and single associated cache memory must execute transactions in a sequential manner. That is, one write must be completed before another begins, increasing the latency associated with write operations. This increased latency in turn reduces the number of input/output operations that can be completed per second and limits the bandwidth available for RAID 5 write operations.
One conventional approach to improving the performance of a RAID 5 controller is to increase the bandwidth of the cache memory by increasing its frequency of operation. However, this approach has had limited success, because the bandwidth of the interfaces external to the controller has been increasing faster than the bandwidth of the cache memory. As a result, the cache memory has become an even larger impediment to RAID controller performance, even though its bandwidth continues to increase.
Another approach to increasing the performance of RAID controllers is to add a second parity engine and associated cache memory in parallel with the first parity engine and cache memory. However, the provision of a second cache memory interface greatly increases the number of pins required on a chip implementing the controller. Furthermore, because data cannot be passed to and from multiple parity engines over the data bus of a conventional RAID controller simultaneously, performance can decrease as a result of adding parallel parity engines and memories to otherwise conventional RAID controllers.
Still other conventional methods for increasing the performance of RAID controllers include performing parity calculations on the fly, broadcasting writes to memory, caching data internally, or applying other techniques that reduce the number of memory accesses required. However, such approaches promise, at best, minor improvements in the performance of a RAID 5 controller.
Therefore, it would be desirable to provide a method and an apparatus for use in connection with the generation of error correction code that reduces or removes the latency encountered during write operations. Furthermore, it would be advantageous for such a method and apparatus to be scalable to a desired level of performance, inexpensive to implement, and reliable in operation.
SUMMARY OF THE INVENTION
According to the present invention, a method and an apparatus for scalable error correction code generation performance are provided. The present invention generally allows a memory controller used in connection with error correction code, such as a RAID 5 controller, to be configured to provide a desired level of performance. Specifically, by providing multiple internal parity engines and associated cache memories that are each separately addressable, the latency normally encountered during write operations is reduced or removed. More specifically, the method and apparatus of the present invention allow parity calculations to be carried out in parallel. By increasing the number of parity engines and cache memories, the performance of the controller can be scaled as desired.
According to one embodiment of the present invention, a controller is provided with a plurality of parity engines, with each parity engine having an associated cache memory. The parity engines are each separately connected to a switch, which is in turn separately connected to a plurality of channels. A processor is provided for coordinating the operations of the various controller components. The switch allows any of the channels to be interconnected to and thereby communicate with any of the parity engines at the same time that any of the other channels are interconnected to and in communication with any of the other parity engines. Accordingly, parity calculations and other operations involving accesses to the cache memories can be conducted in parallel.
According to another embodiment of the present invention, a write request is received from a host system at a channel of a controller. A processor analyzes the write request and assigns the write operation to one of a plurality of parity engines and associated cache memories. After the data associated with the write operation has been received by the channel, the channel addresses that data to the assigned parity engine. A switch provided in the controller, in response to the address information received with the data, routes the data to the assigned parity engine. Accordingly, a switched, circuit-type connection is established between the channel and the parity engine. Furthermore, according to this embodiment of the present invention, additional data received at the first channel, or at a second channel, can be directed to a second parity engine, again assigned by the processor, while the first parity engine and its associated cache memory are processing the first write request. Therefore, the period of latency normally encountered while parity information is calculated for the first block of data is avoided.
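The flow described above can be sketched in software as follows. This is a minimal illustration only, not the patent's hardware: Python threads stand in for independent parity engines and their cache memories, a round-robin counter stands in for the processor's assignment decision, and a direct method call stands in for the switch. All class and function names are assumptions.

# Minimal software sketch of the parallel write path described above (illustrative only).
import itertools
import threading
from functools import reduce

class ParityEngine:
    """Stands in for a hardware XOR engine with its own cache memory."""
    def __init__(self, engine_id):
        self.engine_id = engine_id
        self.cache = {}                  # stand-in for the engine's cache memory
        self.lock = threading.Lock()     # one write at a time per engine

    def handle_write(self, stripe_id, blocks):
        with self.lock:
            # Byte-wise XOR of the data blocks produces the parity syndrome.
            parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)
            self.cache[stripe_id] = (blocks, parity)   # data + syndrome held for destaging

class Controller:
    """The 'processor' assigns each write to an engine; the 'switch' is the direct call."""
    def __init__(self, num_engines):
        self.engines = [ParityEngine(i) for i in range(num_engines)]
        self._next = itertools.cycle(range(num_engines))   # simple round-robin assignment

    def write(self, stripe_id, blocks):
        engine = self.engines[next(self._next)]
        # Launch the parity calculation; other writes can proceed on other engines.
        t = threading.Thread(target=engine.handle_write, args=(stripe_id, blocks))
        t.start()
        return t

controller = Controller(num_engines=2)
threads = [controller.write(s, [bytes([s] * 8), bytes([s + 1] * 8)]) for s in range(4)]
for t in threads:
    t.join()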
Additional advantages of the present invention will become readily apparent from the following discussion, particularly when taken together with the accompanying drawings.
