Electrical computers and digital processing systems: memory – Storage accessing and control – Specific memory composition
Reexamination Certificate
1998-06-08
2003-08-26
Verbrugge, Kevin (Department: 2185)
Electrical computers and digital processing systems: memory
Storage accessing and control
Specific memory composition
C709S214000, C709S213000
Reexamination Certificate
active
06611895
ABSTRACT:
TECHNICAL FIELD
The present invention relates to protocols and architectures in cache systems.
BACKGROUND ART
Cache systems provide access to high-speed memory from computer elements such as processors and disk arrays. One use for a cache system is in a DASD controller. A direct access storage device (DASD) is an on-line digital storage device, such as a magnetic disk drive, that allows rapid read and write operations. Often, DASD systems include more than one disk for increased reliability and crash recovery. Such a system can be a redundant array of inexpensive disks (RAID) unit.
In order to meet greater performance demands, cache systems must be capable of handling data at increasing rates. Designing multiple very high data rate channels within a cache system is limited by current parallel bus structures. Such a parallel bus system is shown in FIG. 1.
One possible solution for increasing the data rate is to make the parallel bus wider by increasing the number of data wires. This results in several difficulties such as a greater number of traces on a printed circuit board (PCB) requiring valuable board real estate, additional driver/receiver pairs, additional connector pins to provide circuit card-to-circuit card interconnection, and increased associated electrical power.
Another possible solution for increasing the data rate is to send parallel bus control signals on dedicated wires. These separate signals, called sideband signals, may signal the start of transmission, provide timing, specify intended receivers, request attention, or indicate success or failure. Using sideband signals increases the number of connecting wires and, hence, suffers from the same drawbacks as increasing the number of data wires.
Still another possible solution for increasing the data rate is to increase the clock rate used on an existing parallel bus. However, decreasing the time between clock edges is limited by the physics of parallel connecting devices. In particular, each device has an associated capacitance. The total capacitance is the sum of the individual capacitances and the distributed capacitance of the interconnecting trace. The velocity of propagation of a signal down the bus is inversely proportional to this total capacitance and, therefore, the clock switching speed is directly limited by the total capacitance.
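The capacitive limit described above can be illustrated with a simple RC settling-time estimate. The following sketch is purely illustrative; the component values, the driver resistance, and the settling factor are assumptions chosen to show the scaling, not figures from the patent.

```python
# Illustrative sketch: how lumped device capacitance limits parallel-bus clock rate.
# All component values here are hypothetical.

def max_clock_hz(num_devices, c_device_f, c_trace_f, r_drive_ohm, settle_factor=5):
    """Estimate a safe clock rate from the RC settling time of the bus.

    Total capacitance = sum of the per-device capacitances plus the
    distributed capacitance of the interconnecting trace; each half-period
    must exceed the time for the signal to settle (settle_factor * R * C).
    """
    c_total = num_devices * c_device_f + c_trace_f
    t_settle = settle_factor * r_drive_ohm * c_total
    return 1.0 / (2.0 * t_settle)

# Doubling the number of attached devices roughly halves the attainable rate.
f_8_devices = max_clock_hz(8, 5e-12, 20e-12, 50)
f_16_devices = max_clock_hz(16, 5e-12, 20e-12, 50)
assert f_16_devices < f_8_devices
```

The point of the sketch is only the trend: every device added to a shared parallel bus increases the total capacitance and therefore lowers the maximum usable clock rate.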
A further possible solution for increasing the data rate is to use a currently available serial protocol for bussing data within the cache system. Such protocols include SONET (Synchronous Optical NETwork), Fibre Channel, and USB (Universal Serial Bus). However, these protocols were designed primarily for connection between devices and not as intradevice busses; and primarily for use with particular interconnection media such as fiber optic cable, coaxial cable, or twisted pairs. Therefore, use in PCB busses results in data transfer rates no greater than 200 megabytes per second, below the capabilities achievable using interconnection media for which the existing protocols were designed.
In addition to simply increasing the data rate in a cache system, data must be written to two different disks in a RAID system. One solution with current parallel buses is to send the data twice, effectively halving the data transfer rate. Another solution is to provide multiple parallel paths, requiring twice the hardware. Still another solution is to construct a special protocol enabling two recipients to receive the same data, requiring more complex logic in the protocol engine and potential performance degradation.
What is needed is a cache system that can achieve increased data rates without incurring the problems associated with increasing the number of parallel connections, using sideband signals, increasing the clock rate, or using current serial bus protocols. The ability to support RAID should also be provided.
SUMMARY OF THE INVENTION
It is a primary object of the present invention to increase the data transfer rate over existing parallel bus systems.
Another object of the present invention is to require less PCB real estate, fewer driver/receiver pairs, and fewer interconnections than existing parallel bus systems.
Still another object of the present invention is to develop a cache system with lower cost than existing parallel bus systems.
A further object of the present invention is to support RAID in a DASD control system.
A still further object is to reduce the complexity of arbiters required to implement a serial cache system.
In carrying out the above objects and other objects and features of the present invention, a cache system is provided. The system includes a plurality of adapters, each adapter connected to at least one of the computer elements, a cache, and a set of bidirectional multichannel serial links, each link connecting one of the plurality of adapters with the cache.
In one embodiment, the cache includes a plurality of memory cards, each memory card connected to each adapter through at least one of the plurality of bidirectional multichannel serial links. In a refinement, each memory card includes at least one memory bank and at least one hub in communication with each memory bank, each hub operable to transmit and receive data over at least one of the plurality of bidirectional multichannel serial links. Each hub may be a simplex hub, permitting either memory read or memory write during a memory access period, or may be a duplex hub, permitting simultaneous memory read and memory write during a memory access period.
In another embodiment, each direction of the bidirectional multichannel serial link includes a plurality of serial data drivers, a serial data receiver corresponding to each of the plurality of serial data drivers, the serial data receiver in communication with the corresponding serial data driver, a serial clock driver, and a serial clock receiver in communication with the serial clock driver. In a refinement, the serial data drivers and the serial clock driver can be implemented using a flat panel display driver, and the serial data receivers and the serial clock receiver may be implemented using a flat panel display receiver.
In still another embodiment, each adapter has a control logic including a control task operative to receive a master order, to decompose the master order into read orders and write orders, and to receive status information; a read queue for holding read orders; at least one read task operative to input at least one cache read order, decompose the read order into a sequence of cache reads, control the sequence of cache reads, and transmit status information to the control task; a write queue for holding write orders; and at least one write task operative to input at least one cache write order, decompose the write order into a sequence of cache writes, control the sequence of cache writes, and transmit status information to the control task. In a refinement, the write task is further operative to send the same sequence of cache writes to a plurality of memory banks thereby implementing data mirroring.
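The adapter control flow described above can be sketched as follows. This is a hypothetical illustration of the decomposition and mirroring behavior only; the function names, the dictionary-based order format, and the block size are assumptions, not details taken from the patent.

```python
# Hypothetical sketch: a control task decomposes a master order into read and
# write orders, queues them, and a write task breaks each write order into a
# sequence of cache writes, mirroring the same sequence to multiple banks.
from collections import deque

def decompose_master_order(master_order):
    """Split a master order into read orders and write orders."""
    reads = [op for op in master_order if op["kind"] == "read"]
    writes = [op for op in master_order if op["kind"] == "write"]
    return reads, writes

def write_task(order, banks, block_size=4):
    """Decompose one write order into a sequence of cache writes and send
    the same sequence to every target bank (implementing data mirroring)."""
    data = order["data"]
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    for bank in banks:  # identical sequence to each bank -> mirrored copies
        for offset, block in enumerate(blocks):
            bank[order["addr"] + offset * block_size] = block
    return "done"       # status reported back to the control task

bank_a, bank_b = {}, {}
reads, writes = decompose_master_order(
    [{"kind": "write", "addr": 0, "data": b"ABCDEFGH"}])
write_queue = deque(writes)
status = write_task(write_queue.popleft(), [bank_a, bank_b])
assert bank_a == bank_b  # both banks hold identical data after mirroring
```

Because the write task replays one sequence of cache writes against each bank, mirroring requires no second transmission of the master order and no extra protocol logic at the adapter.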
A cache system is also described in which each memory card includes at least one addressable memory bank and at least one hub, each hub having an arbiter. Each hub is in communication with each memory bank. The arbiter is in communication with each adapter and can select at least one adapter for accessing a memory bank. Each adapter is connected to each hub in each memory card by one of the bidirectional multichannel serial data links.
In one embodiment, particularly suited for use with a single simplex hub in each memory card, the cache system further comprises a request line from each adapter to each arbiter and a grant line from each arbiter to each adapter. Each adapter can assert the request line when access to the memory card containing the corresponding arbiter is requested. Each arbiter can then determine a selected adapter to which access will be granted and assert the grant line to the selected adapter.
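The request/grant handshake described above can be modeled in a few lines. The round-robin selection policy and the class structure below are assumptions chosen for illustration; the patent specifies only that the arbiter selects one adapter and asserts its grant line.

```python
# Minimal sketch (assumed details) of request/grant arbitration for a single
# simplex hub: adapters raise request lines, the arbiter grants one at a time.

class SimplexArbiter:
    """Round-robin arbiter: one grant per memory access period."""

    def __init__(self, num_adapters):
        self.requests = [False] * num_adapters
        self.last_granted = -1

    def request(self, adapter_id):
        self.requests[adapter_id] = True   # adapter asserts its request line

    def grant(self):
        """Select the next requesting adapter, scanning round-robin for fairness."""
        n = len(self.requests)
        for i in range(1, n + 1):
            candidate = (self.last_granted + i) % n
            if self.requests[candidate]:
                self.requests[candidate] = False  # request consumed
                self.last_granted = candidate     # assert grant line
                return candidate
        return None                               # no adapter is requesting

arb = SimplexArbiter(4)
arb.request(2)
arb.request(0)
first = arb.grant()    # adapter 0 is granted first (round-robin from start)
second = arb.grant()   # then adapter 2
```

A round-robin policy is one simple way to keep the arbiter logic small, consistent with the stated object of reducing arbiter complexity in a serial cache system.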
In another embodiment, particularly suited for use with multiple duplex hubs in each memory card, the cache system furt
Burns William A.
Krull Nicholas J.
Selkirk Stephen S.
McLean Kimberly