Method and apparatus to reduce memory read latency
Patent Number: 6,321,315
Type: Reexamination Certificate
Filed: 1999-09-30
Issued: 2001-11-20
Examiner: Gossage, Glenn (Department: 2186)
Classification: Electrical computers and digital processing systems: memory – Storage accessing and control – Access timing
US Class: C710S120000
Status: active
ABSTRACT:
BACKGROUND
The invention relates generally to computer system memory read operations and more particularly, but not by way of limitation, to a method and apparatus for reducing memory read latency.
If memory read latency is defined as the time lag between when a device issues a request to memory for data and the moment the data begins to be received by that device, then latency is a key indicator of a computer system's performance: the lower the latency, the better the performance generally is. A naïve approach to minimizing memory read latency would be to transfer each quantum of data to the requesting device immediately as it is provided by the memory. One reason this approach is not taken in modern computer systems is that memory access operations are mediated by system controllers whose internal data transfer paths are wider than those of either the memory or the requesting device. To obtain high data transfer rates, these controllers aggregate data received from, for example, system memory before forwarding it to a requesting device. The act of aggregation creates a latency that otherwise would not be present. This latency is particularly problematic for processor-initiated memory read operations because the processor is a computer system's primary computational engine, and no useful work is performed while it waits for data.
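As a rough illustration of this trade-off, the following C sketch compares the clock at which the first data reaches the requester when each bus-width quantum is forwarded immediately versus when the controller first aggregates a pair of quanta into a wider internal unit. The cycle counts (MEM_ACCESS, QUANTUM_CLOCKS) are assumed values chosen for illustration only, not figures from the patent.

    /* Minimal latency sketch (hypothetical cycle counts, not from the patent).
     * Latency here = clocks from the read request until the FIRST data reaches
     * the requester. Aggregating two bus-width quanta inside the controller
     * before forwarding adds one quantum's worth of wait to that first arrival. */
    #include <stdio.h>

    #define QUANTUM_CLOCKS 1   /* assumed: one bus-width quantum arrives per clock */
    #define MEM_ACCESS     5   /* assumed: clocks before the first quantum appears */

    int main(void)
    {
        /* Forward each quantum as it arrives: first data right after the access time. */
        int immediate_latency = MEM_ACCESS + QUANTUM_CLOCKS;

        /* Aggregate two quanta into one wider internal unit before forwarding:
         * the first data cannot leave the controller until the second quantum
         * has also been received. */
        int aggregated_latency = MEM_ACCESS + 2 * QUANTUM_CLOCKS;

        printf("immediate forward : %d clocks to first data\n", immediate_latency);
        printf("aggregate by pairs: %d clocks to first data\n", aggregated_latency);
        return 0;
    }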
Referring to FIG. 1, prior art computer system 100 includes processor 102, system memory 104, system controller 106 (incorporating processor interface 108, memory interface 110 and primary bus interface 112), processor bus 114 coupling processor 102 to processor interface 108, memory bus 116 coupling system memory 104 to memory interface 110, and primary bus 118 coupling other system devices 120 (e.g., network interface adapters and/or a secondary bus bridge circuit and components coupled thereto) to processor 102 and system memory 104 via primary bus interface 112.
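The topology just described can be summarized in a small C sketch. The structure and field names below are illustrative only, and the 64-bit/128-bit widths anticipate the example values given in the next paragraph.

    /* Rough structural sketch of the FIG. 1 topology (names are illustrative,
     * not taken from the patent's claims). */
    #include <stdio.h>

    struct controller_interface {
        const char *name;            /* e.g., "processor interface 108" */
        unsigned    bus_width_bits;  /* width of the external bus it couples to */
    };

    struct system_controller {       /* system controller 106 */
        struct controller_interface processor_if;  /* couples processor bus 114 */
        struct controller_interface memory_if;     /* couples memory bus 116    */
        struct controller_interface primary_if;    /* couples primary bus 118   */
        unsigned internal_width_bits;               /* 128-bit (or wider) internal paths */
    };

    int main(void)
    {
        struct system_controller ctrl = {
            .processor_if = { "processor interface 108",   64 },
            .memory_if    = { "memory interface 110",      64 },
            .primary_if   = { "primary bus interface 112", 64 },
            .internal_width_bits = 128,
        };

        printf("%s: external %u bits, internal %u bits\n",
               ctrl.memory_if.name, ctrl.memory_if.bus_width_bits,
               ctrl.internal_width_bits);
        return 0;
    }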
In many current systems such as computer system 100, processor bus 114, memory bus 116 and primary bus 118 are 64-bit structures (another common primary bus width is 32 bits). At the same time, system controller 106 may utilize 128-bit (or greater) internal data transfer paths. Because of this, data received from system memory 104 during a memory read operation is aggregated by memory interface 110 before being forwarded to a destination interface and, ultimately, to the requesting device. For example, if processor 102 initiates a memory read request for a 32-byte block of data (a common size for a cache line), after memory interface 110 receives the first 8 bytes (64 bits) from system memory 104, it waits until it receives the second 8 bytes before sending the entire 16-byte unit to processor interface 108 and, ultimately, processor 102.
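The following C sketch models that prior-art behavior for one 32-byte cache-line read: four 8-byte quanta arrive from the 64-bit memory bus, and nothing is forwarded until a full 16-byte internal unit has been assembled. The clock-per-beat timing and the print statements are assumptions made for illustration; they are not the patent's apparatus.

    /* Sketch of the prior-art aggregation described above. A 32-byte cache line
     * arrives from the 64-bit memory bus as four 8-byte quanta; the memory
     * interface holds each odd quantum until its partner arrives, then forwards
     * a full 16-byte internal unit toward the processor interface. */
    #include <stdio.h>

    #define LINE_BYTES     32
    #define QUANTUM_BYTES   8   /* 64-bit memory bus */
    #define INTERNAL_BYTES 16   /* 128-bit internal path */

    int main(void)
    {
        int quanta   = LINE_BYTES / QUANTUM_BYTES;      /* 4 beats from memory */
        int per_unit = INTERNAL_BYTES / QUANTUM_BYTES;  /* 2 beats per internal unit */

        for (int beat = 1; beat <= quanta; beat++) {
            printf("clock %d: memory interface receives 8-byte quantum %d\n",
                   beat, beat);
            if (beat % per_unit == 0) {
                /* Only now does any of the held data move toward the processor. */
                printf("clock %d: forward 16-byte unit (quanta %d-%d) "
                       "to processor interface\n", beat, beat - 1, beat);
            }
        }
        return 0;
    }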
The delay, or latency, caused by aggregating successive data units received from system memory can result in processor stalls, thereby reducing the operational/computational efficiency of the computer system. Thus, it would be beneficial to provide techniques (methods and apparatus) to reduce memory read latency in a computer system.
SUMMARY
In one embodiment, the invention provides a memory controller method to reduce the latency associated with multi-quanta system memory read operations. The method includes receiving a first quantum of data having a width from a memory device, transferring the first quantum of data to a device on a first clock, receiving second and third quanta of data from the memory device (each having the same width), and transferring the second and third quanta of data to the device on a successive clock. Because intra-memory-controller data pathways are wider than external data transfer pathways (e.g., the memory bus and processor bus data paths), an indication of which portion or portions of the transferred data are valid is forwarded with the data. In another embodiment, the invention provides a memory controller apparatus that reduces the latency associated with multi-quanta system memory read operations.
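A hedged sketch of that transfer schedule is shown below for a four-quantum (32-byte) read: the first quantum is forwarded by itself on the first clock, later quanta move in pairs, and a per-half valid mask tells the destination which portion of the wide internal word carries data. The mask encoding, names, and clock numbering are assumptions made for illustration, not the claimed apparatus.

    /* Illustrative schedule: forward quantum 1 immediately, then pairs of
     * quanta on successive clocks, with a valid mask for the 128-bit word. */
    #include <stdio.h>

    #define LINE_QUANTA 4   /* 32-byte line as four 8-byte quanta */
    #define VALID_LO  0x1   /* low 64 bits of the internal word valid  */
    #define VALID_HI  0x2   /* high 64 bits of the internal word valid */

    int main(void)
    {
        int clock = 1, next = 1;

        /* Clock 1: forward quantum 1 at once; only half the word is valid. */
        printf("clock %d: forward quantum %d, valid mask 0x%x\n",
               clock, next, VALID_LO);
        next++;

        /* Later clocks: forward quanta two at a time when a pair is available. */
        while (next <= LINE_QUANTA) {
            clock++;
            if (next + 1 <= LINE_QUANTA) {
                printf("clock %d: forward quanta %d and %d, valid mask 0x%x\n",
                       clock, next, next + 1, VALID_LO | VALID_HI);
                next += 2;
            } else {
                /* Final odd quantum: only one half valid (low half shown here). */
                printf("clock %d: forward quantum %d, valid mask 0x%x\n",
                       clock, next, VALID_LO);
                next += 1;
            }
        }
        return 0;
    }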
REFERENCES:
patent: 5003475 (1991-03-01), Kerber et al.
patent: 5019965 (1991-05-01), Webb, Jr. et al.
patent: 5469544 (1995-11-01), Aatresh et al.
patent: 5664122 (1997-09-01), Rabe et al.
patent: 5734849 (1998-03-01), Butcher
patent: 5862358 (1999-01-01), Ervin et al.
patent: 5905766 (1999-05-01), Nguyen
patent: 5909563 (1999-06-01), Jacobs
patent: 6029253 (2000-02-01), Houg
patent: 6065070 (2000-05-01), Johnson
“Dynamic Scatter Gather Table,” IBM Technical Disclosure Bulletin, vol. 33, pp. 309-311, Aug. 1990.
Elmore Stephen
Gossage Glenn
Micron Technology, Inc.
Trop Pruner & Hu P.C.