Memory request reordering in a data processing system

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


Details

C711S133000, C711S151000, C711S158000, C711S159000

Reexamination Certificate

active

06272600

ABSTRACT:

SOURCE CODE APPENDIX
A microfiche appendix of source code for the address reordering portion of a preferred embodiment is filed herewith. A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
BACKGROUND OF THE INVENTION
The present invention relates to data processing systems with memory subsystems. More particularly, the present invention relates to controlling requests to memory subsystems so as to maximize bandwidth and concurrency, thereby increasing overall memory subsystem and data processing system speed.
In modern data processing systems, the speed of memory subsystems can be a major limiting factor on overall system speed. The memory bottleneck exists because a memory access is typically much slower than the speed at which computer processors and data buses can generate and convey memory access requests. The slow speed of memory access is particularly felt when there is a read request, as opposed to a write request, because a read request indicates that a requesting processor may be waiting for data.
The bottleneck caused by low memory speed becomes even more severe as the speed of computer processors increases at a faster rate than the speed of common memory components. The memory bottleneck is also exacerbated as computer system and network architectures are introduced that contain multiple processors which share a memory subsystem.
One conventional approach to alleviate the memory bottleneck is to use data caching, perhaps at various levels within the data processing system. For example, portions of data in a slow, cheap disk memory subsystem may be copied, or “cached,” into a faster system RAM (random access memory) subsystem. Portions of data in system RAM may in turn be cached into a “second-level” cache RAM subsystem containing a small amount of expensive, even faster RAM. Portions of data may also be cached into yet faster “first-level” cache memory which may reside on the same chip as a processor. Data caching is a powerful technique to minimize accesses to slower memory. However, at some point, the various levels of memory still need to be accessed. Therefore, whether or not caching is employed, techniques to speed up memory access are still needed.
Attempts to speed up memory access have included the organizing of memory into multiple banks. Under this memory architecture, as a first bank of memory is busy servicing a request to access a memory location in the first bank, a second, available bank can begin servicing the next memory access request if the next request targets a memory location in the second bank. Memory locations may be interleaved among the banks, so that contiguous memory addresses, which are likely to be accessed sequentially, are in different banks.
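As an illustration only (the patent does not specify a particular mapping), the following C sketch shows one common low-order interleaving scheme in which consecutive memory lines map to consecutive banks; the constants LINE_SIZE and NUM_BANKS and the function name bank_of are hypothetical.

    #include <stdint.h>

    /* Hypothetical interleaving parameters; not specified by the patent. */
    #define LINE_SIZE 64u    /* bytes per memory line       */
    #define NUM_BANKS  4u    /* number of interleaved banks */

    /* Map a physical address to a bank using the low-order line index,
     * so that consecutive lines fall into consecutive banks. */
    static unsigned bank_of(uint64_t addr)
    {
        return (unsigned)((addr / LINE_SIZE) % NUM_BANKS);
    }

Under this hypothetical mapping, addresses 0, 64, 128, and 192 fall into banks 0, 1, 2, and 3, respectively, so a sequential access stream is spread across all four banks.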
A problem with the conventional use of memory banks is that successive access requests will still sometimes target addresses within a common bank, even if addresses are interleaved among the banks. In this situation, a conventional memory subsystem must still wait for the common bank to become available before the memory subsystem can begin servicing the second and any subsequent requests. Such a forced wait is wasteful if a subsequent third access request could otherwise have begun to be serviced because the third request targets a different, available memory bank. Furthermore, merely organizing memory into interleaved banks does not address the extra urgency that read requests have versus write requests, as discussed above.
What is needed in the art is a way to control access to memory subsystems so as to maximize bandwidth and concurrency by minimizing the amount of time that memory requests must wait to be serviced. In particular, a way is needed to allow a memory subsystem to begin servicing a request to access an available memory location even if a preceding request cannot yet be serviced because the preceding request targets an unavailable memory location. Furthermore, a way is needed to give extra priority to read requests, which are more important than write requests, especially in “posted-write” systems in which processors need not wait for a memory write to fully complete before proceeding to the next task.
SUMMARY OF THE INVENTION
The present invention provides a method and apparatus for increasing the speed of memory subsystems by controlling the order in which memory access requests are scheduled for service.
According to one embodiment of the invention, a method is provided for reordering a plurality of memory access requests, the method including steps of accepting the plurality of requests; selecting a request to access an available memory location, from the plurality of requests; and scheduling the selected request.
According to another embodiment of the invention, the step of selecting a request to access memory includes steps of determining whether a read request to access an available memory location exists, among the plurality of requests, and if so, selecting a read request to access an available memory location; and if not, selecting a non-read request to access an available memory location.
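The following C sketch illustrates one possible reading of these two embodiments: pending requests are accepted into a queue, and the selection step prefers a read request whose target bank is available, falling back to the earliest non-read request whose target bank is available. All names, the bank-availability check, and the interleaving constants are hypothetical and are not taken from the patent or its source code appendix.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical request and bank-state types; names are illustrative. */
    struct mem_request {
        uint64_t addr;      /* target physical address        */
        bool     is_read;   /* true for read, false for write */
    };

    #define NUM_BANKS 4u
    #define LINE_SIZE 64u

    static bool bank_busy[NUM_BANKS];   /* set while a bank services a request */

    static unsigned bank_of(uint64_t addr)
    {
        return (unsigned)((addr / LINE_SIZE) % NUM_BANKS);
    }

    /* Select the next request to schedule from the pending queue:
     * prefer a read whose target bank is available; otherwise take the
     * earliest other request whose target bank is available; return -1
     * if no pending request can start yet. */
    static int select_request(const struct mem_request *queue, size_t count)
    {
        int fallback = -1;
        for (size_t i = 0; i < count; i++) {
            if (bank_busy[bank_of(queue[i].addr)])
                continue;                  /* target bank unavailable  */
            if (queue[i].is_read)
                return (int)i;             /* reads get first priority */
            if (fallback < 0)
                fallback = (int)i;         /* remember earliest write  */
        }
        return fallback;
    }

    int main(void)
    {
        struct mem_request queue[] = {
            { 0x0000, false },   /* write targeting bank 0 */
            { 0x0040, false },   /* write targeting bank 1 */
            { 0x0080, true  },   /* read  targeting bank 2 */
        };
        bank_busy[0] = true;     /* bank 0 still servicing a prior request */

        int next = select_request(queue, 3);
        /* next == 2: the read to available bank 2 is scheduled first,
         * rather than stalling behind the write aimed at busy bank 0. */
        return next == 2 ? 0 : 1;
    }

In this example, bank 0 is busy, so the write aimed at it cannot start; of the remaining requests, the read to available bank 2 is selected ahead of the earlier write to available bank 1, reflecting the read-priority selection described above.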
A further understanding of the nature and advantages of the present invention may be realized by reference to the remaining portions of the specification and the drawings.


REFERENCES:
patent: 4755938 (1988-07-01), Takahashi et al.
patent: 5060145 (1991-10-01), Scheuneman et al.
patent: 5339442 (1994-08-01), Lippincott
patent: 5379379 (1995-01-01), Becker et al.
patent: 5416739 (1995-05-01), Wong
patent: 5440713 (1995-08-01), Lin et al.
patent: 5499356 (1996-03-01), Eckert et al.
patent: 5517660 (1996-05-01), Rosich
patent: 5638534 (1997-06-01), Mote, Jr.
patent: 5642494 (1997-06-01), Wang et al.
patent: 5687183 (1997-11-01), Chesley
patent: 5740402 (1998-04-01), Bratt et al.
patent: 5745913 (1998-04-01), Pattin et al.
McKee, S. A., et al., “A Memory Controller for Improved Performance of Streamed Computations on Symmetric Multiprocessors,” Proc. 10th International Parallel Processing Symposium (IPPS'96), Honolulu, HI, Apr. 1996, IEEE Press, pp. 7.
McKee, S. A., “Maximizing Memory Bandwidth for Streamed Computations,” Ph.D. thesis, School of Engineering and Applied Science, University of Virginia, May 1995.
