Bandwidth optimization cache
Reexamination Certificate
Classification: Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
U.S. Class: C711S213000
Filed: 1997-12-22
Issued: 2001-05-15
Examiner: Ellis, Kevin L. (Department: 2185)
Status: active
Patent number: 06233656
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to bus bandwidth utilization. More particularly, the present invention relates to optimizing bus bandwidth utilization in a high-latency, bursty bus environment by using a prefetch cache to hold data for multiple small-sized data requests.
2. Background
In a high bus transaction environment where read accesses range from single-word to multiple-word burst accesses, it becomes important to optimize memory bus bandwidth utilization. Each read access requires some form of handshaking between a memory bus interface and the device attached to the memory bus that is being accessed, such as a memory controller servicing a memory store. Each read access therefore incurs a bandwidth overhead cost, because handshaking consumes bus bandwidth that would otherwise be available for transmitting access requests from other clients or additional data. Thus, single-word accesses are less efficient than multiple-word burst accesses because less data is transferred for the bandwidth consumed.
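To make the overhead concrete: if each request-grant handshake consumes H bus cycles and each word transfers in one cycle, a W-word access delivers data on only W of its W + H cycles. The short C sketch below works through this arithmetic; the four-cycle handshake cost is an illustrative assumption, not a figure from the patent.

    #include <stdio.h>

    /* Illustrative model of bus efficiency: assumes each request-grant
     * handshake costs HANDSHAKE_CYCLES bus cycles and each data word
     * transfers in one cycle. These numbers are assumptions made for
     * this example, not values taken from the patent. */
    #define HANDSHAKE_CYCLES 4

    static double bus_efficiency(int words_per_access)
    {
        return (double)words_per_access /
               (double)(words_per_access + HANDSHAKE_CYCLES);
    }

    int main(void)
    {
        /* A single-word access spends most of its cycles on handshaking,
         * while an 8-word burst amortizes the same overhead over 8 words. */
        printf("1-word access: %.0f%% of cycles carry data\n",
               100.0 * bus_efficiency(1));   /* 20%% with H = 4  */
        printf("8-word burst:  %.0f%% of cycles carry data\n",
               100.0 * bus_efficiency(8));   /* ~67%% with H = 4 */
        return 0;
    }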
Inefficient single-word accesses also take bus time away from clients that could use the bus more efficiently, such as clients that transfer a large amount of data per request-grant handshake, further decreasing the memory bus interface's ability to make optimum use of the available memory bus bandwidth.
Moreover, in an environment where a memory controller servicing data accesses through a memory bus must also service high-latency devices, such as hard drives, so that data may be transferred to another location over a high-speed serial bus, such as a Fibre Channel bus, it becomes even more important that data accesses through the memory bus be performed efficiently.
Accordingly, it would be desirable to handle memory bus access requests of various sizes, without restricting clients to performing only large data fetches, while maintaining optimum utilization of the memory bus bandwidth.
SUMMARY OF THE INVENTION
The present invention optimizes bus bandwidth utilization in an environment where bus accesses range in size from single-word to multi-word burst accesses by prefetching and caching additional words when responding to a single-word access request: if the requested word is not found in the cache, the single-word request is converted into a multi-word fetch request. The additional words in the fetch request are taken from the same line in memory in which the requested word resides and are stored in the cache, increasing the chance that the client's next single-word access results in a cache hit. This optimizes utilization of the bus bandwidth because prefetching reduces the number of fetches through the memory bus, and when a fetch does occur, additional words beyond the word requested are fetched.
In a preferred embodiment of the present invention, the method includes receiving a read request from a client, checking the contents of a cache to determine whether the cache contains the information sought in the read request, and returning the information from the cache if a cache hit results. If a cache miss results, a bus transaction is initiated by fetching a block of memory containing the information from a memory store, sending the information to the client, and caching the remaining information included in the fetched block of memory.
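A minimal C sketch of this read path follows. The line size, the direct-mapped cache organization, and the simulated memory store are assumptions made for illustration; the patent does not fix these details at this level.

    #include <stdint.h>
    #include <stddef.h>

    #define WORDS_PER_LINE 8     /* assumed prefetch granularity      */
    #define NUM_LINES      64    /* assumed direct-mapped cache size  */
    #define MEM_WORDS      4096  /* size of the simulated memory store */

    static uint32_t memory_store[MEM_WORDS]; /* stands in for the memory store */

    struct cache_line {
        int      valid;
        uint32_t tag;                      /* line-aligned word address */
        uint32_t words[WORDS_PER_LINE];
    };

    static struct cache_line cache[NUM_LINES];

    /* Stand-in for the multi-word bus transaction; in the device this is
     * where the request-grant handshake with the memory controller occurs. */
    static void memory_fetch_line(uint32_t line_addr,
                                  uint32_t out[WORDS_PER_LINE])
    {
        for (size_t i = 0; i < WORDS_PER_LINE; i++)
            out[i] = memory_store[line_addr + i];
    }

    /* Service a single-word read request from a client. On a hit the word
     * is returned from the cache with no bus transaction; on a miss the
     * request is widened to a full-line fetch, the requested word is sent
     * to the client, and the remaining words of the line are cached so
     * that the client's next nearby read is likely to hit. */
    uint32_t read_word(uint32_t addr)
    {
        uint32_t line_addr = addr & ~(uint32_t)(WORDS_PER_LINE - 1);
        uint32_t offset    = addr &  (uint32_t)(WORDS_PER_LINE - 1);
        struct cache_line *line =
            &cache[(line_addr / WORDS_PER_LINE) % NUM_LINES];

        if (line->valid && line->tag == line_addr)
            return line->words[offset];            /* cache hit, no bus use */

        memory_fetch_line(line_addr, line->words); /* one multi-word fetch  */
        line->valid = 1;
        line->tag   = line_addr;
        return line->words[offset];                /* word sent to client   */
    }

The section above does not specify a replacement policy, so a miss in this sketch simply overwrites the previous occupant of the direct-mapped slot.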
The bus transaction may further include synchronizing the clock signals and handshake signals driving the cache with the clock signals and handshake signals driving the bus and the memory store. Moreover, the method may further include maintaining cache consistency while this synchronization is being performed.
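The patent does not detail the synchronization mechanism. One conventional way to move a handshake signal between the cache's clock domain and the bus and memory-store domain is a two-stage synchronizer; the C model below is a sketch of that standard technique, offered as an assumption rather than as the patented circuit.

    #include <stdint.h>

    /* Software model of a two-stage (two-flop) synchronizer, a common way
     * to pass a handshake signal between clock domains. The patent does
     * not specify the circuit, so this is an illustrative assumption. */
    struct sync2 {
        uint8_t ff1, ff2;
    };

    /* Called once per destination-domain clock edge: the foreign signal is
     * sampled through two registers before it is trusted in this domain. */
    static uint8_t sync2_sample(struct sync2 *s, uint8_t async_in)
    {
        s->ff2 = s->ff1;   /* second stage: stable, safe to use here     */
        s->ff1 = async_in; /* first stage: may be metastable in hardware */
        return s->ff2;
    }

Under this reading, maintaining cache consistency during synchronization would amount to keeping a line marked invalid until the synchronized handshake confirms the fill has completed; that interpretation is ours, not language from the patent.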
The method may be implemented using a bus interface that has a cache and is responsive to a read request received from the client. The bus interface is coupled to the client and to a bus. The cache returns the information if a cache hit results in response to the read request; otherwise, the bus interface fetches a block of memory containing the information from a memory store through the bus, sends the information to the client, and caches the remaining information within the fetched block of memory that was not requested in the read request.
The implementation of the method may further include a synchronization device for synchronizing the clock signals and handshake signals driving the cache with the clock signals and handshake signals driving the bus and the memory store. Moreover, the implementation may include a means for maintaining cache consistency in the cache while the synchronization device is synchronizing the cache with the bus and the memory store.
REFERENCES:
patent: 5548620 (1996-08-01), Rogers
patent: 5751336 (1998-05-01), Aggarwal et al.
patent: 5884028 (1999-03-01), Kindell et al.
Inventors: Jones, Darren; Lin, Wei-Ting
Assignee: LSI Logic Corporation