Method and apparatus for arbitrating access requests to a...

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


Details

C711S141000, C711S151000


active

06182196


TECHNICAL FIELD OF THE INVENTION
The present invention relates generally to the use of cache memory and, more particularly, to a method and apparatus for arbitrating multiple access requests to the same memory block of a memory.
BACKGROUND OF THE INVENTION
Computers are known to include a central processing unit, audio processing circuitry, peripheral ports, video graphics circuitry, and system memory. Video graphic circuits, which include cache memory, are utilized in computers to process images for subsequent display on a display device, which may be a computer monitor, a television, a LCD panel, and/or any other device that displays pixel information. The cache memory is used to optimize computer system performance by temporarily storing data in memory devices that allow for high speed data access, in comparison to data retrieval from low speed memory devices such as system memory, disks or tapes. Cache memory is used as temporary storage for data typically contained in lower speed memory so that each processor access to the data is to the cache memory, as opposed to the lower speed memory, thereby avoiding the latency associated with an access to the lower speed memory.
The initial access to the data incurs the latency time loss to access the data from the low speed memory because the data is transferred from the low speed memory to the cache. Once that data is stored in the cache, however, subsequent accesses to the data are at the high speed cache access rate. Cache memory is conventionally structured to provide access to multiple blocks of memory.
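The behavior described above can be sketched in a few lines. This is a minimal illustration, not the patent's apparatus: the class name, the dict-backed "slow memory," and the hit/miss counters are all assumptions made for the example.

```python
# Hypothetical sketch: a cache front-end over a slow backing store. The first
# access to a block misses and pays the backing-store latency; later accesses
# to the same block hit the cache and run at cache speed.

class CachedMemory:
    def __init__(self, backing):
        self.backing = backing          # slow memory: block -> data
        self.cache = {}                 # fast cache copy: block -> data
        self.hits = 0
        self.misses = 0

    def read(self, block):
        if block in self.cache:         # high-speed path: copy already cached
            self.hits += 1
        else:                           # slow path: transfer block into cache
            self.misses += 1
            self.cache[block] = self.backing[block]
        return self.cache[block]

mem = CachedMemory({"A": 1, "B": 2})
mem.read("A"); mem.read("A"); mem.read("B")
print(mem.misses, mem.hits)             # first access per block misses; repeat hits
```

Only the initial access to each block incurs the backing-store transfer; the repeated read of "A" is served from the cache.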
FIG. 1 represents a conventional processing system that includes indexed cache memory. As shown, cache locations C0, C1, C2, and C3 form cache memory areas within a cache memory 130. Each of these cache locations is capable of storing a copy of a block of memory A, B, C, etc. of memory 100. The client 110 (e.g., a processor or an application performed by a processor) accesses data contained in the memory 100 via the memory access system 120. The client 110 communicates a stream of data commands 115 via the command bus 112, and the data associated with the command stream 115 is communicated via the data bus 111. By storing copies of the blocks of memory in the cache memory, substantial access speed improvements can be achieved when multiple accesses to the data occur.
The data commands from the client 110 are received by the operation generator 140 within the memory access system 120. The client data commands direct a transfer of data to or from a memory address. Such commands may be a read, a write, or a read-modify-write (read/write). The operation generator 140 generates a series of commands applicable to the memory control 160 and the memory 100 to accomplish each client data command. The operation generator 140 interprets the data command to determine which memory block A, B, C, etc. of memory 100 includes the requested memory address. It also determines whether a copy of the identified memory block is already contained in the cache memory 130. If the memory block is contained in the cache memory, the operation generator identifies which cache location C0, C1, etc. contains the copy of the memory block, and formulates a command to effect the data command with this identified cache location.
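The hit-path decision described above can be sketched as two steps: map the requested address to its memory block, then search the cache locations for that block. The block size and the tag-table representation below are illustrative assumptions, not details from the patent.

```python
# Hypothetical sketch of the operation generator's hit/miss determination:
# which memory block contains the address, and which cache location C_i
# (if any) holds a copy of that block.

BLOCK_SIZE = 64                         # illustrative block size in bytes

def block_of(address):
    return address // BLOCK_SIZE        # index of the memory block (A, B, C, ...)

def find_location(locations, address):
    """Return the index of the cache location holding the address's block,
    or None on a miss. locations[i] is the block currently in C_i."""
    blk = block_of(address)
    for i, tag in enumerate(locations):
        if tag == blk:
            return i                    # hit: effect the command at C_i
    return None                         # miss: a location must be allocated

locations = [3, 7, None, None]          # C0 holds block 3, C1 holds block 7
print(find_location(locations, 7 * BLOCK_SIZE + 5))   # hit in C1
print(find_location(locations, 9 * BLOCK_SIZE))       # miss
```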
If the memory block is not contained in the cache memory, the operation generator 140 allocates one of the cache locations for this memory block. Typically, the allocated cache location will have been allocated to another memory block prior to this data command. Therefore, the operation generator must determine whether some action must be taken with regard to the data currently stored in the identified cache location. If, for example, the copy of the data in the cache location had only been used for reading the data contained in a memory block, no action need be taken; the new memory block data will merely overwrite the prior data. If, however, new data had been written to this cache location, intended to be written to the associated memory block, the copy of the data in the cache location must be written to the memory block before the new memory block data is read into this cache location. Thus, in this case, the operation generator 140 will formulate a command to write the data in the cache location to its previously associated memory block, followed by the command to read the new memory block into this cache location. The command to write data from the cache location to the memory is termed a "flush" of the cache location; the command to read data into the cache location from the memory is termed a "fill" of the cache location.
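The flush-then-fill sequence can be sketched as follows. The dirty flag and the dict-based representations are assumptions made for the illustration; the patent itself describes commands issued to a memory control, not Python data structures.

```python
# Hypothetical sketch of reallocating a cache location: a modified ("dirty")
# copy must be flushed back to its memory block before the location is filled
# with the new block; a clean copy is simply overwritten.

def reallocate(memory, loc, new_block):
    """memory: block -> data. loc: one cache location with keys
    'block', 'data', 'dirty'."""
    if loc["dirty"]:
        memory[loc["block"]] = loc["data"]   # flush: write modified data back
    loc["block"] = new_block
    loc["data"] = memory[new_block]          # fill: read the new block in
    loc["dirty"] = False

memory = {"A": 1, "B": 2}
loc = {"block": "A", "data": 99, "dirty": True}   # A was modified in the cache
reallocate(memory, loc, "B")
print(memory["A"], loc["block"])         # the flush preserved 99 before the fill
```

Note the ordering: if the fill ran before the flush, the modified value 99 would be lost, which is exactly the hazard the later paragraphs describe.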
When the cache memory 130 is full and another request arrives, the operation generator 140 allocates one of the cache locations to the new request. A variety of allocation algorithms can be applied to determine which cache location is to be reallocated, such as least-recently-used algorithms, indexed algorithms, etc. Before the operation generator 140 reallocates one of the cache locations, it first determines whether the data contained in the cache location is still needed. Typically, the data is still needed if it has been modified and the modifications have not been written back to memory. In that case, the new data request cannot be processed in the cache location until the modified data has been written back to memory. While this writing occurs, the processing of the data request is halted, which, depending on the nature of the data, may completely halt the processing of the computer system.
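One of the allocation algorithms mentioned above, least-recently-used (LRU), can be sketched briefly. The `OrderedDict`-based representation is an illustrative choice, not the patent's mechanism.

```python
# A minimal least-recently-used (LRU) allocation policy: when the cache is
# full, the location whose block was touched longest ago is reallocated.

from collections import OrderedDict

class LRUCache:
    def __init__(self, size):
        self.size = size
        self.entries = OrderedDict()    # block -> data, oldest first

    def touch(self, block, data):
        """Access a block; return the evicted block on a reallocation, else None."""
        if block in self.entries:
            self.entries.move_to_end(block)   # hit: mark most recently used
            return None
        evicted = None
        if len(self.entries) >= self.size:
            evicted, _ = self.entries.popitem(last=False)  # reallocate oldest
        self.entries[block] = data
        return evicted

cache = LRUCache(2)
cache.touch("A", 1); cache.touch("B", 2); cache.touch("A", 1)
print(cache.touch("C", 3))              # B is least recently used, so B is evicted
```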
A problem that can arise is a situation known as "thrashing." Thrashing occurs when the operation generator 140 is constantly filling and flushing the cache memory 130 in accordance with the data commands (client operation requests) 115. Thrashing may better be described as the case where the cache is repeatedly filling and flushing the same memory locations over and over again. For example, assume a cache has four entries, and the client issues read commands to the following locations: A, B, C, D, E, A, B, C, D, E, A, etc. In order to do the fill for the first request of E, the data for A must be cleared out to make room in the four-entry cache, which contains A, B, C, and D at that point. But then A must be brought right back in again when the second request for A is processed.
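Running the access pattern above through a simulated four-entry LRU cache makes the effect concrete: every single request misses, so the cache does nothing but fill and flush. This simulation is an illustration of the worst case, not the patent's mechanism.

```python
# Simulate the thrashing example: an LRU cache with 4 entries servicing the
# cyclic pattern A, B, C, D, E, A, B, C, D, E, A. Each block is evicted just
# before it is requested again, so every access is a miss.

from collections import OrderedDict

def count_misses(pattern, size):
    cache, misses = OrderedDict(), 0
    for block in pattern:
        if block in cache:
            cache.move_to_end(block)         # hit: refresh recency
        else:
            misses += 1
            if len(cache) >= size:
                cache.popitem(last=False)    # evict least recently used
            cache[block] = True
    return misses

pattern = list("ABCDEABCDEA")
print(count_misses(pattern, 4))          # all 11 accesses miss: thrashing
```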
Thrashing is heightened if multiple clients begin to initiate fill and flush commands to be processed by the operation generator 140. In a multiple-client system, if the processing order of the fill and flush commands is incorrect, errors may occur. For example, a command to flush modified data may be followed by a command to fill the same memory block. If the fill and flush commands are processed asynchronously and in parallel, the fill may occur before the flush. If the fill occurs before the flush, the modified data in the cache location will be overwritten by the data filled from memory, resulting in errors. Additionally, if multiple clients begin to initiate fill and flush commands directed toward the same memory block at the same time, the likelihood of collisions between these commands is greatly increased. As such, errors may occur that degrade the quality of various computer applications, such as video games, drawing applications, painting applications, broadcast television signals, cable television signals, etc., that utilize a video graphics circuit.
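The serialization requirement can be sketched as a scheduler that lets commands to different memory blocks proceed in parallel, but forces commands that collide on the same block to complete in issue order. This scheduler is a hypothetical illustration of the ordering constraint; the patent's actual arbitration apparatus is not reproduced here.

```python
# Hypothetical sketch: group commands into parallel batches while preserving
# per-block order. A fill that follows a flush of the same block is held back
# to a later batch, so the flush always completes first.

def schedule(commands):
    """commands: list of (op, block) tuples in issue order.
    Returns a list of batches; commands within a batch may run in parallel."""
    batches, pending = [], list(commands)
    while pending:
        batch, seen, rest = [], set(), []
        for op, block in pending:
            if block in seen:           # collision: serialize behind earlier cmd
                rest.append((op, block))
            else:
                seen.add(block)
                batch.append((op, block))
        batches.append(batch)
        pending = rest
    return batches

cmds = [("flush", "A"), ("fill", "A"), ("fill", "B")]
print(schedule(cmds))   # fill of A waits for flush of A; fill of B runs at once
```

Without this per-block ordering, the fill of A could overwrite the modified data before the flush preserved it, which is the error case described above.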
Therefore, a need exists for a method and apparatus for arbitrating access requests to a memory when contemporaneous accesses to an identical memory block of the memory occur. A need also exists for a method and apparatus for controlling parallel pipeline memory accesses, which allows for the serialization of the memory access requests when a memory access collision is detected.


