Data processing apparatus and method for cache line...
Patent number: 6,490,655
Patent type: Reexamination Certificate
Status: active
Filed: 1999-09-13
Issued: 2002-12-03
Examiner: Lane, Jack A. (Department: 2185)
Classification: Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
U.S. classes: 711/141; 711/143; 711/159; 711/105
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to data processing systems. More particularly, the present invention relates to the management and control of the memory system within a data processing system.
2. Description of the Prior Art
It is desirable that data processing systems operate as quickly as possible to meet the increasing demands for processing capability placed upon them. In this regard, there is continual progress in producing processors that operate at higher speeds and so are able to execute more instructions per second. As processors increase in speed, it is important that the other subsystems of the data processing system also increase in speed if they are not to become bottlenecks holding back the overall performance of the system. One such subsystem is the memory system of the data processing system.
A memory system of a high performance data processing system may comprise a hierarchy of levels of data storage, e.g. an internal on-chip cache, an external off-chip cache, a random access memory and a non-volatile memory, such as a hard drive or flash ROM. Schemes which can increase the overall performance of the memory system of a data processing system are highly advantageous.
SUMMARY OF THE INVENTION
Viewed from one aspect the present invention provides data processing apparatus comprising:
(i) a cache memory having a plurality of cache storage lines;
(ii) a plurality of main memory units operable to store data words to be cached within said cache memory; and
(iii) a cache victim select circuit for selecting a victim cache storage line into which one or more data words are to be transferred from one of said main memory units following a cache miss; wherein
(iv) said cache victim select circuit is responsive to an operational state of at least one of said main memory units when selecting said victim cache storage line.
A cache memory does not typically have enough storage capacity to store all of the data that may be required by the system. Accordingly, the cache memory stores a subset of the total data, and when a memory access request is made to an item of data not stored within the cache, that item of data must be fetched into the cache. To make room for the new item of data within the cache, an existing item of data has to be removed from the cache. The selection of which cache storage line (set of data items) should be replaced is performed by a cache victim select circuit. When a plurality of main memory units hold the data that is to be cached within the cache memory, different victim selections will require accesses to different ones of these main memory units. In this circumstance, it is strongly desirable that the cache victim select circuit be responsive to the operational state of at least one of the main memory units: the victim selection can then be adjusted according to the detected operational state, and higher performance achieved by selecting the cache victim that will cause the least delay.
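To make the idea concrete, here is a minimal software sketch of a victim selection that consults the operational state of the main memory units. It is an illustration only, not the patent's circuit: the line metadata layout and the busy-status query mem_unit_busy() are assumptions introduced for the example.

```c
#include <stdbool.h>

#define NUM_LINES 64

/* Illustrative cache line metadata; field names are assumptions. */
struct cache_line {
    bool     dirty;     /* changed since it was fetched from main memory */
    unsigned mem_unit;  /* which main memory unit backs this line        */
};

/* Hypothetical query of a main memory unit's operational state. */
extern bool mem_unit_busy(unsigned unit);

/*
 * Pick a victim line on a cache miss.  A non-dirty line needs no
 * write-back, so it is taken at once; otherwise prefer a dirty line
 * whose write-back targets an idle main memory unit, so the refill is
 * not serialised behind an in-flight transfer.
 */
static int select_victim(const struct cache_line lines[NUM_LINES])
{
    int fallback = 0;  /* worst case: dirty line behind a busy unit */
    for (int i = 0; i < NUM_LINES; i++) {
        if (!lines[i].dirty)
            return i;
        if (!mem_unit_busy(lines[i].mem_unit))
            fallback = i;
    }
    return fallback;
}
```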
The present invention is particularly useful when the cache memory is configured as a write back cache memory. In such embodiments, data words from the victim cache line have to be written back to the main memory from which they originally came, and so the operational status of that main memory may be critical in determining the delay associated with selecting that particular cache line as the victim cache line.
One highly useful operational parameter to sense regarding a main memory unit is whether or not that main memory unit is already busy exchanging one or more data words with the cache memory. If the main memory unit is already busy, then its current operation will have to complete before it is able to service any requirements stemming from the selection of a victim cache storage line that requires that busy main memory unit to be accessed.
The advantages of the invention are particularly evident when there are many memory masters simultaneously requesting access and a plurality of main memory units that are able to operate independently and transfer data words to the cache memory concurrently. In such embodiments it is highly desirable to select as a cache victim a cache storage line that is not already busy performing a data exchange with the cache memory. The ability to perform parallel data exchanges with the cache memory increases system performance, and accordingly it is desirable that the memory access workload be split evenly between the main memory units to make better use of this parallel capability.
In preferred embodiments it is desirable that the cache victim select circuit should be responsive to a dirty flag (a flag indicating that a line contains one or more data words that have been changed since they were transferred to the cache memory from the main memory) associated with the cache storage lines so as to select in preference those cache storage lines that are marked as non-dirty. Non-dirty cache storage lines will not require writing back to the main memory and so the delay associated with refilling that cache storage line will be reduced.
In modern high performance data processing systems it is advantageous to provide more than one data word requesting unit, each of which may request exchange of one or more data words with the cache memory. Sharing the memory structures between data word requesting units in this way provides a compromise between making the most efficient use of the circuit resources provided and the requirement for maximum performance.
Typical examples of data word requesting units are a central processing unit and a video display driving circuit.
In a system having multiple data word requesting units as discussed above, it is desirable that one or more cache storage lines may be locked for preferential use by one of the data word requesting units. In this way it is possible to reduce the likelihood of the activity of one of the data word requesting units having an undue detrimental impact upon the performance of another of the data word requesting units.
A further way to make better use of the cache memory resources is to arrange for the cache victim select circuit to be responsive to an indication of which cache storage lines were least recently used when selecting the victim cache storage line.
An overall scheme that has been found particularly advantageous is one in which said cache victim select circuit selects as said victim cache storage line that cache storage line having properties placing it highest in a list of N properties, where 1≦N≦6, said list of N properties being formed of the N highest properties in the following list (a software sketch of this selection is given after the list):
(i) least recently used line that is not locked and is not dirty;
(ii) least recently used line that is not locked, is dirty and can be written back to a main memory unit that is not busy;
(iii) least recently used line that is not locked, is dirty and has to be written back to a main memory unit that is busy;
(iv) least recently used line that is locked and is not dirty;
(v) least recently used line that is locked, is dirty and can be written back to a main memory unit that is not busy;
(vi) least recently used line that is locked, is dirty and has to be written back to a main memory unit that is busy.
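The six properties order the lines first by locked status and then by the cost of any write-back, with least-recently-used order breaking ties within each tier. The sketch below encodes that ranking; as before, the metadata layout, mem_unit_busy() and the LRU ordering input are assumptions for the example, not the patent's hardware.

```c
#include <stdbool.h>

#define NUM_LINES 64

struct cache_line {
    bool     locked;    /* reserved for preferential use by one requester */
    bool     dirty;
    unsigned mem_unit;
};

extern bool mem_unit_busy(unsigned unit);

/* Rank a line by the six properties above: 0 = (i) ... 5 = (vi). */
static int line_rank(const struct cache_line *l)
{
    int rank = l->locked ? 3 : 0;          /* (iv)-(vi) sit below (i)-(iii) */
    if (l->dirty)
        rank += mem_unit_busy(l->mem_unit) ? 2 : 1;
    return rank;
}

/*
 * lru_order[] lists line indices from least to most recently used.
 * Scanning in that order and accepting only strictly better ranks means
 * each rank resolves to its least recently used member, reproducing the
 * six-entry preference list.
 */
static int select_victim(const struct cache_line lines[NUM_LINES],
                         const int lru_order[NUM_LINES])
{
    int best = lru_order[0];
    int best_rank = line_rank(&lines[best]);
    for (int i = 1; i < NUM_LINES && best_rank > 0; i++) {
        int r = line_rank(&lines[lru_order[i]]);
        if (r < best_rank) {
            best = lru_order[i];
            best_rank = r;
        }
    }
    return best;
}
```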
In some circumstances a partially random cache victim selection scheme may be preferred as a starting point, and in such embodiments said cache victim select circuit selects as said victim cache storage line that cache storage line having properties placing it highest in a list of N properties, where 1≦N≦6, said list of N properties being formed of the N highest properties in the following list (a sketch of the random variant follows the list):
(i) randomly selected from those cache storage lines that are not locked and are not dirty;
(ii) randomly selected from those cache storage lines that are not locked, are dirty and can be written back to a main memory unit that is not busy; ...
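On the assumption that the remaining entries mirror the earlier list, with a random pick replacing least-recently-used order within each tier, the variant can be sketched as follows (reusing struct cache_line, line_rank() and NUM_LINES from the previous sketch):

```c
#include <stdlib.h>

/* Reuses struct cache_line, line_rank() and NUM_LINES from above. */

/*
 * Random variant: find the best (lowest) rank present among all lines,
 * then pick uniformly among the lines sharing that rank, instead of
 * taking the least recently used one.
 */
static int select_victim_random(const struct cache_line lines[NUM_LINES])
{
    int best_rank = 6;              /* worse than any real rank (0-5) */
    int candidates[NUM_LINES];
    int n = 0;

    for (int i = 0; i < NUM_LINES; i++) {
        int r = line_rank(&lines[i]);
        if (r < best_rank) {        /* strictly better tier: restart */
            best_rank = r;
            n = 0;
        }
        if (r == best_rank)
            candidates[n++] = i;
    }
    return candidates[rand() % n];  /* uniform choice within the tier */
}
```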
Assignee: Arm Limited
Attorney: Nixon & Vanderhye P.C.