Memory system with mechanism for assisting a cache memory

Patent number: 06792498
Type: Reexamination Certificate (active)
Filed: 2001-08-08
Issued: 2004-09-14
Primary Examiner: Padmanabhan, Mano (Department: 2188)
Classification: Electrical computers and digital processing systems: memory – Addressing combined with specific memory configuration or... – Addressing cache memories
U.S. classes: C711S117000, C711S118000, C711S119000, C711S128000, C711S144000
ABSTRACT:
BACKGROUND OF THE INVENTION
The present invention relates to a memory system for a high-performance computer system and, more particularly, to a cache memory system having a direct mapping structure or a set associative structure which overcomes the problem of thrashing that occurs when multiple data registration requests concentrate on a specific column during the allocation of data in the cache memory. The invention also relates to a high-performance computer system equipped with a sub memory that reduces the thrashing-induced drop in performance.
In the memory system of a high-performance computer, data transfer from memory is critical to achieving high performance. A typical way to achieve this is to shorten the data-transfer delay by exploiting the temporal locality of data. However, physical restrictions limit a cache memory to a considerably smaller capacity than the main memory. With regard to associativity, many cache memories use a direct mapping system or a 4-way/8-way set associative system.
The direct mapping system divides the cache memory into a plurality of entries and registers each piece of data in the single entry that has a one-to-one association with the data's address. Consequently, when two pieces of data have addresses that map to the same entry, the previously registered data is cast out of the cache memory, lowering the use efficiency. The direct mapping system is, however, simple in mechanism and has a high mounting efficiency.
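The entry selection described above can be sketched as follows; this is a minimal illustration in which the line size and entry count are assumed parameters, not values from the patent:

```python
LINE_SIZE = 64     # bytes per cache line (assumed)
NUM_ENTRIES = 256  # entries in the direct-mapped cache (assumed)

def direct_map(address):
    """Return the single entry index and the tag for an address."""
    line = address // LINE_SIZE   # drop the byte offset within a line
    index = line % NUM_ENTRIES    # one-to-one entry selection
    tag = line // NUM_ENTRIES     # remaining bits identify the data
    return index, tag

# Two addresses exactly one cache-size apart map to the same entry,
# so the second registration casts the first out:
print(direct_map(0x0000))        # (0, 0)
print(direct_map(0x4000))        # (0, 1): same entry, different tag
```

Because the index is taken directly from the address bits, no search is needed, which is the source of the mechanism's simplicity and high mounting efficiency.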
The full associative system is the opposite of the direct mapping system: it can register data in any entry. While this system has a high use efficiency, its mounting efficiency is very low. The 4-way/8-way set associative system sits between the direct mapping system and the full associative system, registering data in any one of four or eight entries; it can therefore hold up to four or eight pieces of data that map to the same set without casting previous data out of the cache memory. Because it can be mounted in a smaller area than the full associative system, it has a higher mounting efficiency.
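A set associative lookup of this kind can be modeled as a sketch; the set count, way count, and LRU replacement policy here are illustrative assumptions, not details from the patent:

```python
from collections import OrderedDict

class SetAssociativeCache:
    """Minimal N-way set associative cache with LRU replacement (a sketch)."""
    def __init__(self, num_sets=64, ways=4, line_size=64):
        self.num_sets, self.ways, self.line_size = num_sets, ways, line_size
        self.sets = [OrderedDict() for _ in range(num_sets)]  # tag -> True

    def access(self, address):
        """Return True on a hit; register the line on a miss."""
        line = address // self.line_size
        index, tag = line % self.num_sets, line // self.num_sets
        entries = self.sets[index]
        if tag in entries:
            entries.move_to_end(tag)     # refresh LRU order on a hit
            return True
        if len(entries) >= self.ways:
            entries.popitem(last=False)  # cast out the least recently used
        entries[tag] = True
        return False

cache = SetAssociativeCache()
conflicting = [i * 64 * 64 for i in range(4)]  # four lines, same set
for a in conflicting:
    cache.access(a)                            # cold misses
print(all(cache.access(a) for a in conflicting))  # True: all four coexist
```

A fifth line mapping to the same set would exceed the four ways and cast out the least recently used line, which is exactly where the thrashing discussed below begins.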
There is another mechanism, called a “victim cache”, that also copes with thrashing. The victim cache temporarily retains data that has been cast out of the cache memory. When thrashing occurs, data cast out of the cache memory is transferred to the victim cache. The associativity of the cache entries that caused the thrashing then becomes the associativity of the cache memory plus that of the victim cache, so the cache operates as if its associativity had increased. This suppresses the problem of thrashing.
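The victim-cache interaction can be sketched, for example, as a direct-mapped cache backed by a tiny fully associative victim buffer; the sizes and the swap-back behavior are illustrative assumptions, not details from the patent:

```python
from collections import OrderedDict

class CacheWithVictim:
    """Direct-mapped cache whose cast-out lines go to a small victim cache."""
    def __init__(self, num_entries=8, victim_entries=2, line_size=64):
        self.num_entries, self.line_size = num_entries, line_size
        self.victim_entries = victim_entries
        self.entries = {}            # entry index -> tag
        self.victim = OrderedDict()  # line number -> True (LRU order)

    def access(self, address):
        line = address // self.line_size
        index, tag = line % self.num_entries, line // self.num_entries
        if self.entries.get(index) == tag:
            return True                       # hit in the main cache
        hit = line in self.victim
        if hit:
            del self.victim[line]             # swap the retained line back in
        old_tag = self.entries.get(index)
        if old_tag is not None:               # cast the displaced line out
            self.victim[old_tag * self.num_entries + index] = True
            if len(self.victim) > self.victim_entries:
                self.victim.popitem(last=False)
        self.entries[index] = tag
        return hit

cache = CacheWithVictim()
a, b = 0, 8 * 64                   # both map to entry 0
cache.access(a); cache.access(b)   # b casts a out into the victim cache
print(cache.access(a))             # True: the victim cache rescues the reuse
```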
However, because data cast out of the cache memory is registered in the victim cache without discriminating whether it was cast out due to thrashing, the registered data includes unnecessary data unrelated to thrashing. This lowers the use efficiency of the victim cache. In addition, when data in the cache memory is replaced, a path must be prepared to transfer the data to the victim cache for registration, and the original data must be invalidated.
A processor equipped with a cache memory generally employs a set associative or direct mapping cache in order to shorten the cache access time, or access latency, and to improve the mounting efficiency. In a cache memory with such a low associativity, however, when the mapping of a data set larger than the associativity concentrates on the same set, so-called thrashing occurs: registered data is cast out by subsequent data, preventing the cache memory from functioning effectively. When thrashing occurs, a data transfer that should hit in the cache memory results in a miss, so the data must be transferred from the main memory. This may drop the performance of the processor to roughly one third to one tenth of the performance achieved when data hits in the cache memory and is transferred with a short access latency.
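The performance collapse can be illustrated with a toy direct-mapped model: two arrays placed exactly one cache-size apart evict each other on every access. The sizes below are assumptions chosen for the illustration:

```python
LINE_SIZE, NUM_ENTRIES = 64, 256
CACHE_BYTES = LINE_SIZE * NUM_ENTRIES     # 16 KiB direct-mapped cache

def count_hits(addresses):
    cache = {}                            # entry index -> tag
    hits = 0
    for addr in addresses:
        line = addr // LINE_SIZE
        index, tag = line % NUM_ENTRIES, line // NUM_ENTRIES
        hits += cache.get(index) == tag
        cache[index] = tag
    return hits

step = 8                                  # 8-byte elements
a_only = list(range(0, 1024, step))
print(count_hits(a_only))                 # 112: 7 of every 8 accesses hit

# Interleave a second array exactly CACHE_BYTES away: every access now
# evicts the very line the next access needs.
interleaved = [x for i in range(0, 1024, step) for x in (i, CACHE_BYTES + i)]
print(count_hits(interleaved))            # 0: complete thrashing
```

The hit rate falls from 87.5% to 0% even though the total amount of data touched still fits in the cache, which is why thrashing is so damaging to processor performance.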
While thrashing may be avoided by adjusting the addresses of data, for example by inserting dummy data into a sequence of data, it is not easy to detect the occurrence of thrashing or to pinpoint where it occurs. Thrashing could also be prevented by designing the cache memory as a full associative system, but the complexity of checking for a hit inevitably enlarges the hardware, increasing the cache access time and decreasing the mounting efficiency. Because of these disadvantages, the full associative system is not generally employed.
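The address-adjustment workaround amounts to padding one array so its lines no longer share entries with the other. A minimal sketch, with parameters assumed for illustration:

```python
LINE_SIZE, NUM_ENTRIES = 64, 256
CACHE_BYTES = LINE_SIZE * NUM_ENTRIES     # 16 KiB direct-mapped cache

def entry_of(address):
    """Entry a direct-mapped cache would select for this address."""
    return (address // LINE_SIZE) % NUM_ENTRIES

a_base, b_base = 0, CACHE_BYTES           # bases one cache-size apart
print(entry_of(a_base) == entry_of(b_base))    # True: thrashing layout

b_padded = b_base + LINE_SIZE             # one dummy cache line of padding
print(entry_of(a_base) == entry_of(b_padded))  # False: collision removed
```

The difficulty the text points out remains: the programmer must first know that these two bases collide, which requires detecting thrashing and locating its source.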
SUMMARY OF THE INVENTION
Accordingly, the invention aims to overcome the problem of thrashing without requiring large-scale hardware, such as a full associative system, in the cache memory.
According to the invention, means for detecting the occurrence of thrashing is provided between the cache memory and the main memory, so that thrashing can be avoided without lowering the cache access speed or the mounting efficiency. Another feature of the invention is that means for storing thrashing data is provided to suppress the thrashing-induced reduction in the processor's execution speed.
REFERENCES:
patent: 5345560 (1994-09-01), Miura et al.
patent: 5603004 (1997-02-01), Kurpanek et al.
patent: 5802566 (1998-09-01), Hagersten
patent: 5809530 (1998-09-01), Samra et al.
patent: 5822616 (1998-10-01), Hirooka
patent: 5860095 (1999-01-01), Iacobovici et al.
patent: 5958040 (1999-09-01), Jouppi
patent: 6047363 (2000-04-01), Lewchuk
patent: 6085291 (2000-07-01), Hicks et al.
patent: 6173392 (2001-01-01), Shinozaki
patent: 6253289 (2001-06-01), Bates, Jr. et al.
patent: 6321301 (2001-11-01), Lin et al.
patent: 6499085 (2002-12-01), Bogin et al.
patent: 6507892 (2003-01-01), Mulla et al.
patent: 2002/0144062 (2002-10-01), Nakamura
patent: 7-253926 (1995-01-01), None
patent: 9-190382 (1996-12-01), None
Inventors: Aoki Hidetaka; Nakamura Tomohiro
Assignee: Hitachi, Ltd.
Attorneys: Juan Carlos A. Marquez, Esq.; Stanley P. Fisher, Esq. (Reed Smith LLP)