Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
Reexamination Certificate
1998-10-14
2001-08-21
Bragdon, Reginald G. (Department: 2751)
Electrical computers and digital processing systems: memory
Storage accessing and control
Hierarchical memories
C711S168000, C711S122000
Reexamination Certificate
active
06279082
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Technical Field of the Invention
The present invention relates to computer systems and, in particular, to a system and method for improving access to page type memory.
2. Description of Related Art
As is well known to those skilled in the art, the rapid increase in processor speed has greatly outpaced the gains in memory speed. Consequently, a chief bottleneck in the performance of current computers is the primary memory (also called main memory) access time. Conventional techniques to overcome this performance hindrance place a small and fast memory, called a cache memory, between the processor and the primary memory. Information frequently read from the primary memory is copied to the cache memory so that future accesses of that information can be made from the fast cache memory instead of from the slower primary memory. For performance and cost reasons, several levels of cache memory are used in modern computers. The first level, which is also the smallest and fastest cache memory, is called the L1 cache and is placed closest to the processor. The next level of cache memory is consequently called the L2 cache and is placed between the L1 cache and the primary memory.
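To make the hierarchy concrete, the following Python sketch models a two-level cache lookup; the latencies, the unbounded cache capacity and the promotion-on-hit behavior are illustrative assumptions, not details taken from this patent.

    # Minimal sketch of a two-level cache hierarchy. The latencies are
    # assumed cycle counts; real caches also have bounded capacity and
    # a replacement policy, which are omitted here for brevity.
    L1_LATENCY, L2_LATENCY, PRIMARY_LATENCY = 1, 10, 100

    l1, l2 = set(), set()  # addresses currently held at each cache level

    def access(address):
        """Return the cycle cost of reading one address through the hierarchy."""
        if address in l1:
            return L1_LATENCY
        if address in l2:
            l1.add(address)          # promote to L1 on an L2 hit
            return L2_LATENCY
        l1.add(address)              # miss in both: fetch from primary
        l2.add(address)              # memory and fill both cache levels
        return PRIMARY_LATENCY

    print(access(42), access(42))    # first access 100 cycles, second 1 cycle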
For most systems the traditional use of cache memory works well, but in complex real time systems, such as modern telecommunication systems, the amount of code executed and data handled is very large and context switching, that is, switching between different processes, is frequent. In these complex real time systems the locality of the information (program code and data) stored in the primary memory is low. Low locality means either that a large part of the accessed information is spread out across the primary memory (low spatial locality) or that only a small part of the accessed information is referenced frequently (low temporal locality). With low locality, the cache hit ratio, that is, how frequently information can be accessed from the cache memory, will also be low, because most information will be flushed out of the cache memory before it is needed again. Consequently, the normal use of cache memories, especially the L2 cache and above, is not effective in complex real time systems.
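The link between locality and hit ratio can be illustrated with a small simulation; the LRU replacement policy, the cache size and the synthetic traces below are assumptions chosen for illustration only.

    from collections import OrderedDict
    import random

    def hit_ratio(trace, cache_size):
        """Simulate an LRU cache over an address trace; return the hit ratio."""
        cache, hits = OrderedDict(), 0
        for addr in trace:
            if addr in cache:
                hits += 1
                cache.move_to_end(addr)        # refresh LRU position
            else:
                cache[addr] = True
                if len(cache) > cache_size:
                    cache.popitem(last=False)  # evict least recently used
        return hits / len(trace)

    random.seed(0)
    high_locality = [random.randrange(64) for _ in range(10_000)]      # small working set
    low_locality = [random.randrange(100_000) for _ in range(10_000)]  # spread-out accesses
    print(hit_ratio(high_locality, 256))  # close to 1.0
    print(hit_ratio(low_locality, 256))   # close to 0.0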
It would therefore be advantageous if the use of cache memories could be more effective in complex real time systems.
In systems where the cache hit ratio is low, a great deal of effort has been put into selecting what information to write to the cache memory. This has resulted in advanced prediction algorithms, which take time away from normal execution and also delay the writing of information back to the cache memory.
It would therefore be advantageous if the selection of the information to store in the cache memory could be simplified.
In traditional systems, the information to be stored in the cache memory is written after that information has been read from the primary memory, on a separate memory access cycle, which takes extra time and causes execution delays.
It would therefore be advantageous if the information to be stored in the cache memory could be written to the cache memory with less delay than in the prior art.
A typical conventional memory is built up of a large number of memory cells arranged in rows and columns, which together form a memory matrix. Most memory used today is of the page type, e.g., FPM DRAM, EDO DRAM and SDRAM. A memory cell in a page type memory cannot be accessed until the row containing that memory cell has been opened. Accessing a new row, often referred to as opening a new page, takes extra time called the page setup time. Consequently, accessing information in a new, unopened page normally takes longer, for SDRAM often much longer, than accessing information from an open page in the primary memory. In systems where the cache hit ratio is low, the primary memory is accessed frequently, and an extra delay is encountered each time a new page is opened in the primary memory.
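The page setup penalty can be sketched as follows; the row size and the cycle counts are assumed values for illustration, not figures from the patent.

    # Sketch of page type memory timing: accessing an address in a closed
    # row pays an extra page setup penalty. Geometry and cycle counts are
    # assumptions; a real controller tracks one open row per memory bank.
    ROW_SIZE = 1024          # addresses per row (page)
    OPEN_PAGE_LATENCY = 10   # cycles when the row is already open
    PAGE_SETUP_LATENCY = 60  # extra cycles to open a new row

    open_row = None

    def primary_access(address):
        """Return the cycle cost of one primary memory access."""
        global open_row
        row = address // ROW_SIZE
        if row == open_row:
            return OPEN_PAGE_LATENCY    # hit in the currently open page
        open_row = row                  # open the new page
        return OPEN_PAGE_LATENCY + PAGE_SETUP_LATENCY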
It would therefore be advantageous if the execution delay when accessing a new page in primary memory could be reduced, especially in systems that normally have a low cache hit ratio.
In traditional systems, where the access time of the cache memory is typically much shorter than that of the primary memory, the primary memory is accessed only after the cache memory has been accessed and a cache miss has occurred. Waiting for a cache miss before accessing the primary memory thus causes an extra delay in the primary memory access.
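The cost of serializing the two accesses can be seen with simple arithmetic; the cycle counts below are assumptions for illustration.

    # Serial vs. overlapped access on a cache miss (assumed cycle counts).
    CACHE_LOOKUP = 10     # cycles to detect a cache miss
    PRIMARY_ACCESS = 100  # cycles for a primary memory read

    serial_miss = CACHE_LOOKUP + PRIMARY_ACCESS          # wait for the miss first
    overlapped_miss = max(CACHE_LOOKUP, PRIMARY_ACCESS)  # start both at once
    print(serial_miss, overlapped_miss)                  # 110 vs. 100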
It would therefore be advantageous to reduce the access time for the primary memory when a cache miss occurs.
It is, therefore, a first object of the present invention to provide a system and method for more efficient use of cache memory, especially in systems where the cache hit ratio is normally low.
It is a second object of the present invention to simplify the selection of information to store in the cache memory.
It is a third object of the present invention to reduce the extra time needed to write information to the cache memory.
It is a fourth object of the present invention to reduce the execution delay normally encountered when a new page is accessed in primary memory.
It is a fifth object of the present invention to reduce the delay in accessing the primary memory after a cache miss.
SUMMARY OF THE INVENTION
The present invention is directed to a system and method to improve memory access, and more specifically, to make more effective use of cache memory and reduce the execution delays when a new page in a page type memory is accessed.
The present invention uses a higher level cache memory to store information from only a selected number, n, of the first accessed addresses in each accessed page of the primary memory. The number, n, is preferably selected so that the n accesses to the cache memory give the processor enough information to keep it busy while a new page is opened in the primary memory.
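A minimal sketch of this selection policy follows; the names, the row size and the choice of n are assumptions, and the bus arrangement and control logic claimed in the patent are not reproduced here.

    # Store in the cache only the first n addresses accessed in each page.
    # On a later return to the page, those n cached words can keep the
    # processor busy while the page is reopened in primary memory.
    ROW_SIZE = 1024   # addresses per page (assumed)
    N_FIRST = 4       # n: first accesses per page to cache (assumed)

    cache = {}             # address -> data
    cached_per_page = {}   # page number -> addresses cached so far

    def read(address, primary):
        """Read one address, caching only the first n accesses of its page."""
        page = address // ROW_SIZE
        if address in cache:
            return cache[address]             # served from the cache
        data = primary[address]               # may pay a page setup delay
        if cached_per_page.get(page, 0) < N_FIRST:
            cache[address] = data             # one of the first n accesses
            cached_per_page[page] = cached_per_page.get(page, 0) + 1
        return data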
The invention also provides a novel arrangement of control, address and data buses between the cache and primary memories to reduce the aforementioned delays associated with the use of cache memory in conventional systems.
REFERENCES:
patent: 4387427 (1983-06-01), Cox et al.
patent: 4847758 (1989-07-01), Olson et al.
patent: 5325508 (1994-06-01), Parks et al.
patent: 5349656 (1994-09-01), Kaneko et al.
patent: 5452456 (1995-09-01), Mourey et al.
patent: 5469555 (1995-11-01), Ghosh et al.
patent: 5524212 (1996-06-01), Somani et al.
patent: 5553270 (1996-09-01), Rosenbluth
patent: 5590328 (1996-12-01), Seno et al.
patent: 5761708 (1998-06-01), Cherabuddi et al.
patent: 5781922 (1998-10-01), Braceras et al.
patent: 5829010 (1998-10-01), Cherabuddi
patent: EP 0412949 (1991-02-01), None
patent: 0488566 (1992-06-01), None
patent: 9318459 (1993-09-01), None
Patent Abstracts of Japan, vol. 014, No. 415 (P-1102), Sep. 7, 1990, “Updating Method for Cache Memory”, Seki Yukihiro.
IBM Technical Disclosure Bulletin, vol. 32, No. 6A, Nov. 1989, “Processor Performance Enhancement Using a Memory Cache Scheme”, pp. 373-379, XP000043248.
International Search Report
Holmberg Per Anders
Jonsson Tomas Lars
Rosendahl Lennart Michael
Bragdon Reginald G.
Jenkens & Gilchrist A Professional Corporation
Telefonaktiebolaget LM Ericsson (publ)