Allocation for back-to-back misses in a directory based cache

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories


Details

Type: Reexamination Certificate

Status: active

Patent number: 06332179

US classification: C711S128000 (711/128)


BACKGROUND OF THE INVENTION
1. Technical Field
The present invention relates to the field of memory management in a data processing system such as a computer. More specifically, the present invention relates to a system and method for controlling the allocation of memory in a cache to maximize storage use and minimize retries.
2. Description of the Related Art
The rapid development of microprocessors has created many challenges. As components are more densely packed on a microprocessor, the processor's speed of operation increases. In addition, techniques such as pipelining and multiple processor systems have created processing systems with very large data throughput needs. To meet this challenge, system designers have turned to cache memory systems.
Cache memory systems rely on the principle of locality of references. The locality of references principle states that a computer program will spend approximately ninety percent of its time accessing ten percent of its code. In addition, the next memory address accessed is usually an address near the last address accessed. Using these principles, a small portion of memory may be fetched into a small, extremely fast memory called a cache. Data with a high probability of being accessed by the processor is stored in the cache, so the processor retrieves that data directly from the very fast cache memory and spends as little time as possible waiting for data from main memory.
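To make the cost of misses concrete, the following sketch applies the textbook average-memory-access-time relation (a standard formula, not taken from the patent) with purely illustrative numbers:

    #include <stdio.h>

    int main(void)
    {
        double hit_rate     = 0.90;  /* fraction of accesses served by the cache */
        double hit_time     = 1.0;   /* cycles to read the cache                 */
        double miss_penalty = 50.0;  /* extra cycles to fetch from main memory   */

        /* AMAT = hit_time + miss_rate * miss_penalty */
        double amat = hit_time + (1.0 - hit_rate) * miss_penalty;
        printf("average access time: %.1f cycles\n", amat);  /* prints 6.0 */
        return 0;
    }

Even a modest miss penalty dominates the average unless the hit rate stays high, which is why a cache that wastes space or forces retries loses much of its benefit.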
The development of cache systems has reached a very high level of sophistication. It is common in microprocessor-based systems to have two or more levels of cache. Often, the first level (L1) of cache is formed on the same semiconductor substrate as the microprocessor. This provides maximum throughput between the first-level cache and the processor. The second-level (L2) cache is often a larger memory, which may be external or internal. The L2 cache provides a larger, but slower, data storage capability.
One of the more complex approaches involves the use of pipelined associative caches. These systems can provide very high data throughput. However, the allocation of cache space must be carefully managed.
A particular problem arises when misses occur on two back-to-back pipelined accesses. A cache miss occurs when a processor requests memory that is not stored in the cache and must be fetched from slower memory devices. A common technique is to allocate a way, or write set, for the missing data upon the first detection of a cache miss. The memory system then fetches the missed data block from system memory or a lower-level cache and stores it in the allocated write set.
The problem arises when, during the pendency of the block allocation, a second pipelined access requests data from the same cache line as the pending allocation in a multiple-way associative cache. In this circumstance, the cache memory controller may allocate another way, or write set, in the cache directory. This wastes space and can cause a system failure unless the cache controller is specifically designed to handle multiple, identical, valid tags at the same index. Therefore, there is a need for a system that avoids this misallocation of cache resources.
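The hazard can be illustrated with a minimal, hypothetical C model of a two-way set-associative directory. Everything here (the dir_entry layout, naive_allocate, the address split) is an illustrative assumption rather than the patent's design; the point is only that a controller that treats a pending fill as an ordinary miss will allocate a second way for an identical tag:

    #include <stdio.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_SETS 256   /* 8 index bits            */
    #define NUM_WAYS 2     /* two-way set associative */

    typedef struct {
        bool     valid;    /* entry claimed in the directory         */
        bool     pending;  /* fill from lower-level memory in flight */
        uint32_t tag;      /* upper address bits held by the entry   */
    } dir_entry;

    static dir_entry directory[NUM_SETS][NUM_WAYS];
    static unsigned  victim[NUM_SETS];  /* trivial round-robin replacement */

    /* Naive controller: only completed fills count as hits, and every
     * miss claims the next victim way. */
    static int naive_allocate(uint32_t addr)
    {
        uint32_t index = (addr >> 6) % NUM_SETS;  /* 64-byte lines assumed   */
        uint32_t tag   = addr >> 14;              /* 6 offset + 8 index bits */

        for (int w = 0; w < NUM_WAYS; w++) {
            dir_entry *e = &directory[index][w];
            if (e->valid && !e->pending && e->tag == tag)
                return w;                         /* hit on a filled line */
        }

        int w = victim[index];                    /* miss: claim a new way */
        victim[index] = (victim[index] + 1) % NUM_WAYS;
        directory[index][w] = (dir_entry){ .valid = true, .pending = true,
                                           .tag = tag };
        return w;
    }

    int main(void)
    {
        /* Two pipelined accesses to the same line: the second arrives while
         * the first fill is still pending, misses again, and a second way is
         * allocated for the identical tag at the same index. */
        int w0 = naive_allocate(0x12345040);
        int w1 = naive_allocate(0x12345040);
        printf("same line held in way %d and way %d\n", w0, w1);
        return 0;
    }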
BRIEF SUMMARY OF THE INVENTION
It is an object of the present invention to provide a memory caching system that avoids double allocation of a cache line in the directory.
It is a further object of the present invention to provide a memory caching system that avoids inefficient allocation of cache memory lines and prevents unnecessary retries.
It is a further object of the present invention to provide a system resistant to system failures caused by cache write set misallocation.
These and other objects are achieved by a memory caching system that allocates blocks of memory by: determining whether the contents at a selected memory address are stored in the cache by comparing the selected memory address to the addresses stored in the tags; and, if the selected memory address is not in the cache, allocating a place in the directory for the selected address; wherein, if a place in the directory for an address on the same cache line as the selected memory address has been previously allocated or is in the process of being allocated, the selected memory address is allocated to the location of the previous or pending allocation.
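A minimal sketch of that policy, using the same hypothetical directory model as above (names and layout are again illustrative assumptions, not the patent's): before claiming a new way, the miss path compares the selected address against every valid or allocation-pending tag in the set and, on a match, collapses the access onto the existing way:

    #include <stdio.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_SETS 256
    #define NUM_WAYS 2

    typedef struct {
        bool     valid;
        bool     pending;   /* allocation/fill still in progress */
        uint32_t tag;
    } dir_entry;

    static dir_entry directory[NUM_SETS][NUM_WAYS];
    static unsigned  victim[NUM_SETS];

    static int allocate(uint32_t addr)
    {
        uint32_t index = (addr >> 6) % NUM_SETS;
        uint32_t tag   = addr >> 14;

        /* One scan covers both a completed allocation (an ordinary hit) and
         * a pending one (the second of two back-to-back misses on the same
         * line). Either way the access reuses the existing way, so the
         * directory never holds two valid entries with one tag at an index. */
        for (int w = 0; w < NUM_WAYS; w++) {
            dir_entry *e = &directory[index][w];
            if ((e->valid || e->pending) && e->tag == tag)
                return w;
        }

        /* Genuinely new line: claim a victim way and mark the fill pending. */
        int w = victim[index];
        victim[index] = (victim[index] + 1) % NUM_WAYS;
        directory[index][w] = (dir_entry){ .valid = true, .pending = true,
                                           .tag = tag };
        return w;
    }

    int main(void)
    {
        /* Back-to-back misses on one line now collapse onto a single way. */
        int w0 = allocate(0x12345040);
        int w1 = allocate(0x12345040);
        printf("back-to-back misses share way %d (and %d)\n", w0, w1);
        return 0;
    }

A real controller would more likely track in-flight fills in miss-status holding registers than in the directory entry itself, but the essential step the summary describes is the same: the tag comparison must also cover allocations that are still pending.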


REFERENCES:
patent: 5668968 (1997-09-01), Wu
patent: 5781925 (1998-07-01), Larson et al.
patent: 5835951 (1998-11-01), McMahan
