Reducing resource collisions associated with memory units in...

Electrical computers and digital processing systems: memory – Address formation – Hashing

Details

Classifications: C711S117000, C711S118000, C711S122000, C711S220000
Type: Reexamination Certificate
Status: active
Patent number: 06493814

TECHNICAL FIELD
The present invention relates to the field of multi-level hierarchy memory systems, and more particularly to reducing resource collisions associated with memory units, e.g., cache memories, in a multi-level hierarchy memory system.
BACKGROUND INFORMATION
A conventional processing system may employ a multi-level hierarchy memory system. Such a processing system may comprise a processor having various execution units and registers, as well as branch and dispatch units that forward instructions to the appropriate execution units. The processor may further comprise a level one (L1) instruction cache and a level one (L1) data cache. Those skilled in the art will recognize that a single, unified L1 cache may be implemented instead. The L1 instruction cache and L1 data cache may be configured to temporarily store instruction and data values, respectively, that may be repeatedly accessed by the processor. By storing instruction and data values that are repeatedly accessed, processing speed may be improved by avoiding the step of loading the values from the system memory, e.g., Random Access Memory (RAM). In order to minimize data access latency, one or more additional levels of cache memory may be implemented within the processing system, such as a level two (L2) cache and a level three (L3) cache. The lower cache levels, e.g., L2, L3, may be employed to stage data to the L1 cache and typically have progressively larger storage capacities but longer access latencies.
Typically, a cache may be organized as a collection of spatially mapped, fixed-size storage region pools commonly referred to as “rows.” Each of these storage region pools typically comprises one or more storage regions of fixed granularity. These storage regions may be freely associated with any equally granular storage region in the system as long as the storage region spatially maps to the row containing the storage region pool. The position of a storage region within the pool may be referred to as the “column.” The intersection of each row and column contains a cache line. The size of the storage granule may be referred to as the “cache line size.” A unique tag may be derived from the address of a given storage granule to indicate its residency in a given row/column position.
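The row/column organization described above can be sketched in C as follows. This is a minimal illustration, not the patent's implementation; the geometry (1024 rows, 16 columns, 128-byte lines, borrowed from the numeric example given later) and all identifiers are hypothetical.

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_ROWS   1024   /* spatially mapped storage region pools */
    #define NUM_COLS     16   /* freely associative regions per row    */
    #define LINE_SIZE   128   /* cache line size in bytes              */

    /* One storage region of fixed granularity (a cache line). */
    struct cache_line {
        bool     valid;
        uint32_t tag;              /* indicates which granule resides here */
        uint8_t  data[LINE_SIZE];  /* the fixed-size memory granule        */
    };

    /* The cache: each row is a pool of NUM_COLS freely associative lines;
     * the intersection of a row and a column holds one cache line. */
    struct cache {
        struct cache_line rows[NUM_ROWS][NUM_COLS];
    };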
A particular cache line within a cache may be accessed by using the following addressing scheme. A subset of the full number of address bits, commonly referred to as a row selector, may be used to form an index into the cache, which uniquely identifies a row within the cache. The remaining set of address bits, commonly referred to as an address tag, may be used to select a particular column within the cache, thereby defining a physical address for the cache line. It is noted that a cache may also be organized into one or more banks, where each bank may be indexed by bits within the row selector bits commonly referred to as bank selector bits. Typically, the bank selector bits are the bottom bits of the row selector.
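The bit-level decomposition can be made concrete. The sketch below assumes a 32-bit address, the hypothetical geometry above, and 4 banks whose selector occupies the bottom two row-selector bits; none of these widths are prescribed by the text.

    #include <stdint.h>
    #include <stdio.h>

    #define OFFSET_BITS  7   /* log2(128-byte line size)                  */
    #define ROW_BITS    10   /* log2(1024 rows)                           */
    #define BANK_BITS    2   /* log2(4 banks); bottom of the row selector */

    /* Byte offset within the cache line. */
    static uint32_t line_offset(uint32_t addr)   { return addr & ((1u << OFFSET_BITS) - 1); }

    /* Row selector: the index that uniquely identifies a row. */
    static uint32_t row_selector(uint32_t addr)  { return (addr >> OFFSET_BITS) & ((1u << ROW_BITS) - 1); }

    /* Bank selector: the bottom bits of the row selector. */
    static uint32_t bank_selector(uint32_t addr) { return row_selector(addr) & ((1u << BANK_BITS) - 1); }

    /* Address tag: the remaining high-order bits. */
    static uint32_t address_tag(uint32_t addr)   { return addr >> (OFFSET_BITS + ROW_BITS); }

    int main(void) {
        uint32_t addr = 0x12A8C4;   /* an arbitrary byte address */
        printf("tag %u, row %u, bank %u, offset %u\n",
               address_tag(addr), row_selector(addr),
               bank_selector(addr), line_offset(addr));
        return 0;
    }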
When a request accesses a cache structure, the address referenced by the request is mapped to determine which row in the cache is allowed to maintain the referenced granule. The tag is then derived and compared with the tag associated with each freely associative region (or column position) within the row. If a match is found, the matching column position may include a valid copy of the memory granule and the requested operation may be permitted to be carried out at this cache. This is commonly referred to as a “cache hit.” Otherwise, a “cache miss” is said to have occurred. In the case of a cache miss, one or more requests may typically be spawned to other memory structures, e.g., caches, in the storage hierarchy to carry out the request for information.
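A lookup along these lines might be sketched as follows, reusing the hypothetical geometry from above; the tag comparison across column positions is the step this paragraph describes.

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_ROWS 1024
    #define NUM_COLS   16

    struct cache_line { bool valid; uint32_t tag; };
    struct cache { struct cache_line rows[NUM_ROWS][NUM_COLS]; };

    /* Compare the derived tag against each freely associative column
     * position within the selected row. Returns the matching column on
     * a cache hit, or -1 on a cache miss (in which case one or more
     * requests would be spawned to the next level of the hierarchy). */
    static int cache_lookup(const struct cache *c, uint32_t row, uint32_t tag)
    {
        for (int col = 0; col < NUM_COLS; col++) {
            if (c->rows[row][col].valid && c->rows[row][col].tag == tag)
                return col;   /* cache hit */
        }
        return -1;            /* cache miss */
    }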
The amount of concurrent request traffic through the multi-level hierarchy memory system tends to increase due to the increase in speculative instruction execution, the increase in the usage of prefetching mechanisms, and the tendency of a single operation, e.g., request, at a given level in the hierarchy to spawn multiple operations, e.g., requests, to the next level of the hierarchy, such as in a cache miss.
General caching theory postulates that an ensuing temporally correlated request, i.e., a request arriving soon after a previous request from the same requester, may also be spatially correlated, i.e., likely to map to a memory granule that is spatially proximate to the memory granule accessed by the previous request.
For this reason, caches are typically organized such that spatially proximate memory granules map to different rows to minimize the over utilization of any given row. Likewise, they are typically organized such that spatially proximate memory granules map to different banks to increase the likelihood that concurrent requests do not result in resource collisions.
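With the hypothetical field widths used above, this effect is easy to see: stepping through adjacent 128-byte granules increments the row selector, and its bottom bits cycle through the banks.

    #include <stdint.h>
    #include <stdio.h>

    #define OFFSET_BITS 7
    #define ROW_BITS   10
    #define BANK_BITS   2

    int main(void) {
        /* Four spatially adjacent 128-byte granules. */
        for (uint32_t addr = 0; addr < 4 * 128; addr += 128) {
            uint32_t row  = (addr >> OFFSET_BITS) & ((1u << ROW_BITS) - 1);
            uint32_t bank = row & ((1u << BANK_BITS) - 1);
            printf("addr 0x%05x -> row %u, bank %u\n", addr, row, bank);
        }
        return 0;   /* prints rows 0..3 and banks 0..3: no collisions */
    }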
For example, a processing system may have a contiguous memory of 1 gigabyte. A given memory unit in the multi-level memory hierarchy may have a cache structure capable of retaining 2 megabytes. The cache structure may be organized into 1024 rows, where each row may comprise 16 freely associative storage regions (or lines) and each storage region may comprise a 128-byte granule of memory, i.e., a 128-byte line size. The byte capacity of the cache may thus be calculated by multiplying the number of rows, the number of columns, and the line size of the cache, which yields 2 megabytes.
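The arithmetic in this example also fixes the address-field widths: with a 1-gigabyte (30-bit) address space, a 128-byte line leaves 7 offset bits, 1024 rows take 10 row-selector bits, and 13 tag bits remain. The small program below is just this calculation.

    #include <stdio.h>

    int main(void) {
        unsigned long rows = 1024, cols = 16, line = 128;

        /* rows x columns x line size = total byte capacity */
        unsigned long bytes = rows * cols * line;
        printf("capacity: %lu bytes = %lu MB\n", bytes, bytes >> 20);
        /* prints: capacity: 2097152 bytes = 2 MB */

        /* 30-bit address space: 7 offset + 10 row-selector + 13 tag bits */
        printf("tag bits: %d\n", 30 - 7 - 10);   /* prints: tag bits: 13 */
        return 0;
    }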
As stated above, in the case of a cache miss, one or more requests may typically be spawned to other memory levels in the multi-level memory hierarchy to carry out the requested operation. More specifically, in the case of a cache miss, a fetch request may be spawned to the next level of the cache hierarchy. The purpose of the fetch is to retrieve the memory granule, i.e., requested information, that was not found at the current cache. In most cases the requested memory granule may be installed, i.e., reloaded, into the cache structure at the current cache.
When the memory granule is installed into the current cache structure, it may displace another memory granule (called the “victim”) currently residing in the same row. Typically, the victim is selected by a hash that attempts to choose the least recently used granule, since it has the lowest temporal correlation. Once a victim is selected, its status is typically assessed: if the victim has been locally modified, i.e., it is the only valid copy of the memory granule in the system, it must be preserved. The operation used to ensure the preservation of the victim is commonly referred to as a castout. A castout operation spawns a write request to the next level of the cache hierarchy. The purpose of the write request is to safely place the victim memory granule into the cache structure at the next level of the hierarchy.
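A minimal sketch of this reload path follows, with a simple age counter standing in for the least-recently-used hash; the castout hook is a stub and all names are hypothetical.

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_COLS 16

    struct cache_line {
        bool     valid;
        bool     modified;   /* only valid copy in the system if set */
        uint32_t tag;
        uint32_t age;        /* higher = less recently used          */
    };

    /* Stub standing in for the write request to the next cache level. */
    static void castout_to_next_level(const struct cache_line *victim) {
        (void)victim;   /* a real system would forward the modified granule */
    }

    /* Install a reloaded granule into a row, displacing a victim. */
    static void install(struct cache_line row[NUM_COLS], uint32_t new_tag) {
        /* Select the victim: the least recently used line in the row. */
        int victim = 0;
        for (int col = 1; col < NUM_COLS; col++)
            if (row[col].age > row[victim].age)
                victim = col;

        /* A locally modified victim must be preserved: spawn a castout. */
        if (row[victim].valid && row[victim].modified)
            castout_to_next_level(&row[victim]);

        row[victim] = (struct cache_line){ .valid = true, .tag = new_tag };
    }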
The above two operations, i.e., the fetch and castout operations, are specific examples of operations that are temporally correlated in that they tend to be issued almost simultaneously. The fetch and castout operations are also spatially correlated in that the memory granules they reference map to the same row in the cache from which they are initiated. It is noted that other operations may be temporally correlated as well, such as fetch, castout, or invalidate/kill operations, or any combination thereof, that occur almost simultaneously.
Referring to the fetch and castout operation example, when the fetch and castout operations initiated at one cache (call it cache X) arrive at the cache in the next level of the hierarchy (call it cache Y), their temporal and spatial correlation to each other changes the level of collision risk they introduce at cache Y. Since the row selection bits in the address associated with the modified information in the victim from cache X are the same as the row selection bits in the address associated with the requested information, and since the bank selection bits are drawn from the row selection bits, the bank selection bits of the two addresses must also be equal. The fetch and castout therefore target the same bank at cache Y, introducing a resource collision.
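Numerically, the collision follows from the field layout: because the bank selector is carved out of the row selector, two addresses that agree on the row selector necessarily agree on the bank. The addresses and widths below are invented for illustration.

    #include <stdint.h>
    #include <stdio.h>

    #define OFFSET_BITS 7
    #define ROW_BITS   10
    #define BANK_BITS   2

    static uint32_t row_of(uint32_t a)  { return (a >> OFFSET_BITS) & ((1u << ROW_BITS) - 1); }
    static uint32_t bank_of(uint32_t a) { return row_of(a) & ((1u << BANK_BITS) - 1); }

    int main(void) {
        /* A fetch and a castout that mapped to the same row in cache X:
         * their addresses agree in the offset and row-selector fields. */
        uint32_t fetch_addr   = 0x0012A80;
        uint32_t castout_addr = 0x0FF2A80;

        printf("fetch:   row %u, bank %u\n", row_of(fetch_addr),   bank_of(fetch_addr));
        printf("castout: row %u, bank %u\n", row_of(castout_addr), bank_of(castout_addr));
        /* Both print row 597, bank 1: the two near-simultaneous
         * requests contend for the same bank, a resource collision. */
        return 0;
    }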
