Reducing power in a snooping cache based multiprocessor...

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Details

711/146, 711/145, 711/141, 711/120, 711/124, 711/129

Reexamination Certificate

active

06826656

ABSTRACT:

TECHNICAL FIELD
The present invention relates to the field of snooping in a multiprocessor environment, and more particularly to not performing a cache search when a copy of the snooped requested address is determined not to be in the cache, thereby mitigating the power consumption associated with a snooped request cache search.
BACKGROUND INFORMATION
A multiprocessor system may comprise multiple processors coupled to a common shared system memory. Each processor may comprise one or more levels of cache memory. The multiprocessor system may further comprise a system bus coupling the processing elements to each other and to the system memory. A cache memory may refer to a relatively small, high-speed memory that contains a copy of information from one or more portions of the system memory. Frequently, the cache memory is physically distinct from the system memory. Such a cache memory may be integral with a processor in the system, commonly referred to as an L1 cache, or may be non-integral with a processor in the system, commonly referred to as an L2 cache.
A cache may be organized as a collection of spatially mapped, fixed size storage region pools commonly referred to as “rows.” Each of these storage region pools typically comprises one or more storage regions of fixed granularity. These storage regions may be freely associated with any equally granular storage region in the system as long as the storage region spatially maps to the row containing the storage region pool. The position of the storage region within the pool may be referred to as the “column.” The intersection of each row and column contains a cache line. The size of the storage granule may be referred to as the “cache line size.” A unique tag may be derived from an address of a given storage granule to indicate its residency in a given row/column position.
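As an illustrative sketch of this row/column/tag mapping (not part of the patent text), the C fragment below decomposes an address under a hypothetical geometry of 32-byte lines, 128 rows and 8 columns; all of the names and sizes are assumptions chosen for the example.

    #include <stdint.h>

    /* Hypothetical geometry: 32-byte lines, 128 rows, 8 columns (ways). */
    #define LINE_SIZE    32u
    #define NUM_ROWS     128u
    #define NUM_COLS     8u
    #define OFFSET_BITS  5u              /* log2(LINE_SIZE) */
    #define ROW_BITS     7u              /* log2(NUM_ROWS)  */

    /* Decompose a physical address into the fields used to place/find a line. */
    static inline uint32_t line_offset(uint32_t addr) { return addr & (LINE_SIZE - 1u); }
    static inline uint32_t row_index(uint32_t addr)   { return (addr >> OFFSET_BITS) & (NUM_ROWS - 1u); }
    static inline uint32_t line_tag(uint32_t addr)    { return addr >> (OFFSET_BITS + ROW_BITS); }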
When a processor generates a read request and the requested data resides in its cache memory, e.g., the L1 cache, then a cache read hit takes place. The processor may then obtain the data from the cache memory without having to access the system memory. If the data is not in the cache memory, then a cache read miss occurs. The memory request may be forwarded to the system and the data may subsequently be retrieved from the system memory, as would normally be done if the cache did not exist. On a cache miss, the data that is retrieved from the system memory may be provided to the processor and may also be written into the cache memory due to the statistical likelihood that this data will be requested again by that processor. Likewise, if a processor generates a write request, the write data may be written to the cache memory without having to access the system memory over the system bus.
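Continuing the hypothetical geometry from the previous sketch, the fragment below illustrates the read hit/miss flow just described; the cache_line layout, the naive replacement choice and read_from_system_memory are placeholders invented for the example, not details taken from the patent.

    #include <stdbool.h>
    #include <stdint.h>

    struct cache_line {
        bool     valid;
        uint32_t tag;
        uint8_t  data[LINE_SIZE];
    };

    static struct cache_line cache[NUM_ROWS][NUM_COLS];

    /* Placeholder for a bus transaction that fetches one line from system memory. */
    extern void read_from_system_memory(uint32_t line_addr, uint8_t *buf);

    /* Read path: a hit returns data from the cache, a miss fills the cache first. */
    static const uint8_t *cache_read(uint32_t addr)
    {
        uint32_t row = row_index(addr);
        uint32_t tag = line_tag(addr);

        for (uint32_t col = 0; col < NUM_COLS; col++)
            if (cache[row][col].valid && cache[row][col].tag == tag)
                return cache[row][col].data;               /* cache read hit */

        struct cache_line *victim = &cache[row][0];        /* naive replacement */
        read_from_system_memory(addr & ~(LINE_SIZE - 1u), victim->data);
        victim->valid = true;
        victim->tag   = tag;
        return victim->data;                               /* cache read miss */
    }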
Hence, data may be stored in multiple locations, e.g., the L1 cache of a particular processor and the system memory. If a processor altered the contents of a system memory location that is duplicated in its cache memory, the cache memory may be said to hold “stale” or invalid data. Problems may result if the processor inadvertently obtained this invalid data. Consequently, it may be desirable to ensure that data is consistent between the system memory and the caches. This is commonly referred to as “maintaining cache coherency.” In order to maintain cache coherency, therefore, it may be necessary to monitor the system bus when the processor does not control the bus to see if another processor accesses system memory. This method of monitoring the bus is referred to in the art as “snooping.”
Each cache may be associated with snooping logic configured to monitor the bus for the addresses requested by a processor. The snooping logic may further be configured to determine if a copy of the requested address is within the associated cache using a protocol commonly referred to as Modified, Exclusive, Shared and Invalid (MESI). That is, the snooping logic may be required to search its associated cache for a copy of the requested address. If the cache contains the specified address (and data), then, depending on the type of request and the state of the data within the cache, the snooping logic may be required to perform a particular type of action, e.g., invalidating the line and/or flushing the data to the shared system memory. However, as is often the case, the requested copy of the address may not be found within the cache, and consequently no action is required.
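The snoop-time actions described here might look roughly like the following sketch. The state names follow the MESI protocol mentioned above, but the request types and the flush_line_to_memory/set_line_state routines are illustrative placeholders rather than the patent's actual interface.

    #include <stdint.h>

    typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } mesi_state_t;
    typedef enum { SNOOP_READ, SNOOP_READ_FOR_OWNERSHIP } snoop_req_t;

    /* Placeholders for the actions a snooper may have to take on a hit. */
    extern void flush_line_to_memory(uint32_t addr);
    extern void set_line_state(uint32_t addr, mesi_state_t s);

    /* Snoop response when the cache search found a copy of the requested address. */
    static void handle_snoop_hit(uint32_t addr, mesi_state_t state, snoop_req_t req)
    {
        switch (state) {
        case MODIFIED:
            flush_line_to_memory(addr);            /* supply the dirty data       */
            /* fall through */
        case EXCLUSIVE:
        case SHARED:
            if (req == SNOOP_READ_FOR_OWNERSHIP)
                set_line_state(addr, INVALID);     /* requester intends to write  */
            else
                set_line_state(addr, SHARED);      /* demote to shared for a read */
            break;
        case INVALID:
            break;                                 /* no valid copy: no action    */
        }
    }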
Performing a cache search consumes a significant amount of power regardless of whether a copy of the snooped requested address is found within the cache. Consequently, unnecessary power is consumed whenever a cache search is performed for a copy of the snooped requested address that is not, in fact, in the cache.
It would therefore be desirable not to perform a cache search when a copy of the snooped requested address is determined not to be in the cache, thereby mitigating the power consumption associated with a snooped request cache search.
SUMMARY
The problems outlined above may at least in part be solved in some embodiments by a segment register storing N bits, where each bit may be associated with one of N segments into which memory is divided. It is noted that N may be any number. A segment of memory may represent a range of addresses at which data is stored in memory. Upon snooping a requested address on a bus, a cache controller coupled to a cache may determine whether the bit in the segment register associated with the segment of memory comprising the address of the request is set. A set bit is an indication that data within the segment associated with that bit may be contained in the cache. Consequently, if the bit associated with the snooped requested address is set, a cache search for the snooped requested address may be performed within the cache. However, a bit that is not set is an indication that no data within the segment associated with that bit is contained in the cache. Consequently, if the bit associated with the snooped requested address is not set, the cache search may be avoided, thereby mitigating the power consumption associated with a snooped request cache search.
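A minimal sketch of the filter decision described above, assuming N = 64 segments over a 4 GB physical address space (both values are illustrative, not taken from the patent); search_cache_for_snoop stands in for the ordinary snooped-request cache search.

    #include <stdint.h>

    #define NUM_SEGMENTS   64u   /* N; any number, 64 chosen here for illustration    */
    #define SEGMENT_SHIFT  26u   /* 4 GB address space / 64 segments = 64 MB segments */

    static uint64_t segment_register;                   /* one bit per memory segment */

    /* Placeholder for the normal snooped-request cache search (tag lookup, MESI action). */
    extern void search_cache_for_snoop(uint32_t addr);

    /* Snoop path: only search the cache if the segment bit says a copy may exist. */
    static void snoop_request(uint32_t addr)
    {
        uint32_t seg = addr >> SEGMENT_SHIFT;           /* which of the N segments     */

        if (segment_register & (1ull << seg))
            search_cache_for_snoop(addr);               /* bit set: copy may be cached */
        /* Bit clear: no copy can be cached, so the search (and its power) is skipped. */
    }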
In one embodiment of the present invention, a memory configured to store data may be coupled to a plurality of processing units via a bus. Each processing unit may comprise a processor and a cache controller coupled to a cache associated with the processing unit. The cache controller may comprise a segment register comprising N bits, where each bit in the segment register may be associated with one of N segments into which the memory is divided. It is noted that N may be any number. The cache controller may further comprise snooping logic configured to snoop a request, issued by a processor in another processing unit, to read from or write to a particular memory address on the bus. The snooping logic may further be configured to determine which bit in the segment register is associated with the segment address that includes the snooped requested address. Upon determining which bit in the segment register is associated with the snooped requested address, the snooping logic may be configured to determine if that bit is set. A set bit is an indication that data within the segment address associated with that bit may be contained in the cache. Consequently, if the bit associated with the snooped requested address is set, a cache search for the snooped requested address may be performed within the cache. However, a bit that is not set is an indication that no data within the segment address associated with that bit is contained in the cache. Consequently, if the bit associated with the snooped requested address is not set, then a cache search may not be performed, thereby mitigating the power consumption associated with a snooped request cache search.
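For completeness, the bookkeeping implied by this embodiment is that the bit for a segment must be set whenever a line from that segment is allocated into the cache; a hedged sketch, reusing the definitions from the previous fragment, is shown below. How and when bits are cleared again is not described in the excerpt, so it is left out.

    /* Allocation path (sketch): when a line is brought into the cache, set the bit
     * for its segment so later snoops to that segment are not filtered out.  The
     * excerpt above does not describe how or when bits are cleared, so that part
     * is intentionally omitted here. */
    static void note_line_allocated(uint32_t addr)
    {
        segment_register |= 1ull << (addr >> SEGMENT_SHIFT);
    }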
The foregoing has outlined rather broadly the features and technical advantages of one or more embodiments of the present invention in order that the detailed description of the invention that follows may be better understood.
