Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
Reexamination Certificate
2002-08-08
2004-11-02
Padmanabhan, Mano (Department: 2188)
C711S122000
active
06813694
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Technical Field
The present invention relates to a data processing system in general, and in particular to a data processing system having a cache memory hierarchy. Still more particularly, the present invention relates to a data processing system having a highly scalable shared cache memory hierarchy that includes multiple local invalidation buses.
2. Description of the Related Art
Broadly speaking, all processing units within a symmetric multiprocessor (SMP) data processing system are identical; that is, they have the same architecture and utilize a common set or subset of instructions and protocols to operate. Each processing unit within the SMP data processing system includes a processor core having multiple registers and execution units for carrying out program instructions. The SMP data processing system may also include a cache memory hierarchy.
A cache memory hierarchy is a cache memory system consisting of several levels of cache memories, each level having a different size and speed. Typically, the first-level cache memory, commonly known as the level one (L1) cache, has the fastest access time and the highest cost per bit. The remaining levels of cache memories, such as level two (L2) caches, level three (L3) caches, etc., have relatively slower access times but also a relatively lower cost per bit. It is quite common that each lower cache memory level has a progressively slower access time and a larger size.
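The progressive access-time/size trade-off described above can be sketched as a lookup that walks the hierarchy from the fastest level to the slowest. This is an illustrative sketch only, not from the patent; the capacities and latencies below are assumed example values.

```python
# Illustrative sketch (not from the patent): a lookup walks the cache
# hierarchy from fastest to slowest level, accumulating access latency.
# Sizes and per-level latencies are hypothetical example values.

LEVELS = [
    ("L1", 32 * 1024, 1),         # name, capacity in bytes, latency in cycles
    ("L2", 256 * 1024, 10),
    ("L3", 8 * 1024 * 1024, 40),
]
MEMORY_LATENCY = 200

def lookup(address, contents):
    """Return (level_name, total_cycles) for the first level holding address.

    `contents` maps a level name to the set of addresses cached there.
    """
    cycles = 0
    for name, _capacity, latency in LEVELS:
        cycles += latency
        if address in contents.get(name, set()):
            return name, cycles
    return "memory", cycles + MEMORY_LATENCY

# An address present only in L3 pays the L1 and L2 miss latencies first.
print(lookup(0x1000, {"L3": {0x1000}}))  # → ('L3', 51)
```

Each miss at a faster level adds that level's latency before the next level is consulted, which is why a lower level's effective access time is always slower even before its own latency is counted.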
Within a cache memory hierarchy, when multiple L1 caches share a single L2 cache, the L2 cache is typically inclusive of all the L1 caches. Thus, the L2 cache has to maintain a dedicated inclusivity bit per L1 cache in an L2 directory for each L1 cache line. Consequently, the L2 directory, which is a costly resource, grows substantially as the total number of L1 cache lines increases. As a result, the additional inclusivity bit information in the L2 directory leads to a relatively large L2 cache design with relatively slow access time to the L2 directory. The present disclosure provides an improved inclusivity tracking and cache invalidation apparatus to solve the above-mentioned problem.
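The directory growth described above can be made concrete with a small sizing sketch. This is an assumed illustration, not figures from the patent: one dedicated inclusivity bit per L1 cache, per L2 directory entry, so the bit count grows with the product of the two.

```python
# Hypothetical sizing sketch: a conventional inclusive L2 directory
# carries one dedicated inclusivity bit per L1 cache for each L2 entry,
# so the overhead scales with (L2 lines) x (number of L1 caches).
# The parameter values below are assumptions, not from the patent.

def inclusivity_bits(l2_lines, num_l1_caches):
    """One inclusivity bit per L1 cache, per L2 directory entry."""
    return l2_lines * num_l1_caches

l2_lines = 8192                       # assumed L2 cache line count
for n_l1 in (2, 8, 32):
    bits = inclusivity_bits(l2_lines, n_l1)
    print(f"{n_l1:2d} L1 caches -> {bits} inclusivity bits "
          f"({bits // 8 // 1024} KiB)")
```

Doubling the number of L1 caches doubles the directory's inclusivity-bit storage, which is why the scheme scales poorly as processing units are added.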
SUMMARY OF THE INVENTION
In accordance with a preferred embodiment of the present invention, a symmetric multiprocessor data processing system includes multiple processing units, each associated with a level one (L1) cache memory. All of the L1 cache memories are associated with an imprecisely inclusive level two (L2) cache memory, and a group of local invalidation buses is connected between the L1 cache memories and the L2 cache memory. The imprecisely inclusive L2 cache memory includes a tracking means for imprecisely tracking the cache line inclusivity of the L1 cache memories; thus, the L2 cache memory does not need dedicated inclusivity bits for tracking the cache line inclusivity of each associated L1 cache memory. Instead, the tracking means includes a last_processor_to_store field and a more_than_two_loads field per cache line. When the more_than_two_loads field is asserted for a given cache line, all copies of that cache line in the L1 cache memories are invalidated via the local invalidation buses, except for the copy in the L1 cache memory associated with the processor indicated in the last_processor_to_store field.
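The tracking scheme in the summary can be sketched behaviorally. The following is one possible interpretation of the two per-line fields, not the patent's implementation; the `load_count` helper and the reset behavior after a store are assumptions made to keep the sketch self-contained.

```python
# Behavioral sketch (an interpretation of the summary, not the patent's
# implementation): each L2 line keeps only two tracking fields instead of
# one inclusivity bit per L1 cache. Because sharers are tracked
# imprecisely, a store to a widely shared line triggers invalidates on
# the local invalidation buses to every L1 except the last storer's.

from dataclasses import dataclass

@dataclass
class L2Line:
    last_processor_to_store: int = -1   # -1 means "no store recorded yet"
    more_than_two_loads: bool = False
    load_count: int = 0                 # assumed helper used to set the flag

class ImpreciseL2:
    def __init__(self, num_l1):
        self.num_l1 = num_l1
        self.lines = {}

    def record_load(self, addr, cpu):
        line = self.lines.setdefault(addr, L2Line())
        line.load_count += 1
        if line.load_count > 2:
            line.more_than_two_loads = True

    def record_store(self, addr, cpu):
        """Return the set of L1 caches to invalidate over the local buses."""
        line = self.lines.setdefault(addr, L2Line())
        line.last_processor_to_store = cpu
        if line.more_than_two_loads:
            # Sharers are unknown, so invalidate every L1 except the storer's.
            targets = {i for i in range((self.num_l1)) if i != cpu}
        else:
            targets = set()             # precise-tracking case elided here
        line.load_count = 1             # assumed reset: only the storer holds it
        line.more_than_two_loads = False
        return targets

l2 = ImpreciseL2(num_l1=4)
for cpu in (0, 1, 2):                   # three L1 caches load the same line
    l2.record_load(0x40, cpu)
print(l2.record_store(0x40, cpu=1))     # → {0, 2, 3}
```

The design trade-off the summary describes is visible here: two small fields replace a per-L1 bit vector, at the cost of occasionally broadcasting invalidates to L1 caches that never held the line.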
All objects, features, and advantages of the present invention will become apparent in the following detailed written description.
Arimilli Ravi Kumar
Guthrie Guy Lynn
Dillon & Yudell LLP
Inoa Midys
Padmanabhan Mano
Salys Casimer K.