Split directory-based cache coherency technique for a multi-processor computer system

Electrical computers and digital processing systems: processing – Processing architecture – Distributed processing system


Details

Classification: C712S023000, C712S032000
Type: Reexamination Certificate (active)
Patent number: 06295598

ABSTRACT:

BACKGROUND OF THE INVENTION
The present invention relates, in general, to the field of multi-processor computer systems. In particular, the present invention relates to a split directory-based cache coherency technique for a multi-processor computer system.
The advent of low-cost high-performance microprocessors has made large-scale multiprocessor computers feasible. In general, these microprocessors are cache-oriented; that is, they maintain a subset of the contents of main memory in high-speed storage close to the processor to improve the access latency and bandwidth of frequently-used memory data. This local memory can become inconsistent if one processor changes an element of memory by modifying its local cache and then the change is not propagated to all processors that share that memory. The precise structure of such caches varies greatly depending on the system design.
This caching problem has led to two basic architectures sometimes known as “shared memory” and “partitioned memory”. In a shared memory system, algorithms are used to maintain the consistency of the shared data. Typically, in commercially successful systems, the consistency is implemented by hardware and is invisible to the software. Such systems are called “cache-consistent” and form the basis of almost all multiprocessor computer systems produced. On the other hand, the partitioned memory approach disallows sharing of memory altogether or allows sharing by only a small number of processors, thereby simplifying the problem greatly. In such computer systems, larger configurations are created by connecting groups of computer systems with a network and using a message-passing paradigm that is most often made visible to the application software running on the system.
The development of cache coherent systems has led to some fundamental design problems. For large-scale systems, limits on data-transmission bandwidth and speed make cache coherency difficult to achieve. Coherency operations transmitted across the communications channel have traditionally been limited by low bandwidths, thus reducing overall system speed. Large-scale systems containing a large number of processors require accurate, high-speed cache coherency implementations.
With this in mind, some fundamental issues must be resolved in order to maintain a consistent view of memory across processors. First, processors must follow an arbitration protocol that grants permission to a processor to read or modify memory contents. To perform this function, coherency protocols divide memory into fixed-size “lines” (subsections of memory, typically 32, 64, or 128 bytes) that are each treated as an atomic unit. Typically, each line is either allocated to a single processor in “exclusive mode” (which allows writing), allocated to one or more processors in “read-only mode”, or not currently cached. A processor is required to request a line in exclusive or read-only mode when loading it from the memory. In order to support this, the cache must allow the memory subsystem to delay completion of a request while the state of the line is analyzed, and must allow operations to be performed on the processor cache while the system waits for an outstanding operation to complete.
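By way of illustration only, the line states described above can be summarized in the following minimal C sketch, assuming a simplified memory-side view of a single line; the type and field names are hypothetical and are not taken from the patent.

    #include <stdint.h>

    /* Hypothetical per-line state as seen by the memory subsystem; the three
     * states mirror the modes described above.                              */
    enum line_state {
        LINE_UNCACHED,   /* the line is not held by any processor cache      */
        LINE_READ_ONLY,  /* one or more processors hold a read-only copy     */
        LINE_EXCLUSIVE   /* exactly one processor holds a writable copy      */
    };

    struct memory_line {
        enum line_state state;
        uint16_t        exclusive_owner;  /* meaningful only in LINE_EXCLUSIVE */
        /* sharer-tracking storage for LINE_READ_ONLY is discussed below      */
    };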
The process of moving a line from one processor to another, when that is required, can be done in many ways. One of these approaches is termed “invalidation based” and is the technique most frequently used in existing multi-processor computer systems. In such systems, lines are removed from other processors' caches when the contents of a line are to be changed. Another approach allows for updating all caches containing the line when that line is changed.
The most common method of providing cache coherence is by using a “snoopy bus” approach. In such systems, all processors can monitor all memory transactions because they are all performed over a small number of buses, usually one or two. This approach cannot be used for large-scale systems because buses cannot supply the required data bandwidth from memory to the processors.
In such cases, a “directory” approach is most commonly used. Such systems use a database to record the processors to which lines are allocated. Transactions on memory require that the directory be examined to determine what coherency operations are required to allocate the line in question. The method of keeping the directory varies.
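By way of illustration, and not as the patent's own method, a directory inquiry can be viewed as a function from the line's current directory state and the requested access mode to the coherency actions required before the line can be granted; the following is a minimal C sketch with hypothetical names.

    /* Hypothetical decision table for a directory inquiry.                   */
    enum dir_state  { DIR_UNCACHED, DIR_READ_ONLY, DIR_EXCLUSIVE };
    enum req_mode   { REQ_READ_ONLY, REQ_EXCLUSIVE };
    enum dir_action { GRANT_NOW, INVALIDATE_SHARERS, RECALL_FROM_OWNER };

    enum dir_action directory_lookup(enum dir_state state, enum req_mode mode)
    {
        if (state == DIR_EXCLUSIVE)
            return RECALL_FROM_OWNER;   /* retrieve (and flush) the writable copy  */
        if (state == DIR_READ_ONLY && mode == REQ_EXCLUSIVE)
            return INVALIDATE_SHARERS;  /* remove all read-only copies             */
        return GRANT_NOW;               /* line uncached, or adding another reader */
    }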
Many schemes have been proposed to record the contents of the directory. Most either require time-expensive searches when a directory inquiry is made or use broadcasting when the precise set of caches containing the line is too large to be recorded in the directory hardware. “Broadcasting”, in this context, means sending a message to all processors in the system, often by the use of special hardware features to support this style of communication. The difficulty with broadcasting is that switch-based networks do not easily support such operations, and the cost of interrupting processors with requests that do not involve their cache contents can be high.
In order to invalidate a line that is to be updated, all caches that contain the line must be contacted, which requires a decision as to which processors to contact. Once a list of processors that have allocated the line has been made from the directory, each processor must be sent a message instructing it to remove the line from the cache and to send any changes to the memory. This operation must be supported by the microprocessor cache hardware.
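The invalidation procedure described above might be sketched as follows, assuming that the directory lookup has produced the sharer list as a bit mask and that a hypothetical send_invalidate routine delivers the message to one processor; the handling of acknowledgements and written-back data is elided.

    #include <stdint.h>

    #define MAX_PROCESSORS 64   /* assumed system size for this sketch */

    /* Hypothetical: ask one processor's cache to drop the line and to send
     * any modified data back to memory.                                     */
    void send_invalidate(unsigned proc_id, uint64_t line_address);

    /* Contact every processor recorded in the sharer mask.                  */
    void invalidate_line(uint64_t sharer_mask, uint64_t line_address)
    {
        for (unsigned p = 0; p < MAX_PROCESSORS; p++) {
            if (sharer_mask & (1ULL << p))
                send_invalidate(p, line_address);
        }
        /* A real memory controller would now wait for acknowledgements (and
         * any written-back data) before granting the line elsewhere.        */
    }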
SUMMARY OF THE INVENTION
In order to provide processors with a cache-coherent view of shared memory resources, all of the processors in a multi-processor computer system must view all memory changes in a useful, predefined order. For the class of microprocessors disclosed in a preferred embodiment described in greater detail hereinafter (e.g. the Deschutes™ microprocessor developed by Intel Corporation, Santa Clara, Calif.), the coherency model is called “total store order”. This means that all memory changes made by a given processor are visible in the order in which they are made by that particular processor and are visible in that order to all processors in the system. Likewise, read operations do not cross conflicting write operations.
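The guarantee can be illustrated by the classic message-passing test below; this is only an illustration of the hardware ordering model described above, not code from the patent, and portable C code would need atomic operations to obtain the same guarantee from the compiler.

    /* Shared variables, initially zero; volatile keeps the compiler from
     * reordering or caching these accesses, while the visibility ordering
     * itself comes from the hardware's total store order.                   */
    volatile int data = 0;
    volatile int flag = 0;

    void processor_0(void)            /* producer */
    {
        data = 42;                    /* first store                           */
        flag = 1;                     /* second store, visible after the first */
    }

    void processor_1(void)            /* consumer */
    {
        while (flag == 0)             /* spin until the second store is seen   */
            ;
        /* Under total store order, processor 1 must now also observe
         * data == 42, because processor 0's stores become visible to all
         * processors in the order in which they were issued.                 */
    }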
Nevertheless, the cache coherency technique disclosed herein is not limited to this particular coherency model and, in fact, can support all current models through the connection of memory to the processors with a cache communication network.
However, since the processors cannot view all transactions in such a system, the present invention contemplates the inclusion of reasonable-cost, complete directories with low-complexity directory lookup. This approach can be extended to allow even smaller directories with some broadcasting if desired, for a given application.
In order to provide coherency, the technique of the present invention requires extra data storage associated with each line of memory (a “coherency tag”) to hold parts of the directory. In addition, a secondary directory area is used for each memory controller. This secondary directory consists of entries that are used for widely-shared lines. In the embodiment disclosed, it is assumed that each such entry contains a bit for every processor on the system, which bit indicates whether that processor holds the line in question. In addition to the bit mask, in certain applications it may be desirable to keep a count of the number of bits that are set in the mask.
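Under the assumptions stated above (a bit per processor in each secondary entry, plus an optional count), the two storage areas might look like the following C sketch for a 64-processor system; the field layout is hypothetical and is not the patent's exact encoding.

    #include <stdint.h>

    #define NUM_PROCESSORS 64   /* assumed system size for this sketch */

    /* Hypothetical per-line "coherency tag" holding part of the directory:
     * either the single owner/sharer, or an index into the secondary
     * directory when the line is widely shared.                             */
    struct coherency_tag {
        uint8_t  state;          /* uncached / read-only / exclusive          */
        uint8_t  widely_shared;  /* 1 => sharers recorded in secondary entry  */
        uint16_t id;             /* owner id, or secondary-directory index    */
    };

    /* One secondary-directory entry, kept per memory controller for
     * widely-shared lines: a bit for every processor, plus a count of the
     * bits that are set.                                                     */
    struct secondary_entry {
        uint64_t sharer_mask;    /* bit p set => processor p holds the line   */
        uint16_t sharer_count;   /* number of set bits in sharer_mask         */
    };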
What is disclosed herein is a split directory-based cache coherency technique which utilizes a secondary directory in memory to implement a bit mask used to indicate when more than one processor cache in a multi-processor computer system contains the same line of memory. This technique thereby reduces the search complexity required to perform the coherency operations and the overall size of the memory needed to support the coherency system. The technique includes the attachment of a “coherency tag” to a line of memory so that its status can be tracked without having to read each processor's cache to see whether it contains that line.
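Continuing the sketch above, a coherency operation can recover the full sharer set from either half of the split directory without examining any processor cache; the types are repeated here (hypothetically, as before) so that the fragment stands alone.

    #include <stdint.h>

    struct coherency_tag   { uint8_t state; uint8_t widely_shared; uint16_t id; };
    struct secondary_entry { uint64_t sharer_mask; uint16_t sharer_count; };

    enum { TAG_UNCACHED, TAG_READ_ONLY, TAG_EXCLUSIVE };

    /* Return a bit mask of the processors holding the line: either the single
     * owner or sharer recorded directly in the coherency tag, or the mask
     * stored in the secondary directory when the line is widely shared.      */
    uint64_t sharers_of(const struct coherency_tag *tag,
                        const struct secondary_entry *secondary_table)
    {
        if (tag->state == TAG_UNCACHED)
            return 0;
        if (tag->widely_shared)
            return secondary_table[tag->id].sharer_mask;
        return 1ULL << tag->id;   /* single holder of the line                 */
    }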
