Cache coherent network adapter for scalable shared memory...

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


Details

Classification: C711S121000, C711S122000, C711S153000
Type: Reexamination Certificate
Status: active
Patent number: 06343346

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Technical Field of the Invention
This invention relates to digital parallel processing systems, wherein a plurality of nodes communicate via messages over an interconnection network and share the entire memory of the system. In particular, this invention deals with distributing the shared memory amongst all the system nodes, such that each node implements a portion of the entire memory. More specifically, the invention relates to a tightly coupled system including local caches at each node, and a method for maintaining cache coherency efficiently across a network using distributed directories, invalidation, read requests, and write-thru updates.
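As a rough illustration of distributing the shared memory amongst the nodes, the sketch below (in C; the striping scheme, constants, and names are assumptions for illustration, not taken from the patent) maps a global shared-memory address to the node that implements that portion of memory and to an offset within that node's local slice:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical layout: the global address space is striped across
 * NUM_NODES nodes in fixed-size blocks; each node implements one slice. */
#define NUM_NODES      8
#define BYTES_PER_NODE (1u << 24)   /* 16 MB of shared memory per node */

typedef struct {
    unsigned node;     /* node that owns (is "home" for) the address   */
    uint32_t offset;   /* offset within that node's local memory slice */
} HomeLocation;

/* Map a global shared-memory address to its home node and local offset. */
static HomeLocation locate(uint64_t global_addr)
{
    HomeLocation loc;
    loc.node   = (unsigned)((global_addr / BYTES_PER_NODE) % NUM_NODES);
    loc.offset = (uint32_t)(global_addr % BYTES_PER_NODE);
    return loc;
}

int main(void)
{
    uint64_t addr = 0x03F00020;          /* some global address              */
    HomeLocation loc = locate(addr);
    /* A request for this address is routed over the network to loc.node;   */
    /* a local access is detected when loc.node equals the requesting node. */
    printf("address 0x%llx -> node %u, offset 0x%x\n",
           (unsigned long long)addr, loc.node, loc.offset);
    return 0;
}
```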
2. Background Art
As more and more processor performance is demanded for computing and server systems, shared memory processors (SMPs) are becoming an important option for providing better performance. SMPs comprise a plurality of processors that share a common memory pool with a part or most of the memory pool being remote from each processor. There are basically two types of multiprocessing systems: tightly coupled and loosely coupled. In a tightly coupled multiprocessor, the shared memory is used by all processors and the entire system is managed by a single operating system. In a loosely coupled multiprocessor, there is no shared memory and each processor has an exclusive memory, which can be loaded from the network if desired.
For either tightly or loosely coupled systems, the accessing of memory from a remote node or location is essential. Accessing remote memory, versus local memory, is a much slower process and requires performance enhancement techniques to make the remote access feasible. The first performance technique uses local caches (usually several levels of cache) at each processor. Cache memories are well known in the art for being a high performance local memory and alleviating traffic problems at the shared memory or network. A cache memory comprises a data array for caching a data line retrieved from the shared memory, where a cache data line is the basic unit of transfer between the shared memory and the cache. Since the cache size is limited, the cache also includes a directory for mapping the cache line from shared memory to a location within the cache data array. The cache contains either instructions or data, which sustain the processor's needs over a period of time before a refill of the cache lines is required. If the data line is found in the cache, a cache “hit” is said to have occurred; otherwise, a cache “miss” is detected and a refill of a cache line is required, where the refill replaces a cache line that has been least recently used. When a multi-processing system comprises distributed shared memory, the refill can come from the local shared memory or from remote shared memory resident in a different node on the network. Conventionally, caches have been classified as either “write-back” or “write-thru”. For a write-thru cache, changed data is immediately stored to shared memory, so that the most recent data is always resident in the shared memory. For a write-back cache, changed data is held in the cache and only written back to shared memory when it is requested by another node or replaced in the cache.
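The following sketch illustrates the cache behavior described above: a lookup that either hits or misses, a refill on a miss, and the difference between write-thru (store immediately to shared memory) and write-back (mark the line dirty and write it back only when replaced). It is a simplified direct-mapped cache in C; all sizes, names, and the simulated backing store are assumptions for illustration, and a real design would be set-associative with least-recently-used replacement.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define LINE_SIZE  64                      /* bytes per cache line              */
#define NUM_LINES  256                     /* lines in the cache data array     */
#define MEM_LINES  1024                    /* lines of simulated shared memory  */

typedef struct {
    bool     valid;
    bool     dirty;                        /* used only by a write-back cache   */
    uint64_t tag;                          /* which shared-memory line is held  */
    uint8_t  data[LINE_SIZE];
} CacheLine;

typedef struct {
    CacheLine lines[NUM_LINES];            /* data array plus cache directory   */
    bool      write_thru;                  /* true: write-thru, false: write-back */
} Cache;

static uint8_t shared_memory[MEM_LINES * LINE_SIZE];   /* toy backing store     */

static void mem_read_line(uint64_t line, uint8_t *buf)
{
    memcpy(buf, &shared_memory[line * LINE_SIZE], LINE_SIZE);
}

static void mem_write_line(uint64_t line, const uint8_t *buf)
{
    memcpy(&shared_memory[line * LINE_SIZE], buf, LINE_SIZE);
}

/* Store one byte through the cache, refilling the line on a miss. */
void cache_store_byte(Cache *c, uint64_t addr, uint8_t value)
{
    uint64_t   line_no = addr / LINE_SIZE;
    CacheLine *line    = &c->lines[line_no % NUM_LINES];

    if (!line->valid || line->tag != line_no) {            /* cache miss        */
        if (line->valid && line->dirty)                    /* write-back cache: */
            mem_write_line(line->tag, line->data);         /* flush the victim  */
        mem_read_line(line_no, line->data);                /* refill the line   */
        line->valid = true;
        line->dirty = false;
        line->tag   = line_no;
    }

    line->data[addr % LINE_SIZE] = value;                  /* hit path          */
    if (c->write_thru)
        mem_write_line(line_no, line->data);   /* shared memory always current  */
    else
        line->dirty = true;                    /* written back only when replaced */
}
```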
The execution of programs and the fetching of variables from shared memory at a remote node takes many processor cycle times (15 cycles at best and usually a lot more). The larger the system, the larger the distance to the remote memory, the more chance of conflict in the interconnection scheme, and the more time wasted when fetching from remote memory.
A second performance enhancement technique becoming popular is multi-threading, as disclosed by Nikhil et al in U.S. Pat. No. 5,499,349, “Pipelined Processor using Tokens to Indicate the Next Instruction for Each Multiple Thread of Execution”, and by N. P. Holt in U.S. Pat. No. 5,530,816, “Data Processing System for Handling Multiple Independent Data-driven Instruction Streams”. The multi-threading technique makes use of the time during which the processor would otherwise be stalled waiting on a fetch from remote memory by switching the processor to work on a different task (or thread).
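A minimal sketch of the thread-switching idea, assuming a simple round-robin scheduler and an illustrative per-thread flag (neither is taken from the cited patents):

```c
#include <stdbool.h>
#include <stddef.h>

#define NUM_THREADS 4

/* Hypothetical per-thread state; the field names are illustrative only. */
typedef struct {
    bool waiting_on_remote;    /* a remote-memory fetch is outstanding    */
    /* ... registers, program counter, and other context ...              */
} Thread;

static Thread threads[NUM_THREADS];

/* When the running thread stalls on a remote fetch, pick another thread
 * that is ready to run instead of letting the processor sit idle for the
 * many cycles the remote access takes.                                   */
Thread *pick_ready_thread(size_t current)
{
    for (size_t i = 1; i <= NUM_THREADS; i++) {
        Thread *t = &threads[(current + i) % NUM_THREADS];
        if (!t->waiting_on_remote)
            return t;
    }
    return NULL;               /* all threads stalled: the processor waits */
}
```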
Traditionally, cache coherency is controlled by using a multi-drop bus to interconnect the plurality of processors and the remote memory, as disclosed by Wilson, Jr. et al in U.S. Pat. No. 4,755,930, “Hierarchical Cache Memory System and Method”. Using a multi-drop bus, cache updating is a rather simple operation. Since the bus drives all processors simultaneously, each processor can “snoop” the bus for store operations to remote memory. Anytime a variable is stored to remote memory, each processor “snoops” the store operation by capturing the address of remote memory being written. It then searches its local caches to determine whether a copy of that variable is present. If it is, the variable is replaced or invalidated. If it is not, no action is taken.
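A sketch of the snooping logic described above, assuming an illustrative direct-mapped cache structure; here the snooping processor invalidates its copy, though, as noted, it could instead replace it with the new value:

```c
#include <stdbool.h>
#include <stdint.h>

#define LINE_SIZE 64
#define NUM_LINES 256

/* Illustrative cache bookkeeping; structure names are assumptions. */
typedef struct {
    bool     valid;
    uint64_t tag;              /* shared-memory line held in this slot */
} SnoopLine;

typedef struct {
    SnoopLine lines[NUM_LINES];
} SnoopCache;

/* Every processor on the multi-drop bus sees every store to shared memory.
 * When a store address goes by, each processor checks its own cache and,
 * if it holds a copy of that line, invalidates it.                        */
void snoop_store(SnoopCache *c, uint64_t store_addr)
{
    uint64_t   line_no = store_addr / LINE_SIZE;
    SnoopLine *line    = &c->lines[line_no % NUM_LINES];

    if (line->valid && line->tag == line_no)
        line->valid = false;   /* stale copy: invalidate it                */
    /* otherwise the line is not cached here and no action is taken        */
}
```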
Cache coherency is not so easy over networks. This is because a network cannot be snooped. A network establishes multiple connections at any time; however, each connection is between two of the plurality of nodes. Therefore, except for the two nodes involved in the transfer of data, the other nodes do not see the data and cannot snoop it. It is possible to construct a network that operates only in broadcast mode, so that every processor sees every data transfer in the system. J. Sandberg teaches this approach in U.S. Pat. No. 5,592,625, “Apparatus for Providing Shared Virtual Memory Among Interconnected Computer Nodes with Minimal Processor Involvement”. Sandberg uses only writes over the network to broadcast any change in data to all nodes, causing all nodes to update the changed variable to its new value. Sandberg does not invalidate or read data over the network, as his solution assumes that each node has a full copy of all memory and there is never a need to perform a remote read over the network. Sandberg's write operation over the network to update the variables at all nodes negates the need for invalidation because he opts to replace instead of invalidate. This defeats the major advantage of a network over a bus; i.e., the capability to perform many transfers in parallel is lost since only one broadcast is allowed in the network at a time. Thus, Sandberg's approach reduces the network to having the performance of a serial bus and restricts it to performing only serial transfers—one transfer at a time. This effectively negates the parallel nature of the system and makes it of less value.
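The essence of Sandberg's write-update approach, as described above, can be sketched as follows; the send primitive and the constants are placeholders for illustration, and the point is that every store occupies the network with a full broadcast:

```c
#include <stdint.h>

#define NUM_NODES 8

/* Stand-in for the adapter's message-send primitive (assumed, not from the
 * patent).  In a broadcast-only network, only one broadcast can be in
 * flight at a time, so these sends serialize all stores in the system.   */
static void net_send_update(unsigned dest, uint64_t addr, uint64_t value)
{
    (void)dest; (void)addr; (void)value;   /* stub for illustration        */
}

/* Write-update over the network: every store is pushed to every other
 * node so that each node's full copy of memory stays current.  No
 * invalidations and no remote reads are ever needed, but every single
 * write ties up the whole network.                                        */
void broadcast_store(unsigned my_node, uint64_t addr, uint64_t value)
{
    for (unsigned n = 0; n < NUM_NODES; n++) {
        if (n != my_node)
            net_send_update(n, addr, value);
    }
}
```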
A further problem with SMP systems is that they experience performance degradation when being scaled to systems having many nodes. Thus, state-of-the-art SMP systems typically use only a small number of nodes. This typical approach is taught by U.S. Pat. No. 5,537,574, “Sysplex Shared Data Coherency Method” by Elko et al, and allows shared memory to be distributed across several nodes with each node implementing a local cache. Cache coherency is maintained by a centralized global cache and directory, which controls the read and store of data and instructions across all of the distributed and shared memory. No network is used; instead, each node has a unique tail to the centralized global cache and directory, which controls the transfer of all global data and tracks the cache coherency of the data. This method works well for small systems but becomes unwieldy for middle or large scale parallel processors, as a centralized function causes serialization and defeats the parallel nature of SMP systems.
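A generic sketch of a centralized directory of the kind described above, tracking which nodes hold a copy of each line; the entry layout and routine names are illustrative and do not reproduce Elko's specific scheme:

```c
#include <stdint.h>

#define NUM_NODES     8
#define NUM_DIR_LINES 4096      /* toy size; a real directory covers every shared line */

/* One entry per shared-memory line, recording which nodes hold a copy. */
typedef struct {
    uint8_t sharers;            /* bit n set => node n caches this line   */
} DirEntry;

static DirEntry directory[NUM_DIR_LINES];

/* Every read of a line is reported to the central directory. */
void dir_record_read(uint64_t line, unsigned reader_node)
{
    directory[line % NUM_DIR_LINES].sharers |= (uint8_t)(1u << reader_node);
}

/* On a store, the directory knows exactly which other nodes hold stale
 * copies; the caller notifies those nodes and the writer becomes the only
 * recorded holder.  Because every line's traffic funnels through this one
 * structure, the directory itself becomes a serialization point as the
 * node count grows.                                                       */
uint8_t dir_sharers_to_notify(uint64_t line, unsigned writer_node)
{
    DirEntry *e     = &directory[line % NUM_DIR_LINES];
    uint8_t   stale = (uint8_t)(e->sharers & ~(1u << writer_node));

    e->sharers = (uint8_t)(1u << writer_node);
    return stale;
}
```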
A similar system having a centralized global cache and directory is disclosed in U.S. Pat. No. 5,537,569, “Multiprocessor System Utilizing a Directory Memory and Including Grouped Processing Elements Each Having Cache” by Y. Masubuchi. Masubuchi teaches a networked system where a centralized global cache and directory is attached to one node of the network. On the surface, Masubuchi seems to have a more general solution than that taught by Elko in U.S. Pat. No. 5,537,574, because Masubuchi includes a network for
