Clustered computer system with deadlock avoidance

Electrical computers and digital processing systems: memory – Storage accessing and control – Shared memory area

Reexamination Certificate


Details

C711S124000, C711S130000, C711S147000, C711S148000

Reexamination Certificate

active

06738872

ABSTRACT:

FIELD OF THE INVENTION
This invention relates to computer systems, and particularly to one having a remote resource management system that provides deadlock avoidance among a plurality of clusters of symmetric multiprocessors (SMPs).
These co-pending applications and the present application are owned by one and the same assignee, International Business Machines Corporation of Armonk, N.Y.
The descriptions set forth in these co-pending applications are hereby incorporated into the present application by this reference.
Trademarks: S/390 and IBM are registered trademarks of International Business Machines Corporation, Armonk, N.Y., U.S.A. Other names such as z900, e(logo)Server may be registered trademarks or product names of International Business Machines Corporation or other companies.
BACKGROUND OF THE INVENTION
Today's e-business environment places great demands on the computer systems that drive its infrastructure. This is especially true in the areas of system performance and availability, due in large part to the increasing amount of data sharing and transaction processing inherent in large system applications. Another aspect of the e-business infrastructure is the unpredictability of the workloads, which requires the underlying computer systems to be highly scalable. However, the importance of additional performance and scalability must always be tempered by the cost of the systems.
Historically, system architects have used various means to achieve high performance in large tightly coupled symmetric multiprocessor (SMP) computer systems. These approaches range from coupling individual processors or processor clusters via a single shared system bus, to coupling processors together in clusters that communicate through a cluster-to-cluster interface, to a centrally interconnected network in which parallel systems built around a large number of processors (e.g. 32 to 1024) are interconnected via a central switch (e.g. a crossbar switch).
The shared bus method usually provides the most cost efficient system design since a single bus protocol can service multiple types of resources. Furthermore, additional processors, clusters or peripheral devices can be attached economically to the bus to grow the system. However, in large systems the congestion on the system bus coupled with the arbitration overhead tends to degrade overall system performance and yield low SMP efficiency. These problems can be formidable for symmetric multiprocessor systems employing numerous processors, especially if they are running at frequencies that are two to four times faster than the supporting memory subsystem.
The centrally interconnected system usually offers the advantage of equal latency to shared resources for all processors in the system. In an ideal system, equal latency allows multiple applications, or parallel threads within an application, to be distributed among the available processors without any foreknowledge of the system structure or memory hierarchy. These types of systems are generally implemented using one or more large crossbar switches to route data between the processors and memory. The underlying design often translates into large pin packaging requirements and the need for expensive component packaging. In addition, it can be difficult to implement an effective shared cache structure.
The tightly coupled clustering method serves as the compromise solution. In this application, the term cluster refers to a collection of processors sharing a single main memory, in which any processor in the system can access any portion of the main memory regardless of its affinity to a particular cluster. Unlike Non-Uniform Memory Access (NUMA) architectures, the clusters referred to in our examples utilize dedicated hardware to maintain data coherency between the memory and the hierarchical caches located within each cluster, thus presenting a unified single image to the software, devoid of any memory hierarchy or physical partitions such as memory bank interleaves. One advantage of these systems is that the tightly coupled nature of the processors within a cluster provides excellent performance when the data remains in close proximity to the processors that need it, such as when the data resides in a cluster's shared cache or in the memory bank interleaves attached to that cluster. In addition, it usually leads to more cost-efficient packaging when compared to the large N-way crossbar switches found in the central interconnection systems. However, the clustering method can lead to poor performance if processors frequently require data from other clusters and the ensuing latency is significant, or if the bandwidth is inadequate.
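To make the affinity point concrete, the following minimal sketch (not taken from the patent text; the address decoding, the cache-hit test, and the cluster numbering are illustrative assumptions) shows how a cluster's controller might classify a processor's request as a local shared-cache hit, an access to an attached memory interleave, or a cross-cluster fetch, with the last case incurring the longest latency.

/*
 * Hypothetical sketch, not taken from the patent text: how a cluster's
 * controller might classify a processor's memory request.  The address
 * decoding, the cache-hit test, and the cluster numbering are
 * illustrative assumptions only.
 */
#include <stdbool.h>
#include <stdio.h>

enum source { LOCAL_SHARED_CACHE, LOCAL_MEMORY_INTERLEAVE, REMOTE_CLUSTER };

struct request {
    unsigned long addr;        /* physical address requested by a processor */
    int requesting_cluster;    /* cluster the processor belongs to */
};

/* Stand-ins for real directory and cache lookups. */
static bool hits_local_shared_cache(const struct request *r) { return (r->addr & 0xFF) == 0; }
static int  home_cluster(const struct request *r)            { return (int)((r->addr >> 28) & 1); }

/* Decide where the data will come from; the further away, the longer the latency. */
static enum source route(const struct request *r)
{
    if (hits_local_shared_cache(r))
        return LOCAL_SHARED_CACHE;           /* best case: data is already close by  */
    if (home_cluster(r) == r->requesting_cluster)
        return LOCAL_MEMORY_INTERLEAVE;      /* memory bank attached to this cluster */
    return REMOTE_CLUSTER;                   /* cross-cluster fetch: the costly case */
}

int main(void)
{
    static const char *names[] = { "local shared cache", "local memory interleave", "remote cluster" };
    struct request r = { 0x10000040UL, 0 };  /* misses the cache, homed on cluster 1 */
    printf("request 0x%lx from cluster %d served by: %s\n",
           r.addr, r.requesting_cluster, names[route(&r)]);
    return 0;
}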
The other important aspect of today's large systems is reliability and availability, which are paramount in a web-based e-business. Thus, it is not uncommon for such systems to incorporate mechanisms to transfer workloads from a failing processor to another processor, take failing memory off line, and balance workloads among the clusters to ensure the systems are available 24 hours per day, 7 days per week. However, in a multi-node system structure, the potential exists for multiple processors or I/O devices to simultaneously request the same block of data to be transferred between the clusters, which can lead to situations where resources on different clusters deadlock against each other, thereby hanging the system.
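As an illustration of the condition described above (a hypothetical sketch, not taken from the patent; the resource names and the tiny waits-for table are assumptions), consider two clusters whose controllers each hold one contested cache line while waiting for the line held by the other. Following the waits-for relation leads back to the starting resource, which is exactly the circular wait that hangs the system.

/*
 * Minimal illustration (not from the patent): a circular wait between
 * resources on two clusters.  Controller 0 holds line A and waits for
 * line B; controller 1 holds line B and waits for line A.  Following
 * the waits-for edges returns to the starting node, i.e. a deadlock.
 */
#include <stdio.h>

#define NRES 2

/* waits_for[i] = index of the resource that the holder of resource i is
 * blocked on, or -1 if its holder is not waiting on anything. */
static int waits_for[NRES] = { 1, 0 };   /* A waits for B, B waits for A */

static int has_cycle(int start)
{
    int cur = start;
    for (int steps = 0; steps < NRES + 1; steps++) {
        cur = waits_for[cur];
        if (cur < 0)
            return 0;        /* chain ends: no deadlock along this path */
        if (cur == start)
            return 1;        /* came back to the start: circular wait   */
    }
    return 0;
}

int main(void)
{
    printf("cross-cluster deadlock detected: %s\n",
           has_cycle(0) ? "yes" : "no");
    return 0;
}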
The use of clusters of microprocessors is a rapidly growing approach to providing unprecedented overall system performance. However, in symmetric multiprocessing (SMP) computer systems, where each processor has equal access to a single shared main memory, many techniques are used to improve system performance by reducing or hiding memory access latencies or by maintaining a high degree of concurrent operation. These techniques often create conditions that can result in a cross-cluster deadlock. Because this area is still relatively immature compared with other areas of computer hardware design, most of the prior art does not comparably address the aspects taught by the present invention.
U.S. Pat. No. 6,073,182, entitled Method of Resolving Deadlocks Between Competing Requests in a Multiprocessor Using Global Hang Pulse Logic, describes a method of deadlock avoidance using a single technique known as Fast Hang Quiesce. The method taught by that patent primarily targets the processor and I/O controllers within a single cluster (or node) of the System Controller (SC) described in the preferred embodiment of the present invention. Our invention teaches several improvements regarding deadlock avoidance employing a plurality of techniques, one of which contemplates embodying that art within our invention to expand its capability to cover cross-cluster deadlocks.
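For orientation only, the following sketch shows one possible shape of a hang-pulse watchdog of the kind the cited patent's title suggests. The threshold, the per-requester bookkeeping, and the quiesce action are assumptions made here for illustration; they should not be read as the patented logic or as the mechanism of the present invention.

/*
 * Hypothetical hang-pulse style watchdog, loosely inspired by the
 * "fast hang quiesce" idea discussed above.  Thresholds, names and the
 * quiesce action are illustrative assumptions only.
 */
#include <stdbool.h>
#include <stdio.h>

#define HANG_THRESHOLD 3   /* pulses a request may see before it is suspect */

struct requester {
    const char *name;
    bool        active;      /* has an outstanding request                    */
    int         pulses_seen; /* hang pulses elapsed since the request began   */
};

/* Called on every global hang pulse.  Returns true if any requester has
 * waited too long, in which case the controller would quiesce new work
 * so that in-flight operations can drain and the hang can clear. */
static bool on_hang_pulse(struct requester *reqs, int n)
{
    bool quiesce = false;
    for (int i = 0; i < n; i++) {
        if (!reqs[i].active)
            continue;
        if (++reqs[i].pulses_seen >= HANG_THRESHOLD) {
            printf("%s exceeded %d hang pulses: raise quiesce\n",
                   reqs[i].name, HANG_THRESHOLD);
            quiesce = true;
        }
    }
    return quiesce;
}

int main(void)
{
    struct requester reqs[] = {
        { "processor fetch", true,  0 },
        { "I/O store",       false, 0 },
    };
    for (int pulse = 1; pulse <= 4; pulse++)
        if (on_hang_pulse(reqs, 2))
            printf("pulse %d: blocking new requests until traffic drains\n", pulse);
    return 0;
}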
U.S. Pat. No. 5,224,100, entitled Routing Technique for a Hierarchical Interprocessor-Communication Network Between Massively-Parallel Processors, describes a routing technique for a massively parallel single instruction-multiple data (SIMD) system with multilevel hierarchical nodes arranged in clusters. Although this invention teaches a method of deadlock avoidance, it is achieved within a special purpose apparatus designed to perform the single task of transferring data packets from a source processor to a receiving processor. On the other hand, the present invention provides a means of deadlock avoidance in a complex SMP computer system which entails performing many types of operations, such as concurrent data accesses from main memory, shared caches, and I/O devices, as well as memory storage accesses and cache coherency operations.
U.S. Pat. No. 4,754,398, entitled System for Multiprocessor Communication Using Local and Common Semaphore and Information Registers, also teaches a method of deadlock detection, but it is limited to a single operation analogous to an I/O Test and Set operation in the present invention. The method described therein is achieved through the use of dedicated signaling among all the processors in the system. Such an implementation is not p
