Highly-scalable parallel processing computer system...

Electrical computers and digital data processing systems: input/output – Input/output data processing – Peripheral adapting

Reexamination Certificate


Details

Classification: C714S043000, C714S044000
Kind: Reexamination Certificate
Status: active
Patent number: 06247077

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of Invention
The present invention relates generally to computing systems, and more particularly, to a highly scalable parallel processing system in which storage nodes and compute nodes are network peers.
2. Description of Related Art
Technological evolution often results from a series of seemingly unrelated technical developments. While these unrelated developments might be individually significant, when combined they can form the foundation of a major technology evolution. Historically, there has been uneven technology growth among components in large complex computer systems, including, for example, (1) the rapid advance in central processing unit (CPU) performance relative to disk I/O performance, (2) evolving internal CPU architectures, and (3) interconnect fabrics.
Over the past ten years, disk I/O performance has been growing at a much slower rate than that of the compute node. CPU performance has increased at a rate of 40% to 100% per year, while disk seek times have improved only 7% per year. If this trend continues as expected, the number of disk drives that a typical server node can drive will rise to the point where disks become a dominant component, in both quantity and value, in most large systems. This phenomenon has already manifested itself in existing large-system installations.
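The divergence described above compounds quickly. As a rough illustration (the annual rates come from the passage above; the ten-year horizon and the helper function are assumptions for the sketch, not part of the patent):

```python
def compound_growth(annual_rate: float, years: int) -> float:
    """Total growth factor after compounding an annual rate for a number of years."""
    return (1.0 + annual_rate) ** years

# Rates from the discussion above: CPU ~40%/year (low end), disk seek ~7%/year.
cpu_gain = compound_growth(0.40, 10)   # roughly 28.9x over a decade
disk_gain = compound_growth(0.07, 10)  # roughly 2.0x over the same decade

# Even at the conservative CPU rate, the CPU/disk gap widens by over an
# order of magnitude in ten years.
print(f"CPU: {cpu_gain:.1f}x, disk: {disk_gain:.1f}x, gap: {cpu_gain / disk_gain:.1f}x")
```

At the 100%/year end of the CPU range the gap is far larger still, which is why the text treats disks as the emerging dominant component.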
Uneven performance scaling is also occurring within the CPU. To improve CPU performance, CPU vendors are employing a combination of clock speed increases and architectural changes. Many of these architectural changes are proven technologies leveraged from the parallel processing community. These changes can create unbalanced performance, leading to less than expected performance increases. A simple example: the rate at which a CPU can vector interrupts is not scaling at the same rate as basic instruction execution. Thus, system functions that depend on interrupt performance (such as I/O) are not scaling with compute power.
Interconnect fabrics also demonstrate uneven technology growth characteristics. For years, they have hovered around the 10-20 MB/sec performance level. Over the past year, there have also been major leaps in bandwidth to 100 MB/sec (and greater) levels. This large performance increase enables the economical deployment of large multi-processor systems.
This uneven performance negatively affects application architectures and system configuration options. For example, with respect to application performance, attempts to increase the workload to take advantage of the performance improvement in some part of the system, such as increased CPU performance, are often hampered by the lack of equivalent performance scaling in the disk subsystem. While the CPU could generate twice the number of transactions per second, the disk subsystem can handle only a fraction of that increase. The CPU is perpetually waiting for the storage system. The overall impact of uneven hardware performance growth is that application performance is experiencing an increasing dependence on the characteristics of specific workloads.
Uneven growth in platform hardware technologies also creates another serious problem: a reduction in the number of available options for configuring multi-node systems. A good example is the way the software architecture of a TERADATA® four-node clique is influenced by changes in the technology of the storage interconnects. The TERADATA® clique model expects uniform storage connectivity among the nodes in a single clique; each disk drive can be accessed from every node. Thus, when a node fails, the storage dedicated to that node can be divided among the remaining nodes. The uneven growth in storage and node technology restricts the number of disks that can be connected per node in a shared storage environment. This restriction is created by the number of drives that can be connected to an I/O channel and the physical number of buses that can be connected in a four-node shared I/O topology. As node performance continues to improve, the number of disk spindles connected per node must increase to realize the performance gain.
Cluster and massively parallel processing (MPP) designs are examples of multi-node system designs which attempt to solve the foregoing problems. Clusters suffer from limited expandability, while MPP systems require additional software to present a sufficiently simple application model (in commercial MPP systems, this software is usually a DBMS). MPP systems also need a form of internal clustering (cliques) to provide very high availability. Both solutions still create challenges in the management of the potentially large number of disk drives, which, being electromechanical devices, have fairly predictable failure rates. Issues of node interconnect are exacerbated in MPP systems, since the number of nodes is usually much larger. Both approaches also create challenges in disk connectivity, again fueled by the large number of drives needed to store very large databases.
The uneven technology growth described above requires a fundamentally different storage connectivity model: one that allows workload scaling to match technology improvements. The present invention satisfies that need.
SUMMARY OF THE INVENTION
The present invention describes a highly-scalable parallel processing computer system architecture. The parallel processing system comprises a plurality of compute nodes for executing applications, a plurality of I/O nodes, each communicatively coupled to a plurality of storage resources, and an interconnect fabric providing communication between any of the compute nodes and any of the I/O nodes. The interconnect fabric comprises a network for connecting the compute nodes and the I/O nodes, the network comprising a plurality of switch nodes arranged into more than g(log_b N) switch node stages, wherein b is a total number of switch node input/output ports and g(x) indicates a ceiling function providing the smallest integer not less than the argument x, the switch node stages thereby providing a plurality of paths between any network input port and network output port. The switch node stages are configured to provide a plurality of bounceback points logically differentiating between switch nodes that load balance messages through the network and switch nodes that direct messages to receiving processors.
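The stage-count bound above can be computed directly. The helper below is a hypothetical sketch, not code from the patent; it counts the minimum number of stages g(log_b N) with integer arithmetic to sidestep floating-point log precision issues. The fabric described in the summary uses more than this minimum so that multiple paths exist between ports.

```python
def min_switch_stages(n_ports: int, b: int) -> int:
    """Smallest integer s with b**s >= n_ports, i.e. the ceiling of log_b(n_ports).

    n_ports: total number of network input/output ports (N in the text).
    b: total number of input/output ports per switch node.
    """
    stages, capacity = 0, 1
    while capacity < n_ports:
        capacity *= b
        stages += 1
    return stages

# Example: a 4096-port network built from 8-port switch nodes needs at
# least 4 stages, since 8**4 = 4096.
print(min_switch_stages(4096, 8))  # prints 4
```

Using more stages than this minimum is what creates the redundant paths, and the bounceback points partition those stages into a load-balancing half and a delivery half.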


REFERENCES:
patent: 4414620 (1983-11-01), Tsuchimoto et al.
patent: 4982187 (1991-01-01), Goldstein et al.
patent: 5014192 (1991-05-01), Mansfield et al.
patent: 5148540 (1992-09-01), Beardsley et al.
patent: 5202985 (1993-04-01), Goyal
patent: 5239643 (1993-08-01), Blount et al.
patent: 5303383 (1994-04-01), Neches et al.
patent: 5398334 (1995-03-01), Topka et al.
patent: 5453978 (1995-09-01), Sethu et al.
patent: 5522046 (1996-05-01), McMillen et al.
patent: 5522077 (1996-05-01), Cuthbert et al.
patent: 5581765 (1996-12-01), Munroe et al.
patent: 5598408 (1997-01-01), Nickolls et al.
patent: 5630125 (1997-05-01), Zellweger
patent: 5634015 (1997-05-01), Chang et al.
patent: 5640596 (1997-06-01), Takamoto et al.
patent: 5671441 (1997-09-01), Glassen et al.
patent: 5678038 (1997-10-01), Dockter et al.
patent: 5699403 (1997-12-01), Ronnen
patent: 5706347 (1998-01-01), Burke et al.
patent: 5745895 (1998-04-01), Bingham et al.
patent: 5778395 (1998-07-01), Whiting et al.
patent: 5808911 (1998-09-01), Tucker et al.
patent: 5812793 (1998-09-01), Shakib et al.
patent: 5832487 (1998-11-01), Olds et al.
patent: 5838659 (1998-11-01), Kainulainen
patent: 5867679 (1999-02-01), Tanaka et al.
patent: 5872850 (1999-02-01), Klein et al.
patent: 5884190 (1999-03-01), Ramanan et al.
patent: 5887138 (1998-07-01), Hagersten et al.
patent: 5917730 (1999-06-01), Rittie et al.
patent: 5940592 (1999-08-01), Ioki et al.
patent: 0 365 115 (1990-04-01), None
patent: 0 560 343 A1 (1993-09-01), None
