Electrical computers and digital processing systems: support – Multiple computer communication using cryptography – Protection at a particular protocol layer
Reexamination Certificate
1998-02-06
2001-07-03
Lee, Thomas (Department: 2787)
Electrical computers and digital processing systems: support
Multiple computer communication using cryptography
Protection at a particular protocol layer
Reexamination Certificate
active
06256740
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of Invention
The present invention relates generally to computing systems, and more particularly, to a method for providing a name service for storage in a highly configurable multi-node processing system.
2. Description of Related Art
Technological evolution often results from a series of seemingly unrelated technical developments. While these unrelated developments might be individually significant, when combined they can form the foundation of a major technology evolution. Historically, there has been uneven technology growth among components in large complex computer systems, including, for example, (1) the rapid advance in central processing unit (CPU) performance relative to disk I/O performance, (2) evolving internal CPU architectures, and (3) evolving interconnect fabrics.
Over the past ten years, disk I/O performance has been growing at a much slower rate overall than that of the node. CPU performance has increased at a rate of 40% to 100% per year, while disk seek times have improved only 7% per year. If this trend continues as expected, the number of disk drives that a typical server node can drive will rise to the point where disk drives become a dominant component, in both quantity and value, in most large systems. This phenomenon has already manifested itself in existing large-system installations. Uneven performance scaling is also occurring within the CPU. To improve CPU performance, CPU vendors are employing a combination of clock speed increases and architectural changes. Many of these architectural changes are proven technologies leveraged from the parallel processing community. These changes can create unbalanced performance, leading to less-than-expected performance increases. A simple example: the rate at which a CPU can vector interrupts is not scaling at the same rate as basic instructions. Thus, system functions that depend on interrupt performance (such as I/O) are not scaling with compute power.
Interconnect fabrics also demonstrate uneven technology growth characteristics. For years, their performance hovered around the 10-20 MB/sec level; over the past year, however, bandwidth has made major leaps to 100 MB/sec (and greater). This large performance increase enables the economical deployment of multi-processing systems.
This uneven performance negatively affects application architectures and system configuration options. For example, attempts to increase the workload to take advantage of a performance improvement in one part of the system, such as increased CPU performance, are often hampered by the lack of equivalent performance scaling in the disk subsystem. Although the CPU could generate twice the number of transactions per second, the disk subsystem can handle only a fraction of that increase, so the CPU ends up perpetually waiting for the storage system. The overall impact of uneven hardware performance growth is that application performance depends increasingly on the characteristics of specific workloads.
Uneven growth in platform hardware technologies also creates other serious problems, such as a reduction in the number of available options for configuring multi-node systems. A good example is the way the software architecture of a TERADATA® four-node clique is influenced by changes in the technology of the storage interconnects. The TERADATA® clique model expects uniform storage connectivity among the nodes in a single clique; each disk drive can be accessed from every node. Thus, when a node fails, the storage dedicated to that node can be divided among the remaining nodes. The uneven growth in storage and node technology restricts the number of disks that can be connected per node in a shared storage environment. This restriction is created by the number of drives that can be connected to an I/O channel and the number of physical buses that can be connected in a four-node shared I/O topology. As node performance continues to improve, the number of disk spindles connected per node must be increased to realize the performance gain.
Cluster and massively parallel processing (MPP) designs are examples of multi-node system designs which attempt to solve the foregoing problems. Clusters suffer from limited expandability, while MPP systems require additional software to present a sufficiently simple application model (in commercial MPP systems, this software is usually a DBMS). MPP systems also need a form of internal clustering (cliques) to provide very high availability. Both solutions still create challenges in the management of the potentially large number of disk drives, which, being electromechanical devices, have fairly predictable failure rates. Issues of node interconnect are exacerbated in MPP systems, since the number of nodes is usually much larger. Both approaches also create challenges in disk connectivity, again fueled by the large number of drives needed to store very large databases.
The foregoing problems are ameliorated in an architecture wherein storage entities and compute entities, communicating over a high-performance connectivity fabric, act as architectural peers. This architecture allows increased flexibility in managing storage and compute resources. However, this flexibility presents some unique system management problems. One such problem is naming the storage extents to be accessed by the processors. One potential solution to this problem is a centralized naming service which generates and assigns names to all storage extents. However, such a system is vulnerable to single-point failures, and is contrary to the flexible expandability offered by a peer-to-peer multi-node system. The present invention solves this problem by providing the autonomous generation of a globally unique name for a storage extent (which can comprise data values or allocated blocks of data) by each of the storage nodes.
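As an illustrative sketch only (the names, language, and data structures below are hypothetical and not taken from the patent), each storage node could mint its own globally unique extent names by pairing a node-wide unique identifier with a locally unique counter, so that no central name server is needed and two nodes can never produce the same name:

```python
import itertools
import uuid


class StorageNode:
    """Hypothetical storage node that names its own extents.

    Each node owns a globally unique node identifier; extent names are
    formed by pairing that identifier with a locally unique sequence
    number, so naming is fully autonomous and collision-free.
    """

    def __init__(self):
        # Globally unique identifier for this I/O node (e.g. derived from
        # a burned-in hardware ID; a random UUID stands in here).
        self.node_id = uuid.uuid4()
        # Locally unique, monotonically increasing extent counter.
        self._extent_counter = itertools.count()
        self.extents = {}

    def create_extent(self, blocks):
        """Allocate a data extent and bind a globally unique name to it."""
        local_id = next(self._extent_counter)
        global_name = f"{self.node_id}:{local_id}"
        self.extents[global_name] = blocks
        return global_name


# Two independent nodes can name extents concurrently without collisions.
node_a, node_b = StorageNode(), StorageNode()
print(node_a.create_extent(blocks=range(0, 1024)))
print(node_b.create_extent(blocks=range(0, 2048)))
```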
SUMMARY OF THE INVENTION
The present invention describes a method and apparatus for communicating data in a parallel processing computer architecture. The method comprises the steps of generating, in the I/O node, a globally unique ID for a data extent physically stored in the plurality of storage devices, binding the globally unique ID to the data extent, and exporting the globally unique ID to the compute nodes via the interconnect fabric. In one embodiment, the globally unique ID is generated from a globally unique I/O node identifier and a locally unique data extent identifier. A local entry point is generated in the compute node for the data associated with the globally unique ID, thereby presenting the globally unique ID as a device point in the compute node. In one embodiment, the step of exporting the globally unique ID to the compute nodes comprises the steps of receiving a message from the compute node comprising a signature that securely identifies the compute node to the I/O node, authenticating the source of the message using the signature, and transmitting the globally unique ID, comprising data specifying local access rights to the data it represents, from the I/O node to the compute node.
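A minimal sketch of this export step follows, again with hypothetical names: an HMAC over a shared key stands in for whatever signature scheme an implementation might actually use. The compute node signs its request, the I/O node authenticates the signature before returning the globally unique ID together with access rights, and the compute node then records a local, device-style entry point for the extent:

```python
import hashlib
import hmac

# Shared secret standing in for whatever credential scheme an
# implementation might use to let a compute node identify itself.
SHARED_KEY = b"example-fabric-key"


def sign(message: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Produce a signature the I/O node can verify."""
    return hmac.new(key, message, hashlib.sha256).digest()


class IONode:
    def __init__(self, exported):
        # Map of globally unique extent ID -> access rights to grant.
        self.exported = exported

    def export_extent(self, compute_node_name: str, signature: bytes,
                      extent_id: str):
        """Authenticate the requester, then hand out the ID and rights."""
        expected = sign(compute_node_name.encode())
        if not hmac.compare_digest(signature, expected):
            raise PermissionError("signature does not identify the compute node")
        return extent_id, self.exported[extent_id]


class ComputeNode:
    def __init__(self, name: str):
        self.name = name
        self.entry_points = {}  # local entry points keyed by global ID

    def import_extent(self, io_node: IONode, extent_id: str):
        """Request an extent and create a local entry point for it."""
        signature = sign(self.name.encode())
        global_id, rights = io_node.export_extent(self.name, signature,
                                                  extent_id)
        # Present the global ID as a local device-style entry point.
        self.entry_points[global_id] = {"rights": rights,
                                        "path": f"/dev/extent/{global_id}"}
        return self.entry_points[global_id]


io_node = IONode(exported={"node-7:42": "read-write"})
compute = ComputeNode("compute-3")
print(compute.import_extent(io_node, "node-7:42"))
```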
REFERENCES:
patent: 5148540 (1992-09-01), Beardsley
patent: 5239643 (1993-08-01), Blount et al.
patent: 5303383 (1994-04-01), Neches et al.
patent: 5339361 (1994-08-01), Schwalm et al.
patent: 5522077 (1996-05-01), Cuthbert
patent: 5560005 (1996-09-01), Hoover
patent: 5581765 (1996-12-01), Munroe et al.
patent: 5671441 (1997-09-01), Glassen et al.
patent: 5678038 (1997-10-01), Dockter et al.
patent: 5706347 (1998-01-01), Burke
patent: 5745895 (1998-04-01), Bingham et al.
patent: 5778395 (1998-07-01), Whiting et al.
patent: 5805823 (1998-09-01), Seitz
patent: 5808911 (1998-09-01), Tucker et al.
patent: 5812793 (1998-09-01), Shakib et al.
patent: 5815793 (1998-09-01), Shakib
patent: 5832487 (1998-11-01), Olds et al.
patent: 5838659 (1998-11-01), Kainulainen
patent: 5867679 (1999-02-01), Tanaka
patent: 5872850 (1999-02-01), Klein et al.
patent: 5884090 (1999-03-01), Ramanan
patent: 5887138 (1999-03-01), Hagersten et al.
patent: 5
Chow Kit M.
Meyer Michael W.
Muller P. Keith
Gates & Cooper
Lee Thomas
NCR Corporation
Nguyen Tanh