Fault-tolerant, highly-scalable cell switching architecture

Multiplex communications – Fault recovery – Bypass an inoperative switch or inoperative element of a...

Reexamination Certificate


Details

U.S. Classifications: C370S360000, C370S387000, C370S388000, C370S395710, C709S243000, C712S012000

Status: active

Patent number: 06741552

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to cell switching architectures, particularly fault-tolerant cell switching architectures.
2. State of the Art
As changes in the field of networking and telecommunications have occurred, it has become increasingly evident that existing time division switches are inadequate for handling the bandwidth requirements of emerging cell switching technologies such as Asynchronous Transfer Mode (ATM). Cell switching involves breaking data into small, fixed-size units. A standard ATM cell has a payload of 48 bytes. A packet, by contrast, may be considerably longer and is not of fixed length. A cell switch accommodates packet data by breaking packets up into cells.
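As a concrete illustration of this segmentation step, the short sketch below (a hypothetical example, not taken from the patent) splits a variable-length packet into fixed 48-byte ATM cell payloads, zero-padding the final cell as a real segmentation layer would before transmission:

```python
ATM_CELL_PAYLOAD = 48  # payload bytes per standard ATM cell (a 5-byte header brings the full cell to 53 bytes)

def segment_packet(packet: bytes) -> list[bytes]:
    """Split a variable-length packet into fixed-size ATM cell payloads."""
    cells = []
    for offset in range(0, len(packet), ATM_CELL_PAYLOAD):
        chunk = packet[offset:offset + ATM_CELL_PAYLOAD]
        # Zero-pad the final, possibly short, chunk to a full payload.
        cells.append(chunk.ljust(ATM_CELL_PAYLOAD, b"\x00"))
    return cells

# A 130-byte packet becomes ceil(130 / 48) = 3 cells.
assert len(segment_packet(b"\xaa" * 130)) == 3
```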
New technologies are needed to provide the ultra-high-bandwidth switching capability that will be required in the near future. The challenge is to efficiently implement switches with a large number of physical ports (128 to 2K ports) operating at gigabit data rates (0.5-10 Gbps/port) and having aggregate bandwidth capacities of 0.1 to 10 terabits/second.
Current telecommunications switch systems are typically based on the crossbar, shared memory, or shared medium (e.g., bus and ring) switch architectures. While these architectures are adequate for today's networking applications, scaling them to meet future switching demands presents a formidable challenge. There are substantial engineering tradeoffs to take into consideration when deciding on a switch architecture that has to scale to over 1000 physical ports and operate at gigabit port data rates. Physical packaging issues become very important. Technologies, architectures, and systems which have worked well for a 64-port switch operating at 155 Mbps per port are impractical for a 1000-port switch operating at 1 Gbps per port. For example, both the interconnect and circuit complexity of a crossbar switch with N input/output ports grow as O(N^2), making it impractical for network sizes of 1000 ports and above. Likewise, both shared memory and shared medium architectures become impractical, if not infeasible, beyond a given switch size due to speed limitations in the sequential access of a single shared resource.
Self-routing multistage networks have also been proposed as the basis for high-performance packet networks for telecommunications in the form of ATM switches. The basic appeal of multistage interconnection networks lies in their inherent simplicity and their scalability to large numbers of ports. For example, U.S. Pat. No. 5,541,914, incorporated herein by reference, describes a class of packet-switched, extended-generalized-shuffle, self-routing multistage interconnection networks (MINs). The network provides a performance/cost trade-off between, on the one hand, the knockout switch or buffered crossbar and, on the other hand, the tandem banyan network. Multiple copies of the network may be serially cascaded back-to-back, and connected in parallel. Applications to broadband telecommunications switching are described.
MIN-based switching architectures, however, do not enjoy inherent fault tolerance. Achieving fault tolerance generally requires over-dimensioning the switch, cascading multiple MIN switching networks, etc. These solutions are complex, expensive, and inelegant.
A different problem is that of providing an interconnection network for massively parallel processing (MPP) computers having thousands of compute nodes. MPP interconnection networks, like switching networks, require high bandwidth and fault tolerance. One MPP interconnection network is that of Danny Hillis's well-known Connection Machine, described in U.S. Pat. No. 4,598,400. In the Connection Machine, a hypercube architecture is used for communication between clusters of processors. In that patent, a hybrid form of circuit and cell switching is used. On each routing cycle, an attempt is made to form a path from the source of a message packet to the destination. When successful, a message travels the entire path in a single routing cycle. In the event that a complete route is unavailable, the packet is delayed until the next routing cycle.
Some background regarding hypercubes is required for an understanding of the prior art and of the present invention.
A binary hypercube is defined in terms of graph theory. A graph is a pair G = {V, E}, where V is a set of nodes (also called vertices) and E is a set of edges connecting the nodes. In general, a graph can have any number of nodes and any number of edges, up to the number of edges in a completely connected graph, which is limited to v(v−1)/2 where v = |V| (the size of the set V). A binary hypercube is defined in the following way:
A D-dimensional binary hypercube is a graph G = {V, E}, where V is a set of 2^D nodes, each given a unique D-digit binary number as an address. An edge exists between two nodes if their addresses differ in exactly one digit.
FIG. 25 shows the first three non-trivial binary hypercubes. A 0-dimensional binary hypercube is simply a single node with no edges. FIG. 25(a) shows a 1-dimensional binary hypercube: there are two nodes and one edge connecting them. FIG. 25(b) shows a 2-dimensional binary hypercube. In this hypercube, there are four nodes and four edges. There is an edge between nodes 01 and 00 since their addresses differ only in the rightmost digit. There is no edge between nodes 10 and 01 since their addresses differ in both digits. Edges generated because addresses differ in the rightmost digit are said to be in the zeroth dimension; edges generated because addresses differ in the next digit are said to be in the first dimension; and so on. FIG. 25(c) shows a 3-dimensional binary hypercube. Hypercube addresses are constructed from right to left. Edges that exist because addresses differ in the rightmost digit are said to connect nodes in the first dimension (or the zeroth dimension if counting starts from zero); edges that exist because addresses differ in the second digit from the right are said to connect nodes in the second dimension; and so on. In the diagrams, edges in the first dimension are drawn vertically, edges in the second dimension are drawn horizontally, and edges in the third dimension are drawn diagonally.
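The adjacency rule is easy to state in code. The sketch below (illustrative only; the function name and integer-address representation are our own, not the patent's) enumerates the edges of a D-dimensional binary hypercube and labels each edge with its dimension, counting from zero at the rightmost address digit:

```python
def hypercube_edges(d: int) -> list[tuple[int, int, int]]:
    """List the edges of a d-dimensional binary hypercube.

    Nodes are the integers 0 .. 2**d - 1, read as d-digit binary
    addresses.  Two nodes share an edge exactly when their addresses
    differ in one digit, i.e. when XORing them leaves a single 1 bit;
    the position of that bit is the edge's dimension.
    """
    edges = []
    for node in range(2 ** d):
        for dim in range(d):
            neighbor = node ^ (1 << dim)  # flip one address digit
            if node < neighbor:           # record each edge only once
                edges.append((node, neighbor, dim))
    return edges

# The 2-dimensional hypercube of FIG. 25(b): four nodes, four edges.
for a, b, dim in hypercube_edges(2):
    print(f"{a:02b} -- {b:02b}  (dimension {dim})")
```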
Hypercubes have several properties. The number of edges in a D-dimensional binary hypercube is D·2^D/2: one edge per dimension ends at each node, there are 2^D nodes, and each edge has two ends (being bidirectional), so the product D·2^D counts every edge twice. The significance of this formula is that the number of edges grows proportionally to the number of nodes times the log of the number of nodes. If the distance between nodes is measured as the number of edges in the shortest path between the two nodes, then the longest distance in a D-dimensional hypercube is D. If p is the distance between two nodes, then p! is the number of shortest paths between the nodes.
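These properties are easy to spot-check numerically. The following sketch (again illustrative, not from the patent) computes the edge count and diameter from D, and counts shortest paths via the Hamming distance between two addresses:

```python
from math import factorial

def hypercube_stats(d: int) -> tuple[int, int, int]:
    """Return (nodes, edges, diameter) of a d-dimensional binary hypercube."""
    nodes = 2 ** d
    edges = d * nodes // 2  # D edges meet at each of 2**D nodes; halve to avoid double-counting
    diameter = d            # longest shortest path flips all D address digits
    return nodes, edges, diameter

def num_shortest_paths(a: int, b: int) -> int:
    """p! shortest paths, where p is the Hamming distance between a and b."""
    p = bin(a ^ b).count("1")  # number of differing address digits
    return factorial(p)        # the p digit flips can occur in any order

print(hypercube_stats(3))                # (8, 12, 3)
print(num_shortest_paths(0b000, 0b111))  # distance 3, so 3! = 6 paths
```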
Comparing hypercubes to completely connected networks, although path length is always a constant 1 in a completely connected network, a completely connected network of n nodes requires n(n−1)/2 edges, meaning the number of edges grows proportionally to the square of the number of nodes. A hypercube, as already mentioned, has many fewer edges for large n. Comparing hypercubes to sparse networks such as rings, a ring of n nodes may require only n edges, but the maximum distance between two nodes is n/2 and there are only two paths between any two nodes. In a hypercube, the maximum distance between any two nodes is the log of the number of nodes, and there are many available paths. In addition, every node in a hypercube has the same structure: there are no special nodes like the nodes at the center of a star network, where the entire network is disconnected if the center is removed.
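A small table makes this comparison concrete. The snippet below (our illustration) tabulates edge counts and worst-case distances for the complete, hypercube, and ring topologies on n nodes:

```python
from math import log2

# Edges and diameter for three topologies on n nodes (n a power of two).
for n in (16, 256, 4096):
    d = int(log2(n))
    print(f"n={n:5d}  complete: {n * (n - 1) // 2:8d} edges, diameter 1 | "
          f"hypercube: {n * d // 2:6d} edges, diameter {d:2d} | "
          f"ring: {n:5d} edges, diameter {n // 2}")
```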
Two approaches to switching in hypercubes are circuit switching and cell switching. Circuit switching typically reserves an entire path for a data flow through the hypercube for an arbitrary period of time.
