Call processing arrangement for ATM switches

Patent number: 6,563,835
Type: Reexamination Certificate (active)
Filed: 1999-02-19
Issued: 2003-05-13
Examiner: Chin, Wellington (Department: 2664)
Assignee: Lucent Technologies, Inc.
Classification: Multiplex communications – Pathfinding or routing – Switching a message which includes an address header (U.S. Class 370/395.43)
TECHNICAL FIELD
The present invention relates generally to Asynchronous Transfer Mode (ATM) switches, and, more particularly, to a call processing arrangement for ATM switches in which aspects of the call processing are performed in a distributed (rather than centralized) manner.
BACKGROUND OF THE INVENTION
In any connection-oriented communications network, a connection from the source to the destination has to be established before user data or information can be transferred on that connection. The functions performed in the switch to establish the connection are referred to in this application as “call processing”. Generally, call processing in the context of this patent application refers to the procedure that a switch follows to handle call setup and release. More specifically, call processing includes the functions of signaling, call control, and call flow (path selection).
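By way of illustration only (the following sketch is not taken from the patent), the three functions just named can be pictured as sequential steps in call setup: a signaling message is parsed, call control decides whether to admit the call, and path selection assigns a route through the switch. All type and function names in the sketch are hypothetical.

```go
// Hypothetical sketch of the call-setup sequence described above: a parsed
// signaling request is checked by call control and then given a path.
package main

import (
	"errors"
	"fmt"
)

// SetupRequest stands in for the information carried by an ATM SETUP
// signaling message (names and fields are illustrative, not the real format).
type SetupRequest struct {
	CallingParty string
	CalledParty  string
	PeakRateKbps int
}

// admit is a placeholder for call control: it accepts the call only if the
// requested peak rate fits in the bandwidth still free on the outgoing link.
func admit(req SetupRequest, freeKbps int) error {
	if req.PeakRateKbps > freeKbps {
		return errors.New("call rejected: insufficient bandwidth")
	}
	return nil
}

// selectPath is a placeholder for call flow (path selection): a real switch
// would consult routing tables; here fixed values are returned.
func selectPath(req SetupRequest) (port, vpi, vci int) {
	return 3, 0, 42
}

func main() {
	// Signaling: assume the SETUP message has already been parsed into req.
	req := SetupRequest{CallingParty: "A", CalledParty: "B", PeakRateKbps: 2000}

	if err := admit(req, 10000); err != nil { // call control
		fmt.Println(err)
		return
	}
	port, vpi, vci := selectPath(req) // path selection
	fmt.Printf("call admitted on port %d, VPI/VCI %d/%d\n", port, vpi, vci)
}
```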
The number of calls that the switch can process per second is an important performance objective for the switch design. A switch that can handle a large number of calls per second allows the network operator to generate a significant amount of revenue, given that resources are available to admit calls. With the advent of very large switch designs driven by asynchronous transfer mode (ATM) technology, the demand for call processing power has increased significantly in recent years. While widely available transmission speeds have increased by more than an order of magnitude, call processing performance has not increased nearly as much. Current ATM switch implementations can typically handle 50 to 200 calls per second. This has not even reached the call processing capacity of today's telephone exchanges, which can handle about 400 to 500 calls per second, and is far from adequate to handle the increased traffic volume evident in data networks. It has long been recognized that a major obstacle to carrying transaction-oriented data traffic over ATM is call processing delay in ATM switches. A recent study on switched IP flows reported by S. Lin and N. McKeown in “A simulation study of IP switching,” ACM SIGCOMM '97, indicated that switches supporting at least 10,000 virtual connections on each 45 Mbps switch link would see acceptable performance with today's Internet traffic. However, the volume and diversity of Internet traffic are growing rapidly. The study predicts that before long, more than 65,536 virtual connections per switch link will be required to provide adequate performance in the core of the Internet. When the average holding time for each data traffic flow or connection is short, this places a significant demand for fast connection setup and tear-down on each link. As total switching capacity increases, switches are being built with higher port density, further increasing the demand for call processing capacity for the entire switch. With today's hardware and memory technology, switching fabrics with capacities of 160 Gbps, described by K. Eng and M. A. Pashan in “Advances in shared-memory designs for gigabit ATM switching,” Bell Labs Technical Journal, Vol. 2, No. 2, pp. 175-187, Spring 1997, or 320 Gbps, described by N. McKeown, M. Izzard, A. Mekkittikul, W. Ellersick and M. Horowitz in “The Tiny Tera: A packet switch core,” IEEE Micro Magazine, January-February 1997, can be made commercially available. Thus, the growth in switching capacity itself, as well as the increased transmission bandwidth, has led to the need for dramatically increased call processing capacity. It is believed that one of the real challenges for future switch designs is to support more than 10,000 calls per second with a processing latency of 100 microseconds for call establishment.
It has been widely recognized that signaling message processing can be a significant bottleneck in call processing. One way to speed up call processing is therefore to reduce the time spent processing signaling messages. ATM signaling protocols are transmitted using tagged message formats, in which a message can be considered a sequence of interleaved tag and data fields. The tag fields define the meaning of subsequent fields. These messages are computationally expensive to decode, partly because decoding each data field requires testing one or more tag fields. Therefore, one way to improve the performance of signaling message processing is to develop efficient techniques to reduce the cost of encoding and decoding. A fast decoding technique proposed by T. Blackwell in “Fast decoding of tagged message formats,” IEEE/ACM INFOCOM '96, goes in this direction. Parallel processing, as described by D. Ghosal, T. V. Lakshman and Y. Huang in “High-speed protocol processing using parallel architectures,” IEEE INFOCOM '94, pp. 159-166, 1994, and hardware-based decoding, as described by M. Bilgic and B. Sarikaya in “Performance comparison of ASN.1 encoder/decoders using FTAM,” Computer Communications, Vol. 16, No. 4, pp. 229-240, April 1993, have also been proposed in the literature. Since one of the most processor-intensive tasks is the parsing of signaling messages, another way to speed up this process is to reduce the complexity of the signaling messages themselves, as proposed by T. Helstern in “Modification to fast SVC setup procedures,” ATM Forum Contribution 97-0521, July 1997. A new architecture for lightweight signaling has recently been proposed by G. Hjalmtysson and K. K. Ramakrishnan in “UNITE—An architecture for lightweight signaling in ATM networks,” to appear in INFOCOM '98, which uses a single ATM cell with proper coding to manage call establishment, possibly in hardware, while performing other tasks such as quality of service (QoS) negotiation in-band.
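To make the decoding cost concrete, here is a simplified sketch (not drawn from any of the works cited above) of decoding a message in a tag-length-value style: every data field can only be interpreted after its tag (and length) fields have been read and tested, which is exactly the per-field overhead that fast-decoding, parallel, and hardware-assisted approaches try to reduce. The tag constants and message layout below are illustrative and do not reproduce the actual ATM signaling encoding.

```go
// Simplified tag-length-value (TLV) decoder illustrating the cost discussed
// above: every data field requires reading and testing a tag byte (and a
// length byte) before the value itself can be interpreted.
package main

import "fmt"

// Hypothetical tag values; real ATM signaling information elements use
// identifiers standardized in the UNI signaling specification.
const (
	tagCalledParty  = 0x70
	tagCallingParty = 0x6C
	tagTrafficDesc  = 0x59
)

func decode(msg []byte) {
	i := 0
	for i+2 <= len(msg) {
		tag := msg[i]
		length := int(msg[i+1])
		if i+2+length > len(msg) {
			fmt.Println("truncated field")
			return
		}
		value := msg[i+2 : i+2+length]

		// The per-field tag test below is the work that fast-decoding
		// techniques (jump tables, precompiled decoders) try to reduce.
		switch tag {
		case tagCalledParty:
			fmt.Printf("called party:  % x\n", value)
		case tagCallingParty:
			fmt.Printf("calling party: % x\n", value)
		case tagTrafficDesc:
			fmt.Printf("traffic desc:  % x\n", value)
		default:
			fmt.Printf("unknown tag 0x%02x skipped\n", tag)
		}
		i += 2 + length
	}
}

func main() {
	// A toy message containing two tagged fields.
	decode([]byte{tagCalledParty, 2, 0x12, 0x34, tagTrafficDesc, 1, 0xFF})
}
```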
With significant advances in hardware technology over the last fifteen years, memory speed has increased 90-fold, from 450 ns to 5 ns, and CPU speed has increased 250-fold, from 1 MHz to 250 MHz. In comparison, transmission speed has increased roughly 11,000-fold, from 56 Kbps to 622 Mbps. It is evident that we are reaching the point where call processing power cannot be further improved only through the use of faster components and microprocessors. Other solutions are required.
SUMMARY OF THE INVENTION
In accordance with the present invention, call processing architectures for a connection-oriented switch such as an ATM switch are arranged such that switch performance can easily be grown as the call traffic handled by the switch increases. This makes the design “scalable”. A key element of the present invention is the distribution of call processing functionality, including some or all of the signaling, call control, and call routing (path selection) functions, to each interface module in the switch. This offloads the processing required in the centralized switch control module, which is normally a bottleneck in conventional designs, and thereby overcomes the deficiencies associated with the conventional centralized call processing architecture.
In accordance with the present invention, a connection-oriented switch such as an ATM switch includes a switch fabric and a plurality of input/output or interface modules for receiving incoming calls, applying them to the switch fabric, and routing calls that have passed through the switch fabric toward their destinations. Each interface module also has its own dedicated processor that performs some or all of the call processing tasks, off-loading that processing burden from a central control entity. The switch further includes a switch control module that performs the call processing tasks not performed in the input/output modules. Depending on the degree to which processing is distributed, three embodiments are described: a distributed signaling architecture, a distributed call control architecture, and a distributed routing architecture.
In the distributed signaling architecture, the signaling function resides in the ingress and egress modules that are part of each interface module. The call control and routing functions are performed in a centralized manner, in the switch control module.
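The sketch below is an illustrative schematic, not the patent's implementation: it records, for each of the three embodiments, which call processing functions are placed on the interface modules and which remain on the switch control module. Only the distributed signaling mapping is spelled out in the text above; the other two mappings are assumptions inferred from the embodiment names.

```go
// Illustrative sketch (not the patent's implementation) of which call
// processing functions each embodiment places on the interface modules
// versus the central switch control module.
package main

import "fmt"

// Function identifies one of the three call processing functions.
type Function int

const (
	Signaling Function = iota
	CallControl
	Routing // path selection
)

var functionNames = map[Function]string{
	Signaling:   "signaling",
	CallControl: "call control",
	Routing:     "routing",
}

// Embodiment records which functions run on the interface modules; the
// remaining functions stay on the switch control module.
type Embodiment struct {
	Name        string
	OnInterface map[Function]bool
}

func main() {
	// Only the first mapping is stated in the excerpt above; the other two
	// are assumptions inferred from the names of the embodiments.
	embodiments := []Embodiment{
		{"distributed signaling", map[Function]bool{Signaling: true}},
		{"distributed call control", map[Function]bool{Signaling: true, CallControl: true}},
		{"distributed routing", map[Function]bool{Signaling: true, CallControl: true, Routing: true}},
	}

	for _, e := range embodiments {
		fmt.Printf("%s architecture:\n", e.Name)
		for _, f := range []Function{Signaling, CallControl, Routing} {
			location := "switch control module"
			if e.OnInterface[f] {
				location = "interface module"
			}
			fmt.Printf("  %-12s -> %s\n", functionNames[f], location)
		}
	}
}
```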