Queue management with support for multicasts in an...

Multiplex communications – Pathfinding or routing – Switching a message which includes an address header

Reexamination Certificate


Details

Classification: C370S429000, C370S432000
Status: active
Patent number: 06219352

ABSTRACT:

RELATED APPLICATIONS
The present application is related to the co-pending United States patent application entitled “A Flexible Scheduler in an Asynchronous Transfer Mode (ATM) Switch,” Ser. No. 08/977,661, filed on even date herewith, Nov. 24, 1997 (hereafter “RELATED APPLICATION 1”), which is incorporated by reference herein in its entirety.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to communication networks, and more specifically to a method and apparatus for managing queues in an asynchronous transfer mode (ATM) switch to provide support for both multicast and unicast transmissions in a communication network.
2. Related Art
Different types of communication networks have evolved in the past to provide different types of services. For example, voice networks allow users to converse in a telephone conversation and data networks allow users to share vast quantities of data. In general, each type of communication network can have different requirements for providing the corresponding services. As an illustration, voice networks may need predictable bandwidth with low latencies to support voice calls while data networks may need high bandwidth in bursts to support large data transfers.
Due to such varying requirements, different types of communication networks have evolved with separate communication backbones, often implemented with very different technologies and techniques. For example, voice networks have been implemented using a technique commonly referred to as time division multiplexing, which provides fixed and predictable bandwidth for each voice channel. On the other hand, data networks (such as those based on the Internet Protocol) have been implemented to share the available bandwidth on demand. That is, any end-system of a data network can potentially use all the available bandwidth at a given instant of time, after which the bandwidth again becomes available to the other systems.
In general, having separate communication backbones for communication networks results in inefficient usage of the overall bandwidth. According to the well-known principle of ‘economy of scale’, ten servers serving a hundred customers from a single queue generally provide slower service than a thousand servers serving ten thousand customers, even though the server-to-customer ratio is the same. The larger pool is typically more efficient because any one of the many available servers can immediately serve a waiting customer, and thus keep the queue length short.
The inefficiency (due to separate communication backbones) can result in degradation of aggregate service levels or in an inability to provide more services. The problem can be exacerbated by the increasing demands being placed on the networks. In addition, the overhead of managing the separate networks may be unacceptably high due to the increased number of components in the overall system. Further, the same end-station may provide different services with varying requirements. For example, a computer system may be used for diverse applications such as data sharing, telephone conversations, and video conferencing.
Accordingly, the communications industry has been migrating towards a shared communications backbone for all the different types of services. Asynchronous transfer mode (ATM) is one standard that allows such a shared communication backbone. In general, an ATM network includes several ATM switches connecting several end-systems. Each switch includes several ports to connect to end-systems and other switches. A switch receives a cell on one port and forwards the cell on another port to provide a connection between (or among) the end-systems. A cell is the basic unit of communication in ATM networks. The cell size is designed to be small (fifty-three bytes), and such a small size enables each cell to be served quickly. This quickness, in turn, enables support for diverse types of applications (voice, video, data), with each application receiving service according to its own requirements.
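For concreteness, the sketch below lays out the fixed fifty-three-byte ATM cell: a 5-byte header carrying the routing identifiers that switches use, followed by a 48-byte payload. The field breakdown follows the standard UNI cell format rather than anything specific to this patent.

```c
#include <stdint.h>

#define ATM_PAYLOAD_BYTES 48   /* fixed payload size per cell */

/* Sketch of one 53-byte ATM cell (standard UNI format):
 * 5 header bytes plus 48 payload bytes on the wire.        */
typedef struct {
    uint8_t  gfc;                        /* Generic Flow Control (4 bits)        */
    uint16_t vpi;                        /* Virtual Path Identifier (8 bits)     */
    uint16_t vci;                        /* Virtual Channel Identifier (16 bits) */
    uint8_t  pti;                        /* Payload Type Indicator (3 bits)      */
    uint8_t  clp;                        /* Cell Loss Priority (1 bit)           */
    uint8_t  hec;                        /* Header Error Control (8 bits)        */
    uint8_t  payload[ATM_PAYLOAD_BYTES]; /* user data                            */
} atm_cell_t;
```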
To communicate with another end-station, an end-station of a communication network (such as an ATM network) usually ‘opens a connection’. Opening a connection generally refers to determining a sequence of switches that provides a communication path between the two end-stations with any specific service levels required for the communication. The desired service levels may include required bit rates, latencies, etc., which make each connection suitable for the particular service it is used for.
Once a connection is established, the end systems communicate with each other using cells in an ATM environment. Each cell has header information which helps identify the communication path. Using the header information, the switches forward each cell to the destination end-system. Each cell is typically forwarded according to the service levels the corresponding connection is set up with.
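As an illustration only (the patent text does not describe its lookup mechanism), the VPI/VCI identifiers in each cell header can be resolved to an output port through a connection table populated when the connection is opened. The table layout and function below are hypothetical.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical connection-table entry, filled in at connection setup:
 * it maps an incoming (port, VPI, VCI) to the outgoing port and the
 * VPI/VCI values rewritten into the cell header on the way out.      */
typedef struct {
    uint8_t  in_port;
    uint16_t in_vpi,  in_vci;
    uint8_t  out_port;
    uint16_t out_vpi, out_vci;
} vc_entry_t;

/* Linear search kept for clarity; a real switch would use a hashed
 * or hardware-assisted lookup.                                       */
static const vc_entry_t *vc_lookup(const vc_entry_t *table, size_t n,
                                   uint8_t in_port, uint16_t vpi, uint16_t vci)
{
    for (size_t i = 0; i < n; i++) {
        if (table[i].in_port == in_port &&
            table[i].in_vpi  == vpi &&
            table[i].in_vci  == vci)
            return &table[i];
    }
    return NULL;  /* no connection established for this cell */
}
```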
A cell switch (i.e., an ATM switch) typically includes memory to store a large volume of cells received on different ports. The memory provides temporary storage for each cell while the cells await their turn for transmission to the next switch (or end-system) in the communication path. Such a wait typically arises because, for example, numerous cells from several ports may be awaiting transmission on the same port.
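A minimal sketch of the temporary buffering described above, assuming one first-in-first-out queue of fixed-size cell buffers per output port; the structure and function names are illustrative, not taken from the patent.

```c
#include <stddef.h>

#define ATM_CELL_BYTES 53

/* One buffered cell awaiting its turn on an output port. */
typedef struct cell_buf {
    unsigned char    data[ATM_CELL_BYTES];
    struct cell_buf *next;
} cell_buf_t;

/* FIFO of cells waiting for transmission on a single port. */
typedef struct {
    cell_buf_t *head, *tail;
} port_queue_t;

static void port_enqueue(port_queue_t *q, cell_buf_t *c)
{
    c->next = NULL;
    if (q->tail) q->tail->next = c;
    else         q->head = c;
    q->tail = c;
}

static cell_buf_t *port_dequeue(port_queue_t *q)
{
    cell_buf_t *c = q->head;
    if (c) {
        q->head = c->next;
        if (!q->head) q->tail = NULL;
    }
    return c;
}
```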
For the ATM backbone to be usable for different types of communication services (e.g., voice and data services), the ATM backbone needs to support the different features that the end applications (running on the end-systems) may require. One such feature is multicast capability. Multicast typically refers to the ability of one end-station (the source end-station) to send a cell to several end-stations (the target end-stations) without the source end-station having to retransmit the cell to the individual target end-stations. Thus, a multicast connection may be viewed as a tree having several output branches corresponding to a single root or source.
To support multicasts, an intermediate switch may transmit each cell received on a multicast connection on several ports, with each transmission corresponding to an output branch. A cell transmitted on a port may be transmitted on several additional ports in another switch located further down the cell transmission path. Such transmission on multiple ports in one or more intermediate switches enables an ATM backbone to support multicast transmissions.
Thus, when a source end-system sends a sequence of multicast cells on a multicast connection, a switch may need to transmit each of the cells several times (corresponding to several branches) to ensure that the cell is received by all of the intended target end-systems. A switch may maintain multiple copies of each multicast cell, with each copy being used for transmission on an output branch.
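To make the fan-out concrete, the sketch below hands the same received cell to every output branch of a multicast connection; it illustrates the per-branch transmissions described above (and the copying cost discussed next), not the queuing scheme the patent actually claims. transmit_on_port() is a placeholder name for whatever per-port queuing and scheduling the switch uses.

```c
#include <stdint.h>

#define MAX_BRANCHES 32

/* The output branches of one multicast connection as seen by a single
 * switch: each received cell must go out on every listed port.        */
typedef struct {
    int     n_branches;
    uint8_t out_port[MAX_BRANCHES];
} mcast_conn_t;

/* Placeholder for the switch's per-port transmit path (queuing,
 * header rewrite, scheduling); not defined by the patent text.        */
void transmit_on_port(uint8_t port, const uint8_t cell[53]);

/* Forward one multicast cell: the cell is transmitted once per branch. */
static void mcast_forward(const mcast_conn_t *mc, const uint8_t cell[53])
{
    for (int b = 0; b < mc->n_branches; b++)
        transmit_on_port(mc->out_port[b], cell);
}
```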
Unfortunately, such multiple copies may require a large memory in the switch, and such large memories may be undesirable for cost or availability reasons. The memory problem may be particularly accentuated when a switch maintains a queue for each branch (as opposed to just one queue for each port or class of service). Maintaining a queue for each branch generally provides the flexibility to serve each connection according to the specific service parameters (e.g., bit rates, latencies, etc.) with which the connection may have been set up. Thus, copying the multicast cells for each branch may consume unacceptably large amounts of memory.
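As a rough, purely illustrative estimate of the memory cost being described: if every buffered multicast cell is copied once per branch, the buffer requirement grows with the branch count, whereas keeping a single shared copy of each cell (for example with reference-counted buffers, a well-known technique that is not necessarily the one claimed here) grows far more slowly. All figures below are assumptions chosen only for the example.

```c
#include <stdio.h>

int main(void)
{
    const unsigned long cell_bytes     = 53;    /* fixed ATM cell size                      */
    const unsigned long branches       = 64;    /* assumed multicast fan-out                */
    const unsigned long buffered_cells = 10000; /* assumed distinct multicast cells queued  */
    const unsigned long ref_bytes      = 8;     /* assumed size of one queue reference      */

    /* Copy-per-branch: every branch queue holds its own copy of each cell. */
    unsigned long copy_per_branch = cell_bytes * branches * buffered_cells;

    /* Shared copy: each cell stored once, with one small reference per branch queue. */
    unsigned long shared_copy = buffered_cells * (cell_bytes + ref_bytes * branches);

    printf("copy-per-branch buffering: %lu bytes\n", copy_per_branch);
    printf("single shared copy:        %lu bytes\n", shared_copy);
    return 0;
}
```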
Accordingly, what is needed is a queuing method and apparatus in an ATM switch which enables support for multicasts without requiring large memories, particularly in systems which maintain a queue for cells of each connection.
In addition, the ATM switch may need to support high throughput performance (that is, the rate at which cells are processed and transmitted) while optimizing memory usage. High throughput performance is ge
