Multiplex communications – Pathfinding or routing – Switching a message which includes an address header
Reexamination Certificate
1999-03-01
2004-11-23
Olms, Douglas (Department: 2661)
C370S413000
active
06822966
ABSTRACT:
FIELD OF INVENTION
The present invention relates to allocating buffers for data transmission in a network communication device constructed to direct network traffic. More particularly, the invention relates to allocating buffers in the network device based on port utilization and quality of service goals to achieve fair buffer usage.
BACKGROUND
Computer networks are used by large numbers of users to increase computing power, share resources, and communicate. A computer network may include a number of computers or other devices located in one room or one building and connected by a local area network (LAN), such as Ethernet or Token Ring. LANs located in different places can be interconnected by switches, routers, bridges, microwave links, or satellite links to form a wide area network (WAN). Such a network may include hundreds of connected devices distributed over numerous geographical locations and belonging to different organizations.
One computer or video communication device can send data to another device using network communications devices (e.g., packet switches, routers, bridges) interconnected by transmission media or data links. Viewed from the outside, a network communications device includes input ports that receive data and output ports that send data over data links. Inside the network device, transmitted data is accepted at input ports, buffered and transferred internally, and eventually received at output ports for re-transmission over the next data link. This process uses various scheduling and arbitration algorithms for “switching” the data.
Typically, network communications devices transmit data in a standardized format, such as TCP/IP datagrams, frames, or ATM cells, which generally will be referred to as data units. Each data unit typically includes a header portion with addressing information and a body portion with transmitted data or payload. A data unit sent between network devices may, in general, vary in size depending on the type of the data unit. When a data unit arrives at an input port of a network communication device, a routing algorithm analyzes the header and makes a forwarding decision based on a destination address carried by the header.
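The forwarding step described above amounts to a lookup on the destination address carried in the header. A minimal sketch in Python, assuming a hypothetical `DataUnit` layout and forwarding-table contents (none of these names come from the patent):

```python
from dataclasses import dataclass

@dataclass
class DataUnit:
    """A generic data unit: an addressing header field plus a payload."""
    destination: str   # destination address carried in the header
    payload: bytes

# Hypothetical forwarding table: destination address -> output port number.
FORWARDING_TABLE = {"host-a": 1, "host-b": 2}

def forward(unit: DataUnit, table: dict) -> int:
    """Return the output port for a data unit, or -1 if no route exists."""
    return table.get(unit.destination, -1)

unit = DataUnit(destination="host-b", payload=b"hello")
port = forward(unit, FORWARDING_TABLE)
```

In a real device this lookup runs in hardware against a longest-prefix-match table rather than an exact-match dictionary; the sketch only shows the header-driven decision.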
In the process of forwarding data from an input port to an output port, different internal switching devices can use several temporary memories (called buffers). A communication device decides whether an incoming data unit is buffered or dropped based on the incoming data unit itself. This decision is made at line speed and is executed after the start of the header arrives but before the last bit of the data unit has been received. This way the communication device can reuse the current buffer for the next data unit if it decides that the current data unit will be dropped.
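The admit-or-drop decision can be sketched as follows, assuming (purely for illustration) that the drop test needs only the header, such as an unrecognized destination. When a unit is dropped, its buffer slot is immediately reusable for the next arrival:

```python
# Illustrative buffer pool size; real devices size this per design goals.
BUFFER_POOL_SIZE = 4

def receive(stream, known_destinations):
    """Sketch of line-rate admission: each (header, payload) pair is either
    stored in the next free buffer slot or dropped, so a dropped unit's
    slot is reused for the unit that follows it."""
    buffers = [None] * BUFFER_POOL_SIZE
    write_index = 0
    for header, payload in stream:          # the header is available first
        if header not in known_destinations or write_index >= BUFFER_POOL_SIZE:
            continue                        # drop: the slot is not consumed
        buffers[write_index] = (header, payload)
        write_index += 1
    return [b for b in buffers if b is not None]
```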
The buffers may simply be arranged into a single large pool of buffer units to be used for all received data units stored by the device. This arrangement assures that no data units will be arbitrarily dropped as long as there are available buffer units for storage. However, this arrangement cannot effectively arbitrate access among several concurrent inputs and outputs. Alternatively, separate pools of buffers may be associated with input ports and output ports. Based on the location of the buffers, there are generally three classes of data switching architectures: output buffered, input buffered, and combined input-output buffered architectures. The present invention is applicable to all three architectures.
An output buffered network device places data units arriving at its input ports into different output buffers located at its output ports, depending on the address of each data unit. An output buffered network device having N input ports, each receiving data at M bits per second, needs a switching rate of N×M bits per second to ensure that data units are not lost (i.e., dropped). The buffers store the received data units when the transmission rate is lower.
Advantageously, output buffered network communication devices can use up to the full bandwidth of outbound data links because of the immediate forwarding of the received data units into output buffers. The data units are fed from output buffers to the output data links as fast as the links can accept them. However, when the transmission rate is lower than the reception rate, the communication device has to keep buffering the data units and may eventually run out of buffers. For a larger communication device having many input ports with a high link speed, the buffer capacity and speed must be increased proportionally in order to handle the combined data rates of all input ports being switched to different output ports. Increasing the buffer capacity and speed increases the cost of output buffered network devices.
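The N×M scaling above can be checked with a one-line calculation; the port count and link rate below are illustrative values, not figures from the patent:

```python
def required_fabric_rate(n_ports: int, link_rate_bps: int) -> int:
    """Worst case for an output buffered device: all N input ports
    simultaneously switch traffic toward the output buffers."""
    return n_ports * link_rate_bps

# e.g., 16 ports at 1 Gb/s each require a 16 Gb/s internal switching rate
rate = required_fabric_rate(16, 1_000_000_000)
```

Because this requirement grows linearly with both port count and link speed, the buffer memory must get both larger and faster as the device scales, which is the cost problem the passage describes.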
A network communication device can assign one pool of buffers to each output port (or input port), an arrangement commonly called “per port queuing”. In this arrangement, one port cannot interfere with another port's performance. However, higher priority traffic may suffer a higher loss of data units (e.g., ATM cells, frames) than lower priority traffic. This data loss can occur even though there may be, for example, idle ports with available cell/frame buffers; it is a result of buffer fragmentation.
Alternatively, the communication device can assign one pool of buffers for each priority, which is called “per priority queuing”. In this arrangement, higher priority traffic will not suffer data unit loss because of over-subscription of lower priority traffic. However, the buffers allocated to higher priority traffic may not be available to lower priority traffic when the higher priority traffic is idle. In another arrangement, the communication device can assign one pool of buffers for each priority for each output port (or input port). This is called “per port and priority queuing”. Advantageously, a given port and priority will only drop data units due to the action of data streams using that port and priority. However, this arrangement fragments buffer queues and thus buffers may be used inefficiently. For example, buffer units may be assigned to ports and priorities that are currently idle, and thus buffer units are left unused. Therefore, efficient buffer allocation becomes very important.
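The “per port and priority queuing” arrangement can be sketched as one bounded queue per (output port, priority) pair. The port count, number of priority levels, and per-queue limit below are illustrative assumptions:

```python
from collections import deque

# Illustrative dimensions: 4 output ports, 2 priority levels, 8 buffers each.
N_PORTS, N_PRIORITIES, QUEUE_LIMIT = 4, 2, 8

# One buffer pool (queue) for each (port, priority) pair.
queues = {(p, pr): deque()
          for p in range(N_PORTS) for pr in range(N_PRIORITIES)}

def enqueue(port: int, priority: int, unit) -> bool:
    """Buffer a unit, or drop it when that port/priority pool is exhausted.
    A drop can only be caused by traffic on the same port and priority,
    but an idle queue's buffers cannot be borrowed by a busy one -- the
    fragmentation inefficiency the passage describes."""
    q = queues[(port, priority)]
    if len(q) >= QUEUE_LIMIT:
        return False                      # drop
    q.append(unit)
    return True
```

Filling one queue to its limit causes drops on that port and priority only; every other (port, priority) pair still accepts traffic, even though the device as a whole may have idle buffers.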
A network communication device can also use an input buffered architecture. Popular input buffered devices use a non-blocking input buffered switching fabric called the crossbar. The input buffered crossbar has a crossbar fabric running at a speedup of 1 (i.e., at the link rate). However, if each input port maintains a single FIFO queue, data units suffer from head-of-line blocking. This blocking limits the maximum achievable throughput and is relatively inflexible. To eliminate head-of-line blocking, input ports can maintain virtual output queues (VOQs). Input ports with VOQs have a bank of queues, with one queue per output port. Data units are stored in random access buffers at the input ports, and pointers to the data are stored in the respective VOQs. Buffer allocation again becomes important.
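The VOQ arrangement at a single input port can be sketched as a shared random-access buffer plus one pointer queue per output; all names here are illustrative:

```python
from collections import deque

N_OUTPUTS = 3
buffer_slots = []                            # shared random-access buffer
voqs = [deque() for _ in range(N_OUTPUTS)]   # one pointer queue per output

def arrive(unit, output_port: int):
    """Store the unit once, then queue only a pointer (its slot index)
    in the VOQ for its destination output port."""
    buffer_slots.append(unit)
    voqs[output_port].append(len(buffer_slots) - 1)

def dequeue(output_port: int):
    """Pop the next unit bound for an output port. Traffic for other
    outputs waits in its own VOQ rather than blocking at the head of a
    single shared FIFO -- this is how VOQs avoid head-of-line blocking."""
    if not voqs[output_port]:
        return None
    return buffer_slots[voqs[output_port].popleft()]
```

With a single FIFO, a unit for a busy output at the head of the queue would stall units behind it that are bound for idle outputs; with VOQs each output's traffic can be scheduled independently.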
Asynchronous transfer mode (ATM) switching technology enables fast data switching for wide ranges of traffic demands. ATM can carry virtually any type of information that can be expressed in a digital form, ranging from voice telephone traffic, to real-time video, to high-speed file transfers, etc. ATM based networks may eliminate the need for different networks to carry different types of traffic. ATM transfer is asynchronous in the sense that the recurrence of ATM cells, which carry transferred information, is not necessarily periodic. Each communication device that uses the ATM network submits an ATM cell for transfer when it has a cell to send. Once aggregated and scheduled, the ATM cells ride in synchronous slots on a high-speed media, such as a SONET optical fiber.
ATM organizes digital data into cells having a fixed length and format. Each ATM cell includes a header, primarily for identifying cells relating to the same virtual connection, and the transmitted data or payload. The ATM standard defines each cell as 53 bytes long: a 5-byte header followed by a 48-byte payload.
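Segmenting a byte stream into fixed-size cells can be sketched as below, using the standard ATM dimensions (5-byte header, 48-byte payload, 53-byte cell). The header bytes here are a placeholder, not the real ATM header field layout:

```python
# Standard ATM cell dimensions: 5-byte header + 48-byte payload = 53 bytes.
HEADER_LEN, PAYLOAD_LEN = 5, 48

def segment(data: bytes, header: bytes = b"\x00" * HEADER_LEN):
    """Split a byte stream into fixed-size cells, zero-padding the final
    payload so every cell is exactly 53 bytes long."""
    cells = []
    for i in range(0, len(data), PAYLOAD_LEN):
        payload = data[i:i + PAYLOAD_LEN].ljust(PAYLOAD_LEN, b"\x00")
        cells.append(header + payload)
    return cells

cells = segment(b"x" * 100)   # 100 bytes of data fit in 3 cells
```

The fixed cell size is what lets ATM hardware switch cells in constant time and schedule them into the synchronous slots mentioned above.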
Matthews Wallace
Putcha Sivarama Seshu
Enterasys Networks Inc.
Lahive & Cockfield LLP
Pizarro Ricardo M.