Push-out technique for shared memory buffer management in a...

Multiplex communications – Pathfinding or routing – Switching a message which includes an address header

Reexamination Certificate


Details

US Classification: C370S902000, C709S218000

Type: Reexamination Certificate

Status: active

Patent number: 06704316

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates generally to shared memory buffer management in network nodes. More particularly, the present invention relates to a push-out technique for shared memory buffer management in network nodes.
BACKGROUND OF THE INVENTION
Data networks are used to transmit information between two or more endpoints connected to the network. The data is transmitted in packets, with each packet containing a header describing, among other things, the source and destination of the data packet, and a body containing the actual data. The data can represent various forms of information, such as text, graphics, audio, or video.
Data networks are generally made up of multiple network nodes connected by links. The data packets travel between endpoints by traversing the various nodes and links of the network. Thus, when a data packet enters a network node, the destination information in the header of the packet instructs the node as to the next destination for that data packet. A single data packet may traverse many network nodes prior to reaching its final destination.
Each network node may have multiple input ports and output ports. As a data packet is received at a network node, it is transmitted to its next destination in the network via an appropriate output port of the node. Depending on the amount and nature of the data packets entering a network node, it is possible that the node will not be able to output the data packets at a rate sufficient to keep up with the rate that the data packets are received. In the simplest design of a network node, newly arriving data packets may simply be discarded if the output rate of the node cannot keep up with the rate of receipt of new packets.
More advanced network nodes have a buffer stored in a memory of the network node so that data packets may be held in a queue prior to being output from the node. In such a configuration, if data packets are received faster than the node is able to output them, the newly received data packets are queued in a memory buffer of the node until such time as they may be transmitted. However, since the buffer is of a finite size, it is still possible that the rate of receipt will be such that the buffer becomes full. One solution is to drop any new incoming data packets when the buffer is full. However, one problem with this solution is that it may be desirable to give different types of data packets different priorities. For example, if data packets are carrying a residential telephone call, it may be acceptable to drop a data packet periodically because the degradation in service may not be noticeable to the people engaging in the conversation. However, if the data packets are carrying data for a high speed computer application, the loss of even one data packet may corrupt the data, resulting in a severe problem.
As a result of the need to differentiate the types of data packets, different data packets may be associated with different traffic classes. A traffic class is a description of the type of service the data packets are providing, and each traffic class may be associated with a different loss priority. For example, a traffic class of “residential telephone” may have a relatively low loss priority as compared with a traffic class of “high speed data”.
There are various configurations of network nodes which use buffers to store incoming data packets. One such configuration is called a shared memory architecture. In such an architecture, each output port has one or more associated queues stored in buffer memory of the network node. Further, the area of memory set aside for buffer space is shared by the queues of multiple output ports. Thus, the total available buffer memory space is shared among the different output ports. For network nodes with a shared memory architecture, buffer management techniques are needed to regulate the sharing of buffer memory among the different output ports. Such techniques need to take into account the different traffic classes with their different loss priorities.
One technique, known as a threshold-based technique, allows all new packets to be stored in the buffer until the buffer is filled to a certain percentage of its size. Once this threshold is reached, only data packets above a certain loss priority will be accepted. In this way, a certain amount of buffer space is reserved for high priority data packets. Such a threshold-based technique is described in U.S. patent application Ser. No. 08/736,149, filed Oct. 24, 1996, entitled Method for Shared Memory Management in Network Nodes, which is assigned to the same assignee as the present invention. In the technique described in that copending application, each queue is allocated some nominal buffer size for incoming data packets. If the addition of a new data packet would exceed the nominal buffer size, the queue may be allocated additional buffer space if the total free buffer space remains below a certain threshold. This threshold may differ depending on the traffic class of the queues. One of the problems with threshold-based techniques is that they do not adapt well to changing traffic conditions. In addition, the performance of these techniques depends largely on the chosen threshold values, which are difficult to choose and are usually provisioned empirically.
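A threshold-based admission policy of the kind described above can be sketched as follows. This is an illustrative sketch only, not the technique of the cited application; the constants and function name are hypothetical, and loss priority is taken to be a number where higher means more important.

```python
# Hypothetical sketch of a threshold-based admission policy for a single
# shared buffer. Names and constants are illustrative, not from the patent.

BUFFER_SIZE = 1000     # total shared buffer capacity, in packets
THRESHOLD = 0.8        # fraction of the buffer usable by any traffic class
HIGH_PRIORITY = 2      # minimum loss priority admitted past the threshold

def admit(buffer_used, priority):
    """Return True if a newly arrived packet should be stored."""
    if buffer_used >= BUFFER_SIZE:
        return False                      # buffer completely full: drop
    if buffer_used < THRESHOLD * BUFFER_SIZE:
        return True                       # below threshold: admit any class
    return priority >= HIGH_PRIORITY      # above threshold: high priority only
```

The reserved region (the last 20% of the buffer in this sketch) is what guarantees space for high priority packets, but the quality of that guarantee hinges entirely on how well `THRESHOLD` matches the actual traffic mix, which is the provisioning difficulty noted above.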
Another memory management technique, called push-out, is generally more efficient than threshold techniques. In a push-out technique, low priority data packets which are already in a queue may be removed in order to make room for newly arriving higher priority data packets. One such push-out technique is described in Beraldi, R., Iera, A., Marano, S., ““Push-Out Based” Strategies for Controlling the Share of Buffer Space,” Proceedings of IEEE Singapore International Conference on Networks/International Conference on Information Engineering '93, pp. 39-43, vol. 1. One of the problems with existing push-out techniques is that if there is heavy traffic of high priority data packets, the high priority data packets could starve the low priority data packets such that the low priority data packets will not make it through the queue to an output port.
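A plain priority-based push-out policy of the kind discussed above can be sketched as follows. This is a generic illustration, not the Beraldi et al. algorithm; the class and method names are hypothetical, and each output port is assumed to own one FIFO queue in the shared buffer.

```python
# Illustrative sketch of an unweighted, priority-based push-out policy.
# All identifiers are hypothetical. Higher numeric priority = more important.
from collections import deque

class SharedBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0
        self.queues = {}          # output port id -> deque of (priority, data)

    def enqueue(self, port, priority, data):
        q = self.queues.setdefault(port, deque())
        if self.used < self.capacity:
            q.append((priority, data))
            self.used += 1
            return True
        # Buffer full: find the lowest-priority packet anywhere in the shared
        # buffer, and push it out only if the new packet outranks it.
        victim_port, victim_idx, victim_pri = None, None, priority
        for p, queue in self.queues.items():
            for i, (pri, _) in enumerate(queue):
                if pri < victim_pri:
                    victim_port, victim_idx, victim_pri = p, i, pri
        if victim_port is None:
            return False          # no lower-priority packet to evict: drop
        del self.queues[victim_port][victim_idx]
        q.append((priority, data))
        return True               # buffer occupancy is unchanged (one out, one in)
```

The starvation problem described above is visible in this sketch: under sustained high-priority load, low-priority packets are evicted as fast as they are admitted and may never reach the head of their queue.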
SUMMARY OF THE INVENTION
The present invention provides an improved push-out technique for memory management in a shared memory network node. In accordance with the invention, a weighted queue length is maintained in memory for each queue stored in the shared memory buffer. When a new data packet arrives at the network node to be stored in its appropriate queue and the buffer is full, a data packet is removed from the queue having the largest weighted queue length. This makes room in the buffer for the newly arrived data packet to be stored in its appropriate queue.
The weighted queue length is maintained by adjusting the weighted queue length of a queue, each time a data packet is added to or removed from the queue, by an amount equal to the weight assigned to the traffic class of the data packet. These weights may be provisioned in order to implement different loss priorities among the traffic classes. In addition, the same traffic class may be assigned a different weight at two different output ports of the node, giving further flexibility and control over the loss priorities among output ports.
In accordance with another aspect of the invention, initial values of weighted queue lengths may be assigned in order to further control the memory management. The assignment of an initial weighted queue length allocates a nominal buffer space to a queue.
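The weighted push-out idea summarized above can be sketched as follows. This is a minimal sketch under stated assumptions, not the patented implementation: the class and method names are invented, each queue's weighted length is adjusted by its class weight on every arrival and removal, an initial weighted-length value (e.g. a negative one) stands in for a nominal buffer allocation, and on arrival to a full buffer a packet is pushed out of the queue with the largest weighted length.

```python
# Hypothetical sketch of weighted-queue-length push-out. Identifiers are
# illustrative and do not come from the patent.
from collections import deque

class WeightedPushOutBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0
        self.queues = {}     # queue id -> deque of packets
        self.weights = {}    # queue id -> weight of the queue's traffic class
        self.wlen = {}       # queue id -> weighted queue length

    def add_queue(self, qid, weight, initial_wlen=0.0):
        self.queues[qid] = deque()
        self.weights[qid] = weight
        # A below-zero initial weighted length effectively reserves the queue
        # some nominal buffer space before it competes for push-out.
        self.wlen[qid] = initial_wlen

    def enqueue(self, qid, packet):
        if self.used >= self.capacity:
            # Push a packet out of the non-empty queue whose weighted
            # length is largest, making room for the new arrival.
            victim = max((q for q in self.queues if self.queues[q]),
                         key=lambda q: self.wlen[q])
            self.queues[victim].popleft()
            self.wlen[victim] -= self.weights[victim]
            self.used -= 1
        self.queues[qid].append(packet)
        self.wlen[qid] += self.weights[qid]
        self.used += 1
```

Because the victim is chosen by weighted rather than raw length, a heavily weighted (low loss priority) class is pushed out sooner, while a lightly weighted class can occupy proportionally more of the shared buffer before becoming the push-out target, which also mitigates the starvation problem of plain priority push-out.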
These and other advantages of the invention will be apparent to those of ordinary skill in the art by reference to the following detailed description and the accompanying drawings.


REFERENCES:
patent: 5136584 (1992-08-01), Hedlund
patent: 5140583 (1992-08-01), May et al.
patent: 5959993 (1999-09-01), Varma et al.
Choudhury, Abhijit K., and Hahne, Ellen L., Dynamic Queue Length Thresholds for Multipriority Traffic, 15th International Teletraffic Congress, Washington, D.C., Jun. 1997, pp. 561-569.
Chao, H. Jonathan, and Uzun, Necdet, An ATM
