Method and apparatus for allocating shared memory resources...

Electrical computers and digital data processing systems: input/output – Input/output data processing – Input/output data buffering

Reexamination Certificate


Details

U.S. Classifications: C709S241000, C709S235000, C709S213000, C709S232000, C710S056000, C710S057000, C710S053000, C710S054000, C370S412000, C370S229000, C370S236000, C711S100000
Type: Reexamination Certificate
Status: active
Patent number: 06219728

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to management of memory resources. More specifically, the present invention relates to a method and apparatus for allocating shared memory resources and discarding incoming data as necessary.
2. Background
In a network environment, various traffic management techniques are used to control the flow of data throughout the network. Network devices often utilize buffers and queues to control the flow of network data. During periods of heavy network traffic or congestion, certain data cells or packets may be discarded to prevent buffer overflow or deadlock.
FIG. 1 illustrates a known switch 10 for use in a network environment. Switch 10 receives data cells from a plurality of input ports (labeled IN1-INM) and transmits data cells from a plurality of output ports (labeled OUT1-OUTN). A plurality of input buffers 12 are coupled between the input ports and switch 10. A plurality of output buffers 14 are coupled between switch 10 and the output ports. As shown in FIG. 1, each input buffer 12 is separated from the remaining input buffers and dedicated to a particular port of switch 10. If a particular port is not active, then its associated input buffer cannot be used by another port. Instead, the buffer remains idle even if other buffers are fully utilized. For example, if the input buffer associated with input IN1 is full and the input buffer associated with IN2 is empty, incoming data on input IN1 will be discarded, and cannot be stored in the input buffer associated with IN2. Similarly, each output buffer 14 is separated from the remaining output buffers and dedicated to a particular output line.
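The dedicated-buffer behavior described above can be sketched as follows. This is a hypothetical illustration, not the patent's implementation; the buffer size and port names are assumptions.

```python
# Hypothetical sketch of the dedicated-buffer switch of FIG. 1: each
# input port owns a private buffer, so a cell arriving at a full port
# is discarded even while another port's buffer sits empty.

BUFFER_SIZE = 100                         # per-port buffer capacity (assumed)
buffers = {"IN1": BUFFER_SIZE, "IN2": 0}  # IN1 full, IN2 empty

def accept(port: str) -> bool:
    """Dedicated buffering: a port may only use its own buffer."""
    if buffers[port] < BUFFER_SIZE:
        buffers[port] += 1
        return True
    return False                          # discard; cannot borrow another port's space

print(accept("IN1"))   # False: discarded despite free space in IN2's buffer
print(accept("IN2"))   # True
```

The discard decision never consults the other buffers, which is exactly the under-utilization the shared-memory approach below is meant to remove.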
To provide improved memory utilization, another type of network switch was developed having a shared memory buffer. An example of a shared memory switch is illustrated in FIG. 2. Shared memory switch 100 includes a plurality of inputs and a plurality of outputs. Rather than providing separate input buffers for each input, shared memory switch 100 includes a shared memory 102 which receives data cells or packets from any of the inputs.
When using a shared memory device, the memory resources must be allocated between the various ports coupled to the shared memory. Known switches utilize fixed discard thresholds for determining when to discard an incoming or outgoing data cell or packet. Thus, when the level of data associated with a particular port exceeds a fixed threshold value, the data cell or packet is discarded. Although a shared memory switch allows multiple ports to share a single memory buffer, the use of fixed thresholds for discarding data creates several problems.
If a single port is active, the port is limited by its fixed threshold. Thus, instead of utilizing the entire memory buffer, the memory usage by the single active port may not exceed the fixed threshold value. When the threshold value is reached, additional incoming cells must be discarded rather than being stored in the empty portions of the memory buffer. This results in an under-utilization of the memory buffer resources.
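The under-utilization caused by a fixed threshold can be shown numerically. The capacity and threshold values below are illustrative assumptions, not figures from the patent.

```python
# Hypothetical illustration of fixed-threshold under-utilization: a
# single active port may not grow past its fixed threshold even though
# the rest of the shared buffer is empty.

TOTAL_BUFFERS = 1000      # total cells the shared memory can hold (assumed)
FIXED_THRESHOLD = 400     # fixed per-port discard threshold (assumed)

def accept_cell(queue_len: int) -> bool:
    """Fixed-threshold admission: discard once the queue hits its threshold."""
    return queue_len < FIXED_THRESHOLD

queue_len = 0
discarded = 0
for _ in range(TOTAL_BUFFERS):     # offer enough cells to fill the whole buffer
    if accept_cell(queue_len):
        queue_len += 1
    else:
        discarded += 1

# The lone active port stops at 400 cells; 600 buffers sit idle.
print(queue_len, discarded)        # 400 600
```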
Another problem created by fixed thresholds is an unequal allocation of memory resources among the various ports. To take advantage of the shared memory buffer, fixed thresholds are typically set higher than each port's “fair share” of the memory resources. For example, if a shared memory device is accessed by four different ports, the “fair share” for each port is 25% of the available memory resources. If the threshold for each port were set at exactly 25% of the total memory available, the situation would be similar to the prior art switch of FIG. 1 having separate memory buffers: each port could utilize only a separate portion of the shared memory equal to its fair share. Setting the fixed thresholds above the fair share improves utilization, but problems occur when all ports are active and several ports fill memory up to their threshold values. Since the fixed thresholds are set higher than each port's fair share, overallocation of the memory resources may occur when several ports are active at the same time. This overallocation may overload the buffer and cause it to malfunction.
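The overallocation problem reduces to simple arithmetic. The capacity and threshold below are hypothetical numbers chosen to match the four-port example above.

```python
# Hypothetical overallocation arithmetic for fixed thresholds.
TOTAL_BUFFERS = 1000                            # shared memory capacity in cells (assumed)
NUM_PORTS = 4
FAIR_SHARE = TOTAL_BUFFERS // NUM_PORTS         # 250 cells per port
FIXED_THRESHOLD = 400                           # set above fair share for utilization (assumed)

# If every port fills to its threshold at the same time, the aggregate
# demand exceeds the memory that actually exists.
aggregate_demand = NUM_PORTS * FIXED_THRESHOLD  # 1600 cells
overcommit = aggregate_demand - TOTAL_BUFFERS   # 600 cells of memory that does not exist

print(FAIR_SHARE, aggregate_demand, overcommit)  # 250 1600 600
```

Any threshold above the fair share admits this failure mode under full load, which is why the invention replaces fixed thresholds with adaptive ones.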
It is therefore desirable to provide a mechanism for managing a shared memory buffer in a manner that efficiently utilizes memory resources and prevents overload and unfair usage of memory resources.
SUMMARY OF THE INVENTION
The present invention provides a method and apparatus for allocating shared memory resources and discarding incoming data as necessary. Adaptive thresholds are provided for each individual queue or port. The adaptive thresholds are adjusted in response to changes in the overall usage of the shared memory resources. As memory usage increases, each threshold value is lowered. When memory usage decreases, each threshold value is increased. The adaptive thresholds of the present invention provide for efficient utilization of memory resources and relatively uniform allocation of memory resources.
An embodiment of the present invention provides a system for allocating shared memory resources among a plurality of queues. The shared memory resources are monitored to determine a number of available memory buffers in the shared memory. A threshold value is generated for each queue indicating the maximum number of data cells that may be stored in the associated queue. The threshold values are updated in response to changes in the number of available memory buffers.
Another feature of the invention compares the threshold value with the current queue usage to determine whether to accept or discard incoming data cells destined for the queue.
An aspect of the invention adjusts threshold values by increasing the threshold value in response to increased available memory and decreasing the threshold value in response to decreased available memory.
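One way to realize such adaptive thresholds, in the spirit of the dynamic queue length thresholds described in the cited Choudhury et al. INFOCOM '96 reference, is to tie every queue's threshold to the number of free buffers. The proportionality constant, capacity, and class below are illustrative assumptions, not the patent's claimed implementation.

```python
# Sketch of adaptive per-queue thresholds, assuming (as one possible
# realization) that each threshold tracks the free space in the shared
# memory: threshold = ALPHA * free_buffers.  As usage rises, free space
# shrinks and every threshold falls; as usage drops, thresholds rise.

TOTAL_BUFFERS = 1000   # shared memory capacity in cells (assumed)
ALPHA = 2.0            # proportionality constant (assumed)

class SharedMemory:
    def __init__(self) -> None:
        self.queues: dict[str, int] = {}      # queue id -> current length

    def used(self) -> int:
        return sum(self.queues.values())

    def free(self) -> int:
        return TOTAL_BUFFERS - self.used()

    def threshold(self) -> float:
        """Adaptive threshold: rises with free memory, falls with usage."""
        return ALPHA * self.free()

    def offer(self, qid: str) -> bool:
        """Accept the cell if the queue is below the current threshold."""
        qlen = self.queues.get(qid, 0)
        if self.free() > 0 and qlen < self.threshold():
            self.queues[qid] = qlen + 1
            return True
        return False                          # discard

mem = SharedMemory()
# A lone active queue can grow well past a fixed "fair share"...
while mem.offer("port1"):
    pass
# ...but stops where qlen >= ALPHA * free, leaving headroom for new ports.
print(mem.queues["port1"], mem.free())
```

Because the threshold is recomputed from the live free-buffer count, a single active queue settles near ALPHA/(1+ALPHA) of the total memory, and newly active queues automatically pull every threshold down rather than overcommitting the buffer.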


REFERENCES:
patent: 4769811 (1988-09-01), Eckberg, Jr. et al.
patent: 4953157 (1990-08-01), Franklin et al.
patent: 5231633 (1993-07-01), Hluchyj et al.
patent: 5541912 (1996-07-01), Choudhury et al.
patent: 5704047 (1997-12-01), Schneeberger
Kamoun et al, Analysis of Shared Finite Storage in a Computer Network Node Environment Under General Traffic Conditions, IEEE Transactions on Communications, vol. Com-28, No. 7, pp. 992-1003, Jul. 1980.
Yin et al, Congestion Control for Packet Voice by Selective Packet Discarding, IEEE Transactions on Communications, vol. 38, No. 5, pp. 674-683, May 1990.
Choudhury et al, Space Priority Management in a Shared Memory ATM Switch, IEEE GLOBECOM '93 Proceedings vol. 3 (Houston, Texas), pp. 1375-1383, Dec. 1993.
Choudhury et al, Dynamic Queue Length Thresholds in a Shared Memory ATM Switch, IEEE INFOCOM '96 Proceedings vol. 2 (San Francisco, California), pp. 679-687, Mar. 24, 1996.
Traffic Management Specification v4.0, The ATM Forum Technical Committee, af-tm-0056.000 Letter Ballot, Apr. 1996.
