Patent: US 6,810,426 (Reexamination Certificate, active)
Title: Methods and systems providing fair queuing and priority...
Classification: Electrical computers and digital processing systems: multicomputer data transferring – Computer-to-computer protocol implementing – Computer-to-computer data transfer regulating
Filed: 2002-01-30
Issued: 2004-10-26
Examiner: Barot, Bharat (Department: 2155)
U.S. Classes: C709S231000, C709S235000, C709S238000, C709S242000, C707S793000, C370S230000, C370S412000
FIELD OF THE INVENTION
The present invention relates to network communication, and more specifically, to apparatuses, methods and systems for enhancing quality of service in a network.
BACKGROUND OF THE INVENTION
In the current state of the Internet, the issues of guaranteed bandwidth fairness and support for multiple levels of latency are becoming increasingly important. Guaranteed bandwidth fairness is typically provided using so-called “Fair Queuing” algorithms, which guarantee that the bandwidth of a given link (or virtual link) is fairly apportioned among its various flows. Fair Queuing algorithms are incorporated into network systems using fair queuing (or bandwidth) schedulers. These schedulers seek to control congestion even in the presence of ill-behaved sources, so that a single source sending packets to a gateway at a sufficiently high speed cannot capture an arbitrarily large portion of the bandwidth of the outgoing line. While providing bandwidth guarantees is important, it is also important that latency-critical traffic flows (such as Voice over IP and video) experience latency that is as low as possible. Such prioritization of traffic flows, so that latency-critical flows experience low latency, is currently provided by priority (or latency) schedulers.
Conventional network solutions have attempted to address both fair queuing and priority scheduling and, despite the inherent tension between the two concerns, have been somewhat successful in incorporating both features in network systems. For instance, according to one conventional solution shown in FIG. 1, schedulers 120 have been created that cascade a fair queuing scheduler 100 and a priority scheduler 110 in series to achieve fair queuing and low latency for latency-critical traffic. Fair queuing schedulers 100 have been proposed in which gateways maintain separate queues for packets received from each individual source. In many fair queuing schedulers, the queues are then serviced in a round-robin manner, which prevents a source from arbitrarily increasing its share of bandwidth or the delay of other sources. When a source sends packets too quickly, it merely lengthens its own queue, thereby preventing anti-social behavior and limiting the negative impact on well-behaved sources. Other schedulers, some of which use a round-robin-based approach, have attempted to address problematic sources that send very long packets and can thereby obtain more bandwidth than other sources. However, these attempts suffer from disadvantages, including that cascading the schedulers often results in erroneous queuing. Furthermore, arrangements such as those illustrated in FIG. 1 can require a substantial amount of packet processing.
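The long-packet weakness just described can be made concrete. The following is a minimal sketch (in Python, not from the patent) of naive one-packet-per-turn round robin over per-source queues; the source names and packet sizes are illustrative assumptions.

```python
# Naive round robin: one packet per queue per turn, regardless of size.
from collections import deque

queues = {
    "well_behaved": deque([100, 100, 100, 100]),      # packet sizes in bytes
    "long_packets": deque([1500, 1500, 1500, 1500]),  # same count, big packets
}

sent = {name: 0 for name in queues}
while any(queues.values()):
    for name, q in queues.items():
        if q:                       # service one packet per queue per round
            sent[name] += q.popleft()

print(sent)  # {'well_behaved': 400, 'long_packets': 6000}
# Equal packet counts, but the long-packet source captured 15x the bytes.
```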
One method for maintaining quality of service in networks, Deficit Round Robin (DRR), is a well-known fair queuing algorithm that is relatively efficient and simple, and is increasingly being accepted as a standard for fair queuing. DRR guarantees fair apportioning of bandwidth, provides close-to-perfect fairness in scheduling, and provides fast and lightweight enqueuing and dequeuing operations. DRR also provides O(1) time complexity, which means that the work the algorithm performs per packet does not grow with the number of queues; the processing time taken by the algorithm is therefore independent of the number of queues. DRR is next explained in detail with reference to prior art FIGS. 2, 3A and 3B, although it should be appreciated that DRR is well known to those of skill in the art.
FIG. 2 shows a DRR queue structure 200 implemented by the DRR algorithm. The DRR queue structure 200 is located between an incoming link 210 and an outgoing link 220, and operates to buffer data packets. Incoming packets from data sources received via the incoming link 210 are queued in the DRR queue structure 200 by an enqueue agent 230. The enqueue agent 230 typically creates a queue for each source forwarding data packets over the incoming link. According to one embodiment of DRR, queues are created and ordered sequentially based on the time data packets arrive at the queue structure 200. Therefore, a first data packet from a first source may be buffered into a first queue position in the queue structure 200, whereas a later-received data packet from a separate source may be placed in a queue positioned lower in the queue structure 200. After the packets are queued in the DRR queue structure 200, a dequeue agent 240 removes the packets from the DRR queue structure 200 and transmits them over the outgoing link 220. The implementations of the enqueue agent 230 and dequeue agent 240 constitute the DRR queuing algorithm. According to DRR queuing, the dequeue agent 240 intelligently dequeues the packets from the DRR queue structure 200 based on bandwidth apportioning specifications and places the packets on the outgoing link. One implementation of the DRR queue structure 200 consists of an array of linked lists of packets, which ensures that each queue (for example, the nth queue) can be accessed quickly. Additionally, the head and tail pointers of each linked list are stored so as to enable efficient enqueuing and dequeuing.
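As a rough illustration of such a structure, the sketch below models the array of per-flow FIFO queues in Python, with collections.deque standing in for a linked list whose head and tail pointers are stored for O(1) enqueue and dequeue. The class and method names are hypothetical, not the patent's implementation.

```python
# An array of per-flow FIFO queues with O(1) head/tail access.
from collections import deque

class DRRQueueStructure:
    def __init__(self, num_flows):
        self.queues = [deque() for _ in range(num_flows)]  # one FIFO per flow

    def enqueue(self, flow, packet):
        """Enqueue agent: append at the tail of the flow's queue, O(1)."""
        self.queues[flow].append(packet)

    def head(self, flow):
        """Peek at the packet that would be dequeued next, or None."""
        q = self.queues[flow]
        return q[0] if q else None

    def dequeue(self, flow):
        """Dequeue agent: remove from the head of the flow's queue, O(1)."""
        return self.queues[flow].popleft()
```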
According to one implementation of DRR, there is typically a deficit 250 data element and a quota 260 data element. According to the DRR algorithm, each data flow that is assured a share of bandwidth has a corresponding first-in-first-out queue inside the DRR, and each queue within the DRR queue structure 200 has a deficit and a quota associated therewith. The quota 260 data element of a queue is the number of bytes of data the queue will send per cycle when viewed as a long-term average. The deficit 250 refers to the number of bytes of data that a queue can send in the current round. According to a general weighted variant of DRR, the quotas of the various queues are initially set so that their ratios are in accordance with the intended apportioning of bandwidth among flows. In the example presented in FIGS. 3A and 3B, however, all quotas are equal and hence coalesced into a single data element termed the Quantum; one skilled in the art would appreciate that in the most general case, each queue would have its own quota. In operation, the enqueue agent 230 enqueues an arriving packet into the packet's appropriate queue. The dequeue agent 240 then continuously steps through the queues in a round-robin fashion and sends as many packets from a queue as its deficit allows. At the end of each round, the deficit of a non-empty queue is increased by the quantum (or, in the most general case, by its quota), as maintained in the quota element. Thus, if a packet cannot be sent for want of deficit, the remaining deficit is retained and increased by the queue's quota in the next round; past unfairness due to packet boundaries is thereby corrected in subsequent rounds. Queues that are empty (i.e., have no packets buffered therein) at the end of a round, however, do not carry their current deficit into the next round: the past deficit is ignored, since it was not being made use of and hence did not cause any unfairness.
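A compact way to see the quantum/deficit interplay is the following self-contained Python sketch of a DRR round. It follows the equal-quota (Quantum) simplification described above; the DRRScheduler name and its methods are illustrative assumptions, not the patent's code.

```python
# A sketch of DRR rounds under the equal-quota ("Quantum") simplification.
from collections import deque

class DRRScheduler:
    def __init__(self, num_flows, quantum):
        self.queues = [deque() for _ in range(num_flows)]  # one FIFO per flow
        self.deficit = [0] * num_flows   # bytes each flow may still send
        self.quantum = quantum           # per-round credit for backlogged flows

    def enqueue(self, flow, size):
        """Enqueue agent: buffer a packet (represented by its byte size)."""
        self.queues[flow].append(size)

    def round(self):
        """Dequeue agent: one round-robin pass; returns (flow, size) sent."""
        sent = []
        for flow, q in enumerate(self.queues):
            if not q:
                self.deficit[flow] = 0   # empty queues forfeit their deficit
                continue
            self.deficit[flow] += self.quantum
            # Send packets while the head-of-line packet fits in the deficit.
            while q and q[0] <= self.deficit[flow]:
                self.deficit[flow] -= q[0]
                sent.append((flow, q.popleft()))
            if not q:
                self.deficit[flow] = 0   # no backlog, so no deficit carries over
            # Otherwise the remaining deficit is retained for the next round.
        return sent
```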
The operation of DRR is illustrated in FIGS. 3A and 3B, which show a queue structure having four queues 310, labeled 1 through 4, where each queue has buffered a plurality of packets. As referred to herein, the fourth queue, labeled queue #4, has a greater queue number than queues one through three. For instance, in FIG. 3A, the first queue (labeled queue #1) includes packets having 200, 750 and 20 data elements (e.g., each data element is a byte of data), while the second queue (labeled queue #2) includes packets of 500 and 500 data elements. The packets are buffered in each respective queue sequentially, such that packets arriving first enter the queue before packets arriving later in time. For instance, in the first queue of FIG. 3A, the packet having 200 data elements arrived before, and is therefore dequeued before, the packets having 750 and 20 data elements.
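Running the hypothetical DRRScheduler sketch from above on the FIG. 3A packet sizes makes the deficit mechanics visible. A quantum of 500 bytes is assumed purely for illustration (the figure's actual quantum is not reproduced in this excerpt), and flows 0 and 1 stand for queues #1 and #2.

```python
# Queue #1 (flow 0): packets of 200, 750 and 20 bytes.
# Queue #2 (flow 1): packets of 500 and 500 bytes.
drr = DRRScheduler(num_flows=2, quantum=500)   # 500-byte quantum is an assumption
for size in (200, 750, 20):
    drr.enqueue(0, size)
for size in (500, 500):
    drr.enqueue(1, size)

print(drr.round())   # [(0, 200), (1, 500)]: 750 > 300, so queue #1 stops early
print(drr.deficit)   # [300, 0]: queue #1 retains its unused 300-byte deficit
print(drr.round())   # [(0, 750), (0, 20), (1, 500)]: 300 + 500 covers 750 + 20
```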
Inventors: Bhagavath, Vijay Krishna; Mysore, Manamohan D.; Pagan, Florence C.; Short, Joel E.
Assignee: Nomadix, Inc.