Throughput enhancement after interruption
Patent No. 06744730 – Reexamination Certificate (active)
Filed: 2001-11-30; Issued: 2004-06-01
Classification: Multiplex communications – Data flow congestion prevention or control (C370S235000)
Examiner: Sam, Phirin (Department: 2661)
ABSTRACT:
BACKGROUND OF INVENTION
1. Technical Field
The present invention relates to throughput optimization of Internet protocol (IP) based applications, such as transmission control protocol (TCP), over links such as cellular, where long interruptions of the application traffic can be caused by error bursts or system design.
2. Discussion of Related Art
As shown in FIG. 1, a sending host 10 provides datagrams according to the Internet Protocol (IP) on a line 12 over an Internet 14 on a line 16 to a receiving host 18. Each of the hosts 10, 18 may be thought of as having protocol software on each machine stacked vertically into layers, each of which handles different functionalities in the communication of datagrams. The higher layers deal with end-to-end application issues while the lower layers handle issues relating to the transfer of packets or datagrams through the network. In traversing a network such as the Internet 14 shown in FIG. 1, the datagrams of a given message may traverse different routes through different routers from the host 10 to the host 18. The intermediate routers also have protocol stacks, but the datagrams do not need to consult the higher levels of the routers because only the lower layers are needed to receive, route, and send datagrams.
For instance, the host 10 may send a datagram which passes up to the IP layer on intermediate routers on the way to the receiving host 18, but no higher. Only when the datagram reaches the receiving host 18 does IP extract the message and pass it up to higher levels of the protocol software.
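By way of illustration, the layered encapsulation just described might be sketched as follows (a hypothetical Python illustration, not from the patent; all names are chosen for illustration): a TCP segment rides as the payload of an IP datagram, an intermediate router makes its forwarding decision from the IP header alone, and only the end host opens the TCP payload.

from dataclasses import dataclass

@dataclass
class TCPSegment:          # transport-layer unit (higher layer)
    seq: int
    payload: bytes

@dataclass
class IPDatagram:          # network-layer unit (lower layer)
    src: str
    dst: str
    segment: TCPSegment    # TCP segment encapsulated inside the datagram

def route(datagram: IPDatagram, routing_table: dict) -> str:
    """An intermediate router consults only the IP header (destination
    address); it never looks inside the encapsulated TCP segment."""
    return routing_table[datagram.dst]

def deliver(datagram: IPDatagram) -> TCPSegment:
    """Only the receiving host extracts the TCP segment and passes it
    up to the higher layers of the protocol stack."""
    return datagram.segment

# Hypothetical usage: host 10 sends to host 18 via one router.
dgram = IPDatagram(src="host10", dst="host18",
                   segment=TCPSegment(seq=1, payload=b"hello"))
next_hop = route(dgram, {"host18": "router2"})   # lower layers only
segment = deliver(dgram)                          # end host reaches the TCP layer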
In practice, delays and loss may occur between the end point hosts 10, 18 due to congestion in the routers and other devices of the Internet 14, as well as a lack of storage space in the receiving host 18. Even severe delays can be caused by an overload of datagrams at one or more switching points, routers, or the like. When this happens, delays increase as routers begin to pile up datagrams until they are able to send them forward. But since the storage capacity of each router is not unlimited and since datagrams compete for that storage space, it is possible in an uncoordinated network such as the Internet that the number of datagrams arriving at a congested router will be too much for it to handle, and it will be forced to drop datagrams.
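The buffering behavior described above can be sketched as a simple bounded queue (a hypothetical Python illustration; the buffer size and names are assumptions, not from the patent): arriving datagrams are enqueued until the router's storage is exhausted, after which further arrivals are dropped.

from collections import deque

class RouterQueue:
    """Minimal sketch of a router output queue with finite storage.
    When the buffer is full, newly arriving datagrams are dropped,
    which the end hosts later perceive only as loss."""
    def __init__(self, capacity: int = 8):    # capacity is an assumed value
        self.buffer = deque()
        self.capacity = capacity
        self.dropped = 0

    def enqueue(self, datagram) -> bool:
        if len(self.buffer) >= self.capacity:
            self.dropped += 1                  # congestion: datagram is lost
            return False
        self.buffer.append(datagram)
        return True

    def dequeue(self):
        """Forward the oldest queued datagram when the outgoing link is free."""
        return self.buffer.popleft() if self.buffer else None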
If that happens, the hosts 10, 18 would not normally know the details of where this congestion has occurred or why. To them, an unexpected delay or loss is taken as a sign of congestion. This attribution is due to the fact that, in the wired networks for which the Internet was designed, the transmission of datagrams between hosts and routers and between routers is very reliable, so congestion is a reasonable assumption as the cause of the delay or loss. For this reliable wired environment the Internet designers provided certain responses to perceived congestion.
One of these is for the TCP layer to use a specialized sliding window mechanism, as shown in FIG. 2, which serves several purposes. The window makes it possible to send multiple segments (the unit of transfer between TCP layers on two machines is called a segment) from the host 10 before an acknowledgement arrives, so as to increase total throughput. It also has a flow control purpose that allows the receiving host 18 to restrict transmission until it has sufficient buffer space to accommodate more data. The window operates at the octet level, not at the segment or datagram level (TCP segments are encapsulated within IP datagrams). Octets are numbered sequentially as shown in FIG. 2. Whenever a sending host sends a TCP segment, it puts the sequence number of the first octet in that segment and in return expects an acknowledgement from the receiver for the last octet the receiver has successfully received. The sending host 10 keeps three pointers associated with every connection. The first pointer marks the left edge of the sliding window, separating octets on the left (1, 2) that have been sent and acknowledged from octets yet to be acknowledged. A second pointer marks the right edge of the sliding window and defines the highest octet (9) in the sequence that can be sent before more acknowledgements are received. The third pointer marks the boundary inside the window that separates those octets that have already been sent (3-6) from those octets that have not yet been sent (7-9). The receiving host 18 maintains a similar window to piece the stream together again after a plurality of datagrams traverse the Internet 14, possibly over different routes using different routers and arriving out of sequence.
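A minimal sketch of the sender's three pointers might look like the following (hypothetical Python; the pointer names are chosen for illustration, not taken from the patent or any RFC): one pointer tracks the last acknowledged octet, one tracks the boundary between sent and not-yet-sent octets, and the right window edge is derived from the window size.

class SenderWindow:
    """Sketch of TCP's sender-side sliding window, operating on octets."""
    def __init__(self, window_size: int):
        self.left = 0            # every octet up to and including 'left' is acknowledged
        self.next_to_send = 0    # boundary between sent and not-yet-sent octets
        self.window_size = window_size

    @property
    def right(self) -> int:
        # highest octet that may be sent before more acknowledgements arrive
        return self.left + self.window_size

    def send_some(self, count: int) -> range:
        """Send up to 'count' octets, never past the right window edge."""
        first = self.next_to_send + 1
        last = min(self.next_to_send + count, self.right)
        self.next_to_send = max(self.next_to_send, last)
        return range(first, last + 1)

    def on_ack(self, acked_up_to: int) -> None:
        """Receiver acknowledged all octets up to 'acked_up_to';
        slide the left edge (and hence the right edge) forward."""
        self.left = max(self.left, acked_up_to)

# Example mirroring FIG. 2: octets 1-2 acknowledged, 3-6 sent, 7-9 still sendable.
w = SenderWindow(window_size=7)
w.send_some(2)    # octets 1-2 go out
w.on_ack(2)       # octets 1-2 acknowledged; right edge becomes 9
w.send_some(4)    # octets 3-6 go out; 7-9 remain sendable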
TCP allows the window size to vary over time. It does this by having the receiving host 18 specify not only how many octets have been received but also how many additional octets of data it is prepared to accept. This is carried out by a so-called window advertisement, which can be thought of as specifying the receiving host's current buffer size. When the receiving host 18 increases its window advertisement, the sending host 10 increases the size of its sliding window. Likewise, when the receiving host 18 signals decreased buffer space with a decreased window advertisement, the sending host 10 decreases the size of its window and stops sending octets beyond the boundary. An advantage of all this is that it provides flow control as well as reliable transfer. If the receiving host 18 buffer begins to get full, it can send smaller window advertisements. It can even advertise a window size of zero to stop all transmission, and later advertise a non-zero window size to trigger the flow of data again once buffer space becomes available.
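The advertisement mechanism can be sketched as follows (a hypothetical Python illustration; the buffer size and field names are assumptions): the receiver advertises how much buffer space remains, and the sender never allows more than that many unacknowledged octets in flight, stalling entirely on a zero advertisement.

class Receiver:
    """Advertises its remaining buffer space with every acknowledgement."""
    def __init__(self, buffer_capacity: int = 4096):   # assumed capacity
        self.buffer_capacity = buffer_capacity
        self.buffered = 0        # octets received but not yet read by the application

    def advertised_window(self) -> int:
        return self.buffer_capacity - self.buffered    # may be zero

class Sender:
    """Limits unacknowledged data to the receiver's latest advertisement."""
    def __init__(self):
        self.in_flight = 0       # octets sent but not yet acknowledged

    def can_send(self, advertised_window: int) -> int:
        # A zero advertisement stops transmission entirely until a
        # non-zero window is advertised again.
        return max(0, advertised_window - self.in_flight)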
In addition to the flow control window, TCP maintains a second limit, called the congestion window, to control congestion. The goal of the flow control window is to ensure that the sender does not send more data than the receiver can actually accommodate. In many cases, however, it is the network that may not have enough space to accommodate all the data that the sender is sending. As alluded to before, in the case of congestion the network buffer space may become exhausted and data packets may be dropped. In addition, if there are a plurality of senders and none of them regulates the rate at which data is sent, the network will never be able to make room for any single connection, and the congestion may persist for a very long time. To avoid this, TCP uses a congestion window that tries to estimate the amount of buffer space available in the network. To summarize, TCP does not rely solely on the window advertised by the receiving host in deciding how many packets to send. It instead takes the minimum of the advertised window and the congestion window to decide how much data can be sent into the network without waiting for an acknowledgement from the receiver.
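In code form, the rule described in this paragraph reduces to taking a minimum (hypothetical Python; the variable names are illustrative):

def effective_send_window(advertised_window: int, congestion_window: int,
                          in_flight: int) -> int:
    """How many more octets the sender may inject without a new acknowledgement:
    bounded by both the receiver's advertisement (flow control) and the
    sender's estimate of network capacity (congestion control)."""
    usable = min(advertised_window, congestion_window)
    return max(0, usable - in_flight)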
When TCP experiences datagram loss, it adopts two different strategies to adjust its congestion window. At the start of the connection, when TCP has no information about the state of the network, it begins by sending one packet into the network and waiting for an acknowledgement from the receiver. An acknowledgement implies that the network had sufficient space to accommodate at least one packet from this connection. The sender then injects two new packets to see if the network can accommodate two packets and waits for the acknowledgement of these new packets. This process of probing the network for its buffer space, called slow start, continues until the sender sees a packet loss. A packet loss indicates that the network cannot enqueue data at a higher rate, and therefore the sender must not be too aggressive in sending more packets. However, since the buffer space in the network keeps changing (e.g., another TCP connection ended and its buffer space was freed up), the sender still keeps trying to increase its congestion window for every acknowledgement received, but at a rate considerably slower than slow start.
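A minimal sketch of this two-phase behavior follows (hypothetical Python; constants and names are assumptions, loosely following the classic slow start and congestion avoidance scheme rather than the patent's own method):

class CongestionWindow:
    """Sketch of slow start followed by the slower additive increase.
    Units are whole segments for simplicity."""
    def __init__(self, initial: int = 1):
        self.cwnd = initial             # congestion window, in segments
        self.ssthresh = float("inf")    # switch point between the two phases

    def on_ack(self) -> None:
        if self.cwnd < self.ssthresh:
            self.cwnd += 1              # slow start: roughly doubles once per round trip
        else:
            self.cwnd += 1 / self.cwnd  # congestion avoidance: about one segment per round trip

    def on_loss(self) -> None:
        # Loss means the probing overshot the network's buffer capacity:
        # remember half the current window and grow cautiously from there.
        self.ssthresh = max(self.cwnd // 2, 1)
        self.cwnd = 1                   # restart from one packet (slow start)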
Inventors: Le Khiem; Swami Yogesh
Assignee: Nokia Corporation
Attorney/Agent: Ware Fressola Van Der Sluys & Adolphson LLP