Method for explicit data rate control in a packet...

Multiplex communications – Data flow congestion prevention or control – Control of data admission to the network


Details

Classification codes: C370S232000, C370S235000

Type: Reexamination Certificate

Status: active

Patent number: 06298041

ABSTRACT:

BACKGROUND OF THE INVENTION
This invention relates to digital packet telecommunications, and particularly to data flow rate control at a particular layer of a digitally-switched packet telecommunications environment normally not subject to data flow rate control wherein data packets are communicated at a variety of rates without supervision as to rate of data transfer, such as under the TCP/IP protocol suite.
The widely-used TCP/IP protocol suite, which implements the world-wide data communication network environment called the Internet and is employed in local networks as well (Intranets), intentionally omits any explicit supervisory function over the rate of data transport over the various media which comprise the network. While this characteristic has certain perceived advantages, it also has the consequence of juxtaposing very high-speed and very low-speed packet traffic in potential conflict, with resultant inefficiencies. Certain loading conditions can even cause instabilities which could lead to overloads that temporarily stop data transfer. It is therefore considered desirable to provide some mechanism to optimize the efficiency of data transfer while minimizing the risk of data loss.
In order to understand the exact context of the invention, an explanation of technical aspects of the Internet/Intranet telecommunications environment may prove helpful.
Internet/Intranet technology is based largely on the TCP/IP protocol suite, where IP is the network level Internet Protocol and TCP is the transport level Transmission Control Protocol. At the network level, IP provides a “datagram” delivery service. By contrast, TCP builds a transport level service on top of the datagram service to provide guaranteed, sequential delivery of a byte stream between two IP hosts.
TCP has ‘flow control’ mechanisms operative at the end stations only to limit the rate at which a TCP endpoint will emit data, but it does not employ explicit data rate control. The basic flow control mechanism is a ‘sliding window’: a window of allowable unacknowledged data which, by its sliding operation, limits the amount of unacknowledged data that a transmitter can have outstanding at any time.
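As an illustration of the bookkeeping such a window implies, the following is a minimal sketch; the class and variable names (SlidingWindowSender, snd_una, snd_nxt) are illustrative conventions, not taken from the patent.

# Minimal sketch of sender-side sliding-window bookkeeping (illustrative only).
class SlidingWindowSender:
    def __init__(self, window_bytes):
        self.window_bytes = window_bytes   # window advertised by the receiver
        self.snd_una = 0                   # oldest unacknowledged sequence number
        self.snd_nxt = 0                   # next sequence number to send

    def can_send(self, nbytes):
        # Data in flight (sent but not yet acknowledged) must fit in the window.
        in_flight = self.snd_nxt - self.snd_una
        return in_flight + nbytes <= self.window_bytes

    def on_send(self, nbytes):
        self.snd_nxt += nbytes

    def on_ack(self, ack_seq):
        # The window "slides" forward as acknowledgments arrive, permitting
        # more unacknowledged data to be emitted.
        self.snd_una = max(self.snd_una, ack_seq)
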
Another flow control mechanism is a congestion window, which is a refinement of the sliding window scheme involving a conservative expansion to make use of the full, allowable window. A component of this mechanism is sometimes referred to as ‘slow start’.
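A minimal sketch of that conservative expansion, assuming the common slow-start behavior in which the congestion window grows with each acknowledgment up to the receiver's advertised window (the segment-based accounting here is an assumption):

# Illustrative slow-start growth of a congestion window (cwnd), in segments.
def slow_start_update(cwnd, advertised_window, newly_acked_segments):
    cwnd += newly_acked_segments           # roughly doubles once per round trip
    return min(cwnd, advertised_window)    # never exceed the allowable window
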
The sliding window flow control mechanism works in conjunction with the Retransmit Timeout Mechanism (RTO), a timeout that prompts retransmission of unacknowledged data. The timeout length is based on a running average of the Round Trip Time (RTT) for acknowledgment receipt; i.e., if an acknowledgment is not received within (typically) the smoothed RTT + 4 × mean deviation, then packet loss is inferred and the data pending acknowledgment is retransmitted.
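That timeout can be sketched with the usual smoothed estimators; the gains and the factor of four follow common TCP practice (e.g., RFC 6298), and the names are illustrative:

ALPHA = 1 / 8   # gain applied to each new RTT sample for the smoothed RTT
BETA = 1 / 4    # gain applied for the mean deviation

def update_rto(srtt, rttvar, rtt_sample):
    # Update the mean deviation, then the smoothed RTT, then derive the
    # timeout as "smoothed RTT + 4 * mean deviation".
    rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - rtt_sample)
    srtt = (1 - ALPHA) * srtt + ALPHA * rtt_sample
    rto = srtt + 4 * rttvar
    return srtt, rttvar, rto
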
Data rate flow control mechanisms which are operative end-to-end without explicit data rate control draw a strong inference of congestion from packet loss (inferred, typically, by RTO). TCP end systems, for example, will ‘back-off’, i.e., inhibit transmission in increasing multiples of the base RTT average as a reaction to consecutive packet loss.
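The back-off reaction can be sketched as follows; doubling the wait on each consecutive loss is the common implementation choice and is an assumption here, since the text specifies only increasing multiples:

# Each consecutive inferred loss lengthens the wait before retransmitting,
# throttling the sender; a ceiling keeps the back-off bounded.
def backed_off_timeout(base_timeout, consecutive_losses, ceiling=64.0):
    return min(base_timeout * (2 ** consecutive_losses), ceiling)
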
Bandwidth Management in TCP/IP Networks
Bandwidth management in TCP/IP networks is accomplished by a combination of TCP end systems and routers which queue packets and discard packets when some congestion threshold is exceeded. The discarded and therefore unacknowledged packet serves as a feedback mechanism to the TCP transmitter. (TCP end systems are clients or servers running the TCP transport protocol, typically as part of their operating system.)
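A crude sketch of that router behavior, assuming a simple tail-drop queue with a fixed congestion threshold (the class and names are hypothetical):

from collections import deque

class ThresholdDropQueue:
    def __init__(self, threshold_packets):
        self.threshold = threshold_packets
        self.queue = deque()

    def enqueue(self, packet):
        # Discard once the congestion threshold is exceeded; the unacknowledged
        # packet becomes implicit feedback to the TCP transmitter.
        if len(self.queue) >= self.threshold:
            return False
        self.queue.append(packet)
        return True

    def dequeue(self):
        return self.queue.popleft() if self.queue else None
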
The term ‘bandwidth management’ is often used to refer to link level bandwidth management, e.g. multiple line support for Point to Point Protocol (PPP). Link level bandwidth management is essentially the process of keeping track of all traffic and deciding whether an additional dial line or ISDN channel should be opened or an extraneous one closed. The field of this invention is concerned with network level bandwidth management, i.e. policies to assign available bandwidth from a single logical link to network flows.
Routers support various queuing options. These options are generally intended to promote fairness and to provide a rough ability to partition and prioritize separate classes of traffic. Configuring these queuing options with any precision or without side effects is in fact very difficult, and in some cases, not possible. Seemingly simple things, such as the length of the queue, have a profound effect on traffic characteristics. Discarding packets as a feedback mechanism to TCP end systems may cause large, uneven delays perceptible to interactive users.
Routers can only control outbound traffic. A 5% load or less on outbound traffic can correspond to a 100% load on inbound traffic, due to the typical imbalance between an outbound stream of acknowledgments and an inbound stream of data.
A more efficient mechanism is needed to control traffic, one that is more tolerant of and responsive to traffic loading.
As background, further information about TCP/IP and the state of the art of flow control may be had in the following publications:
Comer, Douglas. Internetworking with TCP/IP, Vol. I. Prentice Hall, 1991.
Comer, Douglas, and Stevens, David. Internetworking with TCP/IP, Vol. II: Design, Implementation, and Internals. Prentice Hall, 1991.
Stevens, W. Richard. TCP/IP Illustrated, Vol. I: The Protocols. Addison-Wesley, 1994.
Postel, Jon. RFC 793: Transmission Control Protocol. 1981.
Braden, Robert. RFC 1122: Host Requirements. 1989.
A particularly relevant reference to the present work is:
Balakrishnan, Hari, Seshan, Srinivasan, Amir, Elan, and Katz, Randy H. Improving TCP/IP Performance over Wireless Networks. Proc. 1st ACM Conf. on Mobile Computing and Networking, Berkeley, Calif., November 1995.
The above document reports efforts of a research group at the University of California at Berkeley to implement TCP ‘interior spoofing’ to mitigate the effects of single packet loss in micro-cellular wireless networks. Its mechanism buffers data and performs retransmissions to preempt the end-to-end RTO. It is a software mechanism at a wireless-network base station which will aggressively retry transmission a single time when it infers that a single packet loss has occurred. This is a more aggressive retransmission than the normal TCP RTO mechanism or the ‘quick recovery’ mechanism, whereby a transmitter retransmits after receiving N consecutive identical acknowledgments while a window of data is pending acknowledgment.
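The duplicate-acknowledgment trigger mentioned above can be sketched as follows; N = 3 is the common choice and is assumed here:

# Retransmit without waiting for the RTO once N consecutive identical ACKs
# arrive while a window of data is still pending acknowledgment.
def should_retransmit_early(dup_ack_count, bytes_in_flight, n=3):
    return bytes_in_flight > 0 and dup_ack_count >= n
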
Sliding window protocols are known, as in Comer, Vol. I, page 175. Known sliding window protocols are not time-based; rate is a byproduct of the characteristics of the network and the end systems.
SUMMARY OF THE INVENTION
According to the invention, a method for explicit network level data rate control is introduced into a level of a packet communication environment at which there is a lack of data rate supervision to control assignment of available bandwidth from a single logical link to network flows. The method includes adding latency to the acknowledgment (ACK) packet of the network level and adjusting the reported size of the existing flow control window associated with the packet in order to directly control the data rate of the source data at the station originating the packet.
Called direct feedback rate control, the method comprises a mechanism that mitigates TCP packet level traffic through a given link in order to manage the bandwidth of that link. A software mechanism to implement the function may be a software driver, part of a kernel of an operating system or a management function implemented on a separate dedicated machine in the communication path.
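As a rough sketch only, not the patent's implementation, the two levers named in the summary (added ACK latency and a rewritten advertised window) might look like this for an inline device on the communication path; the rate arithmetic, the function and field names, and the blocking delay are all simplifying assumptions:

import time

def pace_ack(ack, target_rate_bps, acked_bytes, rtt_estimate_s, receiver_window):
    # 1. Add latency: hold the ACK long enough that the acknowledged bytes,
    #    spread over the added delay, stay at or below the per-flow target rate.
    delay_s = acked_bytes * 8 / target_rate_bps
    time.sleep(delay_s)                    # a real device would schedule, not block

    # 2. Adjust the reported flow-control window: allow roughly one
    #    bandwidth-delay product of data outstanding at the target rate,
    #    never more than the receiver actually advertised.
    paced_window = int(target_rate_bps / 8 * rtt_estimate_s)
    ack.window = min(receiver_window, paced_window)
    # (A real implementation would also recompute the TCP checksum.)
    return ack
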
The invention has the advantage of being transparent to all other protocol entities in a TCP/IP network environment. For example, in connections controlled according to the invention, it is transparent to the TCP end systems (i.e., end systems running the TCP protocol).
