Method for data rate control for heterogenous or peer...

Multiplex communications – Communication techniques for information carried in plural... – Adaptive

Reexamination Certificate

Details

Classification: C370S229000
Type: Reexamination Certificate
Status: active
Patent number: 06456630


COPYRIGHT NOTICE
A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
BACKGROUND OF THE INVENTION
This invention relates to digital packet telecommunications, and particularly to management of network bandwidth across heterogeneous network boundaries. It is particularly useful in conjunction with data flow rate detection and control in a digitally-switched packet telecommunications environment that is normally not subject to data flow rate control.
The ubiquitous TCP/IP protocol suite, which implements the world-wide data communication network environment called the Internet and is also used in private networks (Intranets), intentionally omits explicit supervisory function over the rate of data transport over the various media which comprise the network. While there are certain perceived advantages, this characteristic has the consequence of juxtaposing very high-speed packet flows and very low-speed packet flows in potential conflict for network resources, which results in inefficiencies. Certain pathological loading conditions can result in instability, overloading and data transfer stoppage. Therefore, it is desirable to provide some mechanism to optimize efficiency of data transfer while minimizing the risk of data loss. Early indication of the rate of data flow which can or must be supported is very useful. In fact, data flow rate capacity information is a key factor for use in resource allocation decisions.
Internet/Intranet technology is based largely on the TCP/IP protocol suite, where IP, or Internet Protocol, is the network layer protocol and TCP, or Transmission Control Protocol, is the transport layer protocol. At the network level, IP provides a “datagram” delivery service. By contrast, TCP builds a transport level service over the datagram service to provide guaranteed, sequential delivery of a byte stream between two IP hosts.
TCP flow control mechanisms operate exclusively at the end stations to limit the rate at which TCP endpoints emit data. However, TCP lacks explicit data rate control. In fact, there has heretofore been no concept of coordinating data rates among multiple flows. The basic TCP flow control mechanism is a sliding window, superimposed on a range of bytes beyond the last explicitly acknowledged byte. Its sliding operation limits the amount of unacknowledged transmissible data that a TCP endpoint can emit.
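For illustration, the sliding-window bookkeeping described above can be sketched in Python roughly as follows; the class and field names are hypothetical, and the sketch ignores sequence-number wraparound and retransmission:

    # Sketch of TCP-style sliding-window accounting (illustrative, not patent code).
    class SlidingWindowSender:
        def __init__(self, window_size):
            self.window_size = window_size   # receiver-advertised window, in bytes
            self.last_acked = 0              # highest byte explicitly acknowledged
            self.next_to_send = 0            # sequence number of the next byte to emit

        def can_send(self, nbytes):
            # Unacknowledged data in flight must stay within the advertised window.
            in_flight = self.next_to_send - self.last_acked
            return in_flight + nbytes <= self.window_size

        def send(self, nbytes):
            if self.can_send(nbytes):
                self.next_to_send += nbytes
                return True
            return False                     # window is closed; wait for an acknowledgment

        def on_ack(self, acked_through):
            # The window slides forward as acknowledgments arrive.
            self.last_acked = max(self.last_acked, acked_through)
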
Another flow control mechanism is the congestion window, a refinement of the sliding window scheme that employs conservative expansion to fully utilize all of the allowable window. A component of this mechanism is sometimes referred to as “slow start”.
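As a minimal sketch of that conservative expansion, assuming the classic slow-start behavior of doubling the congestion window each round trip until a threshold is reached and growing it roughly linearly thereafter (the segment counts, threshold, and names are illustrative):

    # Sketch of slow start / congestion avoidance, counted in segments (simplified).
    def grow_cwnd(cwnd, ssthresh):
        """Return the congestion window after one fully acknowledged round trip."""
        if cwnd < ssthresh:
            return cwnd * 2        # slow start: exponential growth per RTT
        return cwnd + 1            # congestion avoidance: roughly linear growth

    cwnd, ssthresh = 1, 16
    for rtt in range(8):
        print(f"RTT {rtt}: cwnd = {cwnd} segments")
        cwnd = grow_cwnd(cwnd, ssthresh)
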
The sliding window flow control mechanism works in conjunction with the Retransmit Timeout Mechanism (RTO), which is a timeout to prompt a retransmission of unacknowledged data. The timeout length is based on a running average of the Round Trip Time (RTT) for acknowledgment receipt, i.e. if an acknowledgment is not received within (typically) the smoothed RTT+4*mean deviation, then packet loss is inferred and the data pending acknowledgment is retransmitted.
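The timeout computation can be sketched as follows, assuming the conventional smoothed-RTT and mean-deviation estimators; the gains 1/8 and 1/4 are the usual TCP values and are shown only for illustration, while the 'smoothed RTT + 4 * mean deviation' rule comes from the text above:

    # Sketch of an RTO estimator: smoothed RTT plus four times the mean deviation.
    class RtoEstimator:
        def __init__(self, first_rtt):
            self.srtt = first_rtt          # smoothed round-trip time
            self.rttvar = first_rtt / 2    # running mean deviation of RTT samples

        def update(self, rtt_sample):
            # Exponentially weighted moving averages with the conventional gains.
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - rtt_sample)
            self.srtt = 0.875 * self.srtt + 0.125 * rtt_sample

        def rto(self):
            # If no acknowledgment arrives within this interval, packet loss is inferred.
            return self.srtt + 4 * self.rttvar
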
Data rate flow control mechanisms which are operative end-to-end without explicit data rate control draw a strong inference of congestion from packet loss (inferred, typically, by RTO). TCP end systems, for example, will ‘back-off’, i.e., inhibit transmission in increasing multiples of the base RTT average as a reaction to consecutive packet loss.
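That back-off can be sketched as below, assuming the common policy of doubling the retransmission interval on each consecutive loss; the doubling factor and the cap are illustrative:

    # Sketch of exponential back-off of the retransmit interval after repeated loss.
    def backed_off_timeout(base_timeout, consecutive_losses, cap=60.0):
        return min(base_timeout * (2 ** consecutive_losses), cap)
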
1.1 Bandwidth Management in TCP/IP Networks
Conventional bandwidth management in TCP/IP networks is accomplished by a combination of TCP end systems and routers which queue packets and discard packets when certain congestion thresholds are exceeded. The discarded, and therefore unacknowledged, packet serves as a feedback mechanism to the TCP transmitter. (TCP end systems are clients or servers running the TCP transport protocol, typically as part of their operating system.)
The term “bandwidth management” is often used to refer to link level bandwidth management, e.g. multiple line support for Point to Point Protocol (PPP). Link level bandwidth management is essentially the process of keeping track of all traffic and deciding whether an additional dial line or ISDN channel should be opened or an extraneous one closed. The field of this invention is concerned with network level bandwidth management, i.e. policies to assign available bandwidth from one or more logical links to network flows.
Routers support various queuing options. These options are generally intended to promote fairness and to provide a rough ability to partition and prioritize separate classes of traffic. Configuring these queuing options with any precision or without side effects is in fact very difficult, and in some cases, not possible. Seemingly simple things, such as the length of the queue, have a profound effect on traffic characteristics. Discarding packets as a feedback mechanism to TCP end systems may cause large, uneven delays perceptible to interactive users.
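The threshold-based discard that routers apply can be sketched as a simple tail-drop queue; the depth limit and names are illustrative, and real queuing disciplines (fair queuing, RED, and so on) are considerably more elaborate:

    # Sketch of a tail-drop queue: packets beyond a fixed depth are discarded, and the
    # resulting unacknowledged data is the implicit congestion signal to TCP senders.
    from collections import deque

    class TailDropQueue:
        def __init__(self, max_depth):
            self.max_depth = max_depth
            self.queue = deque()
            self.drops = 0

        def enqueue(self, packet):
            if len(self.queue) >= self.max_depth:
                self.drops += 1          # discarded packet feeds back to the transmitter
                return False
            self.queue.append(packet)
            return True

        def dequeue(self):
            return self.queue.popleft() if self.queue else None
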
1.2 Bandwidth Management in Frame Relay and ATM Networks
Large TCP/IP networks, such as the Internet, are composed of subnets. LAN based IP subnets may be interconnected through a wide area network via point-to-point wide area links. In practice, the wide area network is often a Frame Relay network, an ATM (Asynchronous Transfer Mode) network or a Frame Relay Network with an ATM core. In these cases, a Frame Access Device (FRAD), a Frame Relay Router or an ATM access concentrator is employed to encapsulate the TCP/IP traffic and map it to an appropriate PVC or SVC. For example, one such network topology would be an ATM network using Switched or Permanent Virtual Circuits (S/PVCs). The FRAD or ATM access concentrator may be referred to as a network edge device.
Frame Relay and ATM networks possess certain signaling protocols, whereby a network edge device may be advised of the current explicit rate at which traffic may, at the time, be allowed to be injected into the S/PVC by the network edge device. For example, Frame PVCs have a configured Committed Information Rate (CIR) and Peak Information Rate (PIR). Signaling within the Frame Relay protocol informs the network edge device via Forward/Backward Explicit Congestion Notification bits (FECN/BECN) that either congestion exists and traffic should not be injected beyond the CIR rate or that no congestion exists and that traffic may be injected up to the PIR rate. ATM networks may support an Available Bit Rate (ABR) service which supplies explicit rate information to the network edge.
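The edge-device behavior described above can be sketched as a simple rate selection: on a congestion notification the device falls back to the committed rate, and otherwise it may inject traffic up to the peak rate. The function and parameter names are illustrative:

    # Sketch of Frame Relay edge-device rate selection from FECN/BECN signaling.
    def allowed_injection_rate(cir_bps, pir_bps, congestion_notified):
        """Rate at which traffic may currently be injected into the PVC."""
        if congestion_notified:
            return cir_bps     # congestion reported: hold to the Committed Information Rate
        return pir_bps         # no congestion: bursts allowed up to the Peak Information Rate

    # Example: a 256 kbps CIR / 1024 kbps PIR circuit that has just seen a BECN bit.
    print(allowed_injection_rate(256_000, 1_024_000, congestion_notified=True))
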
There is no such explicit rate signaling in the TCP/IP protocol. Flow control in TCP/IP networks is handled by the transport layer and by routers queuing and discarding packets. The carriage of TCP traffic over networks lacking explicit rate signaling may be significantly degraded due to dropped packets and variable queuing delays in these networks.
The non-explicit rate control methods used by TCP are typically elastic and can expand to use all available bandwidth. In a typical topology where clients on high-speed LANs access servers that have high-speed WAN links via a relatively low-speed WAN access link, a bottleneck occurs at the low-speed WAN link. This bottleneck cannot be alleviated by purchasing incremental bandwidth at the low-speed link, because of the elastic nature of bandwidth consumption.
This bottleneck is especially a problem in the inbound (into the LAN, from the WAN) direction, because the queuing effects and packet drops occur at the far end of the WAN access link. The far end may not be within the administrative control of the network manager who wants to control and allocate bandwidth inbound to LAN clients. Given the elastic nature of bandwidth consumption, casual and unimportant use of a par
