Traffic management and flow prioritization on a routed...

Multiplex communications – Data flow congestion prevention or control – Control of data admission to the network


Details

Related classifications: C370S235000, C370S392000, C370S412000, C370S428000
Type: Reexamination Certificate
Status: active
Patent number: 06320845

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates to computer networks and, more particularly, to traffic management and flow prioritization across communication circuits (especially virtual circuits).
BACKGROUND OF THE INVENTION
Data communication in a computer network involves the exchange of data between two or more entities interconnected by communication links and subnetworks. These entities are typically software programs executing on hardware computer platforms, which, depending on their roles within the network, may serve as end stations or intermediate stations. Examples of intermediate stations include routers, bridges and switches that interconnect communication links and subnetworks; an end station may be a computer located on one of the subnetworks. More generally, an end station connotes a source of or target for data that typically does not provide routing or other services to other computers on the network. A local area network (LAN) is an example of a subnetwork that provides relatively short-distance communication among the interconnected stations; in contrast, a wide area network (WAN) facilitates long-distance communication over links provided by public or private telecommunications facilities.
End stations typically communicate by exchanging discrete packets or frames of data according to predefined protocols. In this context, a protocol represents a set of rules defining how the stations interact with each other to transfer data. Such interaction is simple within a LAN, since these are typically “broadcast” networks: when a source station transmits a frame over the LAN, it reaches all stations on that LAN. If the intended recipient of the frame is connected to another network, the frame is passed through a routing device to that other network. Collectively, these hardware and software components comprise a communications network, and their interconnections are defined by an underlying architecture.
Traffic flowing through a network may be considered as a set of flows. A flow consists of a set of packets that require similar treatment by the network. Flows may be defined according to a wide range of criteria to meet different needs. For example, a flow could be the set of packets sent from one host to another, or it could be the set of packets exchanged by a pair of communicating application programs. In general there may be many flows passing through any point in the network at any time.
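For illustration only, the following minimal Python sketch shows one way a router might key packets into flows. The 5-tuple fields, the packet-as-dict representation, and the coarser host-to-host variant are assumptions for the example; the text above deliberately leaves the classification criteria open.

```python
from collections import namedtuple

# Hypothetical flow key: the text only requires that packets needing similar
# treatment share a flow, so these particular fields are illustrative.
FlowKey = namedtuple("FlowKey", "src_addr dst_addr src_port dst_port protocol")

def classify(packet):
    """Map a packet (modeled here as a dict) to its flow key."""
    return FlowKey(packet["src_addr"], packet["dst_addr"],
                   packet.get("src_port"), packet.get("dst_port"),
                   packet["protocol"])

def classify_by_hosts(packet):
    """A coarser but equally valid flow definition: all traffic between two hosts."""
    return (packet["src_addr"], packet["dst_addr"])
```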
Packets travel between network entities over “circuits” connecting the entities. The traditional circuit is a physical transmission line between the communicating stations over which data is exchanged. The circuit is defined by a communication subnetwork that carries messages between connected entities much as the telephone system connects callers, i.e., over wires or optical fibers interconnected through switches. Accordingly, the physical circuit is continuously dedicated to the transmission path between the connected entities; data exchange over the path is synchronous in the sense of being tied to a common master clock.
This mode of communication absorbs substantial bandwidth. Unless data moves constantly between the communicating entities, the circuit will at times be idle and the bandwidth consequently wasted. To avoid this, networks frequently employ “virtual” circuits rather than dedicated transmission lines. The virtual circuit establishes a routing pathway for message travel between communicating stations, and is virtual in the sense that many stations can transmit across the lines defining the circuit; these are not dedicated to a single pair of entities. Different types of networks employ the virtual circuit (VC) model of communication, most notably Frame Relay and Asynchronous Transfer Mode (ATM).
ATM can accommodate constant-rate and variable-rate data traffic. The VC may be established by a SETUP message, which travels through a sequence of switches until it reaches the destination station, thereby establishing the VC path. In general, a VC may carry many flows at one time.
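As a rough illustration of how an established VC relays traffic without per-packet routing, the sketch below models the per-switch translation table that a SETUP exchange could install along the path: each switch maps an incoming (port, VPI, VCI) to an outgoing (port, VPI, VCI). The table layout, names, and numbers are assumptions for the example and are not taken from the patent.

```python
# Hypothetical per-switch VC translation table installed at setup time.
# Cells belonging to the VC are relayed hop by hop using only this state.
vc_table = {
    # (in_port, vpi, vci): (out_port, new_vpi, new_vci)
    (1, 0, 42): (3, 0, 77),
    (2, 0, 55): (3, 0, 78),
}

def switch_cell(in_port, vpi, vci):
    """Look up the outgoing port and rewritten VPI/VCI for an arriving cell."""
    return vc_table[(in_port, vpi, vci)]
```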
A representative ATM configuration is shown in FIG. 1. A single VC interconnects router 25 and router 27. An end station 10 on a LAN 12 communicates with an end station 15 on another LAN 17 over this VC, and with an end station 20 on still another LAN 22 over the same VC. The VC in this case carries two flows: one from end station 10 to end station 15, and one from end station 10 to end station 20. The flows are handled by first and second routers 25, 27 and a series of ATM switches representatively indicated at 30, 32. Each router 25, 27 is connected to an illustrated LAN and to an ATM switch. The connection to an ATM switch is by means of an ATM interface configured to direct flows onto VCs; the routers establish further connections by means of additional interfaces, which may or may not be ATM interfaces. That is, each router 25, 27 may contain various interfaces for handling different types of network traffic.
Routers 25, 27 typically forward IP datagrams (i.e., they operate at level 3 in the protocol stack by inspecting IP headers), terminating VCs at the ATM interfaces. ATM switches 30, 32 forward ATM cells (operating at level 2 in the protocol stack by examining ATM cell headers). VCs do not terminate on the interfaces of ATM switches 30, 32, instead passing through the switches. The VCs, then, span routers 25, 27, originating on one router and terminating on the other.
There may be more than one VC between router 25 and router 27. Furthermore, there may be additional VCs originating (or terminating) on routers 25, 27 and terminating (or originating) on other routers not shown here.
The foregoing discussion assumes a continuous sequence of ATM switches intervening between ATM-capable routers 25, 27. Other device sequences are of course possible. For example, depending on the routing algorithm and communications costs, router 25 may direct traffic directly to router 27 and vice versa. Alternatively, one or more routers may intervene between ATM switches 30, 32. In this case, communication between end stations 10, 15 and 10, 20 would occur over paths defined by multiple sequential VCs, each terminating at a router.
Permitting communication resources to be shared among many communicating entities may lead to congestion. Routers and links have finite information-handling capacity (bandwidth), and if streams of packets arrive on several input lines all requesting the same output line, a queue will build up. Eventually, if the queue exceeds the buffering capacity of the router, packets will be lost.
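The sketch below illustrates this point with a finite output buffer that simply drops arrivals once it is full; the tail-drop policy and the capacity are assumptions chosen for illustration, not a specific mechanism from the patent.

```python
from collections import deque

class OutputQueue:
    """Illustrative finite output buffer for one output line."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.packets = deque()
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.packets) >= self.capacity:
            self.dropped += 1      # buffering capacity exceeded: packet is lost
            return False
        self.packets.append(packet)
        return True

    def dequeue(self):
        """Transmit the next buffered packet, if any."""
        return self.packets.popleft() if self.packets else None
```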
One approach to congestion control is “traffic shaping,” which regulates the average rate and concentration of data transfer, that is, the traffic pattern. Limitations on transmission patterns are imposed on each VC with the goal of avoiding excessive data backup. These limitations are enforced by a “shaper,” usually implemented in hardware on a router's ATM network interfaces. The pattern adopted by a shaper may differ among VCs and may also vary over time for a particular VC. If packets arrive at the router faster than the allowed VC transmission rate, the network interface “shapes” the traffic sent on the VC to conform to that rate; as a result, packets must be buffered in the router circuitry or on the network interface itself. When this condition occurs, the VC is said to be “backlogged.”
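As one hypothetical software rendering of a shaper, the following sketch uses a token bucket to decide whether a packet conforms to the allowed VC rate or must be buffered (making the VC backlogged). The patent does not prescribe this particular discipline, and as noted above such shaping is typically a hardware function on the ATM interface; the rate and burst parameters are illustrative.

```python
import time

class TokenBucketShaper:
    """Token-bucket sketch of a per-VC shaper (assumed discipline)."""
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0        # refill rate in bytes per second
        self.burst = burst_bytes          # maximum accumulated credit
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def try_send(self, packet_len):
        now = time.monotonic()
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_len <= self.tokens:
            self.tokens -= packet_len
            return True                   # conforms to the allowed rate: transmit
        return False                      # exceeds the rate: buffer (VC backlogged)
```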
Another approach to traffic management is reactive rather than proactive. The bandwidth of the VC cannot be expanded, but at least it can be allocated among flows on the VC. “Fair queueing” is a simple approach to bandwidth allocation in which a queue is defined for each flow seeking access to a given router output line. The router is programmed to scan the queues round robin, sequentially taking the first packet from each queue and transmitting it over the line. A flow, in this circumstance, is a series of packets requiring similar queueing treatment.
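A minimal sketch of the round-robin fair-queueing scheme just described, assuming an in-memory Python model with one queue per flow and one packet transmitted per turn; the names and data structures are illustrative.

```python
from collections import deque, defaultdict

class RoundRobinScheduler:
    """One queue per flow, scanned round robin, one packet per visit."""
    def __init__(self):
        self.queues = defaultdict(deque)   # flow key -> queued packets
        self.order = deque()               # flows currently holding packets

    def enqueue(self, flow, packet):
        if not self.queues[flow]:
            self.order.append(flow)        # flow becomes active: give it a turn
        self.queues[flow].append(packet)

    def next_packet(self):
        if not self.order:
            return None
        flow = self.order.popleft()
        packet = self.queues[flow].popleft()
        if self.queues[flow]:              # flow still backlogged: keep its place
            self.order.append(flow)
        return packet
```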
One problem with this approach is that it
