ATM adaption layer traffic scheduling for variable bit rate...

Multiplex communications – Data flow congestion prevention or control – Control of data admission to the network

Reexamination Certificate


Details

Classification: C370S468000

Status: active

Patent number: 06603739

ABSTRACT:

FIELD OF INVENTION
The invention relates generally to the art of digital communications and more specifically to a system for minimizing the average latency in transporting messages, such as packets or frames, which are segmented into a plurality of smaller cells for transport across a network.
BACKGROUND OF INVENTION
Asynchronous transfer mode (hereinafter “ATM”) service inter-networking protocols enable data or messages formatted according to a non-ATM data communication protocol to be transported across an ATM network. For example, the Frame Relay Forum FRF.5 protocol specifies how a relatively large, variable-length frame relay packet should be segmented into a plurality of ATM-like, fixed-size cells for transport across an ATM network. Such protocols necessarily define how the ATM Adaption Layer (AAL) should be provisioned, since this layer of the ATM/B-ISDN protocol stack, as defined by ITU Recommendation I.321 and shown in FIG. 1, is responsible for adapting the services provided by the ATM Layer, which provides basic ATM cell transport functions, to higher layers, e.g. the frame relay bearer service.
FIG. 2 illustrates a generic version of the AAL in greater detail. As shown in FIG. 2, some versions of the AAL, such as AAL3/4 and AAL5, include a convergence sublayer (CS) and a segmentation and reassembly sublayer (SAR). The CS, which sits directly above the SAR and below the AAL Service Access Point (SAP), aligns the SDUs and adds overhead information. The CS may also provide service-specific signalling or data link functions.
The SAR, when operating in a message mode, segments a single AAL SAR Service Data Unit (hereinafter “SDU”), such as a variable-length frame packet, into a plurality of AAL SAR Protocol Data Units (hereinafter “PDU”), each of which essentially forms the payload of an ATM cell transmitted across an ATM network. Conversely, at the destination, the SAR requires that all of the PDUs composing an SDU be passed up from the ATM Layer before it can reassemble the SDU and, ignoring the role of the convergence sublayer, indicate reception of the SDU to the higher layer using the AAL. Thus, the latency in transmitting an SDU from a first point to a second point in a network can be defined as the time from which the transmission of the SDU is first requested until the time the last PDU arrives at the destination SAR and the SDU is reassembled. In other words, latency can be defined as the time required to transmit the SDU from an originating AAL SAP to a destination AAL SAP. This latency is entirely characterized by the time required to propagate the last PDU of an SDU across the ATM network; the time required to propagate any earlier PDU of the SDU is of no consequence at the destination AAL SAP.
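For illustration only (a sketch of my own, not text from the patent), this definition of latency reduces to a one-line computation over the PDU arrival times observed at the destination SAR:

def sdu_latency(request_time, pdu_arrival_times):
    """Latency of one SDU: the time from the transmission request at the
    originating AAL SAP until its last PDU reaches the destination SAR,
    at which point the SDU can be reassembled and delivered."""
    return max(pdu_arrival_times) - request_time

# An SDU requested at t=0 whose four PDUs arrive at t=1, 2, 3 and 4
print(sdu_latency(0.0, [1.0, 2.0, 3.0, 4.0]))  # -> 4.0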
Latency manifests itself as sluggishness or slow response time in interactive communications. For example, if one were sending joystick instructions across a network during an interactive game played over that network, a long latency would, in the absence of other aggravating factors, produce a noticeable delay between the physical movement of the joystick and the corresponding on-screen action. Accordingly, it is desirable to minimize latency for interactive telecommunications applications.
Latency is affected by the service discipline used to schedule or multiplex PDUs corresponding to SDUs from a plurality of virtual connections (VCs) into a single cell stream for transmission across the Physical Layer (PHY) of the ATM network.
FIG. 3 shows how an ATM Layer 11 provides a SAP 10 to each of several VCs, each of which has its own AAL 12 (i.e., the AAL is invoked in parallel instances). The ATM Layer 11, in turn, uses a single SAP 14 into a PHY 16. One role of the ATM Layer 11 is to accept requests of PDUs 17 from each SAP 10 and to multiplex these PDUs into a single cell stream 18 such that the timing of the transmission of each of the PDUs conforms to predetermined traffic parameters assigned to its respective VC.
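In ATM practice such per-VC traffic parameters are enforced with a leaky bucket conformance test, and the SUMMARY OF INVENTION below likewise defines compliance in leaky bucket terms. The following Python sketch of a continuous-state leaky bucket check is illustrative only; the function and parameter names are mine, not the patent's.

def leaky_bucket_conforms(t_now, last_conform_time, fill, T, tau):
    """Continuous-state leaky bucket check for one cell emitted at t_now.

    T    -- nominal inter-cell interval for the VC (reciprocal of its rate)
    tau  -- bucket limit (tolerance)
    fill -- bucket fill recorded at last_conform_time
    Returns (conforming, new_fill, new_last_conform_time). The bucket drains
    at one unit of fill per unit time and is charged T for each cell sent."""
    drained = max(0.0, fill - (t_now - last_conform_time))
    if drained > tau:
        # Sending this cell now would violate the traffic contract.
        return False, fill, last_conform_time
    return True, drained + T, t_now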
FIG. 3 illustrates a condition where each VC generates a burst 20 of several ATM PDU requests at the ATM SAP 10, wherein each such burst corresponds to a single SDU 22, such that there is an overlap in the transmission periods of the SAR SDUs from the ATM Layer 11 to the PHY 16. The ATM PDUs 17 received from the ATM SAP 10 must therefore be queued, and then the ATM PDUs from each of the different ATM SAPs 10 must be multiplexed in some order onto the single stream 18 of ATM PDUs passed to the PHY SAP 14. Given this set of PDUs which have been requested over several ATM Layer SAPs 10, and subject to the constraints of satisfying the traffic parameters of each VC, it is often desired to minimize the average amount of latency experienced per unit of SDU data (i.e., per PDU) for various types of ATM service categories.
As shown in FIG. 3, a typical ATM Layer implementation might use round-robin ordering in sending the PDUs 17 to the PHY SAP 14 from each ATM SAP 10. This would result in each corresponding SDU 22 using an equal fraction of the PHY bandwidth while the PDUs for each SDU are being transmitted. This is shown, for instance, in the bandwidth occupancy chart of FIG. 4A for the situation where two VCs each request a burst of the same number of PDUs at about the same time, wherein each VC has a peak cell rate (PCR) equal to 100% of the available bandwidth. (A “bandwidth occupancy chart” is a chart with time on the horizontal axis and bandwidth on the vertical axis. Each SDU sent on an ATM virtual connection is shown as a shaded region on such a chart. The net height of the region at a particular time shows the amount of bandwidth occupied by the transmission of the SDU at that time; the leftmost and rightmost extents of the region give the times at which the first and last PDUs for the SDU are transmitted, respectively; and the total area of the region gives the size of the SDU. Unshaded regions in these charts represent the proportion of unused PHY bandwidth, for which the ATM Layer will be sending idle cells.) This ordering is not optimal with respect to the average amount of latency experienced per unit of SDU data.
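To make that last statement concrete, consider a small calculation of my own (not taken from the patent): two SDUs of N PDUs each arrive together and the PHY transmits one PDU per slot. Round-robin interleaving completes the SDUs at slots 2N-1 and 2N, whereas sending one burst to completion before the other completes them at slots N and 2N, which lowers the average latency charged per unit of SDU data.

def avg_latency_per_pdu(schedule, sdu_sizes):
    """schedule: list of SDU ids, one per PHY slot (slot k transmits at time k).
    Each PDU of an SDU is charged that SDU's completion time (the slot in
    which its last PDU is sent), per the per-unit-of-SDU-data definition."""
    completion = {}
    for slot, sdu in enumerate(schedule, start=1):
        completion[sdu] = slot  # the last slot seen for this SDU wins
    total_pdus = sum(sdu_sizes.values())
    return sum(completion[s] * n for s, n in sdu_sizes.items()) / total_pdus

sizes = {"A": 4, "B": 4}                                  # two SDUs of 4 PDUs each
round_robin = ["A", "B", "A", "B", "A", "B", "A", "B"]
one_at_a_time = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(avg_latency_per_pdu(round_robin, sizes))            # -> 7.5
print(avg_latency_per_pdu(one_at_a_time, sizes))          # -> 6.0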
SUMMARY OF INVENTION
Broadly speaking, the invention seeks to minimize or reduce the average per-unit latency in transporting messages which are decomposed into a plurality of smaller data units for transport across a network.
One aspect of the invention relates to an apparatus for transmitting messages associated with a plurality of variable bit rate connections, each of which is associated with a traffic contract which defines compliance thereto as conformance to a leaky bucket algorithm. The apparatus comprises transmission equipment for receiving multiple messages, segmenting each received message into one or more data units, and multiplexing such data units from various connections into a single stream for transport over a physical interface at an output transmission rate. A bandwidth allocation means is associated with the transmission equipment for dynamically allocating a portion of the output transmission rate to any connection. A scheduler is connected to the bandwidth allocation means for scheduling the transfer of messages to the single stream of the transmission equipment and for allocating a portion of the output transmission rate to each connection at the time its message is transferred to the transmission equipment. The portion of the output transmission rate for a given connection is substantially equal to 1/T, T being computed as
T ← max(T_S + (X - τ_S)/(N - 1), T_p, T_L), if N > 1, and
T ← max(T_p, T_L), if N = 1,
where T_S is a period corresponding to a constant sustained transmission rate, T_p is a period corresponding to a peak transmission rate, τ_S is a burst tolerance, N is the number of data units in the message, X is a fill level of the leaky bucket associated with the given connection, and T_L corresponds to all un
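As an illustration, the rule above can be written directly in code. This is a sketch with names of my own choosing, not the patent's implementation; the T_L parameter is simply passed through as an additional lower bound on the period.

def scheduled_period(T_s, T_p, T_L, tau_s, X, N):
    """Period T allocated to a connection when its message of N data units
    is transferred; the connection then receives a bandwidth share of 1/T.

    T_s   -- period of the sustained (average) cell rate
    T_p   -- period of the peak cell rate
    T_L   -- additional lower bound on the period (see text)
    tau_s -- burst tolerance of the leaky bucket
    X     -- current fill level of the connection's leaky bucket
    N     -- number of data units (PDUs) in the message
    """
    if N > 1:
        return max(T_s + (X - tau_s) / (N - 1), T_p, T_L)
    return max(T_p, T_L)

# Example with hypothetical values: a 10-PDU message on a VC whose sustained
# period is 10, peak period is 2, bucket fill is 30 and tolerance is 12.
print(scheduled_period(T_s=10, T_p=2, T_L=1, tau_s=12, X=30, N=10))  # -> 12.0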
