Partial back pressure (PBP) transmission technique for...

Electrical computers and digital processing systems: multicomput – Computer-to-computer protocol implementing – Computer-to-computer data transfer regulating

Details

Classification: C709S223000, C709S224000, C709S225000, C709S233000
Type: Reexamination Certificate
Status: active
Patent number: 06721797

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates generally to Asynchronous Transfer Mode (ATM) communication systems and more particularly to ATM communications systems employing Passive Optical Networks (PONs).
BACKGROUND OF THE INVENTION
Asynchronous Transfer Mode-Passive Optical Networks (ATM-PONs) are considered a promising solution for fiber-based access networks communicating to end-users in Fiber-To-The-Home (FTTH)/Fiber-To-The-Building (FTTB) environments. Many ATM-PONs utilize a tree topology where a passive optical splitter/merger provides broadcasting in the downstream direction and merging in the upstream direction. The splitter/merger typically couples to a single Optical Line Termination unit (OLT) in the upstream direction and to multiple Optical Network Termination units (ONTs) in the downstream direction, thus providing the tree topology. The OLT provides the network-side interface of the optical access network, while the ONTs provide the customer-side interface to the optical access network. Because all incoming ATM cells from ONTs are combined into one cell stream en route to the OLT through the optical merger, there may be collisions among upstream (ONT to OLT) cells from different ONTs unless proper preventative mechanisms are employed.
A grant allocation technique is used to control upstream cell transfer from ONTs, where a grant is permission from the OLT for an ONT to send one upstream cell at a specified slot. A current approach considered by many vendors as well as the standards body is per-QoS (quality of service) class traffic control, where one queue is provided per QoS class at the ONT and a simple scheduler provides prioritized service among the queues. One prior art ONT architecture includes two user network interface (UNI) cards, with the physical memory at the ONT configured into a number of queues to accommodate the different service classes. One typical priority queue configuration assigns CBR (constant bit rate) traffic to a 1st priority queue, real-time VBR (variable bit rate) traffic to a 2nd priority queue, non-real-time VBR traffic to a 3rd priority queue, ABR (available bit rate) traffic to a 4th priority queue and UBR (unspecified bit rate) traffic to a 5th priority queue. When receiving a normal data grant, the server scans the 1st priority queue and sends a cell if any are available. Otherwise, the server scans the next queue, repeating down to the 5th queue until it finds a cell to send. In this manner, queues with higher priorities are guaranteed to receive service before queues with lower priorities. If only data grants are used with this priority queuing scheme, a so-called starvation effect may occur. For example, if nonconforming, greedy traffic arrives at one of the higher priority queues, then cells in lower priority queues cannot receive a fair amount of service even though they conform to the traffic contract. To prevent this starvation effect and provide a fair amount of service to cells in lower priority queues, tagged grants (which are special data grants) can be used. When a tagged grant is received from the OLT, the server starts scanning queues not from the 1st priority queue but, for example, from the 3rd priority queue.
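As a rough illustration (not the patent's implementation), the following Python sketch models the five priority queues and the two grant types described above; the serve_grant function, the queue list, and the tagged-grant starting index are assumptions introduced here for clarity.

from collections import deque

# Per-QoS-class scheduler sketch: five priority queues,
# CBR, rt-VBR, nrt-VBR, ABR, UBR (highest to lowest priority).
NUM_QUEUES = 5
TAGGED_START = 2          # index of the 3rd priority queue, where tagged-grant scans begin

queues = [deque() for _ in range(NUM_QUEUES)]

def serve_grant(grant_type):
    """Return one upstream cell for a received grant, or None if nothing is queued.

    A normal data grant scans from the 1st (highest) priority queue downward;
    a tagged grant starts the scan at a lower queue so that lower-priority
    traffic is not starved by nonconforming high-priority traffic.
    """
    start = 0 if grant_type == "data" else TAGGED_START
    for q in queues[start:]:
        if q:
            return q.popleft()   # highest non-empty queue in the scan order wins
    return None                  # grant goes unused; no cell to send

With only data grants, a saturated 1st priority queue would capture every upstream slot; interleaving tagged grants from the OLT guarantees the lower queues a share of slots regardless of the high-priority load.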
One problem with this per-QoS class traffic control in the ONT is the lack of a fairness guarantee among UNIs, e.g., Ethernet UNIs, when multiple UNIs are used in the same ONT. Because all incoming traffic of the same QoS class (e.g., ABR and UBR) is stored in the same priority queue and handled by the scheduler without any notion of connection, traffic from some end-users may not receive a guaranteed amount of service when traffic rates from others are exceedingly high. Therefore, some mechanism should be implemented inside the ONT to provide fairness among Ethernet UNIs in the same ONT.
One technique for solving the fairness issue, based on the previously described ONT architecture and the use of tagged grants, assigns the streams from the two Ethernet UNIs to different priority queues, i.e., one to a high priority queue and the other to a lower priority queue. Two different grant types, normal data grants and tagged grants, are used to isolate the two streams. The OLT can then directly control the two Ethernet UNIs in the same ONT with different grants and provide fairness between them.
This two-queue approach has a number of disadvantages. For instance, it typically leads to inefficient and asymmetrical use of bandwidth: the traffic stream associated with the higher priority queue cannot share the bandwidth assigned to the other stream (tagged grants), while the stream at the lower priority queue can share the bandwidth assigned to the stream at the higher priority queue (data grants). Therefore, some portion of the bandwidth is wasted. Even with the same grant rate, the actual amounts of traffic transmitted for the two streams can differ unless there are always cells in the higher priority queue when data grants are received.
Another drawback is the use of non-standardized tagged grants: the technique cannot be used when one vendor's ONTs are deployed with an OLT from another vendor that does not support tagged grants. Finally, the two-queue approach scales poorly. For example, it can accommodate two Ethernet UNI cards at an ONT, where each Ethernet UNI has only one stream, i.e., one virtual channel. However, if an ONT supports more than two Ethernet UNI cards, or if each Ethernet UNI card supports more than one stream, the two-queue approach is no longer applicable.
SUMMARY OF THE INVENTION
Efficient transmission and fairness guarantees for upstream traffic in ATM-PONs are achieved using a partial back pressure (PBP) technique for traffic generated from user network interface (UNI) cards, e.g., Ethernet UNI cards or other network interface cards for non-constant bit rate sources. The PBP technique utilizes a feedback flow control mechanism between priority queues and UNI cards in a customer-side interface device, e.g., an Optical Network Termination unit, to achieve improved transmission efficiency and fairness guarantees for incoming traffic. The peak upstream rate of the UNI cards is dynamically controlled based on feedback information from the interface device, where a queue status monitor observes the traffic level in the priority queue. When the traffic level reaches a designated threshold level in the priority queue, the status monitor triggers activation of rate controllers at the upstream output of the UNI cards. The rate controllers reduce the peak output of the UNI cards to a controlled peak rate. Once the queue level falls below a second threshold level, the status monitor deactivates the rate controllers.
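As a rough sketch of the feedback loop described in this summary, the Python below models a queue status monitor with an upper activation threshold and a lower deactivation threshold driving per-UNI rate controllers; the class names, thresholds, and rates are hypothetical, since the text gives no concrete values.

class RateController:
    """Limits the upstream output of one UNI card (illustrative)."""
    def __init__(self, peak_rate, controlled_peak_rate):
        self.peak_rate = peak_rate                        # normal peak output rate
        self.controlled_peak_rate = controlled_peak_rate  # reduced rate under back pressure
        self.active = False

    @property
    def current_limit(self):
        return self.controlled_peak_rate if self.active else self.peak_rate


class QueueStatusMonitor:
    """Watches the fill level of a priority queue and applies partial back
    pressure to the attached UNI-card rate controllers, with hysteresis
    between an upper and a lower threshold."""
    def __init__(self, high_threshold, low_threshold, controllers):
        assert low_threshold < high_threshold
        self.high = high_threshold     # activate rate control at or above this level
        self.low = low_threshold       # deactivate once the queue drains below this level
        self.controllers = controllers

    def on_queue_level(self, level):
        if level >= self.high:
            for c in self.controllers:    # throttle every UNI feeding this queue
                c.active = True
        elif level < self.low:
            for c in self.controllers:    # queue has drained; restore the full peak rate
                c.active = False


# Hypothetical numbers: two Ethernet UNIs, thresholds in cells, rates in cells/s.
unis = [RateController(peak_rate=10_000, controlled_peak_rate=2_000) for _ in range(2)]
monitor = QueueStatusMonitor(high_threshold=800, low_threshold=200, controllers=unis)
monitor.on_queue_level(850)    # back pressure on: both UNIs limited to 2,000 cells/s
monitor.on_queue_level(150)    # queue drained below the lower threshold: limits lifted

Using two distinct thresholds rather than one keeps the rate controllers from oscillating on and off when the queue level hovers around a single trigger point.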


REFERENCES:
patent: 5453980 (1995-09-01), Van Engelshoven
patent: 5648958 (1997-07-01), Counterman
patent: 5719853 (1998-02-01), Ikeda
patent: 5838922 (1998-11-01), Galand et al.
patent: 5860148 (1999-01-01), Bergantino et al.
patent: 5926478 (1999-07-01), Ghaibeh et al.
patent: 5978374 (1999-11-01), Ghaibeh et al.
patent: 6198558 (2001-03-01), Graves et al.
patent: 6229788 (2001-05-01), Graves et al.
patent: 6424656 (2002-07-01), Hoebeke
patent: 6498667 (2002-12-01), Masucci et al.
patent: 6519255 (2003-02-01), Graves
patent: 0 648 034 (1995-04-01), None
Bonomi, F. et al., "The Rate-Based Flow Control Framework for the Available Bit Rate ATM Service," IEEE Network, Mar. 1, 1995.
