Method and apparatus for traffic shaping in a broadband...
1998-05-27
2001-05-08
Marcelo, Melvin (Department: 2663)
Multiplex communications
Data flow congestion prevention or control
Control of data admission to the network
C359S199200, C725S118000
Reexamination Certificate
active
06229788
FIELD OF THE INVENTION
The present invention relates to broadband communication networks, and particularly to a method and apparatus for controlling traffic flow through the network in order to minimize the cost of buffering data in outside-plant components.
BACKGROUND OF THE INVENTION
In order to enable the delivery of broadband services by present and future fiber-based data transport systems, optical fiber is being extended deeper into the network, which has led to the deployment of optical network units (ONUs) to a point within several hundred (or a few thousand) feet of the end user. The ONUs each serve a plurality of subscribers and communicate via a fiber optic feeder cable with a host digital terminal (HDT) that is usually placed in a central office location and is connected to the remainder of the network. The relatively short drop length between the ONUs and the individual subscribers reduces frequency-dependent signal losses on the residual copper twisted pairs and allows high-bandwidth data to be transmitted across these drops to and from the subscribers.
In a packet switched or cell switched network (such as ATM), packets or cells travel along virtual circuits (VCs) established between communicating entities, such as subscribers or file servers. Typically, there are three main classes of traffic that can be delivered to a subscriber, namely broadcast (BC), continuous bit rate (CBR) and unspecified bit rate (UBR).
For the case of BC traffic, a plurality of BC channels (such as television channels) are transmitted along the fiber leading from the HDT to the ONUs, each occupying a constant bandwidth irrespective of the number of subscribers actually using that channel at a given time. At each ONU, the BC channels subscribed to by a subscriber are replicated and carried to the subscriber by respective “bearers” of traffic, each occupying one BC channel's worth of bandwidth on the drop. Therefore, during peak viewing times, the total BC drop bandwidth delivered to all the subscribers connected to an ONU (or to a group of ONUs) far exceeds the bandwidth taken up by BC traffic on the fiber feeder.
A CBR service (such as a telephone call) occupies a negotiated, constant and guaranteed bandwidth on the fiber feeder and the drop cable for each individual VC that is set up; the total bandwidth taken up by CBR services on both the feeder and the drop therefore depends directly on the number of subscribers using CBR services at a given time and on the number of CBR services used by each subscriber. VCs for carrying CBR traffic are usually only set up if there is bandwidth available on both the feeder and the drop after the bandwidth requirements of BC traffic have been met.
Finally, UBR is considered the lowest priority traffic, and is often the cheapest available service, from the subscriber's point of view. Since UBR does not guarantee a bit rate, it is more often discussed in the context of a service rather than a circuit. Typically, UBR is used to transmit files and other non-time-critical data. UBR services occupy respective portions of the residual bandwidth on both the feeder and the drop, allocated after all BC and CBR circuits have been set up. The residual bandwidth is shared among the total number of requested UBR services, which is a function of time.
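By way of illustration, the short sketch below (not part of the patent) shows how the residual bandwidth described above might be carved up among BC, CBR and UBR traffic on a shared feeder; all names and figures (FEEDER_MBPS, bc_channels, and so on) are hypothetical assumptions chosen for the example.

```python
# Illustrative sketch: dividing a shared feeder among BC, CBR and UBR traffic.
# Every constant here is a hypothetical example value, not a figure from the patent.

FEEDER_MBPS = 600.0                  # assumed downstream feeder capacity
BC_CHANNEL_MBPS = 4.0                # assumed bandwidth of one broadcast channel
bc_channels = 60                     # BC channels always present on the feeder
cbr_circuits_mbps = [2.0, 2.0, 6.0]  # negotiated CBR circuits currently set up
ubr_services = 5                     # UBR services competing for what is left

bc_total = bc_channels * BC_CHANNEL_MBPS       # constant, independent of viewership
cbr_total = sum(cbr_circuits_mbps)             # grows with each admitted circuit
residual = FEEDER_MBPS - bc_total - cbr_total  # what all UBR services must share
per_ubr = residual / ubr_services if ubr_services else 0.0

print(f"BC: {bc_total} Mbps, CBR: {cbr_total} Mbps, "
      f"residual for UBR: {residual} Mbps ({per_ubr:.1f} Mbps per service)")
```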
Designers of broadband access systems must be careful to consider traffic congestion encountered at so-called “choke points”, i.e., parts of the network at which the output bandwidth capacity is less than the total input bandwidth capacity. For example, in the downstream (network-to-subscriber) direction, more bandwidth is typically available on the fiber feeder between the HDT and the ONUs than can be supplied to any one subscriber over a copper drop. Therefore, a large file transfer from a file server or similar source in the external network may be propagated to the ONU serving the recipient subscriber at close to the maximum capacity of the fiber feeder to that ONU, but cannot be delivered at this rate to the subscriber because of the lower bandwidth capacity of the subscriber's copper drop. The excess delivery rate into the ONU from the fiber, relative to the capacity of the copper drop, results in the ONU becoming overloaded with data.
Moreover, the total number of subscribers multiplied by the (relatively low) available bandwidth per copper loop may exceed the total bandwidth capacity of the fiber feeder in both the upstream and downstream directions. This scenario is particularly harmful when every customer establishes a CBR connection, and can ultimately lead to the delay or loss of ATM cells and a degraded quality of service (QoS).
In the prior art, congestion is commonly treated by placing a buffer (or “queue”) of a fixed, predetermined size in both directions for every subscriber line card at the ONU. The main goal of this approach is to provide enough buffering margin or traffic buffer capacity so that a transient peak bandwidth demand (in either direction) results in the excess instantaneous data rate from the summation of all the services flowing through the choke points in question being temporarily stored in the buffers and emptied at the available rate.
The colocation of queues in the ONU is done in the hope that there is enough room in the buffer to handle the surplus of incoming data until there is either an increase in available output bandwidth or a decrease in the total input bandwidth across the summation of services. Neither of these conditions is met during a prolonged excessive bandwidth request, and any fixed queue size is liable to overflow and cause loss of data. Although increasing the buffer size allows a longer bandwidth peak to be accommodated, the required buffer size is proportional to the maximum possible transaction size, which has been found to be continually on the rise. Deciding on a particular size immediately limits the effectiveness of the buffer for handling future peak bandwidth demands. Clearly, prior art solutions involving buffers are only temporary fixes for avoiding loss of data due to congestion at choke points.
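This shortcoming can be made concrete with a small, hypothetical simulation of a fixed-size per-subscriber queue at the ONU; the capacity and rates below are assumptions rather than values from the patent, and the only point is that any fixed capacity overflows once a burst lasts long enough.

```python
# Hypothetical sketch of the prior-art approach: a fixed-size per-subscriber queue
# fed from the feeder faster than it drains onto the drop.

QUEUE_CAPACITY_KB = 512        # assumed fixed buffer provisioned per subscriber line
IN_RATE_KBPS = 600_000 / 8     # kilobytes/s arriving from the feeder (600 Mbps)
OUT_RATE_KBPS = 20_000 / 8     # kilobytes/s drained onto the drop (20 Mbps)

occupancy_kb = 0.0
lost_kb = 0.0
for ms in range(100):                        # simulate a 100 ms burst in 1 ms steps
    occupancy_kb += (IN_RATE_KBPS - OUT_RATE_KBPS) / 1000.0
    if occupancy_kb > QUEUE_CAPACITY_KB:     # buffer full: the excess is lost
        lost_kb += occupancy_kb - QUEUE_CAPACITY_KB
        occupancy_kb = QUEUE_CAPACITY_KB

print(f"occupancy: {occupancy_kb:.0f} kB, lost during burst: {lost_kb:.0f} kB")
```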
It is useful to consider a concrete example illustrating the difficulties with the current state of the art, in which the available downstream and upstream bandwidths are respectively 600 and 155 Megabits per second (Mbps). Furthermore, let there be 8 ONUs (each serving 24 subscriber lines) connected to the same fiber in a passive optical network (PON). Thus, if a subscriber is demanding a 4-Megabyte file transfer during a period of low overall system usage, then the entire downstream bandwidth capacity of 600 Mbps is available, and the file arrives at the ONU within 52 ms.
However, the maximum transmission rate per line (i.e., per subscriber loop) is typically on the order of 20 Mbps downstream and 2 Mbps upstream. Thus, in the same 52 ms time period, only 130 kilobytes of the original 4-Megabyte file can be delivered to the subscriber from the ONU choke point. The residual 3.87 Megabytes must be buffered in the ONU's downstream path for that subscriber so as to be delivered over the next 1.55 seconds. In general, ninety-seven percent (i.e., (600−20)/600) of the file to be transferred must be stored at the ONU. Clearly, a serious disadvantage is that the amount of memory to be installed in the ONU on a per-subscriber basis is a function of the maximum file size (nowadays on the order of several dozen megabytes), which leads to large, expensive, power-hungry and unreliable components that in turn present the service provider with high maintenance costs.
In another scenario, if all subscribers were simultaneously to use the available 2 Mbps data rate to transmit data to their respective line cards, then the total demanded instantaneous upstream bandwidth on the fiber would be 2×24×8=384 Mbps, against an available 155 Mbps on the fiber feeder. An upstream data transfer of 1 Megabyte for each subscriber would require the buffering of approximately 600 kilobytes in the upstream path of each line card.
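The figures in both scenarios can be checked with a short back-of-the-envelope calculation; the constants are the round numbers quoted above, and small rounding differences (for example, roughly 53 ms against the quoted 52 ms) are expected.

```python
# Back-of-the-envelope check of the downstream and upstream examples above,
# using the round figures quoted in the text.

DOWN_FEEDER_MBPS, UP_FEEDER_MBPS = 600, 155
DOWN_DROP_MBPS, UP_DROP_MBPS = 20, 2
ONUS, LINES_PER_ONU = 8, 24

# Downstream: one subscriber pulls a 4-Megabyte file at the full feeder rate.
file_mbit = 4 * 8
arrival_s = file_mbit / DOWN_FEEDER_MBPS            # ~53 ms to reach the ONU
delivered_mbyte = DOWN_DROP_MBPS * arrival_s / 8    # carried on the drop meanwhile
buffered_mbyte = 4 - delivered_mbyte                # ~3.87 MB held in the ONU
drain_s = buffered_mbyte * 8 / DOWN_DROP_MBPS       # ~1.55 s to empty the buffer
fraction_buffered = (DOWN_FEEDER_MBPS - DOWN_DROP_MBPS) / DOWN_FEEDER_MBPS  # ~97%

# Upstream: every line sends at its full 2 Mbps into a 155 Mbps feeder.
demand_mbps = UP_DROP_MBPS * LINES_PER_ONU * ONUS          # 384 Mbps requested
fair_share_mbps = UP_FEEDER_MBPS / (LINES_PER_ONU * ONUS)  # ~0.81 Mbps carried per line
buffered_up_mbyte = 1 * (UP_DROP_MBPS - fair_share_mbps) / UP_DROP_MBPS  # ~0.6 MB per line

print(f"downstream: arrival {arrival_s*1000:.0f} ms, buffered {buffered_mbyte:.2f} MB, "
      f"drain {drain_s:.2f} s, fraction {fraction_buffered:.0%}")
print(f"upstream: demand {demand_mbps} Mbps, buffered per line "
      f"{buffered_up_mbyte*1000:.0f} kB")
```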
Fisher David Anthony
Graves Alan Frank
Timms Andrew Jocelyn
Marcelo Melvin
Nortel Networks Limited