Network capacity planning based on buffers occupancy monitoring

Multiplex communications – Data flow congestion prevention or control – Control of data admission to the network


Details

Type: Reexamination Certificate
Status: active
Patent number: 06690646
U.S. classifications: C370S252000, C370S395100, C370S412000, C709S220000, C709S223000, C709S232000, C709S238000

ABSTRACT:

CROSS REFERENCE TO RELATED APPLICATION
This application claims priority from European application 99480062.1, filed Jul. 13, 1999, which is hereby incorporated by reference. The contents of the present application are not necessarily identical to the contents of the priority document.
BACKGROUND OF THE INVENTION
1. Technical Field
The invention relates to high-speed packet-switched networks. More particularly, the invention relates to an efficient method and system of network capacity planning that relies on close monitoring of buffer occupancy in the network nodes.
2. Description of the Related Art
The emergence of high-speed networking technologies such as ATM cell-based or Frame Relay based technologies now makes it possible to integrate multiple types of traffic with different quality-of-service (QoS) requirements, such as speech, video and data, over the same communication network, often referred to as a “broadband” network. The communication circuits which may be shared in such a network include transmission lines, program-controlled processors, nodes or links, and data or packet buffers. Traffic QoS requirements are taken into account during the path selection process and can be defined as a set of measurable quantities or parameters that describe the user's perception of the service offered by the network. Such parameters include the connection setup delay, the connection blocking probability, the loss probability, the error probability, the end-to-end transit delay, and the end-to-end delay variation, also referred to as jitter. Real-time traffic has more stringent requirements than non-real-time traffic, notably on end-to-end delay and jitter, so it is necessary to be able to give priority to real-time packets in order to minimize these delays. Meanwhile, packet-loss guarantees must be provided both for real-time and for non-real-time applications that have reserved bandwidth in the network, while such guarantees are not mandatory for non-reserved traffic.
In this context, network users want the ability to request and be granted service level agreements (SLAs). An SLA is an agreement by the network provider to supply a guaranteed level of connectivity for a given price. The agreement is reciprocal in that the user also commits not to exceed a certain level of network usage. The level of connectivity can be expressed in many ways, including the following: bandwidth (number of bits per second), latency (end-to-end delay), availability (degree of uninterrupted service), loss probability, and security (a guarantee that only the intended parties can participate in a communication).
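As a rough illustration only (the record layout and the violation check below are assumptions made for exposition, not part of the patent), such an agreement can be viewed as a small data structure holding the committed levels:

from dataclasses import dataclass

# Hypothetical SLA record; field names are illustrative assumptions.
@dataclass
class ServiceLevelAgreement:
    bandwidth_bps: float      # guaranteed bits per second
    latency_ms: float         # maximum end-to-end delay
    availability: float       # e.g. 0.9995 for 99.95% uptime
    loss_probability: float   # maximum acceptable packet-loss ratio
    secure: bool              # only intended parties may participate

    def violated_by(self, measured_latency_ms, measured_loss):
        # True if a measurement breaks the agreed connectivity levels.
        return (measured_latency_ms > self.latency_ms
                or measured_loss > self.loss_probability)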
Another important objective of network providers is to optimize network resource utilization. Indeed, communication networks have limited resources at their disposal to ensure efficient packet transmission, and while transmission costs per byte continue to drop year after year, transmission costs are likely to remain the major expense of operating future telecommunication networks as the demand for bandwidth increases. More specifically, considering wide area networks (also referred to as “backbone networks”), the cost of physical connectivity between sites is frequently estimated at 80% of the overall cost. The connectivity can come in the form of a leased line, X.25 service, frame relay bearer service (FRBS), ATM bearer service (ATMBS), or a virtual private network. As higher-speed links become available, the cost per bit may decrease, but the absolute cost of links will remain significant. Therefore, there is a need to minimize the net cost per transmitted bit for all connectivity options and link speeds. Minimizing the cost per bit means squeezing the maximum possible utilization out of every link.
Thus, considerable effort has been spent on designing flow and congestion control processes, bandwidth reservation mechanisms, and routing algorithms to manage network bandwidth and perform network capacity planning, i.e., to optimize the configuration of the established connections (bandwidth allocated, path selected, etc.).
In order both to optimize network resource utilization and to guarantee satisfactory SLAs to network customers, high-speed networks generally include monitoring software systems that track the status of their nodes and links. These monitoring systems typically rely on counters implemented at switching-node level. From a network resource monitoring point of view, the most important counters are those which reflect the behavior of the “bottleneck” resources of the network, because they also reflect the end-to-end behavior and the quality of the service delivered. In high-speed networks, the switching nodes are generally oversized in performance compared to the communication links. As a matter of fact, switching nodes are a one-time cost for a network owner, while line costs recur (for example, on a monthly basis in the case of leased lines) and, as previously stated, are also much higher. In order to minimize the overall cost of a network, communication lines are sized to handle the traffic requirements but no more, and accordingly their throughput is always lower than that of a switching node. Therefore, in a high-speed network, the communication links generally constitute the “bottleneck resources”.
Each switching node typically includes a switching fabric and a plurality of adapter components which connect the node's incoming and outgoing links to the switching fabric. Each adapter component includes a “receive” part and a “transmit” part. The receive part receives the data flow entering the node, while the transmit part outputs the data flow from the node towards another node. In this context, network management processes typically use counters located at the transmit part of the adapter components of the switching nodes to monitor network resource utilization. These counters count packets or cells just before they are boarded onto the outgoing links of the nodes. The links monitored are more specifically those existing between two network switches, sometimes referred to as “trunks”, rather than those (logically) connecting a device on a user's premises to a network access switch, sometimes referred to as “ports”. Indeed, long-distance trunks are usually more expensive than local ports, and accordingly they are more heavily loaded in order to optimize their cost.
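A minimal sketch of this arrangement (class and method names are hypothetical, not taken from the patent) keeps one transmit-side counter per outgoing trunk and increments it as each cell or packet is boarded:

# Hypothetical sketch of transmit-side trunk counters in a node adapter.
class TransmitPart:
    def __init__(self, trunk_ids):
        # one counter per outgoing trunk, cleared at every poll
        self.counters = {trunk: 0 for trunk in trunk_ids}

    def board_cell(self, trunk_id):
        # called in the steady-state forwarding path for every cell/packet
        self.counters[trunk_id] += 1

    def read_and_reset(self):
        # performed periodically by a background processor (see below)
        snapshot = dict(self.counters)
        for trunk in self.counters:
            self.counters[trunk] = 0
        return snapshot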
At the transmit part of each adapter component, the aforementioned counters are incremented during the steady-state processing of the cells/packets by a dedicated processing unit sometimes referred to as the “Picocode Processor.” Periodically (e.g., every 15 minutes), a higher-level processor, herein referred to as the “General Purpose Processor”, embedded in the adapter but used for background control processing, retrieves the counter values and resets the counters. The General Purpose Processor also computes utilization information for each line based on the line speed, and stores this information for further processing. Finally, a bulk statistics server, for example a workstation independent from the network, periodically (typically every night) retrieves from each node the files containing resource utilization data, and provides the network management operator with summarized data on link utilization and network behavior. Link utilization data are typically expressed in terms of the percentage of bandwidth utilized per unit of time.
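A sketch of this retrieval step, under the same illustrative assumptions (the names and the 15-minute period are not the patent's): every polling interval the counters are read and reset, and each count is converted into a utilization figure using the line speed:

POLL_INTERVAL_S = 15 * 60   # polling period T, e.g. 15 minutes

def utilization(cell_count, line_speed_cells_per_s, interval_s=POLL_INTERVAL_S):
    # fraction of the link capacity used during one polling interval
    return cell_count / (line_speed_cells_per_s * interval_s)

def poll_cycle(transmit_counts, line_speeds):
    # transmit_counts: cells boarded per trunk since the last poll
    # line_speeds: maximum cells per second of each trunk
    # returns per-trunk utilization (0.0 to 1.0) for the interval,
    # ready to be stored for the nightly bulk-statistics retrieval
    return {trunk: utilization(count, line_speeds[trunk])
            for trunk, count in transmit_counts.items()}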
Link utilization is typically evaluated as follows. Consider a link l whose maximum speed (i.e., bandwidth) is S cells/bytes per second (where S denotes an integer), and assume that the counter values associated with that link are polled every T time units (where T denotes an integer, e.g., T = 15 minutes). Then the computed utilization estimate U(l)_T of link l associated with each measurement time interval T is expressed by the following formula:
U(l)_T = N / (S × T)

where N denotes the number of cells/packets received during the measurement period T.
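As a short worked example (the figures below are assumptions chosen for illustration; the cell rate is approximately the ATM payload rate of an OC-3 trunk):

S = 353_207        # cells per second (approx. ATM payload rate of an OC-3 line)
T = 900            # seconds, i.e. a 15-minute measurement interval
N = 95_000_000     # cells counted by the transmit adapter during the interval

U = N / (S * T)    # U(l)_T = N / (S * T)
print(f"utilization of link l over this interval: {U:.1%}")   # about 29.9%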
