Method and apparatus for output rate regulation and control...
Patent number: 6757249
Classification: Multiplex communications – Data flow congestion prevention or control – Flow control of data transmission through a network
Class code: C370S395400
Type: Reexamination Certificate (active)
Filed: 1999-10-14
Issued: 2004-06-29
Examiner: Pham, Chi (Department: 2667)
ABSTRACT:
FIELD OF INVENTION
The field of invention relates generally to networking and, more specifically, to a pipeline for processing networking packets.
BACKGROUND OF THE INVENTION
Packet Networks
Two forms of networking technology, referred to as “circuit” and “packet,” are in widespread use. However, the two have generally been applied to different applications. Circuit networks, usually associated with large telecommunications companies, have traditionally supported mostly voice traffic (e.g., a telephone call), while packet networks have traditionally supported computer traffic (commonly referred to as data traffic or the like).
Circuit networks are generally characterized as having minimal latency, meaning that traffic emanating from a source appears almost instantaneously at its destination. Low latency is deemed a requirement for networks carrying voice traffic since two people engaged in conversation begin to have difficulty communicating effectively if there is more than 10-100 milliseconds of delay in the transport of their correspondence. Traffic requiring low latency, such as voice or video conferencing, is referred to as real time traffic. A problem with circuit networks, however, is their relatively inefficient consumption of network resources (in the form of wasted bandwidth).
Packet networks have been generally characterized as having poor latency but good efficiency. Traditionally, the transportation of traffic over a packet network resulted in noticeable delay (i.e., high latency). However, with a packet network, bandwidth tends to be conserved rather than wasted. Packet networks have been traditionally implemented in computer networks since communications between computers usually involve some form of data transfer (e.g., an e-mail) or other type of non real time traffic.
Non real time traffic is usually tolerant of latencies of a few seconds or higher. Non real time traffic includes (among others): e-mail, communications between a web server and a PC, and the sending of files such as a word processing document. Wide Area Networks (WANs) and Regional Networks (RNs) have traditionally been designed to carry voice traffic (since the majority of longer distance communications have been voice communications), resulting in wide scale deployment of circuit networks in the WAN/RN areas. Regional networks typically serve a region (such as the Northeast or Mid Atlantic states). WANs typically serve longer distance communications such as transoceanic or transcontinental connections.
With the growing popularity of the Internet, non real time traffic has approached the volume of voice and other real time traffic in the WAN and RN. Furthermore, advances in silicon technology have resulted in much faster and more affordable networking equipment, such that the latency problem traditionally associated with packet networks is no longer the barrier to real time traffic that it once was.
Given the poorer efficiency of circuit switched networks, the surge in non real time traffic, and the potential of packet networks to carry real time traffic, WAN/RN network managers have begun to consider a packet based approach in the WAN and RN. Furthermore, although packet technology has always been associated with local area networks (LANs) used for computers and other data devices connected over small areas such as an office building or campus, packet approaches are also expected to be used for traditional circuit equipment (such as the telephone or facsimile machine) that is located proximate to a LAN.
Service Level Agreements, Quality of Service and Traffic Rates
Networks carry various forms of data (e.g., voice traffic, data files such as documents, facsimile transmissions, etc.) from a source to a destination. One of the relationships surrounding the commercialized use of a network is the contractual relationship between the user of a network and the provider of a network. The provider of a network (also referred to as a provider, service provider or network service provider) typically owns and manages networking equipment that transport a user's data. In other cases, however, a service provider may lease or otherwise obtain access to the networking equipment of others in order to implement his (i.e., the service provider's) network.
The user of the network (also referred to as a user or network user) is any individual, entity, organization, etc. that seeks to use the network of another individual, entity, or organization to transport the user's traffic. The network user and service provider usually form an agreement, referred to as a Service Level Agreement (SLA), based on the user's prediction of his usage of the network and the service provider's prediction of the performance of his network. Note that a network user is not necessarily a party engaged in a commercial contract. For example, a user may be a department in a corporation whose networking needs are handled by another department (which acts as the service provider) within the corporation.
When a service provider offers a user the use of a network, the agreement (or other description) that characterizes the relationship between the user and service provider typically follows a framework roughly outlined by a queue: an input rate, an output rate, and an amount of delay. Typically, the service provider and user agree on what the user's input rate to the network will be. The user's input rate is usually defined in terms of bits per second and measures how much data the user may send to the network in a given period of time (e.g., 622 Mb/s). If the user exceeds his input rate, the service provider is generally not obligated to accept any excess traffic, although terms may vary from contract to contract.
Output rate is analogous to input rate in the sense that it is measured in terms of bits per second. Output rate, however, deals with the rate at which the user may receive traffic from the network. Again, if the user receives traffic at too high a rate, the service provider is not necessarily obligated to deliver it, or at least not all of it.
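The text above does not say how an agreed bits-per-second rate would be checked; a token bucket policer is one common mechanism, and the following Python sketch is offered only as an illustration under that assumption. The class name, method names, and parameter values are hypothetical and are not taken from the patent.

```python
import time

class TokenBucketPolicer:
    """Illustrative policer: admits traffic only while the agreed
    rate (in bits per second) has not been exceeded."""

    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate_bps = rate_bps      # agreed input (or output) rate
        self.burst_bits = burst_bits  # tolerated burst size, in bits
        self.tokens = burst_bits      # current credit, in bits
        self.last = time.monotonic()

    def admit(self, packet_bits: int) -> bool:
        """Return True if the packet fits within the agreed rate,
        False if it is excess traffic the provider may refuse."""
        now = time.monotonic()
        self.tokens = min(self.burst_bits,
                          self.tokens + (now - self.last) * self.rate_bps)
        self.last = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True
        return False

# Example: a 622 Mb/s agreed rate with a (hypothetical) 1 Mb burst allowance.
policer = TokenBucketPolicer(rate_bps=622e6, burst_bits=1e6)
conforming = policer.admit(packet_bits=1500 * 8)  # a 1500-byte packet
```

In this sketch, the same check could be applied at the network ingress for the user's input rate and at the egress for the output rate; packets for which admit() returns False correspond to the excess traffic the provider is not obligated to carry.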
Assuming the user offers and receives traffic to/from the provider's network within his allowable input/output rates, the next question is the amount of delay, also referred to as network latency, that the user can expect to observe for his traffic.
Network latency is affected by (although not solely determined by) the priority of the user's packets within the service provider's network. For example, for high Quality of Service (QoS) levels the user's traffic is typically given a high priority. This means it is placed ahead of other traffic within the network, or given a route having fewer nodal hops, so it may be processed and delivered in a shorter amount of time. For low QoS levels, the user's traffic is given low priority. This typically means it tends to “sit” in the provider's network for periods of time before being processed (since higher priority traffic is continually placed ahead of it), or the traffic is routed on a path having more nodal hops.
Priority has also been affiliated with the notion that different types or classes of traffic require different types of service. For example, voice traffic typically requires a small delay through the network while data traffic may tolerate higher delays. Such characteristics generally force the service provider to treat the different traffic types differently. For example, voice traffic should be given higher priority than data traffic in order to reduce the delay of voice traffic. Such an environment is usually referred to as “differentiated services”.
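The paragraph above describes prioritization in general terms; one simple way to realize it is a strict-priority scheduler that always dequeues higher-priority classes first. The Python sketch below is illustrative only: the class names, the voice/video/data priority mapping, and the API are assumptions made for this example, not details taken from the patent.

```python
import heapq
from dataclasses import dataclass, field
from itertools import count

# Hypothetical class-to-priority mapping; a smaller number is served first.
PRIORITY = {"voice": 0, "video": 1, "data": 2}

@dataclass(order=True)
class QueuedPacket:
    priority: int
    seq: int                               # preserves FIFO order within a class
    payload: bytes = field(compare=False)  # payload does not affect ordering

class StrictPriorityScheduler:
    """Illustrative strict-priority scheduler: higher-priority traffic is
    always dequeued ahead of lower-priority traffic."""

    def __init__(self):
        self._heap = []
        self._seq = count()

    def enqueue(self, traffic_class: str, payload: bytes) -> None:
        item = QueuedPacket(PRIORITY[traffic_class], next(self._seq), payload)
        heapq.heappush(self._heap, item)

    def dequeue(self) -> bytes:
        return heapq.heappop(self._heap).payload

# Voice is served before data even though the data packet arrived first.
sched = StrictPriorityScheduler()
sched.enqueue("data", b"file chunk")
sched.enqueue("voice", b"voice sample")
assert sched.dequeue() == b"voice sample"
```

A strict-priority discipline captures the "placed ahead of other traffic" behavior described above; real equipment often uses weighted variants so that low-priority traffic is not starved indefinitely.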
Note that a single user may have both types of traffic. As an example, in such a case, the service provider and user could agree to separate, unique rate and priority terms for the voice traffic and the data traffic. The priority terms for the user's voice traffic would reflect low latency while the priority terms for the user's data traffic would reflect higher latency.
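As a minimal sketch of how such per-class terms might be recorded, the following structure separates the rate and priority terms for the two traffic types; every field name and number here is hypothetical and chosen only for illustration.

```python
# Hypothetical per-class SLA terms for a single user, following the queue-like
# framework above: input rate, output rate, and an expected latency bound.
user_sla = {
    "voice": {"input_rate_bps": 10e6,  "output_rate_bps": 10e6,
              "max_latency_ms": 100,   "priority": "high"},
    "data":  {"input_rate_bps": 600e6, "output_rate_bps": 600e6,
              "max_latency_ms": 2000,  "priority": "low"},
}
```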
In order for the user/service provider con
Inventors: Ho Chi Fai; Kejriwal Prabhas
Assignee: Nokia Inc.
Examiner: Pham Chi
Attorneys/Agents: Squire Sanders & Dempsey L.L.P.; Waxman Andrew M.