Priority queue management system for the transmission of...

Multiplex communications – Pathfinding or routing – Switching a message which includes an address header


Details

U.S. classifications: C370S419000, C370S428000
Type: Reexamination Certificate
Status: active
Patent number: 06771653


BACKGROUND OF THE INVENTION
1. Technical Field
The present invention relates to data transmission networks wherein each node within a network transmits data frames to other nodes in accordance with a designated priority and the type of data contained therein. In particular, the present invention relates to a prioritized queue management system within a data transmission network.
2. Background
A data transmission network is generally a multi-service network carrying differing types of traffic between various applications. Some traffic is critical, while other traffic is not and may have a major, unintended impact on overall network performance. A user may, for example, disrupt critical transaction traffic by sending a large document or presentation files to one or several remote users. A structural or operative network change or failure may also affect traffic flow. Without a network traffic prioritization scheme, important data transactions may be heavily impacted.
Applications that utilize a network create three different categories of traffic, differentiated by their latency (network delay) requirements. Real-time traffic (RT), such as conversational voice, video conferencing, and real-time multimedia, requires very low latency and controlled latency variation (jitter). Compressed traffic is sensitive to transmission errors, but, because of its low latency requirements, errors in transmission cannot be overcome by retransmission. Therefore, transmission of compressed traffic must have low error rates, or forward error correction must be used. Both low latency and forward error correction decrease the effective transmission rate of a compressed transmission.
Interactive traffic, such as transaction processing, remote data entry, and some legacy protocols (e.g., SNA), requires latencies of approximately one second or less. Greater latencies cause processing delays, as users must wait for replies to their messages before they can continue their work. In some cases, such as certain legacy protocols, exceeding the allowable latency causes session failure; the applications must then re-establish the session. Interactive traffic is not sensitive to bandwidth beyond what is needed to satisfy its latency requirements.
Bulk transfer traffic, also called Non Real Time (NRT) traffic, accepts virtually any network latency, including latencies on the order of a few seconds. Bulk transfer traffic is more sensitive to the available bandwidth than to latency. An unusual application of bulk transfer is transmitting video and audio data that is processed and presented to the end user while the file is still arriving.
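To make the three categories concrete, the sketch below records each one with an approximate latency budget and its dominant sensitivity. The numeric values and field names are illustrative assumptions chosen to match the ranges described above; they are not figures taken from the patent.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrafficCategory:
    name: str
    max_latency_ms: int        # assumed end-to-end latency budget
    jitter_sensitive: bool     # needs controlled latency variation?
    bandwidth_sensitive: bool  # throughput matters more than delay?

# Illustrative values only: RT needs very low latency and bounded jitter,
# interactive needs roughly one second or less, bulk tolerates seconds.
REAL_TIME   = TrafficCategory("real-time",   max_latency_ms=100,  jitter_sensitive=True,  bandwidth_sensitive=False)
INTERACTIVE = TrafficCategory("interactive", max_latency_ms=1000, jitter_sensitive=False, bandwidth_sensitive=False)
BULK        = TrafficCategory("bulk",        max_latency_ms=5000, jitter_sensitive=False, bandwidth_sensitive=True)
```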
Within each traffic category, traffic may be further subdivided by priority. Priorities are not substitutes for traffic categories: a category expresses an absolute requirement, while a priority is relative. High-priority traffic receives preferential treatment because of its importance to the enterprise, whereas a traffic category exists because the application it supports will fail if sufficient communications service is not provided. Because there are no absolute network parameters universally associated with a priority, the meaning of a particular priority, in terms of measurable network performance, varies from one network to the next.
Another priority classification differentiates between Reserved Traffic and Non Reserved traffic. The need for this prioritization arises when both types may coexist in the same network. When data traffic is designated as “reserved”, its characteristics are known and accepted throughout the network. Reserved traffic is transported at a higher priority than non-reserved traffic.
The priority assigned to reserved traffic is an absolute priority, while the priority of all non-reserved flows is relative. Absolute priority means that a traffic engineering mechanism has validated that the requesting traffic may pass through the network. The traffic engineering mechanism provides a network context in which reserved traffic is transported with pre-defined (requested) characteristics. For traffic having a relative priority, a priority level is assigned as the traffic enters a node.
Quality of service (QoS) technology provides a method for categorizing traffic and for ensuring that particular categories of traffic will either flow across a backbone in a timely manner or at least be prioritized, regardless of competing demands. The primary role of QoS technology is to protect mission-critical traffic from interference by less important traffic. A secondary role of QoS is to allow other, non-critical traffic to be transported in a fair and efficient manner. For example, QoS technology permits new multimedia applications to be delivered with a guaranteed delay (latency) and throughput (bandwidth) while observing the critical/non-critical distinction.
Current QoS techniques rely on queuing algorithms of various types implemented in network routers and other network devices. Queuing in routers has emerged as a way to handle bursty Internet Protocol (IP) traffic. As traffic arrives at a router, it is placed in one of a number of queues associated with the appropriate outgoing router port. Depending on the queuing algorithm selected during router configuration, traffic is then taken from the queues for transmission. However, two major problems arise from queuing: high-priority interactive traffic can easily be blocked behind bulk data transfers, and queue overflows can cause packet loss in all traffic flows, not just the flow that saturated the queue.
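As a rough illustration of this per-port queuing model, the sketch below keeps one FIFO per (outgoing port, traffic class) pair and steers each arriving frame to the matching queue. The class and method names are hypothetical, not the router implementation described above.

```python
from collections import deque

class OutputQueues:
    """Illustrative per-port, per-class output queues (assumed structure)."""

    def __init__(self, ports, traffic_classes):
        # one FIFO per (outgoing port, traffic class) pair
        self.queues = {(p, c): deque() for p in ports for c in traffic_classes}

    def enqueue(self, port, traffic_class, frame):
        # place the arriving frame on the queue for its outgoing port and class
        self.queues[(port, traffic_class)].append(frame)
```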
Priority queuing was developed to overcome the first major problem. It maintains separate outgoing queues for different traffic classes on the same outgoing port. A frame cannot be transmitted from a queue unless all of the higher-priority queues are empty. The drawback of priority queuing is that lower priority queues can be starved for capacity when a link is overloaded. This low priority queue starvation worsens the overload problem by driving low priority traffic flows into timeout and into attempted retransmissions.
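A minimal sketch of this strict-priority service discipline might look as follows; the queue ordering and names are assumptions made for illustration.

```python
from collections import deque

def dequeue_strict_priority(queues: list[deque]):
    """Illustrative strict priority scheduler: `queues` is ordered from highest
    to lowest priority, and a lower-priority queue is served only when every
    higher-priority queue is empty -- which is exactly why an overloaded link
    can starve the low-priority classes."""
    for q in queues:
        if q:
            return q.popleft()
    return None  # nothing waiting on any queue
```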
Priority queuing has been improved by methods such as Fair Queuing (FQ), which attempts to ensure that all queues receive a designated share of the bandwidth. However, this method is not dynamic, nor does it differentiate among differing types of incoming traffic, so some queues may be permanently limited. Another method, Weighted Fair Queuing (WFQ), further separates each priority into high and low bandwidth flows. Low bandwidth flows are given priority over high bandwidth flows, with traffic flows within groups being interleaved. Response times for the low bandwidth flows (real-time or interactive traffic) are greatly improved, and bursty traffic in all flows is interleaved. Nevertheless, a drawback remains in that multiple low bandwidth flows may penalize or even jeopardize high bandwidth traffic. Furthermore, WFQ does not account for jitter.
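The fair-share principle behind FQ and WFQ can be approximated with a deficit-round-robin style loop, sketched below under assumed per-queue quanta and byte-string frames. This illustrates the bandwidth-sharing idea only; it is not the WFQ scheduler itself.

```python
from collections import deque

def drr_round(queues, quanta, deficits, send):
    """One scheduling round of a deficit-round-robin sketch: each non-empty
    queue earns its quantum of credit and may send frames whose sizes fit
    within that credit, so a larger quantum yields a larger bandwidth share."""
    for i, q in enumerate(queues):
        if not q:
            continue
        deficits[i] += quanta[i]
        # frames are assumed to be byte strings, so len() gives the frame size
        while q and len(q[0]) <= deficits[i]:
            frame = q.popleft()
            deficits[i] -= len(frame)
            send(frame)
        if not q:
            deficits[i] = 0  # an emptied queue does not carry credit forward

# Hypothetical usage: the first queue gets roughly twice the share of the second.
queues = [deque([b"x" * 400, b"x" * 400]), deque([b"x" * 400])]
drr_round(queues, quanta=[800, 400], deficits=[0, 0], send=lambda f: None)
```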
None of the above-mentioned methods directly reduces the overflow problem, resulting in waves of bursty traffic and buffer overflows. The Random Early Detection (RED) algorithm addresses this issue. Instead of allowing “tail drop” to occur, RED monitors the outgoing queue size and drops a frame from selected traffic flows before the queue overflows. This frame “loss” triggers the congestion response of the affected flow and temporarily decreases its rate. Other frames belonging to such a flow may be needlessly transmitted, as they will be discarded anyway and retransmitted. Unfortunately, retransmission or “window restart” may worsen the congestion problem, as all frames within a window will be retransmitted on all affected flows.
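The early-drop idea can be sketched as a simple probability ramp between two queue-length thresholds, as shown below. The threshold and probability values are illustrative assumptions, not parameters from the patent or from any standard RED configuration.

```python
import random

def red_should_drop(avg_queue_len, min_th=20, max_th=60, max_p=0.1):
    """Sketch of Random Early Detection: as the average queue length grows
    between min_th and max_th, arriving frames are dropped with increasing
    probability instead of waiting for a hard tail drop at overflow."""
    if avg_queue_len < min_th:
        return False                 # queue is short: never drop
    if avg_queue_len >= max_th:
        return True                  # queue is (nearly) full: always drop
    # drop probability ramps linearly from 0 to max_p between the thresholds
    p = max_p * (avg_queue_len - min_th) / (max_th - min_th)
    return random.random() < p
```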
SUMMARY OF THE INVENTION
A system for providing prioritized queue management within a data transmission network node that supports different types of data frame traffic is disclosed herein. The system includes a frame buffer for storing an incoming frame that has an identifiable frame type. A queue is pre-associated with the frame type of the incoming frame such that upon arrival of the frame at the network node ...
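Although the summary text is truncated here, the core idea of pre-associating a queue with each identifiable frame type can be sketched as a simple lookup. The type names and function below are hypothetical placeholders, not the system disclosed in the patent.

```python
from collections import deque

# Hypothetical pre-association of frame types with their own queues.
FRAME_TYPE_QUEUES = {
    "real_time":   deque(),
    "interactive": deque(),
    "bulk":        deque(),
}

def accept_frame(frame_type, frame):
    """Steer an arriving frame to the queue pre-associated with its type."""
    queue = FRAME_TYPE_QUEUES.get(frame_type)
    if queue is None:
        raise ValueError(f"no queue pre-associated with frame type {frame_type!r}")
    queue.append(frame)
```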
