Method of scheduling higher and lower priority data packets

Multiplex communications – Pathfinding or routing – Switching a message which includes an address header

Reexamination Certificate


Details

Patent class: C370S429000
Type: Reexamination Certificate
Status: active
Patent number: 06205150

ABSTRACT:

FIELD OF INVENTION
The present invention relates to communications in computer networks. More specifically, it relates to a method for dynamically scheduling transmission of high and low priority data packets associated with a network device by utilizing dual queues, dual scheduling methods and a promoter.
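The abstract does not spell out how the dual queues, the two scheduling methods and the promoter interact, so the following Python sketch is only one plausible illustration: high and low priority packets wait in separate queues, the high priority queue is drained first, and a hypothetical promoter moves low priority packets that have waited too long into the high priority queue. The class name, the aging rule and the half-second threshold are assumptions made for illustration, not the patented method.

    import collections
    import time

    class DualQueueScheduler:
        """Illustrative dual-queue sketch only; not the patented method."""

        def __init__(self, max_wait_seconds=0.5):
            self.high = collections.deque()   # high priority transmission queue
            self.low = collections.deque()    # low priority transmission queue
            self.max_wait = max_wait_seconds  # assumed promotion threshold

        def enqueue(self, packet, high_priority):
            entry = (time.monotonic(), packet)
            (self.high if high_priority else self.low).append(entry)

        def promote_aged(self):
            # Hypothetical promoter: move low priority packets that have
            # waited longer than max_wait into the high priority queue.
            now = time.monotonic()
            while self.low and now - self.low[0][0] > self.max_wait:
                self.high.append(self.low.popleft())

        def next_packet(self):
            # Drain the high priority queue first, then the low priority queue.
            self.promote_aged()
            if self.high:
                return self.high.popleft()[1]
            if self.low:
                return self.low.popleft()[1]
            return None

Under this reading, the promoter is what keeps low priority packets from being starved while the high priority queue still receives preferential service.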
BACKGROUND OF THE INVENTION
As is known in the art, an operating system is a set of software routines used to provide functionality to a computer system. One function of an operating system is to schedule and execute multiple tasks. The multiple tasks provide various functionality, including reading and writing to memory and secondary storage, input and output to a computer display, execution of application programs, input and output to peripheral devices and others. Many operating systems, such as UNIX from Unix Systems Laboratories (owned by Novell of Provo, Utah) or Windows 95/NT from Microsoft Corporation of Redmond, Wash., use priority schemes to schedule and execute tasks. Such priority schemes include various levels of priority, from lower priority tasks to higher priority tasks.
Operating systems known in the art use queues to schedule tasks of different priorities.
When tasks of varying priorities are used, one or more scheduling methods specify the order in which tasks in a queue are satisfied. Examples of common scheduling methods include First-Come, First-Served (“FCFS”), Shortest-Job-First (“SJF”), Round Robin (“RR”) and preemptive priority scheduling methods. Some of these methods ensure that higher priority tasks are executed before lower priority tasks, but also ensure that lower priority tasks are not “starved out” of execution time.
In a First-Come, First-Served scheduling method, the first task received in the queue is the first one executed. As the name suggests, in a Shortest-Job-First scheduling method the shortest task in the queue, in terms of execution time, is executed first. A Round Robin scheduling method grants each task a set amount of time, or “time-slice,” to execute before moving on to the next task, even if the previous task is not completed. After each task in the queue has executed for the time-slice, the round robin scheduling method repeats, or continues to “cycle,” until all tasks in the queue are satisfied. Preemptive priority scheduling methods assign a priority to each task as it enters a queue. Tasks in the queue are executed in the order of the assigned priority; however, the method allows specific tasks to execute immediately, thereby preempting higher priority tasks from executing. Each of these scheduling methods presents various problems.
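As a concrete illustration of how these orderings differ, the short Python sketch below runs the three non-preemptive methods on the same made-up task list; the task names, burst times and two-unit time-slice are arbitrary and serve only to make the descriptions above tangible.

    from collections import deque

    # Each task is (name, execution time) in arbitrary time units.
    tasks = [("A", 8), ("B", 2), ("C", 4)]

    def fcfs(tasks):
        # First-Come, First-Served: run tasks in arrival order.
        return [name for name, _ in tasks]

    def sjf(tasks):
        # Shortest-Job-First: run the task with the shortest execution time first.
        return [name for name, _ in sorted(tasks, key=lambda t: t[1])]

    def round_robin(tasks, time_slice=2):
        # Round Robin: each task runs for one time-slice per cycle until finished.
        queue = deque(tasks)
        order = []
        while queue:
            name, remaining = queue.popleft()
            order.append(name)                   # task runs for one slice
            remaining -= time_slice
            if remaining > 0:
                queue.append((name, remaining))  # not finished; requeue it
        return order

    print(fcfs(tasks))         # ['A', 'B', 'C']
    print(sjf(tasks))          # ['B', 'C', 'A']
    print(round_robin(tasks))  # ['A', 'B', 'C', 'A', 'C', 'A', 'A']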
In the First-Come, First-Served scheduling method, an important task at the end, or “tail,” of a queue must wait to be executed until the tasks ahead of it have been executed. Furthermore, a time-intensive task at the beginning, or “head,” of the queue may prevent the remaining tasks from executing. This is called “starving,” which can lead to catastrophic events. For example, if a task in the queue, such as one for memory maintenance, is starved out, the computer system may fail due to lack of resources. To prevent the starving of shorter tasks by longer, more time-intensive tasks, the Shortest-Job-First scheduling method allows the shorter tasks to execute before the longer tasks. However, these methods run the risk of delaying higher priority tasks. The Round Robin method attempts to prevent the delaying of high priority tasks by allocating only a “time-slice” to each task in the queue. Although each task is treated as equal in priority, longer tasks may take many cycles of time-slices before completely executing, which may still delay the execution of a high priority task. Thus, priority scheduling was developed to prevent lower priority tasks from delaying or “starving out” higher priority tasks.
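A minimal sketch of such a priority scheduling queue is shown below, assuming the common convention that a lower number means a higher priority and using an arrival counter to keep equal-priority tasks in first-come order; both choices are illustrative and are not taken from the patent.

    import heapq
    import itertools

    ready_queue = []             # heap of (priority, arrival order, task name)
    arrival = itertools.count()  # tie-breaker keeps equal priorities in FCFS order

    def submit(task_name, priority):
        # Assumed convention: lower number = higher priority.
        heapq.heappush(ready_queue, (priority, next(arrival), task_name))

    def run_next():
        # Always returns the highest priority task currently waiting.
        if not ready_queue:
            return None
        _, _, task_name = heapq.heappop(ready_queue)
        return task_name

    submit("batch-report", 5)        # low priority task
    submit("memory-maintenance", 0)  # high priority task
    print(run_next())                # 'memory-maintenance' runs first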
The problems associated with scheduling tasks in an operating system also occur in multi-user network systems with a plurality of network connections, network devices and data packets. In a network system environment, a data packet is analogous to a task in an operating system. Customers on a network system may have different Customer Premise Equipment (“CPE”) (i.e., a computer) with different capabilities, such as the ability to send and receive data packets at various data rates or bandwidths. In a multimedia system, logical multimedia channels are typically used by a network connection to create separate audio, video and data channels. The audio and video channels are typically allocated a predetermined, fixed maximum bandwidth. For example, on a modem connection an audio channel may have a bandwidth of 5,300 bits-per-second (bps) and a video channel may have a bandwidth of 23,500 bps, for a multimedia bandwidth of 28,800 bps (i.e., the sum of the two channels). Many network hosts allow customers to subscribe to various Classes-of-Service (“CoS”) and Qualities-of-Service (“QoS”) to optimize reliability and data transmission speeds. As is known in the art, class-of-service provides a reliable (e.g., error free, in sequence, with no loss or duplication) transport facility independent of the quality-of-service. Class-of-service parameters include maximum downstream data rates, maximum upstream data rates, upstream channel priority, guaranteed minimum data rates, guaranteed maximum data rates and others. Quality-of-service collectively specifies the performance of a network service that a device expects on a network. Quality-of-service parameters include the transit delay expected to deliver data to a specific destination, the level of protection from unauthorized monitoring or modification of data, the cost for delivery of data, the expected residual error probability, the relative priority associated with the data and other parameters. Higher class-of-service and quality-of-service connections transmit higher priority data packets. Thus, various customers on the network system will transmit and receive both high priority and low priority data packets.
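For illustration only, the parameters listed above could be grouped roughly as in the following Python sketch; the field names and units are assumptions rather than a standard class-of-service or quality-of-service definition, and the arithmetic at the end simply restates the modem example.

    from dataclasses import dataclass

    @dataclass
    class ClassOfService:
        max_downstream_bps: int
        max_upstream_bps: int
        upstream_channel_priority: int
        guaranteed_min_bps: int
        guaranteed_max_bps: int

    @dataclass
    class QualityOfService:
        transit_delay_ms: float            # expected delay to a specific destination
        protection_level: str              # protection from monitoring or modification
        delivery_cost: float
        residual_error_probability: float
        relative_priority: int

    # The modem example above: a 5,300 bps audio channel plus a 23,500 bps
    # video channel yields a 28,800 bps multimedia bandwidth.
    audio_bps, video_bps = 5_300, 23_500
    assert audio_bps + video_bps == 28_800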
In a network system, a network device, such as a router, is responsible for routing data packets to the appropriate device on a network topology. For example, a network system including a network host and multiple users, or customers, will utilize a router to direct downstream data packets (i.e., data packets from the network host) to the customer premise equipment and upstream data packets (i.e., data packets from the customer premise equipment) to the network host. Along with directing traffic between multiple customer premise equipment and the network host, the router also typically schedules the order in which data packets will be sent and received by the network host and the customer premise equipment.
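A rough sketch of the downstream/upstream distinction described above; the class, field names and addresses are hypothetical and serve only to make the direction test concrete.

    class SimpleRouter:
        def __init__(self, host_address, cpe_addresses):
            self.host_address = host_address
            self.cpe_addresses = set(cpe_addresses)

        def direction(self, packet):
            # Downstream: from the network host toward customer premise equipment.
            # Upstream: from customer premise equipment toward the network host.
            if packet["source"] == self.host_address:
                return "downstream"
            if packet["source"] in self.cpe_addresses:
                return "upstream"
            return "unknown"

    router = SimpleRouter("host", {"cpe-1", "cpe-2"})
    print(router.direction({"source": "cpe-1", "destination": "host"}))  # upstream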
There are several problems associated with scheduling data packets of high and low priority from multiple network devices. Higher priority data packets, which are transmission delay sensitive, are typically sent by a router on higher priced connections (e.g., charged time-of-use fees). Delay sensitive information includes voice, real-time video and other information sensitive to transmission delays. Such information cannot be sent over connections that may have a large transmission delay without loss of information or loss of quality. Non-delay sensitive information and large bursts of data are typically sent over dedicated packet switched connections, which are lower priced connections (i.e., charged monthly connection or bandwidth fees).
A router utilizes scheduling schemes analogous to those used by an operating system to manage tasks. When the router receives data packets from one or more customer premise equipment, the router typically places the data packets into a receive queue. After determining the priority associated with each data packet, the router uses one or more scheduling methods to schedule the order in which the router will send the data packets upstream to the network host by placing the data packets in a transmission queue. Likewise, the same router typically receives upstream data
