Controlling allocation of system resources with an enhanced...

Electrical computers and digital processing systems: multicomputer data transferring – Computer-to-computer data routing – Least weight routing

Reexamination Certificate


Details

Patent number: 06584488
Class: C709S241000
Type: Reexamination Certificate
Status: active

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to data processing systems, and in particular to a method and system for scheduling threads on a data processing system. Still more particularly, the present invention relates to a method and system for scheduling threads on a data processing system wherein system resources are allocated based on a priority determination of a thread.
2. Description of the Related Art
The basic structure of a conventional computer system includes a system bus or a direct channel that connects one or more processors to input/output (I/O) devices (such as a display monitor, keyboard and mouse), a permanent memory device for storing the operating system and user programs (such as a magnetic hard disk), and a temporary memory device that is utilized by the processors to carry out program instructions (such as random access memory or “RAM”).
When a user program runs on a computer, the computer's operating system (OS) first loads the program files into system memory. The program files include data objects and instructions for handling the data and other parameters which may be input during program execution.
The operating system creates a process to run a user program. A process is a set of resources, including (but not limited to) values in RAM, process limits, permissions, registers, and at least one execution stream. Such an execution stream is commonly termed a “thread.” The utilization of threads in operating systems and user applications is well known in the art. Threads allow multiple execution paths within a single address space (the process context) to run concurrently on a processor. This “multithreading” increases throughput and modularity in multiprocessor and uniprocessor systems alike. For example, if a thread must wait for the occurrence of an external event, then it stops and the computer processor executes another thread of the same or different computer program to optimize processor utilization. Multithreaded programs also can exploit the existence of multiple processors by running the application program in parallel. Parallel execution reduces response time and improves throughput in multiprocessor (MP) systems.
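By way of illustration only (not part of the patent text), the following minimal POSIX threads sketch shows two threads of a single process running within the same address space and sharing data through it; the variable and function names are hypothetical.

/* Illustrative sketch (not from the patent): two POSIX threads sharing the
 * same process context -- here, a global counter protected by a mutex. */
#include <pthread.h>
#include <stdio.h>

static long shared_counter = 0;                 /* lives in the shared address space */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        shared_counter++;                       /* both threads update the same data */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", shared_counter);  /* both threads saw the same memory */
    return 0;
}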
FIG. 1 illustrates multithreading in a uniprocessor computer system 100, which includes a bus 118 that connects one processor 112 to various I/O devices 114 and a memory device 116. Memory device 116 contains a set of thread context fields 120, one for each thread associated with a particular process. Each thread consists of a set of register values and an execution stack 124. The register values are loaded into the CPU registers when the thread executes. The values are saved back in memory when the thread is suspended. The code that the thread runs is determined by the contents of a program counter within the register set. The program counter typically points to an instruction within the code segment of the application program. Memory device 116 further contains all of the logical addresses for data and instructions utilized by the process, including the stacks of the various threads. After a thread is created and prior to termination, the thread will most likely utilize system resources to gain access to process context 122. Through the process context 122, process threads can share data and communicate with one another in a simple and straightforward manner.
AIX machines support the traditional UNIX operating system (OS), which provides a number of OS commands. The AIX operating system is IBM's implementation of the UNIX operating system. UNIX is a trademark of UNIX Systems Laboratories, Inc. The NICE command and related operating system features purport to allow an administrator to favor or disfavor specific processes as they execute. The NICE command sets an internal NICE value which is utilized in later priority calculations. However, the traditional NICE command's ability to favor processes is weak, and its effect tends to diminish over time.
During standard UNIX priority calculation, as each thread runs, it accumulates ticks of central processing unit (CPU) time. The CPU time is divided in half in the priority calculation to allow more CPU time to the thread before its priority degrades. UNIX NICE adjusts the priority by adding a constant, the NICE value, to the priority calculation, where a larger value correlates to a lower priority for the thread.
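The calculation described above can be pictured with a brief sketch; the constants, field names, and tick arithmetic below are assumptions chosen for illustration and are not AIX's actual scheduler code.

/* Illustrative sketch of a standard UNIX-style priority calculation.
 * The constants and names are assumptions, not actual AIX scheduler code. */
#define PRI_BASE 40          /* assumed base user priority */

struct sched_thread {
    unsigned int cpu_ticks;  /* recent CPU time accumulated, in clock ticks */
    int nice;                /* NICE value; a larger value means lower priority */
    int priority;            /* numerically lower runs first */
};

/* Called on each clock tick charged to the running thread. */
void account_tick(struct sched_thread *t)
{
    t->cpu_ticks++;
    /* The accumulated CPU time is divided in half in the calculation,
     * and the NICE value is simply added as a constant. */
    t->priority = PRI_BASE + t->nice + (int)(t->cpu_ticks / 2);
}

/* Called periodically for every thread: decaying the accumulated CPU time
 * "forgives" past usage so a thread's priority recovers (the fairness
 * feedback whose interaction with NICE is discussed further below). */
void decay_cpu(struct sched_thread *t)
{
    t->cpu_ticks /= 2;
}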
Thread scheduling is an important aspect of implementing threads. Scheduling is based on a priority calculation in which the NICE value adjusts the priority. However, the scheduler recognizes that a disfavored thread is receiving less CPU time and attempts to compensate by giving it more; there is thus an internal conflict in the prior art.
The scheduler's job is to share resources fairly among the runnable threads on a processor based on resource utilization. The only resource that the AIX scheduler deals with is CPU time. The mechanism that the scheduler utilizes to share the CPU is the priority calculation. Threads with numerically lower priorities run ahead of those with numerically higher priorities. (On an MP system, affinity considerations may override this rule.)
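That dispatching rule amounts to selecting, from the runnable set, the thread whose priority value is numerically lowest. The sketch below is hypothetical and ignores MP affinity considerations.

/* Hypothetical sketch of the dispatching rule: among the runnable threads,
 * the one with the numerically lowest priority value is run next
 * (MP affinity considerations are ignored here). */
int pick_next(const int priority[], int nrunnable)
{
    int best = 0;
    for (int i = 1; i < nrunnable; i++) {
        if (priority[i] < priority[best])
            best = i;
    }
    return best;   /* index of the thread to dispatch */
}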
The UNIX operating system (OS) has an associated system NICE value which is utilized by the kernel to determine when a thread should be scheduled to run. This value can be decreased so that a process executes more quickly, or increased so that it executes more slowly and does not interfere with other system activities.
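For reference, the standard POSIX getpriority() and setpriority() interfaces can be used to read and adjust a process's NICE value from a program; the example below is a generic illustration of the traditional mechanism, not the enhanced mechanism this patent proposes.

/* Illustrative use of the standard POSIX interfaces for adjusting a
 * process's NICE value (the traditional mechanism, not the enhancement
 * proposed here). */
#include <sys/resource.h>
#include <stdio.h>
#include <errno.h>

int main(void)
{
    /* Read the current NICE value of this process. */
    errno = 0;
    int current = getpriority(PRIO_PROCESS, 0);
    if (current == -1 && errno != 0) {
        perror("getpriority");
        return 1;
    }
    printf("current nice value: %d\n", current);

    /* Disfavor this process: a larger NICE value means a lower priority. */
    if (setpriority(PRIO_PROCESS, 0, current + 10) == -1) {
        perror("setpriority");
        return 1;
    }
    return 0;
}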
The thread scheduler, which is part of the UNIX kernel, keeps the CPU busy by allocating it to the highest priority thread. The NICE value of a process is utilized to modify the scheduling priority of its threads. The principal factor taken into account when calculating the scheduling priority for a process is its recent CPU usage.
The standard UNIX priority implementation (which AIX and probably all other vendors utilize) determines a thread's dispatching priority as a simple function of its recent CPU utilization history. The NICE facility adjusts the dispatching priorities of select processes on the system by simply shifting the priority upwards or downwards by a fixed amount (the NICE value). Unfortunately, the feedback mechanisms in the scheduler are simultaneously geared to allocating resources fairly to everyone. This results in the diminished effectiveness of NICE mentioned above: the scheduler notices the decreased utilization of a thread that has been NICEd down and soon ends up giving that thread treatment nearly equal to that of all the other threads contending for the processor.
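This erosion can be demonstrated with a toy simulation of the priority rule sketched earlier: the disfavored thread accumulates less CPU time, so its decayed usage stays low and its priority recovers, pulling its share of the CPU back toward that of the default thread. The constants below (100 ticks per second, base priority 40, a 60-second run) are assumptions for illustration only and do not model the real AIX scheduler.

/* Toy simulation (illustrative only) of the feedback effect described above:
 * two compute-bound threads, one disfavored by a NICE value, scheduled with
 * the sketched priority rule and a once-a-second decay of accumulated CPU. */
#include <stdio.h>

#define PRI_BASE 40
#define TICKS_PER_SEC 100

struct toy_thread { unsigned cpu; int nice; long total; };

static int prio(const struct toy_thread *t)
{
    return PRI_BASE + t->nice + (int)(t->cpu / 2);
}

int main(void)
{
    struct toy_thread a = { 0, 0, 0 };    /* default thread               */
    struct toy_thread b = { 0, 20, 0 };   /* thread disfavored by NICE 20 */

    for (int sec = 0; sec < 60; sec++) {
        for (int tick = 0; tick < TICKS_PER_SEC; tick++) {
            /* Dispatch the thread with the numerically lower priority. */
            struct toy_thread *run = (prio(&a) <= prio(&b)) ? &a : &b;
            run->cpu++;
            run->total++;
        }
        a.cpu /= 2;                       /* once-a-second decay: the      */
        b.cpu /= 2;                       /* "forgiveness" that erodes the */
    }                                     /* NICE disadvantage             */

    printf("default thread: %ld ticks, NICE'd thread: %ld ticks\n",
           a.total, b.total);
    return 0;
}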
Today's NICE −20 (the maximum adjustment allowed) has only a limited effect. For example, if two compute-bound threads compete and one is NICE'd to −20, after a short time the default thread gets about 60% of the CPU time while the NICE'd thread gets 40%. This is a rather weak and unsatisfactory result. Although this figure can be increased with AIX schedtune, doing so degrades the normal behavior of the scheduler. Further, today's NICE command provides no discrimination between threads after the initial advantage granted at the start of each second.
Many consumers, typically ones with larger systems, desire a better way to allocate processor resources among the competing threads on their systems. Improvement in this area is critically important to the large-server business, as those consumers generally experience performance problems with mixed workloads on a single system (i.e., in large MP systems). Consumers are no longer content to dedicate individual machines to specific subsets of their work, but demand to be able to cost-effectively share a single large system.
A new priority calculation within the system scheduler is therefore desired to improve the effectiveness of the NICE command beyond merely adding the NICE value to a function.
It would therefore be desirable and advantageous to provide an improved method and system for allocating system resources among competing threads.
