Thread optimization
1998-06-11
2002-11-05
Banankhah, Majid (Department: 2156)
Electrical computers and digital processing systems: multicomputer data transferring
Computer-to-computer data routing
Least weight routing
C709S241000
Reexamination Certificate
active
06477561
ABSTRACT:
TECHNICAL FIELD
This invention relates to application programs that use and manage multiple execution threads.
BACKGROUND OF THE INVENTION
A typical server application has a pool of execution threads, referred to herein as worker threads, for performing requested tasks. Task requests arrive asynchronously at a thread pool manager, which queues them for available worker threads. When a worker thread becomes available, the pool manager removes a request from the queue and assigns it to that worker thread. The worker thread performs the requested task and then becomes available for another task request.
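The arrangement described above can be illustrated with a short sketch. The following Python fragment is a hypothetical illustration only, assuming a simple FIFO request queue and a fixed pool size; the names worker_loop and submit are assumptions for this sketch and do not come from the patent.

```python
import queue
import threading

# Minimal illustration of the pool described above: task requests arrive
# asynchronously, are queued, and are picked up by whichever worker thread
# becomes available next. Names and pool size are illustrative only.

POOL_SIZE = 4                     # fixed pool size for this sketch
requests = queue.Queue()          # pending task requests

def worker_loop():
    while True:
        task = requests.get()     # block until a request is available
        if task is None:          # sentinel: shut this worker down
            break
        task()                    # perform the requested task
        requests.task_done()      # worker becomes available again

def submit(task):
    """Called by the thread pool manager when a request arrives."""
    requests.put(task)

workers = [threading.Thread(target=worker_loop) for _ in range(POOL_SIZE)]
for w in workers:
    w.start()

submit(lambda: print("handled one request"))
requests.join()                   # wait for queued work to finish
for _ in workers:                 # stop the workers
    requests.put(None)
for w in workers:
    w.join()
```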
When designing a server application program such as this, it becomes necessary to set a limit on the total number of threads that will be made available from the thread pool. The optimum limit depends on the type of work being performed by the processors executing the threads. I/O-related tasks are relatively non-intensive in terms of processor utilization because of the frequent waits imposed by peripheral devices. If these types of tasks are being performed, it is most efficient to allocate a large number of worker threads to fully utilize the available processing bandwidth of the processors. Computational tasks, on the other hand, result in a relatively high utilization of a computer's processors. If these types of tasks are being performed, it is more efficient to limit the number of worker threads to the number of available processors. Otherwise, processing time is wasted by frequent switching between threads.
Any decision regarding the optimum thread pool size is complicated when processor scalability is considered. With multiple processors, lock contention can become a serious problem, and lock contention problems are exacerbated by larger numbers of threads. In certain situations, the use of a large number of threads can actually produce negative processor scalability: performance decreases as processors are added because many worker threads contend for the same locks.
It is very difficult to optimize the thread pool for a particular application program, mainly because conditions change constantly. Specifically, the type of work performed by a computer's processors changes over time. Even if the needs of the application program were known in advance, an optimization might become ineffective because of the activities of other application programs and/or processes. Furthermore, the same requests might generate different types of blocking behavior at different times, depending on conditions independent of the application program itself.
Another potential problem is that a server program might not even have direct control over a thread pool, such as when the thread pool is provided by a separate application or external function library.
SUMMARY OF THE INVENTION
The inventor has solved the problem of thread pool optimization by varying the number of available threads over time. A thread limit is maintained and repeatedly updated based on the actual CPU utilization of the computer. If CPU utilization is low, the thread limit is set to a relatively high number; if CPU utilization is high, the thread limit is set to a relatively low number.
When a thread is initiated to service a request, it first calls a gating function. The gating function compares the current number of active threads against the current thread limit. If the number of active threads equals or exceeds the thread limit, the gating function delays its calling thread for a predefined time and then checks again. The thread is allowed to continue only after the number of active threads has dropped below the thread limit.
In addition to the gating function, the thread calls an exit function after it has completed servicing the current request, just before the thread is ready to process the next request. The gating function and the exit function maintain the count of active threads. The gating function increments an active thread count variable just before returning control to its calling thread. The exit function decrements the active thread count variable just before the thread is returned to the thread pool.
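Taken together, the gating and exit functions of the two preceding paragraphs can be sketched as follows. This is an illustrative Python sketch only, assuming a lock-protected counter and a fixed polling delay; the names gate, exit_gate, and GATE_DELAY_SECONDS are assumptions, not terms from the patent.

```python
import threading
import time

# Sketch of the gating and exit functions described above. The active-thread
# count and the thread limit are shared state; the names, initial values, and
# polling interval are illustrative assumptions.

state_lock = threading.Lock()
active_threads = 0                # threads currently servicing requests
thread_limit = 8                  # updated elsewhere from CPU utilization
GATE_DELAY_SECONDS = 0.05         # predefined delay between re-checks

def gate():
    """Called by a worker thread before it services a request."""
    global active_threads
    while True:
        with state_lock:
            if active_threads < thread_limit:
                active_threads += 1      # count this thread as active
                return                   # allow the thread to continue
        time.sleep(GATE_DELAY_SECONDS)   # limit reached: wait and re-check

def exit_gate():
    """Called by a worker thread after it finishes servicing a request."""
    global active_threads
    with state_lock:
        active_threads -= 1              # thread returns to the pool
```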
The thread limit is updated at a predefined interval such as one second. An update function calls an existing operating system function to determine current CPU utilization. If the CPU utilization is below a defined lower threshold, the thread limit is increased. If the CPU utilization is above a defined upper threshold, the thread limit is decreased.
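A corresponding sketch of the periodic update is given below. The operating system call that reports CPU utilization is platform-specific, so it is replaced here by a placeholder; the threshold values, step size, and bounds are illustrative assumptions rather than values from the patent.

```python
import threading

# Sketch of the periodic thread-limit update described above, operating on the
# same kind of lock-protected limit as the gating sketch. The utilization query
# is stubbed out because the patent relies on an existing operating system
# function; thresholds, step size, and bounds are illustrative assumptions.

UPDATE_INTERVAL_SECONDS = 1.0     # a predefined interval such as one second
LOWER_THRESHOLD = 0.50            # below this utilization, raise the limit
UPPER_THRESHOLD = 0.90            # above this utilization, lower the limit
MIN_LIMIT, MAX_LIMIT = 1, 64

limit_lock = threading.Lock()
thread_limit = 8                  # the limit consulted by the gating function

def current_cpu_utilization():
    """Placeholder for the OS-provided utilization query (0.0 to 1.0)."""
    return 0.5

def update_thread_limit():
    global thread_limit
    utilization = current_cpu_utilization()
    with limit_lock:
        if utilization < LOWER_THRESHOLD:
            thread_limit = min(thread_limit + 1, MAX_LIMIT)
        elif utilization > UPPER_THRESHOLD:
            thread_limit = max(thread_limit - 1, MIN_LIMIT)
    # re-arm a timer so the limit is re-evaluated at the fixed interval
    threading.Timer(UPDATE_INTERVAL_SECONDS, update_thread_limit).start()
```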
REFERENCES:
patent: 5752031 (1998-05-01), Cutler et al.
patent: 6105053 (2000-08-01), Kimmel et al.
patent: 6161166 (2000-12-01), Doing et al.
Banankhah Majid
Lee & Hayes PLLC
Microsoft Corporation