Lock management system and method for use in a data...

Electrical computers and digital processing systems: memory – Storage accessing and control – Control technique

Reexamination Certificate


Details

Type: Reexamination Certificate
Status: active
Patent Number: 06816952

ABSTRACT:

CROSS-REFERENCE TO OTHER APPLICATIONS
The following co-pending applications of common assignee contain some common disclosure:
“Directory-Based Cache Coherency System Supporting Multiple Instruction Processor and Input/Output Caches”, filed Dec. 31, 1997, Ser. No. 09/001,598, and incorporated herein by reference in its entirety.
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates generally to an improved memory management system and method for use in a data processing system; and, more particularly, relates to an improved lock management system and method.
2. Description of the Prior Art
Data processing systems are becoming increasingly complex. Some systems, such as Symmetric Multi-Processor computer systems, couple two or more Instruction Processors (IPs) and multiple Input/Output (I/O) Modules to shared memory. This allows the multiple IPs to operate simultaneously on the same task, and also allows multiple tasks to be performed at the same time to increase system throughput.
As the number of units coupled to a shared memory increases, more demands are placed on the memory and memory latency increases. To address this problem, high-speed cache memory systems are often coupled to one or more of the IPs for storing data signals that are copied from main memory or from other cache memories. These cache memories are generally capable of processing requests faster than the main memory while also serving to reduce the number of requests that the main memory must handle. This increases system throughput.
While the use of cache memories increases system throughput, it causes other design challenges. When multiple cache memories are coupled to a single main memory for the purpose of temporarily storing data signals, some system must be utilized to ensure that all IPs are working from the same (most recent) copy of the data. For example, if a data item is copied, and subsequently modified, within a cache memory, another IP requesting access to the same data item must be prevented from using the older copy of the data item stored either in main memory or the requesting IP's cache. This is referred to as maintaining cache coherency. Maintaining cache coherency becomes more difficult as more cache memories are added to the system since more copies of a single data item may have to be tracked.
Another problem related to that described above involves providing a way to ensure continued access to shared data resources. In a shared memory system, various IPs may require access to common data stored in memory. A first IP that has copied such data within its cache memory may be forced to relinquish control over that data because another IP has requested that same information. If the first IP has not completed processing activities related to that data, the IP is required to regain access to it at a later time. In some instances, this is an acceptable way of performing processing activities. In other situations, losing control over a data item in the middle of program execution may result in errors.
The type of errors alluded to in the foregoing paragraph can best be understood by example. Consider a transaction processing system that is transferring funds from one bank account to another. The transaction is not considered complete until both bank account balances have been updated. If the instantiation of the software program, or “thread”, which is processing this transaction loses access to the data associated with the account balances at a time when only half of the updates have been completed, the accounts may be in either an under- or over-funded state. To prevent this situation, some mechanism must be used to “lock”, or activate, sole access rights to the data until the thread has completed all necessary processing activities. The thread then “unlocks”, or deactivates, sole access rights to the data.
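The lock/unlock discipline described above can be sketched as follows. This is a minimal illustration, not the patent's mechanism; the account names and the use of Python's `threading.Lock` are assumptions made for the example.

```python
import threading

# Hypothetical in-memory account balances; the names are illustrative only.
accounts = {"checking": 100, "savings": 50}

# A single lock guards both balances, so a transfer appears all-or-nothing
# to every other thread: no thread can observe only half of the updates.
transfer_lock = threading.Lock()

def transfer(src, dst, amount):
    # "Lock": activate sole access rights before touching either balance.
    with transfer_lock:
        accounts[src] -= amount
        accounts[dst] += amount
    # "Unlock": sole access rights are deactivated when the block exits,
    # only after BOTH updates have completed.

transfer("checking", "savings", 30)
```

If the lock were released between the two updates, another thread could see the funds debited from one account but not yet credited to the other, which is exactly the under- or over-funded state described above.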
Various types of locking mechanisms have been introduced in the prior art. Many of these locking mechanisms use a lock cell or semaphore. A lock cell is a variable that is used to control a software-lock to an associated shared resource such as shared memory data. The state of the lock cell indicates whether the software-lock and the associated, protected shared resource are currently activated by another thread. Generally, a thread activates the software-lock using a lock-type instruction. As is known in the art, this type of instruction first tests the state of the lock cell. If the state of the lock cell indicates the shared resource is available, the instruction then sets the lock cell to activate the software-lock for the executing thread. These testing and setting operations are performed as an atomic operation by a single instruction to prevent multiple processors from inadvertently gaining simultaneous access to the same lock cell.
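The atomic test-and-set behavior of such a lock-type instruction can be modeled as a sketch. The inner `threading.Lock` here only stands in for the hardware guarantee that the test and the set happen indivisibly; on real hardware this is a single instruction such as a test-and-set or compare-and-swap. The class and method names are assumptions for illustration.

```python
import threading

class LockCell:
    """Models a lock cell tested and set by one atomic instruction."""

    def __init__(self):
        self._cell = 0                   # 0 = software-lock free, 1 = activated
        self._atomic = threading.Lock()  # models instruction-level atomicity

    def test_and_set(self):
        # Atomically: read the old state, then set the cell to "activated".
        # Returning 0 means the caller won the software-lock.
        with self._atomic:
            old = self._cell
            self._cell = 1
            return old

    def clear(self):
        # Deactivate the software-lock.
        with self._atomic:
            self._cell = 0

cell = LockCell()

def acquire(cell):
    # Spin until the atomic test-and-set observes the unlocked state.
    while cell.test_and_set() != 0:
        pass

acquire(cell)
# ... critical section protected by the software-lock ...
cell.clear()
```

Because the test and the set are one indivisible operation, two threads can never both observe the cell as free and both believe they activated the lock.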
The lock cell is generally stored within main memory. As noted above, this lock cell may be a software-lock associated with, and protecting, shared data. By software convention, the shared data must not be accessed without first gaining authorization through the software-lock. Many prior art systems store the lock cell and associated data in the same cacheable entity of memory, or “cache line”. As a result, when an IP attempts a lock-type operation on a lock cell, both the lock cell and at least some of the protected data are transferred to the IP's cache. However, if this attempt is made when the software-lock has already been activated by another thread, the transfer of the lock cell and protected data to the new requester's cache temporarily disrupts the processing activity of the IP executing the thread that had activated the software-lock. This reduces execution throughput.
A prior art solution for preventing the foregoing problem was to separate the lock cell into one cache line and the protected data into another cache line. While this solution prevents the thrashing of protected data when another thread attempts the software-lock, the solution causes the IP to acquire two cache lines. First, the IP copies the cache line that contains the lock cell in an exclusive state to attempt the software-lock. If the software-lock is successfully activated, the IP copies the cache line for the protected data upon the first reference to the data. The IP must temporarily suspend processing activities, or “stall”, during the time the protected data is copied from memory. Therefore, while this scheme may prevent the thrashing of data during attempted lock activation, it results in cache line access stalls after the software-lock is successfully activated.
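The layout trade-off between the two schemes can be made concrete with structure offsets. The sketch below assumes a 64-byte cache line (a common but not universal size) and uses `ctypes` only to show where the fields land; the field names are hypothetical.

```python
import ctypes

CACHE_LINE = 64  # assumed cache line size in bytes

class SameLine(ctypes.Structure):
    # Lock cell and protected data packed together: a failed lock attempt
    # by another IP drags the protected data's cache line away as well.
    _fields_ = [("lock_cell", ctypes.c_uint64),
                ("balance", ctypes.c_uint64)]

class SeparateLines(ctypes.Structure):
    # Padding pushes the protected data into its own cache line, so
    # attempting the lock no longer thrashes the data itself -- at the
    # cost of a second cache line fill after the lock is activated.
    _fields_ = [("lock_cell", ctypes.c_uint64),
                ("_pad", ctypes.c_ubyte * (CACHE_LINE - 8)),
                ("balance", ctypes.c_uint64)]

# In SameLine both fields share one 64-byte line; in SeparateLines the
# data begins exactly one cache line after the lock cell.
shares_line = (SameLine.balance.offset // CACHE_LINE ==
               SameLine.lock_cell.offset // CACHE_LINE)
split_lines = SeparateLines.balance.offset >= CACHE_LINE
```

This makes visible why the second scheme trades lock-attempt thrashing for a post-activation stall: the protected data now lives in a line the IP has not yet copied.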
A related problem to the foregoing involves acquiring cache lines using specific cache line states. Some processing systems such as the ES7000™ platform commercially-available from the Unisys Corporation copy data to cache in a variety of states according to the first type of reference to the data or according to how the data was last used. For example, the data may be cached in a “shared” state such that the associated processor can read, but not update, this data. When the IP copies data to the cache in a shared state, a subsequent write operation causes the IP cache to acquire an “exclusive” state for the cache line so that the write can be completed. Acquiring the exclusive state after already having the shared state takes nearly as long as initially copying the data. Other data may be initially cached with exclusive state. Prior art locking mechanisms do not take into consideration how the data will be used when acquiring protected data from main memory, resulting in unnecessary disruption of processing activities.
Yet another drawback associated with prior art software-lock mechanisms involves the time associated with retrieving software-lock-protected data from main memory once a lock has been activated. The software-lock may be associated with one or more cache lines of data that must be copied from memory during subsequent transfer operations. These operations may each be relatively time consuming, especially if a multi-level caching hierarchy is involved. Some p
