Computer processor read/alter/rewrite optimization cache...

Error detection/correction and fault detection/recovery – Pulse or data error handling – Memory testing

Reexamination Certificate


Details

C714S722000, C714S719000, C714S723000, C365S205000


active

06754859

ABSTRACT:

CROSS REFERENCE TO RELATED APPLICATION
This application is related to our copending patent applications assigned to the assignee hereof:
“GATE CLOSE FAILURE NOTIFICATION FOR FAIR GATING IN A NONUNIFORM MEMORY ARCHITECTURE DATA PROCESSING SYSTEM” by William A. Shelly et al., filed Sep. 30, 1999, with Ser. No. 09/409,456; and
“GATE CLOSE BALKING FOR FAIR GATING IN A NONUNIFORM MEMORY ARCHITECTURE DATA PROCESSING SYSTEM” by David A. Egolf et al., filed Sep. 30, 1999, with Ser. No. 09/409,811.
1. Field of the Invention
The present invention generally relates to data processing systems, and more specifically to techniques for one processor to detect changes made to shared memory by another processor.
2. Background of the Invention
Data processing systems invariably require that resources be shared among different processes, activities, or tasks in the case of multiprogrammed systems, and among different processors in the case of multiprocessor systems. Such sharing is often not obvious within user programs. However, it is a necessity in operating systems, and is quite common in utility programs such as database and communications managers. For example, a dispatch queue is typically shared among multiple processors in a multiprocessor system. This provides a mechanism that allows each processor to select the highest priority task in the dispatch queue to execute. Numerous other operating system tables are typically shared among different processes, activities, tasks, and processors.
Serialization of access to shared resources in a multiprocessor system is controlled through mutual exclusion. This is typically implemented utilizing some sort of hardware gating or semaphores. Gating works by having a process, activity, or task “close” or “lock” a “gate” or “lock” before accessing the shared resource. Then, the “gate” or “lock” is “opened” or “unlocked” after the process, activity, or task is done accessing the shared resource. Both the gate closing and opening are typically atomic memory operations on multiprocessor systems.
There are typically two different types of gates: queued gates and spin gates. Semaphores are examples of queued gates. When a process, activity, or task attempts to “close” a queued gate that is already closed, that process, activity, or task is placed on a queue for that gate, and is dequeued and activated when the gate is subsequently opened by some other process, activity, or task. Queued gates are typically found in situations where the exclusive resource time is quite lengthy, especially in comparison with the time required to dispatch another process, activity, or task.
The second type of gate is a “spin” gate. When a process, activity, or task attempts to “close” a spin gate that is already closed, it enters a tight loop in which the processor attempting to close the spin gate keeps executing the “close” instruction until the gate is ultimately opened by another processor, or until the processor decides to quit trying. Note that “spin” gates assume a multiprocessor system, since the processor “spinning” trying to “close” the spin gate depends on another processor to “open” the gate. Spin gates are typically found in situations where the exclusive resource time is fairly short, especially in comparison with the time required to dispatch another process, activity, or task. They are especially prevalent in time-critical situations.
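The spin-gate behavior described above can be sketched in C11 atomics. This is a minimal illustration, not the patent's mechanism; the convention assumed here is 0 = open, 1 = closed (note that the GCOS convention discussed later in this document is the reverse).

```c
#include <stdatomic.h>

/* One attempt to close the gate: atomically set it to 1 (closed) and
   report whether the old value showed the gate was open. */
static int try_close(atomic_int *gate) {
    return atomic_exchange(gate, 1) == 0;
}

/* Spin: keep re-executing the "close" attempt until it succeeds,
   i.e. until some other processor opens the gate. */
static void close_spin_gate(atomic_int *gate) {
    while (!try_close(gate)) {
        /* gate already closed by someone else; keep spinning */
    }
}

/* Executed by another processor when it is done with the resource. */
static void open_spin_gate(atomic_int *gate) {
    atomic_store(gate, 0);
}
```

The atomic exchange is the critical piece: it performs the read and the write as one indivisible memory operation, so two processors cannot both observe the gate as open and both believe they closed it.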
As noted above, the instructions utilized to open and close gates, in particular spin gates, typically execute utilizing atomic memory operations. Such atomic memory modification instructions are found in nearly every architecture supporting multiple processors, especially when the processors share memory. Some architectures utilize compare-and-swap or compare-and-exchange instructions (see FIGS. 10 and 11) to “close” gates. The Unisys 1100/2200 series of computers utilizes Test Set and Skip (TSS) and Test Clear and Skip (TCS) instructions to close and open spin gates.
The GCOS® 8 architecture produced by the assignee herein utilizes a Set Zero and Negative Indicators and Clear (SZNC) instruction to “close” a spin gate and a Store Instruction Counter plus 2 (STC2) instruction to subsequently “open” the spin gate. The SZNC instruction sets the Zero and Negative indicators based on the current value of the gate being “closed”. It then clears (or zeros) the gate. The next instruction executed is typically a branch instruction that repeats the SZNC instruction if the gate being closed was already clear (or contained zero). Thus, the SZNC instruction will be executed repeatedly as long as the spin gate is closed, as indicated by having a zero value. The gate is opened by another processor by storing some non-zero value in the gate cell. In the GCOS 8 architecture, execution of the STC2 instruction to “open” a gate guarantees that the “opened” gate will contain a non-zero value.
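The SZNC/STC2 sequence described above can be emulated as a sketch (this models the protocol, not the actual GCOS 8 instruction semantics). Here `sznc` atomically fetches the gate value, from which the Zero indicator would be set, and clears the gate; `stc2` stores the instruction counter plus 2, assumed non-zero, to reopen it.

```c
#include <stdatomic.h>

/* SZNC analogue: fetch the current gate value and clear the gate in
   one atomic step. A zero return means the gate was already closed. */
static long sznc(atomic_long *gate) {
    return atomic_exchange(gate, 0);
}

/* The SZNC-plus-branch loop: retry while SZNC saw zero, i.e. while
   the gate was already closed by some other processor. */
static void gcos_close_gate(atomic_long *gate) {
    while (sznc(gate) == 0) {
        /* branch back and re-execute the SZNC analogue */
    }
}

/* STC2 analogue: store instruction counter + 2, which (as the text
   notes) is guaranteed non-zero, marking the gate open. */
static void stc2(atomic_long *gate, long instruction_counter) {
    atomic_store(gate, instruction_counter + 2);
}
```

Note how the convention is inverted relative to a typical lock: for this gate, zero means closed, and any non-zero value means open.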
One problem that occurs whenever resources are shared between and among processors is that of cache ownership of directly shared data, including locks.
A cache siphon occurs when the cache copy of a block of memory is moved from one cache memory to another. When more than one processor simultaneously tries to obtain write access to the same word or block of memory containing a gate in order to close it, the block of memory can “ping pong” back and forth between the processors, as each processor siphons the block containing the gate into its own cache memory to try to close the gate.
Another problem that arises when directly sharing resources is that in the typical processor architecture, processors repeatedly attempt to close gates or otherwise modify directly shared data until that processor can change that shared data as required. For example, in the case of gates, one processor will bang on the gate until it is opened by another processor.
At first glance this may not seem like a problem since the processor “banging” at a lock cannot do anything else anyway until it succeeds in getting the gate locked. However, this constant “banging” on the gate does introduce significant overhead in bus and cache traffic. It would thus be advantageous to reduce this bus and cache traffic when one processor is waiting for another processor to modify a shared location in memory.
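One standard way to reduce the bus and cache traffic described above is test-and-test-and-set spinning; it is shown here as a general technique, not as the specific mechanism this patent claims. The waiting processor spins on a plain load, which is satisfied from its own cache copy and generates no coherence traffic, and issues the expensive atomic read-alter-rewrite only when the gate appears open.

```c
#include <stdatomic.h>

/* Test-and-test-and-set gate close. Convention: 0 = open, 1 = closed. */
static void ttas_close_gate(atomic_int *gate) {
    for (;;) {
        /* Cheap local test: an ordinary read hits this processor's
           cache, so spinning here causes no cache-line ping-ponging. */
        while (atomic_load_explicit(gate, memory_order_relaxed) != 0)
            ;
        /* Gate looked open: now attempt the real atomic close. */
        if (atomic_exchange(gate, 1) == 0)
            return;
    }
}
```

Only the final exchange requires exclusive (write) ownership of the cache line, so contending processors no longer siphon the block back and forth on every retry.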


