Apparatus and method for selectively allocating cache lines...

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


C711S118000, C711S128000, C711S154000


active

06745292

ABSTRACT:

BACKGROUND OF THE INVENTION
The present invention relates to computer systems, and more specifically to a computer system where multiple processors share a cache memory.
A typical computer system includes at least a processor and a main memory. To perform an instruction, the processor must access the main memory to read one or more words from it, or possibly to write one or more words to it. A word can be the instruction itself, an operand, or a piece of data.
To obtain the fastest memory speed available while still having a large memory size without undue cost, a cache memory is provided between the processor and the main memory. The cache memory is usually faster and smaller than the main memory.
Because the cache memory is smaller than the main memory, it holds a copy of only portions of the main memory. When the processor attempts to access an address in the main memory, a check is made to determine whether that main memory address has been allocated in the cache memory. If so, the desired operation (a read or a write) is performed on the allocated address in the cache memory.
If the main memory address has not been allocated in the cache memory, a procedure is invoked to allocate space in the cache memory for that address.
When accessing a main memory address, the access is a hit if the address has been allocated in the cache memory, and a miss if it has not. The performance of a cache memory can be measured by its hit ratio, the fraction of accesses that are hits.
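The hit/miss check and the hit-ratio measure described above can be sketched as a small simulation. This is a minimal, hypothetical direct-mapped model; the class and method names are illustrative and do not come from the patent:

```python
class SimpleCache:
    """Toy direct-mapped cache that counts hits and misses."""

    def __init__(self, num_lines):
        self.lines = [None] * num_lines    # one stored address (tag) per cache line
        self.hits = 0
        self.misses = 0

    def access(self, address):
        index = address % len(self.lines)  # which line this address maps to
        if self.lines[index] == address:   # address already allocated: a hit
            self.hits += 1
            return True
        self.misses += 1                   # not allocated: a miss...
        self.lines[index] = address        # ...so allocate the line now
        return False

    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```

For example, with four lines, accessing address 10 misses and then hits, but a later access to address 14 maps to the same line (both are 2 mod 4) and replaces it, so re-accessing 10 misses again. Such replacement of another access's entry is exactly the thrashing effect the invention addresses in the multiprocessor case.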
When multiple processors share a single large cache memory, they can all take advantage of the large cache size to increase the hit ratio, and they can effectively share programs and data already fetched by any one of the processors.
One problem with this scheme is that accesses to the single large cache by the multiple processors may “cross-thrash”; that is, an allocation in the cache memory may replace an entry that had been fetched, perhaps recently, by another processor.
Thus, there has been a need for improved cache memory management, and for overcoming the “cross-thrash” problem, in an environment where multiple processors share a single cache. The present invention provides a method and apparatus meeting these two needs.
SUMMARY OF THE INVENTION
In principle, the present invention divides a cache memory shared by multiple processors into a plurality of regions. Each processor is exclusively associated with one or more of the regions. All processors have access to all regions on hits. On misses, however, a processor can cause an allocation only within its associated region or regions, which means a processor can replace only data that it had itself fetched. With this arrangement, the “cross-thrash” problem is eliminated.
In one aspect, the present invention provides a novel method in use with a computer system including a plurality of processors, a main memory and a cache memory. The method comprises the steps of:
(a) dividing said cache memory into a plurality of regions;
(b) associating each of said processors with a respective one of said regions;
(c) generating an access address that contains content desired by one of said processors; and
(d) if said access address has not been allocated in said cache memory, causing an allocation within a respective region associated with said one of said processors.
In another aspect, the present invention provides a novel apparatus for accelerating the access speed of a main memory. The apparatus comprises:
(a) a cache memory including a plurality of regions, said cache memory being shared by a plurality of processors, each of said processors being associated with a respective one of said regions;
(b) means for generating an access address that contains content desired by one of said processors; and
(c) means, if said access address has not been allocated in said cache memory, for causing an allocation within a respective region associated with said one of said processors.
Accordingly, it is an objective of the present invention to provide an improved cache memory management in an environment where multiple processors share a single cache.
It is another objective of the present invention to overcome the “cross-thrash” problem in an environment where multiple processors share a single cache.


