Cache system for concurrent processes

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories


Details

U.S. Classification: 711/153 (C711S153000); 709/215 (C709S215000)
Type: Reexamination Certificate
Status: active
Patent number: 06629208

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates to a cache system for operating between a processor and a main memory of a computer, and is particularly concerned with a processor capable of executing a plurality of concurrent processes.
BACKGROUND TO THE INVENTION
As is well known in the art, cache memories are used in computer systems to decrease the access latency to certain data and code and to decrease the memory bandwidth used for that data and code. A cache memory can delay, aggregate and reorder memory accesses.
A cache memory operates between a processor and a main memory of a computer. Data and/or instructions which are required by the process running on the processor can be held in the cache while that process runs. An access to the cache is normally much quicker than an access to main memory. If the processor does not locate a required data item or instruction in the cache memory, it directly accesses main memory to retrieve it, and the requested data item or instruction is loaded into the cache. There are various known systems for using and refilling cache memories.
In order to rely on a cache in a real time system, the behaviour of the cache needs to be predictable. That is, there needs to be a reasonable degree of certainty that particular data items or instructions which are expected to be found in the cache will in fact be found there. Most existing refill mechanisms will normally attempt to place a requested data item or instruction in the cache. In order to do this, they must delete other data items or instructions from the cache. This can result in items being deleted which were expected to be there for later use. This is particularly the case for a multi-tasking processor, or for a processor which has to handle interrupt processes or other unpredictable processes.
It is an object of the present invention to provide a cache system which provides greater predictability of caching behaviour for a processor executing a plurality of concurrent processes.
In this context, concurrent processes are considered to be processes which are executed by a common processor, but not necessarily simultaneously. That is, a first process may start to run and may be interrupted for some reason. The processor will then start to execute a second process, but is ready to interrupt that when the first process is ready to run again or in response to some other prompt. This is managed by a process handler. It is important that data and/or instructions associated with the first process are not evicted from the cache while the second process is running. Conversely, it is useful to allow the second process to have access to the cache while it is running. Consider for example the situation illustrated in FIG. 8, where two processes, process A and process B, are running concurrently on one CPU. Process A is scheduled first, and while it has the CPU it may completely fill the data cache with its own data, evicting any data which has been placed in the data cache for process B. When control then swaps to process B, it may then reverse the state of the data cache, throwing out all of the data of process A and bringing in its own. This ping-ponging of data cache state is common between concurrent processes and is often detrimental to performance.
SUMMARY OF THE INVENTION
According to one aspect of the present invention there is provided a method of operating a cache memory arranged between a processor and a main memory of a computer, the processor being capable of executing a plurality of processes wherein each process includes a sequence of instructions, the method comprising:
dividing the cache memory into cache partitions, each cache partition having a plurality of addressable storage locations for holding items in the cache memory;
allocating to each process a partition indicator identifying which, if any, of said cache partitions is to be used for holding items for use in the execution of that process; and
when the processor requests an item from main memory during execution of a current process and that item is not held in the cache memory, fetching the item from main memory and loading it into one of the plurality of addressable storage locations in the identified cache partition.
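
The following C sketch illustrates the claimed refill step under assumed parameters; the partition count, line size, and all identifiers (CacheLine, main_memory_read, and so on) are illustrative, not taken from the patent. On a miss, the fetched line is placed only in the partition named by the requesting process's partition indicator.

#include <stdint.h>
#include <string.h>

#define NUM_PARTITIONS 4    /* assumed number of cache partitions  */
#define LINES_PER_PART 64   /* assumed lines per partition         */
#define LINE_BYTES     32   /* assumed bytes per cache line        */

typedef struct {
    uint32_t tag;               /* main-memory address tag          */
    int      valid;
    uint8_t  data[LINE_BYTES];
} CacheLine;

/* The whole cache: NUM_PARTITIONS x LINES_PER_PART storage locations. */
static CacheLine cache[NUM_PARTITIONS][LINES_PER_PART];

/* Stand-in for a real main-memory access (hypothetical stub). */
static void main_memory_read(uint32_t addr, uint8_t out[LINE_BYTES])
{
    (void)addr;
    memset(out, 0, LINE_BYTES);
}

/* On a miss, fetch the item from main memory and load it ONLY into
 * the partition allocated to the current process. */
void refill(uint32_t addr, unsigned partition_indicator)
{
    unsigned line = (addr / LINE_BYTES) % LINES_PER_PART; /* direct map */
    CacheLine *slot = &cache[partition_indicator][line];
    slot->tag = addr / LINE_BYTES;
    main_memory_read(addr & ~(uint32_t)(LINE_BYTES - 1u), slot->data);
    slot->valid = 1;
}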
By allocating a partition indicator to each process, processes running concurrently on the processor are prevented from evicting each other's data and/or instructions from the cache memory. That is, the cache partition allocated, for example, to a first process running on the processor cannot be overwritten by a subsequent, second process. Instead, the second process will have its own cache partition allocated to it. It is of course preferable that the allocation of partition indicators to processes can be altered, so that once the first process has completely finished, the cache partition which was allocated to it can then be allocated to another process.
Depending on the needs of the process, it is possible to allocate more than one cache partition to a process or to deny a process access to the cache at all.
In the described embodiment, the partition indicator for a current process which is being executed is held in a process status store which also holds status information about the process. This is referred to herein as the thread status word register. When a new process is to be executed by the processor, a new thread status word is loaded into the store with a new partition indicator allocated to that process.
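
A minimal sketch of how a partition indicator might travel inside a thread status word follows. The actual register layout is not specified in the text, so the field position and width here are assumptions.

#include <stdint.h>

#define TSW_PARTITION_SHIFT 16     /* assumed bit position          */
#define TSW_PARTITION_MASK  0x3u   /* assumed 2 bits: 4 partitions  */

typedef uint32_t ThreadStatusWord;

/* Hypothetical per-CPU register holding the current process's
 * thread status word, including its partition indicator. */
static ThreadStatusWord tsw_register;

static inline unsigned current_partition(void)
{
    return (tsw_register >> TSW_PARTITION_SHIFT) & TSW_PARTITION_MASK;
}

/* On a process switch, the handler loads the new thread status word,
 * and with it the partition indicator allocated to that process. */
void dispatch(ThreadStatusWord new_tsw)
{
    tsw_register = new_tsw;
}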
The partition indicator can be included in a group identifier for the process, the group identifier identifying an address space for the process. In a virtual addressing system, the processor issues addresses comprising a virtual page number and a line-in page number and a translation look-aside buffer is provided for translating the virtual page number to a real page number for accessing the main memory. The translation look-aside buffer can thus also receive the group identifier and derive therefrom the partition indicator for the current process depending on the virtual address space which has been allocated to the process.
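
A sketch of that look-aside step, under the same caveat that the entry layout, sizes, and the tlb_translate name are assumptions: a single TLB hit yields both the real page number and the partition indicator derived from the group identifier.

#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint32_t virtual_page;
    uint32_t real_page;
    uint8_t  group_id;    /* identifies the process's address space */
    uint8_t  partition;   /* partition indicator for that group     */
    int      valid;
} TlbEntry;

#define TLB_ENTRIES 16    /* assumed TLB size */
static TlbEntry tlb[TLB_ENTRIES];

/* Translate a virtual page number; on a hit, also report the
 * partition indicator so the refill engine knows where a missed
 * item may be loaded. Returns 1 on a hit, 0 on a miss. */
int tlb_translate(uint32_t vpage, uint8_t group_id,
                  uint32_t *rpage, uint8_t *partition)
{
    for (size_t i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid &&
            tlb[i].virtual_page == vpage &&
            tlb[i].group_id == group_id) {
            *rpage = tlb[i].real_page;
            *partition = tlb[i].partition;
            return 1;  /* hit */
        }
    }
    return 0;          /* miss: walk the page tables (not shown) */
}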
The line-in page number of the item addressed can be used to identify the addressable storage location within the cache partition into which the item is to be loaded. That is, each cache partition is direct-mapped. It will be apparent that it is not necessary to use all of the low-order bits of the item's address as the line-in page number, but merely a set of appropriate bits. These will normally be near the least significant end of the address.
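
For illustration, a direct-mapped index computed from bits near the least significant end of the address, with assumed line and partition sizes:

#include <stdint.h>

#define LINE_BYTES     32   /* assumed bytes per cache line   */
#define LINES_PER_PART 64   /* assumed lines in one partition */

/* Select the storage location within a partition: drop the
 * byte-offset bits, then keep just enough low-order bits of the
 * line-in page number to index one partition. */
unsigned line_index(uint32_t address)
{
    return (address / LINE_BYTES) % LINES_PER_PART;
}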
One or more cache partitions may be allocated to a process.
The system can include a cache access circuit which accesses items from the cache memory according to the address in main memory of said items and regardless of the cache partition in which the item is held in the cache memory. That is, the partition indicator is only used on refill and not on look-up. Thus, a cached item can be retrieved from its partition even if, subsequent to its caching, that partition has been allocated to a process associated with a different address space.
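
The asymmetry described here, partition-blind look-up versus partition-constrained refill, might look as follows; the structures and sizes mirror the earlier sketches and remain assumptions rather than the patent's implementation.

#include <stdint.h>
#include <stddef.h>

#define NUM_PARTITIONS 4
#define LINES_PER_PART 64
#define LINE_BYTES     32

typedef struct {
    uint32_t tag;
    int      valid;
    uint8_t  data[LINE_BYTES];
} CacheLine;

static CacheLine cache[NUM_PARTITIONS][LINES_PER_PART];

/* Look-up: search ALL partitions for the main-memory address. The
 * partition indicator plays no part here, so a line cached under a
 * partition later reassigned to another process can still be found. */
CacheLine *lookup(uint32_t addr)
{
    uint32_t tag  = addr / LINE_BYTES;
    unsigned line = tag % LINES_PER_PART;
    for (unsigned p = 0; p < NUM_PARTITIONS; p++) {
        if (cache[p][line].valid && cache[p][line].tag == tag)
            return &cache[p][line];
    }
    return NULL;  /* miss: refill into the current partition only */
}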
According to another aspect of the invention there is provided a computer system comprising:
a processor for executing a plurality of processes wherein each process includes a sequence of instructions, the processor including a process status store which holds a partition indicator for a current process which is currently being executed;
a main memory;
a cache memory having a set of cache partitions, each cache partition comprising a plurality of addressable storage locations for holding items fetched from said main memory for use by the processor in execution of its processes; and
a cache refill mechanism arranged to fetch an item from the main memory and to load said item into the cache memory at one of said addressable storage locations, wherein the cache refill mechanism selects said one of said addressable storage locations for loading said item in dependence on the partition indicator held in the process status store in association with the current process.
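
Putting the pieces together, a self-contained toy demonstration (assumed sizes, two partitions) of the property the claims are after: process B's refills cannot evict process A's lines, avoiding the ping-ponging of FIG. 8.

#include <stdint.h>
#include <stdio.h>

#define NUM_PARTITIONS 2
#define LINES_PER_PART 4
#define LINE_BYTES     32

typedef struct { uint32_t tag; int valid; } CacheLine;
static CacheLine cache[NUM_PARTITIONS][LINES_PER_PART];

/* Refill only into the partition allocated to the process. */
static void refill(uint32_t addr, unsigned partition)
{
    uint32_t tag = addr / LINE_BYTES;
    cache[partition][tag % LINES_PER_PART] =
        (CacheLine){ .tag = tag, .valid = 1 };
}

int main(void)
{
    uint32_t a;
    /* Process A (partition 0) fills its partition... */
    for (a = 0; a < 4 * LINE_BYTES; a += LINE_BYTES)
        refill(a, 0);
    /* ...then process B (partition 1) streams through memory... */
    for (a = 0; a < 16 * LINE_BYTES; a += LINE_BYTES)
        refill(a, 1);
    /* ...and all of A's lines are still resident. */
    for (unsigned i = 0; i < LINES_PER_PART; i++)
        printf("A line %u valid: %d\n", i, cache[0][i].valid);
    return 0;
}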
Each process can in
