Access control of a resource shared between components

Data processing: database and file management or data structures – Database design – Data structure types

Reexamination Certificate


Details

Classification: C711S136000

Type: Reexamination Certificate (active)

Patent number: 06662173


BACKGROUND OF THE INVENTION
The present invention pertains to the control of access to a resource by two or more components. More particularly, the present invention pertains to selectively partitioning a resource (such as a cache resource) between two or more components that share the resource.
In a computer system or the like, main memory is provided (e.g., Dynamic Random Access Memory) for the storage of command information to be executed by a processor. The main memory can also store other forms of information besides command information including address information and data information that is manipulated through the execution of command information by the processor. Write and read operations to/from the main memory by the processor or any other device coupled to the main memory tend to be slow and time consuming. Accordingly, it is known in the art to provide an additional memory resource, such as a cache resource, coupled between the processor, for example, and the main memory. The cache resource stores information (e.g., command, address, and/or data information) that should be a copy of what is stored in the main memory or a more updated version of information stored in the main memory. A design feature of the cache resource is that it is typically faster to read/write information from/to the cache resource as compared to the main memory. For example, the cache memory can be Static Random Access Memory, which tends to be more expensive than DRAM but provides faster read/write transactions.
As stated above, the cache memory stores information that should be a copy of the same information as stored in the main memory or a more updated version. For example, the cache memory stores blocks (or “ways”) of information that have addresses equivalent to addresses for the same information stored in main memory. A processor that seeks to perform a read or write operation from/to the main memory will provide an address to the cache memory, which includes control circuitry for determining if the addressed block resides in the cache memory (and in the main memory) or in the main memory alone. If the addressed block is in the cache memory (sometimes referred to as a “cache hit”), then the read or write operation continues with the block of information in the cache memory. If the addressed block is not in the cache memory (sometimes referred to as a “cache miss”), then the addressed block is retrieved from main memory and placed into the cache memory so that the read or write operation can continue.
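The hit/miss flow just described can be sketched as follows. This is an illustrative model only; the class and its names are hypothetical and not taken from the patent.

```python
class SimpleCache:
    """Toy model of the cache lookup described above: a lookup either
    hits (the addressed block is present) or misses (the block is
    fetched from main memory and placed into the cache)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = {}  # address -> block data

    def read(self, address, main_memory):
        if address in self.blocks:                        # cache hit
            return self.blocks[address], "hit"
        # Cache miss: retrieve the block from main memory into the cache.
        block = main_memory[address]
        if len(self.blocks) >= self.capacity:
            # Make room by evicting some block (policy discussed below).
            self.blocks.pop(next(iter(self.blocks)))
        self.blocks[address] = block
        return block, "miss"

main_memory = {0x10: "A", 0x20: "B"}
cache = SimpleCache(capacity=2)
print(cache.read(0x10, main_memory))  # → ('A', 'miss')
print(cache.read(0x10, main_memory))  # → ('A', 'hit')
```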
When it becomes necessary to retrieve information from the main memory for the cache memory, it is usually necessary to “evict” an addressed block of information from the cache memory to make room. In doing so, one or more addressed blocks are erased (or overwritten by the new information from main memory). If the information from the cache is more up-to-date than the same addressed block in main memory, then during the eviction process, the addressed block is written to the main memory before being erased.
Several algorithms exist to determine which addressable block in the cache memory should be evicted when an addressable block of information needs to be written to the cache memory from the main memory. The Least Recently Used (LRU) algorithm is a common one that attempts to evict the stalest addressable block in the cache memory (i.e., the block that has not been read from or written to for the longest amount of time).
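LRU eviction with write-back of modified blocks, as described in the two paragraphs above, can be sketched like this. The class and its methods are a hypothetical illustration under simplifying assumptions (one block per address, whole-block writes), not the patent's implementation.

```python
from collections import OrderedDict

class LRUCache:
    """The stalest block is evicted first; dirty (modified) blocks are
    written back to main memory before being erased."""

    def __init__(self, capacity, main_memory):
        self.capacity = capacity
        self.main_memory = main_memory  # address -> data
        self.blocks = OrderedDict()     # address -> (data, dirty), LRU first

    def _evict_if_full(self):
        if len(self.blocks) >= self.capacity:
            address, (data, dirty) = self.blocks.popitem(last=False)  # stalest
            if dirty:  # more up to date than main memory: write back first
                self.main_memory[address] = data

    def read(self, address):
        if address in self.blocks:                 # cache hit
            self.blocks.move_to_end(address)       # now most recently used
        else:                                      # cache miss: fetch block
            self._evict_if_full()
            self.blocks[address] = (self.main_memory[address], False)
        return self.blocks[address][0]

    def write(self, address, data):
        if address not in self.blocks:
            self._evict_if_full()
        self.blocks[address] = (data, True)        # dirty until written back
        self.blocks.move_to_end(address)

mem = {1: "a", 2: "b", 3: "c", 4: "d"}
cache = LRUCache(capacity=2, main_memory=mem)
cache.read(1)
cache.read(2)
cache.write(1, "A")   # block 1 is now dirty and most recently used
cache.read(3)         # evicts block 2, the stalest; it is clean, no write-back
cache.read(4)         # evicts dirty block 1; "A" is written back to memory
```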
A problem can arise when the cache memory is shared by two or more components utilizing the cache memory. It is possible that one component can so dominate the cache memory resource that addressable blocks used by the other component will be evicted. Thus, read and/or write operations to the cache memory by the other component will often result in a cache miss. Cache misses lower the performance benefits of the cache because two operations may need to be performed. First, an eviction process may take place, where data in the cache is written back to the main memory. Second, a read operation from the main memory takes place for the addressed block of information. These two steps will typically take longer than a simple read/write operation at the cache memory. As used herein, a “component” is defined as any device or functional mechanism that uses the cache. For example, a component can include two or more threads executed by a processor, where a thread is a series of instructions whose execution achieves a given task (e.g., a subroutine). Components can also include data and instruction operations with the cache memory, the execution of specific types of instructions (e.g., a pre-fetch instruction), and speculative and non-speculative operations to the cache memory.
In view of the above, there is a need for an improved method and apparatus for controlling access to a resource by two or more components.
SUMMARY OF THE INVENTION
According to an embodiment of the present invention, an apparatus for sharing a resource between at least two components is provided. A resource having a plurality of elements is coupled to an access controller. First and second components are coupled to the access controller and adapted to access the elements of the resource. The access controller is adapted to control which of the components are able to access which elements of the resource.
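One way to picture this apparatus, purely as an illustrative sketch (the `AccessController` class and its methods are hypothetical, not from the patent): the controller records which elements of the resource (e.g., which cache ways) each component is permitted to access, so that one component cannot evict blocks belonging to another.

```python
class AccessController:
    """Hypothetical sketch: maps each component to the subset of resource
    elements (e.g., cache ways) it may access."""

    def __init__(self, num_elements):
        self.num_elements = num_elements
        self.allowed = {}  # component id -> set of permitted element indices

    def partition(self, component, elements):
        # Assign a component a subset of the resource's elements.
        assert all(0 <= e < self.num_elements for e in elements)
        self.allowed[component] = set(elements)

    def may_access(self, component, element):
        return element in self.allowed.get(component, set())

ctrl = AccessController(num_elements=4)
ctrl.partition("thread_0", [0, 1])  # first component: ways 0-1 only
ctrl.partition("thread_1", [2, 3])  # second component: ways 2-3 only
```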


REFERENCES:
patent: 5829051 (1998-10-01), Steely, Jr. et al.
patent: 5832534 (1998-11-01), Singh et al.
patent: 5845331 (1998-12-01), Carter et al.
patent: 5903908 (1999-05-01), Singh et al.
patent: 6105111 (2000-08-01), Hammarlund et al.
Peter Song, "Multithreading Comes of Age: Multithread Processors Can Boost Throughput on Servers, Media Processors", dated Jul. 14, 1997, pp. 13-18.
Dean M. Tullsen, Susan J. Eggers, Joel S. Emer, Henry M. Levy, Jack L. Lo, and Rebecca L. Stamm, Proceedings of the 23rd Annual International Symposium on Computer Architecture, "Exploiting Choice: Instruction Fetch and Issue on an Implementable Simultaneous Multithreading Processor", dated May 22-24, 1996, pp. 191-202.
Richard J. Eickemeyer, Ross E. Johnson, Steven R. Kunkel, Mark S. Squillante, and Shiafun Liu, Proceedings of the 23rd Annual International Symposium on Computer Architecture, "Evaluation of Multithreaded Uniprocessors for Commercial Application Environments", dated May 22-24, 1996, pp. 203-212.
Dennis Lee, Jean-Loup Baer, Brad Calder, and Dirk Grunwald, "Instruction Cache Fetch Policies for Speculative Execution", pp. 1-11.
Edited by Robert A. Iannucci, Guang R. Gao, Robert Halstead, Jr., and Burton Smith, Multithreaded Computer Architecture: A Summary of the State of the Art; James Laudon, Anoop Gupta, and Mark Horowitz, "Architectural and Implementation Tradeoffs in the Design of Multiple-Context Processors", pp. 166-200.
Simon W. Moore, “Multithreaded Processor Design”, copyright 1996, pp. 1-141.
