Method and apparatus for software management of on-chip cache

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Details

C711S217000, C711S214000, C711S220000, C711S170000, C712S022000

Reexamination Certificate

active

06681296

ABSTRACT:

FIELD OF THE INVENTION
This invention relates to microprocessors, and, more particularly, to a method and apparatus which improves the operational efficiency of microprocessors having on-chip cache by enabling software management of at least a portion of the cache.
BACKGROUND OF THE INVENTION
The electronic industry is in a state of evolution spurred by the seemingly unquenchable desire of the consumer for better, faster, smaller, cheaper and more functional electronic devices. In its attempt to satisfy these demands, the electronics industry must constantly strive to increase the speed at which functions are performed by microprocessors. Videogame consoles are one primary example of an electronic device that constantly demands greater speed and reduced cost. These consoles must be high in performance and low in cost to satisfy the ever-increasing demands associated therewith. The instant invention is directed to increasing the speed at which microprocessors can process information by improving the efficiency with which data and/or instructions can be loaded for processing.
A cache is a high-speed memory that is provided on the microprocessor chip for the purpose of reducing the number of times that data required for executing commands must be retrieved from main memory. Cache devices provide a close and convenient place for storing data and/or instructions to be used by the control unit of the microprocessor in a fast and efficient manner. Today, all high-performance microprocessors incorporate at least one on-chip level one (L1) cache for storing previously used data and/or instructions.
Main memory is external to the microprocessor, and access thereto is provided through a bus which connects the microprocessor to the main memory. The bus connecting the microprocessor and the main memory is controlled by a Bus Interface Unit (BIU). Because main memory accesses must go through the BIU and the bus to obtain the requested data from the off-chip memory, accessing this memory is relatively inconvenient and slow as compared to accessing the on-chip cache.
With today's technology, accessing the off-chip main memory can take anywhere from ten to hundreds of CPU clock cycles (the time unit by which the microprocessor or central processing unit (CPU) operates). In contrast, accessing on-chip memory, such as a memory designed to operate as an on-chip cache, can take as few as one or two CPU clock cycles. Thus, data can be retrieved from a cache at least about ten times faster than the same data could be retrieved from main memory. As a result, effective use of the cache can be a critical factor in obtaining optimal performance for applications running on a microprocessor. The time difference between loading desired code or data from an on-chip cache and loading it from main memory is so great (an order of magnitude or more) that effective cache management can be a dominant factor in determining the speed of an application executed by the microprocessor, or even the speed of the entire system built around the microprocessor.
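To make the magnitude of this difference concrete, consider the following toy C program (not part of the patent; the buffer size, stride, and timing method are assumptions chosen purely for illustration). It touches every array element exactly once in two different orders; on typical hardware the cache-hostile, strided order is several times slower than the sequential order, purely because of cache behavior.

/* Illustrative sketch only: sequential (cache-friendly) versus large-stride
 * (cache-hostile) traversal of the same array. Sizes and the timing method
 * are assumptions for demonstration purposes. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N ((size_t)1 << 24)   /* 16M ints: far larger than a typical on-chip cache */
#define STRIDE 4096           /* jump far enough that most accesses miss the cache */

static long walk(const int *a, size_t step)
{
    long sum = 0;
    /* Every element is visited exactly once regardless of the step size. */
    for (size_t start = 0; start < step; start++)
        for (size_t i = start; i < N; i += step)
            sum += a[i];
    return sum;
}

int main(void)
{
    int *a = malloc(N * sizeof *a);
    if (!a) return 1;
    for (size_t i = 0; i < N; i++) a[i] = (int)i;

    clock_t t0 = clock();
    long s1 = walk(a, 1);       /* consecutive accesses reuse the same cache block */
    clock_t t1 = clock();
    long s2 = walk(a, STRIDE);  /* most accesses must go out to main memory */
    clock_t t2 = clock();

    printf("sequential: %.3f s   strided: %.3f s   (checksums %ld %ld)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, s1, s2);
    free(a);
    return 0;
}
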
Generally speaking, a cache operates by storing, in the on-chip cache, data and/or instructions that have previously been requested by the control unit and retrieved from main memory, for possible use again by the control unit at a later time. If a second request is made by the control unit for that same data, the data can be quickly retrieved from the cache rather than having to be retrieved again from the off-chip main memory. In this manner, the speed of the application can be increased by minimizing the need to access the relatively slow main memory.
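By way of illustration only, the hit-and-reuse behavior just described can be modeled in a few lines of C. The direct-mapped organization, the block and memory sizes, and the names below are assumptions made for this sketch, not details taken from the patent.

/* Minimal software model of cache reuse: a repeated request for a block that
 * is already resident is a "hit" and is served without touching the
 * (simulated) main memory. All sizes and names are illustrative. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE 32u                 /* bytes per cache block (illustrative) */
#define NUM_LINES  64u                 /* number of cache lines (illustrative) */
#define MEM_SIZE   (1u << 16)          /* toy "off-chip" main memory size      */

static uint8_t main_memory[MEM_SIZE];  /* stands in for the slow off-chip memory */

struct cache_line {
    bool     valid;
    uint32_t tag;
    uint8_t  data[BLOCK_SIZE];
};

static struct cache_line cache[NUM_LINES];

/* Read one byte at 'addr' (assumed < MEM_SIZE). Returns true on a cache hit;
 * on a miss the whole containing block is fetched and installed for reuse. */
bool cache_read(uint32_t addr, uint8_t *out)
{
    uint32_t block  = addr / BLOCK_SIZE;   /* which block of memory              */
    uint32_t index  = block % NUM_LINES;   /* which cache line it maps to        */
    uint32_t tag    = block / NUM_LINES;   /* distinguishes blocks sharing a line */
    uint32_t offset = addr % BLOCK_SIZE;

    struct cache_line *line = &cache[index];
    bool hit = line->valid && line->tag == tag;
    if (!hit) {                            /* miss: take the slow path           */
        memcpy(line->data, &main_memory[block * BLOCK_SIZE], BLOCK_SIZE);
        line->valid = true;
        line->tag   = tag;
    }
    *out = line->data[offset];             /* a second request for the same block */
    return hit;                            /* now hits in the cache               */
}
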
One limitation, however, regarding the use of cache is that size and cost factors limit the cache to a size that is significantly smaller than the size of the main memory. As a result, the cache quickly becomes full with data that has been retrieved from main memory or elsewhere, thereby preventing additional data required by the control unit from being stored in the cache. Typically, a microprocessor, such as the microprocessors in IBM's PowerPC (IBM Trademark) family of microprocessors (hereafter “PowerPC”), includes a 32 kilobyte (32K) on-chip level one (L1) instruction (I) cache and a 32K L1 data (D) cache (Harvard Architecture), as well as a level two (L2) cache providing additional on-chip cache functionality. For more information on the PowerPC microprocessors, see PowerPC 740 and PowerPC 750 RISC Microprocessor Family User Manual, IBM 1998, and PowerPC Microprocessor Family: The Programming Environments, Motorola Inc. 1994, both of which are hereby incorporated by reference in their entirety.
In view of the size limitation on caches, the microprocessor includes hardware that manages the cache in accordance with an algorithm that attempts to predict which data read from main memory is likely to be needed again in the near future by the processing unit. In other words, the cache control hardware is designed according to an algorithm that tries to predict in advance which data from main memory to maintain in the limited amount of storage space available in the cache for later use by the processing unit. Thus, every microprocessor having such a cache incorporates some type of hardware-implemented algorithm for managing the contents of the cache. An example of such an automatic replacement algorithm used in the PowerPC is a pseudo least-recently-used (PLRU) replacement algorithm.
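The text identifies the policy but does not spell out its mechanics here, so the following is only a hedged sketch of one common tree-PLRU arrangement for a single 4-way set; the bit conventions of any particular PowerPC implementation may differ. It shows how three bits can approximate a least-recently-used ordering of four ways without tracking the full ordering.

/* Hedged sketch of a tree-PLRU (pseudo least-recently-used) policy for one
 * 4-way set, in the spirit of the scheme attributed to the PowerPC's cache
 * hardware. Real chips may use different conventions; this only shows how
 * "pseudo" LRU approximates LRU with 3 bits per set. */
#include <stdbool.h>

struct plru4 {
    bool b0;   /* root: false => next victim in ways {0,1}, true => {2,3} */
    bool b1;   /* left node: false => victim is way 0, true => way 1      */
    bool b2;   /* right node: false => victim is way 2, true => way 3     */
};

/* Record an access (hit or fill) to way w: flip the path bits away from w
 * so that w becomes the last candidate for replacement. */
void plru4_touch(struct plru4 *s, int w)
{
    if (w < 2) {                 /* accessed the left pair {0,1}        */
        s->b0 = true;            /* point the root at the right pair    */
        s->b1 = (w == 0);        /* point the left node at the sibling  */
    } else {                     /* accessed the right pair {2,3}       */
        s->b0 = false;           /* point the root at the left pair     */
        s->b2 = (w == 2);        /* point the right node at the sibling */
    }
}

/* Choose a way to replace by following the bits from the root down. */
int plru4_victim(const struct plru4 *s)
{
    if (!s->b0)
        return s->b1 ? 1 : 0;
    else
        return s->b2 ? 3 : 2;
}
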
The automatic replacement algorithm used in a particular microprocessor to manage the contents of the cache, however, is not necessarily optimal or even effective for certain applications run by the microprocessor. In other words, the algorithm implemented by cache control hardware does not always result in efficient cache management for all applications designed for the microprocessor. For example, in certain applications the hardware may drop data from the cache right before it is needed a second time, thereby requiring the processor to obtain the dropped data from main memory, even though the desired data was in the cache moments earlier. This problem results from the fact that one cannot predict in advance the needs of every application that may be implemented using the microprocessor. As a result, some applications will not be able to use the cache in an efficient manner, thereby preventing such applications from running as fast as they otherwise could with efficient cache management. In fact, for some applications, the automatic replacement algorithms perform poorly, thereby preventing the desired low-latency memory accesses for which the cache is designed.
One strategy that has been used in the past in connection with caches to improve application performance is to provide, in the instruction set of the microprocessor, a mechanism that enables software-assisted cache management. Most modern microprocessors provide instructions in the instruction set which enable software to assist the cache management hardware, to some degree, in managing the cache. For example, the PowerPC architecture contains several user-accessible instructions in the instruction set for manipulating the data cache that can significantly improve overall application performance. These instructions are: “block touch” (dcbt); “block touch for store” (dcbtst); “block flush” (dcbf); “block store” (dcbst); and “block set to zero” (dcbz). See Zen and the Art of Cache Maintenance, Byte Magazine, March 1997.
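As a rough illustration of how such an instruction might be used (assuming a PowerPC target compiled with GCC-style inline assembly; the function, loop, and prefetch distance are invented for this example), software can “touch” a block it will need shortly so that, if the hardware honors the hint, the block is already in the data cache when it is actually read:

/* Hedged sketch of issuing the "block touch" (dcbt) hint ahead of use.
 * Assumes a PowerPC target with GCC-style inline assembly; names, loop,
 * and prefetch distance are illustrative assumptions only. */
#include <stddef.h>

#define CACHE_BLOCK 32        /* bytes per cache block on a 603/604-class part */
#define PREFETCH_AHEAD 8      /* how many blocks ahead to touch (tuning knob)  */

void scale_array(float *a, size_t n, float k)
{
    for (size_t i = 0; i < n; i++) {
        /* Touch a block we will need a little later; dcbt (and the portable
         * __builtin_prefetch fallback) are hints and never fault, even if
         * the touched address runs past the end of the array. */
        const char *ahead = (const char *)&a[i] + PREFETCH_AHEAD * CACHE_BLOCK;
#if defined(__powerpc__) || defined(__PPC__)
        __asm__ volatile ("dcbt 0,%0" : : "r" (ahead));
#else
        __builtin_prefetch(ahead, 0, 1);   /* portable GCC fallback */
#endif
        a[i] *= k;            /* the actual work */
    }
}
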
In order to understand the operation of these or similar instructions, it is important to define what a “block” is in this context. A block is the fundamental unit of memory on which the cache operates; the cache handles all memory load and store operations in units of blocks. The particular block size can vary from one microprocessor to another. For example, the PowerPC 601 uses 64-byte blocks, while the PowerPC 603 and 604 use 32-byte blocks.
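Because block sizes are powers of two, the block containing a given byte address and the offset within that block follow from simple mask arithmetic, as the short illustrative example below shows for a 32-byte block (the address value is arbitrary):

/* Illustration of block granularity: for a power-of-two block size, the
 * block base and the offset within the block come from simple masking.
 * The 32-byte size matches the 603/604 figure quoted above; 64 would
 * apply to the 601. */
#include <stdint.h>
#include <stdio.h>

#define BLOCK_SIZE 32u   /* must be a power of two */

int main(void)
{
    uint32_t addr = 0x0001234Bu;                     /* arbitrary example address */

    uint32_t block_base = addr & ~(BLOCK_SIZE - 1);  /* first byte of the block   */
    uint32_t offset     = addr &  (BLOCK_SIZE - 1);  /* position inside the block */

    printf("address 0x%08X -> block at 0x%08X, offset %u\n",
           addr, block_base, offset);
    return 0;
}
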
Each of the above-identified instructions operates on a pair of general purpose register (GPR) operands whose sum forms the effective address of the memory location on which the instruction operates.
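To illustrate this register-plus-register addressing form (again only a sketch, assuming a PowerPC target with GCC-style inline assembly and invented names), “block set to zero” can be applied across a block-aligned buffer by keeping the base address in one register and a running byte offset in another; the hardware adds the two to form the effective address of each block:

/* Hedged sketch of the two-GPR effective-address form: dcbz RA,RB zeroes
 * the cache block containing RA + RB. Assumes a PowerPC target with
 * GCC-style inline assembly and a buffer that is block-aligned and a whole
 * number of blocks long; otherwise dcbz would zero bytes outside the
 * intended range. */
#include <stddef.h>
#include <string.h>

#define CACHE_BLOCK 32u   /* block size assumed for a 603/604-class part */

void zero_blocks(void *buf, size_t nbytes)
{
#if defined(__powerpc__) || defined(__PPC__)
    for (size_t off = 0; off < nbytes; off += CACHE_BLOCK) {
        /* EA = GPR(buf) + GPR(off); the block containing EA is set to zero
         * in the cache without first being read from main memory. */
        __asm__ volatile ("dcbz %0,%1" : : "b" (buf), "r" (off) : "memory");
    }
#else
    memset(buf, 0, nbytes);   /* portable fallback for non-PowerPC builds */
#endif
}
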
