Optimization of cache evictions through software hints

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


Details

US classification: 711/119, 711/144, 711/145, 711/159, 712/43, 712/220, 712/227, 712/229

Type: Reexamination Certificate

Status: active

Patent number: 06766419

ABSTRACT:

BACKGROUND
The present invention relates to a method and apparatus for providing software “hints” for the purposes of cache management in an integrated circuit.
As processor cycle times continue to decrease, management of memory latency and bus bandwidth becomes an increasingly important capability in computer systems. Contemporary processor designs often address these issues by providing on-chip or on-package cache memory. Some processors even provide multi-level cache hierarchies in which the lowest level cache, the cache closest to the processor core, is very fast but very small when measured against higher level caches. To improve hit rates and to avoid cache pollution, many processors also provide separate instruction and data caches, which enable higher hit rates due to the increased code or data locality found in typical applications' code and data streams. Unfortunately, the working set sizes of large applications continue to exceed the size of many current processor caches, even those with large multi-level cache systems.
Several processor designs have added instruction set extensions that provide prefetch instructions and/or prefetch hints for both instructions and data. These prefetch instructions and hints allow software to initiate memory accesses early by identifying addresses of data that are likely to be needed by the software in future execution. Current prefetch instructions are either specified explicitly as data or code prefetch instructions or are specified indirectly on branch or load instructions. Currently existing prefetch hints and instructions enable software to specify the level of the memory hierarchy at which a specific data item should be allocated. This allows software that knows the relative sizes of an application's working set and the processor's caches to allocate the working set at the appropriate cache level.
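By way of illustration only (this example is not from the patent), GCC's __builtin_prefetch builtin exposes this class of hint in portable C: the compiler lowers it to the target's prefetch instruction (for example, PREFETCHT0/T1/T2/NTA on x86), and its temporal-locality argument steers the cache level at which the line is allocated. The function name and the look-ahead distance below are illustrative assumptions, not anything specified by the patent.

#include <stddef.h>

/* A minimal sketch of software-directed prefetching. The third
 * argument to __builtin_prefetch is a temporal-locality hint:
 * 3 = high locality (keep in all cache levels), 0 = no temporal
 * locality (a candidate for early eviction). */
static long sum_with_prefetch(const long *data, size_t n)
{
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        /* Hint: fetch the element ~16 iterations ahead, read-only
         * (rw = 0), with high temporal locality (locality = 3). */
        if (i + 16 < n)
            __builtin_prefetch(&data[i + 16], 0, 3);
        sum += data[i];
    }
    return sum;
}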
Current prefetch hints and instructions focus on allocation of cache lines from memory into the processor's memory hierarchy. However, they rely on the processor's built-in cache replacement policies to evict (and possibly write back) cache lines from the processor to memory. These hardwired replacement policies typically evict data on a least-recently-used, not-recently-used, or random basis. Yet in many situations a software designer may know beforehand that a referenced data item is unlikely to be reused in the near future, if ever. Consider a video decoding application by way of example. When a frame of data is decoded, the decoded image data typically is output from the processor for display. Once the decoded image data is output, it may have no further use within the processor's cache. Thus, any cache space occupied by such data could be evicted without a performance loss within the processor. Currently, however, processor instruction sets do not provide a mechanism that permits software designers to identify data that is not likely to be used again during program execution.
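For a concrete (and later-standardized) instance of the kind of mechanism the background describes as missing, x86's SSE2 extension provides the CLFLUSH instruction, exposed in C through the _mm_clflush intrinsic, which lets software explicitly evict, and write back if dirty, a cache line it knows is dead. The sketch below is an illustration rather than the patent's claimed mechanism; the helper name and the 64-byte line size are assumptions.

#include <emmintrin.h>   /* SSE2 intrinsics: _mm_clflush, _mm_mfence */
#include <stddef.h>
#include <stdint.h>

#define CACHE_LINE 64    /* assumed line size; real code should query CPUID */

/* Once a decoded frame has been handed off for display, its bytes are
 * dead to this process; flushing its lines frees cache space that a
 * hardware LRU policy would otherwise reclaim only gradually. */
static void evict_frame(const void *frame, size_t bytes)
{
    const uint8_t *p = (const uint8_t *)frame;
    for (size_t off = 0; off < bytes; off += CACHE_LINE)
        _mm_clflush(p + off);   /* evict (and write back) one line */
    _mm_mfence();               /* order the flushes against later accesses */
}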
Accordingly, there is a need in the art for a processor instruction set that permits a software designer to identify data that is not likely to be used again during future program execution.


REFERENCES:
patent: 5471602 (1995-11-01), DeLano
patent: 5561780 (1996-10-01), Glew et al.
patent: 5630075 (1997-05-01), Joshi et al.
patent: 5778430 (1998-07-01), Ish et al.
patent: 5781733 (1998-07-01), Stiles
patent: 5895488 (1999-04-01), Loechel
patent: 5996049 (1999-11-01), Arimilli et al.
patent: 6212605 (2001-04-01), Arimilli et al.
patent: 6243791 (2001-06-01), Vondran, Jr.
patent: 6272598 (2001-08-01), Arlitt et al.
patent: 6345340 (2002-02-01), Arimilli et al.
patent: 6370622 (2002-04-01), Chiou et al.
patent: 6397298 (2002-05-01), Arimilli et al.
patent: 2001/0049771 (2001-12-01), Tischler et al.
Gwennap, "Microprocessor Report: Intel, HP Make EPIC Disclosure," vol. 11, No. 14, pp. 1-5, Oct. 1997.*
Yung, "Design Decisions Influencing the UltraSPARC's Instruction Fetch Architecture," pp. 178-190, IEEE, 1996.*
Dulong, "The IA-64 Architecture at Work," pp. 24-32, IEEE, 1998.*
Chan et al., "Design of the HP PA 7200 CPU," pp. 1-11, Hewlett-Packard Journal, Feb. 1996.*
Thakkar et al., "The Internet Streaming SIMD Extensions," pp. 1-8, Intel Technology Journal Q2, 1999.*
IBM Technical Disclosure Bulletin, "Conditional Least-Recently-Used Data Cache Design to Support Multimedia Applications," vol. 37, No. 2B, pp. 387-390, Feb. 1, 1994.*
Handy, "The Cache Memory Book, Second Edition," pp. 156-158, 1998.*
PowerPC™ 601, RISC Microprocessor User's Manual, Motorola, Section 4: Chapter 4, “Cache and Memory Unit Operation”, Table of Contents, pp. 4-17 through 4-21, 1993.
PA-RISC 1.1 Architecture and Instruction Set Reference Manual, Hewlett-Packard, HP No. 09740-90039, Third Edition, Table of Contents, pp. 5-171, 5-172, and 5-152, Feb. 1994.
Intel® IA-64 Architecture Software Developer's Manual, vol. 3: Instruction Set Reference, Order No. 245319-001, Table of Contents, pp. 2-47 and 2-220, Jan. 2000.
