Optimized hardware cleaning function for VIVT data cache

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


Details

U.S. classifications: C710S260000, C365S236000


Status: active

Patent number: 06606687

ABSTRACT:

STATEMENT OF FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
Not Applicable
BACKGROUND OF THE INVENTION
1. Technical Field
This invention relates in general to computer systems and, more particularly, to a method and apparatus for a hardware cleaning function for virtual index, virtual tag (VIVT) cache memory systems.
2. Description of the Related Art
Many multi-tasking computer systems make use of a virtual index, virtual tag (VIVT) cache memory architecture. Most computer systems, and in particular those systems embedded within portable solutions, are designed to support a high MIPS (millions of instructions per second) demand while maintaining power consumption at reasonable rates. Relatively small memory caches, accessed in a single processor cycle, allow a high level of performance while running with main memories at lower speed.
While, formerly, computer systems operated on a single application at one time, computer systems of today generally have several applications loaded into their main memories. The scheduling of multiple applications, running in parallel for the user, is managed by an operating system (OS). Most modern operating systems are designed around the concept of a virtual environment. Addresses coming from the processor are virtual addresses which map to actual (“physical”) addresses in main memory. Cache memories using virtual indices and virtual tags are the most efficient structures for a virtual environment.
For these multi-tasking systems, an important constraint is the context switch. The context switch corresponds to the sequence of actions that the OS must execute in order to accommodate several independent tasks on a single processor. The context switch is a limiting factor on performance in systems with strong real-time requirements, because it takes significant time and a significant number of instructions to perform.
Multitasking systems in a virtual environment must deal with “aliasing” of data, which can occur when two or more different tasks cache data associated with the same physical address at two or more respective locations in the cache, in accordance with the different virtual addresses used by the various tasks. When one task changes the value associated with a cached data item, that change will not be reflected in the cache locations of other virtual addresses which point to the same physical memory address. As part of a context switch, the operating system must invalidate the contents of the cache so that other tasks will see the new value.
The cleaning function associated with invalidating the cache can be very time consuming. Further, the cleaning function may be interrupted only at discrete time intervals, depending upon the cache cleaning design. For many applications with tight real-time constraints, it is important that interrupts be allowed frequently. However, cleaning routines that allow interrupts at frequent intervals are often the least efficient at completing the cleaning operation.
Therefore, a need has arisen for a high efficiency method and apparatus for cleaning a VIVT cache system which allows frequent interrupts.
BRIEF SUMMARY OF THE INVENTION
The present invention provides a method and apparatus for performing a cache clean function in a system with a cache memory and a main memory. Address circuitry outputs a series of cache addresses in a predetermined order within a range of potentially dirty cache addresses. Control logic circuitry writes information from cache addresses associated with the output from said address circuitry to corresponding main memory locations for each dirty cache location. The address circuitry may be enabled and disabled responsive to either the detection of an interrupt or upon completion of writing all dirty entries to the main memory, such that the clean function can continue by enabling said address circuitry after an interrupt.
The present invention provides significant advantages over the prior art. First, the invention has the benefit of the speed of a hardware cache instruction; after the initial invocation of the hardware clean operation, software is involved only if an interrupt occurs. Second, the hardware cleaning operation may be interrupted as it cycles through the cache entries, allowing the system to respond to interrupts as necessary for real-time requirements. Third, the number of cache entries is optimized to service only the range of cache entries which have associated dirty bits.


REFERENCES:
patent: 5515522 (1996-05-01), Bridges et al.
patent: 5542062 (1996-07-01), Taylor et al.
patent: 5668968 (1997-09-01), Wu
patent: 5809560 (1998-09-01), Schneider
patent: 5913226 (1999-06-01), Sato
patent: 5915262 (1999-06-01), Bridgers et al.
patent: 6115777 (2000-09-01), Zahir et al.
patent: 6321299 (2001-11-01), Chauvel et al.
patent: 6341324 (2002-01-01), Caulk et al.
patent: 6397302 (2002-05-01), Razdan et al.
patent: 6412043 (2002-06-01), Chopra et al.
