Apparatus for cache compression engine for data compression...

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate

Details

Subclasses: C711S122000, C711S128000, C711S133000

Type: Reexamination Certificate

Status: active

Patent number: 06640283

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates to the field of cache design for high-performance processor integrated circuits. In particular, the invention relates to apparatus for compressing data in a large, upper-level, on-chip cache.
BACKGROUND OF THE INVENTION
Cache memories are high-speed memory systems that store a partial copy of the contents of a larger, slower memory system. The partial copy stored in a cache normally contains those portions of the contents of the larger memory system that have been recently accessed by a processor. Cache memory offers an advantage in that many programs access the same or nearby code and data locations repeatedly; execution of instructions is statistically more likely to access recently accessed locations, or locations near recently accessed locations, than other locations in memory.
Many modern computer systems implement a hierarchy of cache memories for caching data held in main memory. Main memory in these systems typically consists of Dynamic Random Access Memory (DRAM). Many common processors, including Intel Pentium-II and Pentium-III circuits, have two levels of cache; computing systems with three and four levels of cache also exist.
In addition to storage, cache memory systems also have apparatus for identifying those portions of the larger, slower memory system held in cache; this often takes the form of a cache tag memory.
Cache systems typically have cache tag memory subsystems and cache data memory subsystems. Each cache data memory typically operates on units of data of a predetermined size, known as a cache line. The size of a cache line can be different for each level in a multilevel cache. Cache lines are typically larger than the word or byte size used by the processor and may therefore contain data near recently used locations as well as recently used locations.
In typical cache memory systems, when a memory location at a particular main-memory address is to be read, a cache-line address is derived from part of the main-memory address. A portion of the cache-line address is typically presented to the cache tag memory and to the cache data memory, and a read operation is done on both memories.
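For illustration, the sketch below splits a 32-bit main-memory address into tag, index, and offset fields for a hypothetical cache with 64-byte lines and 1024 lines of tag memory; the geometry and field widths are assumptions, not taken from the patent.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical geometry: 64-byte lines, 1024 lines of tag memory. */
#define LINE_BYTES  64u    /* bytes per cache line  */
#define NUM_SETS    1024u  /* entries in tag memory */
#define OFFSET_BITS 6u     /* log2(LINE_BYTES)      */
#define INDEX_BITS  10u    /* log2(NUM_SETS)        */

int main(void) {
    uint32_t addr = 0x12345678u;   /* example main-memory address */

    uint32_t offset = addr & (LINE_BYTES - 1u);                /* byte within the line         */
    uint32_t index  = (addr >> OFFSET_BITS) & (NUM_SETS - 1u); /* presented to tag/data memory */
    uint32_t tag    = addr >> (OFFSET_BITS + INDEX_BITS);      /* compared to stored tags      */

    printf("tag=%#x index=%#x offset=%#x\n",
           (unsigned)tag, (unsigned)index, (unsigned)offset);
    return 0;
}
```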
Cache tag memory typically contains one or more address tag fields. Multiple address tag fields can be, and often are, provided to support multiple “ways” of associativity in the cache. Each address tag field is compared to the remaining bits of the cache-line address to determine whether any part of the data read from the cache data memory corresponds to data at the desired main-memory address. If the tag indicates that the desired data is in the cache data memory, that data is presented to the processor and the next lower-level cache; if not, the read operation is passed up to the next higher-level cache. If there is no higher-level cache, the read operation is passed to main memory. N-way, set-associative caches perform N such comparisons of address tag fields to portions of the desired address simultaneously.
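A minimal software sketch of this N-way comparison, assuming a hypothetical 4-way cache; real hardware performs the comparisons simultaneously rather than in a loop:

```c
#include <stdbool.h>
#include <stdint.h>

#define WAYS 4

/* Hypothetical line of tag memory: one address tag (plus valid bit) per way. */
struct tag_line {
    uint32_t tag[WAYS];
    bool     valid[WAYS];
};

/* Compare the remaining address bits against every way; hardware would use
 * WAYS comparators at once. Returns the hitting way, or -1 on a miss. */
int tag_lookup(const struct tag_line *line, uint32_t addr_tag) {
    for (int w = 0; w < WAYS; w++)
        if (line->valid[w] && line->tag[w] == addr_tag)
            return w;   /* hit: this way's data goes to the processor */
    return -1;          /* miss: pass the read to the next level up   */
}
```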
Typically, a tag memory contains status information as well as data information. This status information may include “written” flags that indicate whether information in the cache has been written to but not yet updated in higher-level memory, and “valid” flags indicating that information in the cache is valid.
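These status bits can be pictured as extra fields alongside each address tag; the layout below is a hypothetical example, not the patent's:

```c
/* Hypothetical per-way tag entry combining the address tag with status bits. */
struct tag_entry {
    unsigned tag     : 20;  /* upper address bits                          */
    unsigned valid   : 1;   /* entry holds usable data                     */
    unsigned written : 1;   /* written in cache but not yet updated upward */
};
```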
A cache “hit” occurs whenever a memory access to the cache occurs and the cache system finds, through inspecting its tag memory, that the requested data is present and valid in the cache. A cache “miss” occurs whenever a memory access to the cache occurs and the cache system finds, through inspecting its tag memory, that the requested data is not present and valid in the cache.
When a cache “miss” occurs in a low level cache of a typical multilevel cache system, the main-memory address is passed up to the next level of cache, where it is checked in the higher-level cache tag memory in order to determine if there is a “hit” or a “miss” at that higher level. When a cache “miss” occurs at the top level cache, the reference is typically passed to main memory.
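The sketch below walks a hypothetical three-level hierarchy in this manner; the stub lookups, which pretend only the level-3 cache holds the line, are invented for illustration:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_LEVELS 3

/* Stub lookup for illustration: pretend only the level-3 cache holds the line. */
bool cache_lookup(int level, uint32_t addr, uint32_t *data) {
    (void)addr;
    if (level == 3) { *data = 0xCAFEu; return true; }
    return false;
}

/* Stand-in for a main-memory read. */
void main_memory_read(uint32_t addr, uint32_t *data) {
    (void)addr;
    *data = 0xBEEFu;
}

/* Check each cache level in turn; pass the reference to main memory
 * only when the top-level cache also misses. */
void memory_read(uint32_t addr, uint32_t *data) {
    for (int level = 1; level <= MAX_LEVELS; level++)
        if (cache_lookup(level, addr, data))
            return;                    /* hit at this level   */
    main_memory_read(addr, data);      /* miss at every level */
}

int main(void) {
    uint32_t data;
    memory_read(0x1000u, &data);
    printf("data = %#x\n", (unsigned)data);
    return 0;
}
```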
Typically, the number of “ways” of associativity in a set-associative cache tag subsystem is the number of sets of address tags in each line of tag memory, together with corresponding sets of comparators. The number of ways of storage is the number of cache lines, or superlines, that can be stored and independently referenced through a single line of cache tag memory. In most caches, the number of ways of associativity is the same as the number of ways of storage. Cache superlines are combinations of multiple cache lines that can be referenced through a single address tag in a line of tag memory.
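A superline tag entry might be pictured as follows; the layout and the choice of four lines per superline are assumptions for illustration:

```c
#include <stdbool.h>
#include <stdint.h>

#define LINES_PER_SUPERLINE 4

/* Hypothetical superline tag entry: a single address tag covers several
 * cache lines, each tracked by its own valid bit. */
struct superline_tag {
    uint32_t tag;
    bool     valid[LINES_PER_SUPERLINE];
};
```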
Writethrough caches are those in which a write operation to data stored in the cache results in an immediate update of data in a higher level of cache or in main memory. Writeback caches are those in which a write operation to data stored in the cache writes data in the cache, but update of data in higher levels of cache or in main memory is delayed. Operation of cache in writeback and writethrough modes is known in the art.
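The difference between the two policies can be sketched as follows, with hypothetical helper names; write_upward stands in for the next cache level or main memory:

```c
#include <stdbool.h>
#include <stdint.h>

struct line {
    uint32_t data;
    bool     written;   /* dirty flag used by the writeback policy */
};

/* Hypothetical stand-in for the next cache level or main memory. */
void write_upward(uint32_t addr, uint32_t data) { (void)addr; (void)data; }

/* Writethrough: update the cached line and the higher level immediately. */
void writethrough_store(struct line *l, uint32_t addr, uint32_t data) {
    l->data = data;
    write_upward(addr, data);
}

/* Writeback: update only the cached line and mark it dirty; the higher
 * level is updated later, e.g. when the line is evicted. */
void writeback_store(struct line *l, uint32_t data) {
    l->data = data;
    l->written = true;
}
```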
Whenever a cache “miss” occurs at any level of the cache, data fetched from a higher level of cache or main memory is typically stored in the cache's data memory and tag memory is updated to reflect that data is now present. Typically also, other data may have to be evicted to make room for the newly fetched data.
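Victim selection for such an eviction might look like the following sketch, assuming a hypothetical least-recently-used policy over one 4-way set:

```c
/* Hypothetical LRU bookkeeping for one 4-way set:
 * a smaller timestamp means the way was used longer ago. */
#define WAYS 4

unsigned last_use[WAYS];

/* Choose the way to evict so a newly fetched line can be stored. */
int choose_victim(void) {
    int victim = 0;
    for (int w = 1; w < WAYS; w++)
        if (last_use[w] < last_use[victim])
            victim = w;
    return victim;
}
```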
A cache “hit rate” is the ratio of memory references that “hit” in cache to total memory references in the system. It is known that the effective performance of cache-equipped processors can vary dramatically with the cache “hit rate.” It is also known that hit rate varies with program characteristics, the size of cache, occurrence of interrupting events, and other factors. In particular, it is known that large effective cache sizes can often offer significantly better hit rates than small cache sizes.
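The impact of hit rate on average access time can be made concrete with a standard back-of-the-envelope calculation; the latencies below are invented for illustration:

```c
#include <stdio.h>

int main(void) {
    /* Assumed latencies, in processor cycles (illustrative only). */
    double hit_time  = 3.0;     /* access that hits in cache       */
    double miss_time = 100.0;   /* access satisfied by main memory */

    /* Average access time = hit_rate*hit_time + (1-hit_rate)*miss_time. */
    for (double hit_rate = 0.90; hit_rate <= 0.991; hit_rate += 0.03) {
        double avg = hit_rate * hit_time + (1.0 - hit_rate) * miss_time;
        printf("hit rate %.2f -> %.1f cycles on average\n", hit_rate, avg);
    }
    return 0;
}
```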
It is therefore advantageous to have a large effective cache size.
Many computer systems embody multiple processors, each having its own cache system for caching main memory references. Typically, processors of such systems may access shared memory. Coherency is required in the cache memory of such computer systems. Cache coherency means that each cache in the system “sees” the same memory values; therefore, if a cache wants to change the contents of a memory location, all other caches in the system holding copies of that memory location must either update or invalidate their copies.
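An invalidation-based sketch of this requirement, with hypothetical data structures:

```c
#include <stdbool.h>

#define NUM_CACHES 4
#define LINES      256

/* Hypothetical per-cache valid bits, indexed by cache and line. */
bool valid[NUM_CACHES][LINES];

/* Before cache `writer` changes a line, every other cache holding a
 * copy of that line must drop it (the invalidation alternative). */
void invalidate_others(int writer, unsigned line) {
    for (int c = 0; c < NUM_CACHES; c++)
        if (c != writer)
            valid[c][line] = false;   /* stale copy must not be used again */
}
```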
There are many ways data may be compressed that are known in the art, including run-length algorithms, repeat-based algorithms, and dictionary-based algorithms. Run-length algorithms are commonly used in facsimile transmission. Software tools for compressing and decompressing data for disk storage and modem data transmission are common in the industry. Software tools for compressing and decompressing disk data when that disk data is cached in main memory are known, as are software utilities for compressing and decompressing main memory pages. Most of these utilities and tools make use of a processor of the system to perform both compression and decompression operations; these utilities and tools can consume sufficient processor time to adversely affect system performance.
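As a concrete example of the first family, a minimal run-length encoder might emit (count, byte) pairs as below; the format is an assumption for illustration, not the compression scheme of the patent:

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Minimal run-length encoder: emit (count, byte) pairs.
 * Returns the number of bytes written to `out`. */
size_t rle_encode(const uint8_t *in, size_t n, uint8_t *out) {
    size_t o = 0;
    for (size_t i = 0; i < n; ) {
        uint8_t b = in[i];
        size_t run = 1;
        while (i + run < n && in[i + run] == b && run < 255)
            run++;                    /* counting the run is the slow part */
        out[o++] = (uint8_t)run;
        out[o++] = b;
        i += run;
    }
    return o;
}

int main(void) {
    const uint8_t data[] = "aaaabbbcc";
    uint8_t out[32];
    size_t n = rle_encode(data, sizeof data - 1, out);
    for (size_t i = 0; i < n; i += 2)
        printf("%u x '%c'\n", (unsigned)out[i], out[i + 1]);
    return 0;
}
```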
Many systems provide for caching of disk data in main memory, or other memory of speed similar to that of main memory. For purposes of this patent, a cache for caching disk data in main memory or memory of speed similar to that of main memory is a disk cache; and a cache for caching main memory references as information is fetched by a processor is a processor cache. An on-chip cache is a processor cache located on the same integrated circuit as the processor.
Data stored in cache memory is typically not stored in compressed form. It would be advantageous to do so to attain higher effective cache size, and thus higher hit rates.
Typically, data compression requires more time than decompression because of the time required to count run lengths, detect repeats, and build dictionaries.
