Data caching with a partially compressed cache
Classification: Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
Cross-reference classes: C711S171000, C711S134000, C711S159000, C710S068000
Type: Reexamination Certificate (active)
Filed: 1998-06-10
Issued: 2001-11-27
Examiner: Kim, Matthew (Department: 2186)
Patent number: 06324621
ABSTRACT:
FIELD OF THE INVENTION
The present invention relates generally to storage caches, and more particularly to partially compressed storage caches.
BACKGROUND OF THE INVENTION
As processing speeds of computer systems continue to increase, the ability to retrieve data from memory efficiently remains vital. Memory caches have been effective in compensating for the speed mismatch between two levels of storage access, e.g., between a processor and main memory. Caches generally provide higher-speed storage for recently or frequently used data in a computer system.
Improving the performance and utilization of caches remains an important aspect of computer system design. Typically, cache organizations and algorithms attempt to exploit the spatial and temporal locality of storage accesses. Cache effectiveness is usually measured by the hit ratio (i.e., the fraction of accesses for which the needed data is found in the cache), as well as by average access time (i.e., the average time to locate a piece of information and return it for processing), maximal throughput (i.e., the maximal rate of data transfer), etc. At times, attempts to achieve better performance involve changes to the cache organization, which often improve hit ratios and access times at a slight cost in maximal throughput due to cache-replacement overhead.
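The relationship between the hit ratio and average access time can be made concrete with the standard effective-access-time formula for a two-level hierarchy. The timings below are illustrative assumptions, not figures from the patent:

```python
# Effective (average) access time for a two-level storage hierarchy:
#   t_avg = h * t_cache + (1 - h) * t_backing
# where h is the hit ratio and t_cache / t_backing are the access
# times of the cache and the backing store.

def average_access_time(hit_ratio, t_cache, t_backing):
    """Expected time to retrieve one item, in the units of t_cache/t_backing."""
    return hit_ratio * t_cache + (1.0 - hit_ratio) * t_backing

# A 75% hit ratio with an assumed 1 ms cache and 10 ms backing store:
print(average_access_time(0.75, 1.0, 10.0))  # 3.25
```

Even a modest hit-ratio improvement pays off because misses are an order of magnitude slower than hits.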
For example, one typical method of improving performance by increasing the hit ratio is to expand the size of the cache. Unfortunately, as the cache size increases, the hit ratio does not increase proportionally. For example, doubling a 4 GB (gigabyte) cache with a 75% hit ratio to 8 GB does not double the hit ratio. While the hit ratio improves by a small percentage, the doubling in size comes at considerable expense.
Alternatively, with a fully compressed cache, an increase in effective storage capacity is achieved without increasing the cache size. When the compressed cache is used in a read-only environment, few data-integrity problems normally result. When used in an environment of changing data, however, significant problems arise, mainly from the need for random access to the compressed data in the cache. Dividing the compressed cache into smaller, uniform-sized chunks is sometimes used to allow more random access to portions of the data. However, this further complicates updates, since the compressed data varies in size and may not fit neatly within each chunk. Further, compressing small chunks usually yields lower compression.
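The chunking trade-off described above can be demonstrated with any general-purpose compressor. A small sketch using Python's zlib, where the sample data and the 512-byte chunk size are arbitrary assumptions chosen for illustration:

```python
import zlib

# Compress one large buffer as a whole, then compress the same buffer
# as small fixed-size chunks, and compare the totals.
data = b"storage cache line with highly redundant content " * 200

whole = len(zlib.compress(data))
chunked = sum(len(zlib.compress(data[i:i + 512]))
              for i in range(0, len(data), 512))

# Per-chunk headers and the loss of cross-chunk redundancy make the
# chunked total larger; each chunk also compresses to a different
# size, so chunks rarely fill their slots exactly.
print(whole, chunked)
```

This is why chunking restores random access but sacrifices compression ratio.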
Accordingly, a need exists for a cache organization and algorithm that achieves results at least as effective as increasing a cache's size, without the concomitant expense.
SUMMARY OF THE INVENTION
The present invention meets these needs through a partially compressed cache organization. A method aspect for caching storage data includes partitioning a storage cache to include a compressed data partition and an uncompressed data partition, and adjusting the compressed data partition and the uncompressed data partition for chosen performance characteristics, including overall cache size. A data caching system aspect in a data processing system having a host system in communication with a storage system includes at least one storage device and at least one partially compressed cache. The at least one partially compressed cache further includes an uncompressed partition and a compressed partition, where the compressed partition stores at least a victim data unit from the uncompressed partition.
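As a rough illustration of the organization described above (a sketch, not the patented implementation; the class name, partition sizes, LRU policy, and use of zlib are all assumptions), an uncompressed partition can demote its eviction victims, compressed, into a compressed partition:

```python
import zlib
from collections import OrderedDict

class PartiallyCompressedCache:
    """Illustrative sketch of a partially compressed cache: an LRU
    uncompressed partition whose victims are demoted, compressed,
    into a compressed partition before leaving the cache entirely."""

    def __init__(self, uncompressed_slots=4, compressed_slots=16):
        self.uncompressed = OrderedDict()  # key -> raw bytes (LRU order)
        self.compressed = OrderedDict()    # key -> zlib-compressed bytes
        self.uncompressed_slots = uncompressed_slots
        self.compressed_slots = compressed_slots

    def put(self, key, value):
        self.uncompressed[key] = value
        self.uncompressed.move_to_end(key)
        while len(self.uncompressed) > self.uncompressed_slots:
            # Demote the LRU victim into the compressed partition.
            victim_key, victim = self.uncompressed.popitem(last=False)
            self.compressed[victim_key] = zlib.compress(victim)
            while len(self.compressed) > self.compressed_slots:
                # Oldest compressed entry leaves the cache entirely.
                self.compressed.popitem(last=False)

    def get(self, key):
        if key in self.uncompressed:
            self.uncompressed.move_to_end(key)
            return self.uncompressed[key]
        if key in self.compressed:
            # Hit in the compressed partition: decompress and promote.
            value = zlib.decompress(self.compressed.pop(key))
            self.put(key, value)
            return value
        return None  # miss: the caller fetches from the storage device
```

A hit in the compressed partition is slower than an uncompressed hit but far faster than going to the storage device, which is the trade the partitioning exploits.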
With the present invention, alternative caching organizations and algorithms are introduced that allow the partition sizes of uncompressed and compressed cache data to be adjusted dynamically according to hit-ratio, response-time, compression-ratio, and throughput (maximum I/O) objectives. Further, sub-partitioning a cache to achieve a partially compressed cache is readily applicable to multi-level caching in storage subsystems. In addition, the partially compressed cache organization achieves performance improvements on par with increasing a cache's size, without incurring the expense of a size increase. These and other advantages of the aspects of the present invention will be more readily understood in conjunction with the following detailed description and accompanying drawings.
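One way the dynamic partition adjustment described above might look in outline. This is a hypothetical feedback policy invented for illustration, not the patent's algorithm; the function name, step size, and objective are all assumptions:

```python
def adjust_partitions(uncompressed_slots, compressed_slots,
                      uncompressed_hit_ratio, target_hit_ratio, step=1):
    """Hypothetical adjustment policy: if the uncompressed partition's
    hit ratio falls short of the target, grow it at the expense of the
    compressed partition (and vice versa), trading slot counts so the
    overall slot budget stays fixed."""
    if uncompressed_hit_ratio < target_hit_ratio and compressed_slots > step:
        return uncompressed_slots + step, compressed_slots - step
    if uncompressed_hit_ratio > target_hit_ratio and uncompressed_slots > step:
        return uncompressed_slots - step, compressed_slots + step
    return uncompressed_slots, compressed_slots
```

A real controller would also weigh response time, observed compression ratios, and throughput objectives, per the summary above.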
REFERENCES:
patent: 5237460 (1993-08-01), Miller et al.
patent: 5450562 (1995-09-01), Rosenberg et al.
patent: 5490260 (1996-02-01), Miller et al.
patent: 5537588 (1996-07-01), Engelmann et al.
patent: 5574952 (1996-11-01), Brady et al.
patent: 5812817 (1998-09-01), Hovis et al.
“Improving Direct-Mapped Cache Performance by the Addition of a Small Fully-Associative Cache and Prefetch Buffers,” Norman P. Jouppi, IEEE, pp. 364-373, 1990.
“CE 297 Independent Study Report Effective CACHE Design Alternatives,” Joe-Ming Cheng and Bruce Durgan, Mar. 20, 1992.
Inventors: Beardsley, Brent Cameron; Benhase, Michael Thomas; Cheng, Joe-Ming; Goldfeder, Marc Ethan; Leabo, Dell Patrick
Agents: Anderson, Matthew D.; Bluestone, Randall J.; Sawyer Law Group LLP
Assignee: International Business Machines Corporation
Examiner: Kim, Matthew