Virtual uncompressed cache size control in compressed memory...

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


Details

C711S144000, C711S156000, C711S170000, C709S247000, C710S068000

Reexamination Certificate

active

06779088

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates generally to compressed memory systems, and more specifically, to a system and method for managing and controlling the size of a virtual uncompressed cache (VUC) for optimizing system performance independently of the operating system or other system software.
2. Discussion of the Prior Art
FIG. 1 shows the overall structure of an example computer system implementing compressed main memory. In FIG. 1, a central processing unit (CPU) 102 reads data from and writes data to a cache 104. Cache misses and stores result in reads and writes to a compressed main memory 108 by means of a compression controller 106. The compressed main memory 108 is typically divided into a number of logically fixed-size segments (the units of compression, also called lines), but each such logical segment is physically stored in a compressed format. It is understood that a segment may be stored in an uncompressed format if it cannot be compressed. Exemplary compressed memory systems may be found in commonly-owned issued U.S. Pat. No. 5,761,536 entitled System and Method for Reducing Memory Fragmentation by Assigning Remainders to Share Memory Blocks on a Best Fit Basis and issued U.S. Pat. No. 5,864,859 entitled System and Method of Compression and Decompression using Store Addressing, the contents and disclosure of each of which are incorporated by reference as if fully set forth herein. Another compressed memory system incorporated by reference is described in Design and Analysis of Internal Organizations for Compressed Random Access Memories by P. Franaszek and J. Robinson, IBM Research Report RC 21146, IBM Watson Research Center, Oct. 20, 1998.
FIG. 2 shows in more detail the structure of the cache 104, components of the compression controller 106, and compressed main memory 108 of FIG. 1. The compressed main memory is implemented using a conventional RAM memory M 210, which is used to store a directory D 220 and a number of fixed-size blocks 230. The cache 240 is implemented conventionally using a cache directory 245 for a set of cache lines 248. The compression controller 260 includes a decompressor 262 used for reading compressed data, a compressor 264 used for compressing and writing data, a number of memory buffers 266 used for temporarily holding uncompressed data, and control logic 268. Each cache line is associated with a given real memory address 250. Unlike a conventional memory, however, the address 250 does not refer to an address in the memory M 210; rather, the address 250 is used to determine a directory index 270 into the directory D 220. Each directory entry contains information (shown in more detail in FIG. 3) which allows the associated cache line to be retrieved. The units of compressed data referred to by directory entries in D 220 may correspond to cache lines 248; alternatively, the unit of compression may be larger, that is, sets of cache lines (segments) may be compressed together. For simplicity, the following examples assume the units of compressed data correspond to cache lines 248. The directory entry 221 for line 1, associated with address A1 271, is for a line which has compressed to a degree in which the compressed line can be stored entirely within the directory entry; the directory entry 222 for line 2, associated with address A2 272, is for a line which is stored in compressed format using a first full block 231 and a second partially filled block 232; finally, the directory entries 223 and 224 for line 3 and line 4, associated with addresses A3 273 and A4 274, are for lines stored in compressed formats using a number of full blocks (blocks 233 and 234 for line 3, and block 235 for line 4) and in which the remainders of the two compressed lines have been combined in block 236.
FIG. 3 shows some possible examples of directory entry formats. For this example, it is assumed that the blocks 230 of FIG. 2 are of size 256 bytes and that the cache lines 248 of FIG. 2 are of size 1024 bytes. This means that lines can be stored in an uncompressed format using four blocks. For this example, directory entries of size 16 bytes are used, in which the first byte consists of a number of flags; the contents of the first byte 305 determine the format of the remainder of the directory entry. A flag bit 301 specifies whether the line is stored in compressed or uncompressed format; if stored in uncompressed format, the remainder of the directory entry is interpreted as for line 1 310, in which four 30-bit addresses give the addresses in memory of the four blocks containing the line. If stored in compressed format, a flag bit 302 indicates whether the compressed line is stored entirely within the directory entry; if so, the format of the directory entry is as for line 3 330, in which up to 120 bits of compressed data are stored. Otherwise, for compressed lines longer than 120 bits, the formats shown for line 1 310 or line 2 320 may be used. In the case of the line 1 310 format, additional flag bits 303 specify the number of blocks used to store the compressed line, from one to four 30-bit addresses specify the locations of the blocks, and finally, the size of the remainder, or fragment, of the compressed line stored in the last block (in units of 32 bytes), together with a bit indicating whether the fragment is stored at the beginning or end of the block, is given by four fragment information bits 304. Directory entry format 320 illustrates an alternative format in which part of the compressed line is stored in the directory entry (to reduce decompression latency); in this case, addresses to only the first and last blocks used to store the remaining part of the compressed line are stored in the directory entry, with intervening blocks (if any) found using a linked-list technique, that is, each block used to store the compressed line has, if required, a pointer field containing the address of the next block used to store the given compressed line.
Another issue in such systems is that the compression of the data stored in the compressed memory system can vary dynamically. If the amount of free space available in the compressed memory becomes sufficiently low, there is a possibility that a write-back of a modified cache line could fail. To prevent this, interrupts may be generated when the amount of free space decreases below certain thresholds, with the interrupts causing OS (operating system) intervention so as to prevent this from occurring. An exemplary method for handling this problem is described in commonly-owned, co-pending U.S. patent application Ser. No. 09/021,333 entitled Compression Store Free Space Management, filed Feb. 10, 1998.
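The threshold scheme can be pictured as below. This is a hedged sketch only: the two threshold values and the raise_low_space_interrupt hook are assumptions, and the referenced application describes the actual free-space management method.

```c
/*
 * Sketch of threshold-based free-space monitoring: when the count of free
 * 256-byte blocks in the compressed memory falls below preset thresholds,
 * the controller raises an interrupt so the OS can free memory before a
 * write-back of a modified cache line could fail.  Threshold values and
 * the interrupt hook are assumptions.
 */
#include <stdint.h>

#define WARN_THRESHOLD_BLOCKS 4096u  /* assumed early-warning level */
#define CRIT_THRESHOLD_BLOCKS 1024u  /* assumed critical level      */

extern void raise_low_space_interrupt(int critical);  /* assumed OS hook */

/* Called by the controller whenever blocks are allocated or freed. */
void check_free_space(uint32_t free_blocks)
{
    if (free_blocks < CRIT_THRESHOLD_BLOCKS)
        raise_low_space_interrupt(1);  /* OS must free memory now     */
    else if (free_blocks < WARN_THRESHOLD_BLOCKS)
        raise_low_space_interrupt(0);  /* OS should begin reclaiming  */
}
```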
In such systems, it has been found advantageous in certain cases to maintain a number of recently used segments in an uncompressed format (regardless of whether they can be compressed): this is referred to as a virtual uncompressed cache. Further details regarding the implementation of a VUC may be found in commonly-owned U.S. patent application Ser. No. 09/315,069 entitled Virtual Uncompressed Cache for Compressed Main Memory, filed May 19, 1999, now U.S. Pat. No. 6,349,372, the contents and disclosure of which are incorporated by reference as if fully set forth herein.
Because a virtual uncompressed cache (VUC) does not consist of a memory partition, for example, but rather is a logical entity consisting of a subset of all segments in the compressed memory system, the size of the VUC may vary dynamically.
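One way to picture such a logical, dynamically sized VUC is as an LRU-ordered list of directory indices whose segments are currently held uncompressed; trimming the list recompresses the least recently used segment. The sketch below is only an illustration under that assumption: the list structure, the target size, and the recompress_segment hook are not the patented mechanism.

```c
/*
 * Sketch of a virtual uncompressed cache as a logical entity: not a memory
 * partition, but a list of directory indices whose segments are kept
 * uncompressed.  Keeping the list in LRU order lets its size be trimmed by
 * recompressing the oldest segment.  All names and sizes are assumptions.
 */
#include <stdint.h>

#define VUC_TARGET_SEGMENTS 256u  /* assumed target VUC size, in segments */

extern void recompress_segment(uint32_t dir_index);  /* assumed controller hook */

static uint32_t vuc[VUC_TARGET_SEGMENTS + 1];  /* most recently used first */
static uint32_t vuc_count;

/* Record that a segment was just accessed and is held uncompressed. */
void vuc_touch(uint32_t dir_index)
{
    /* remove dir_index if already present, then push it to the front */
    uint32_t n = 0;
    for (uint32_t i = 0; i < vuc_count; i++)
        if (vuc[i] != dir_index)
            vuc[n++] = vuc[i];
    for (uint32_t i = n; i > 0; i--)
        vuc[i] = vuc[i - 1];
    vuc[0] = dir_index;
    vuc_count = n + 1;

    /* trim back to the target size by recompressing the LRU segment */
    if (vuc_count > VUC_TARGET_SEGMENTS)
        recompress_segment(vuc[--vuc_count]);
}
```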
It would thus be highly desirable to provide a system and method for managing the size of the VUC in a simple, cost-effective way, and, if possible, without the generation of interrupts and subsequent OS intervention.
SUMMARY OF THE INVENTION
It is thus an object of the invention to control the size of the VUC so as to: (1) optimize system performance; and (2) avoid, if possible, operating system intervention, which is required in certain circumstances.
