PARALLEL COMPRESSION/DECOMPRESSION SYSTEM AND METHOD FOR...

Electrical computers and digital processing systems: memory – Storage accessing and control – Memory configuring

Reexamination Certificate

Patent number: 06523102
U.S. classification: C709S247000
Status: active


FIELD OF THE INVENTION
The present invention relates to memory systems, and more particularly to an integrated compression/decompression circuit embedded on industry standard memory modules where such modules operate to improve performance of a computing system by the storage of compressed data in the system memory and/or on the nonvolatile memory subsystem.
DESCRIPTION OF THE RELATED ART
System memory modules and architectures have remained relatively unchanged for many years. While memory density has increased and the cost per storage bit has decreased over time, there has not been a significant improvement to the effective operation of the memory subsystem using non-memory devices located within such memory subsystems. The majority of computing systems presently use industry-standard in-line memory modules. These modules house multiple DRAM memory devices for easy upgrade, configuration, and improved density per area.
Software-implemented compression and decompression technologies have also been used to reduce the size of data stored on the disk subsystem or in system memory. Current compressed-data storage implementations use the system's CPU, executing a software program, to compress information for storage on disk. However, a software solution typically consumes too many CPU compute cycles to perform both compression and decompression for the running application(s). This compute-cycle problem worsens as applications grow in size and complexity. In addition, there has been no general-purpose use of compression and decompression for in-memory system data; prior art systems have been specific to certain data types. Thus, software compression has been used, but this technique limits CPU performance and has been restricted to certain data types.
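The CPU cost of the software approach described above can be illustrated with a minimal sketch using Python's standard zlib module (the choice of zlib is an assumption for illustration; the text does not name a specific algorithm). Even a single 4096-byte page costs measurable CPU time to compress, which is the compute-cycle burden the passage describes:

```python
import time
import zlib

PAGE_SIZE = 4096  # typical page size, per the description below

# Build a compressible 4 KiB "page" of repetitive data.
page = (b"The quick brown fox jumps over the lazy dog. " * 100)[:PAGE_SIZE]

start = time.perf_counter()
compressed = zlib.compress(page, level=6)
elapsed = time.perf_counter() - start

print(f"original:   {len(page)} bytes")
print(f"compressed: {len(compressed)} bytes")
print(f"CPU time for one page: {elapsed * 1e6:.0f} microseconds")

# Round-trip check: decompression must restore the page exactly.
assert zlib.decompress(compressed) == page
```

Multiplied across the thousands of pages an application touches per second, this per-page cost is why a pure software solution competes with the application for CPU cycles.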
Similar problems exist for programs that require multiple applications or software threads to operate in parallel. Software compression does not address heavily loaded or multi-threaded applications, which require high CPU throughput. Other hardware compression solutions have not focused on “in-memory” data (data which resides in the active portion of the memory and software hierarchy). These solutions have typically been I/O data compression devices located away from the system memory or memory subsystem. In addition, the use of hardware compression has been restricted to slow, serial input and output devices, usually located at the I/O subsystem.
Mainframe computers have used data compression for acceleration and reduction of storage space for years. These systems rely on expensive compression modules located away from the system memory and do not compress in-memory data within the same memory subsystem for improved performance. Such costly compression subsystems use multiple separate engines running in parallel to achieve compression speeds at supercomputer rates. Multiple separate, serial compression and decompression engines running in parallel are cost prohibitive for general-use servers, workstations, desktops, or mobile units. Lower-cost semiconductor devices have been developed that use compression hardware as well. The main difference is that these devices do not operate fast enough to run at memory speed and thus lack the necessary performance for in-memory data. Such compression hardware devices are limited to serial operation at compression rates that work for slow I/O devices such as tape backup units. The problem with such I/O compression devices, other than tape backup units, is that the portions of data to be compressed are often too small a block size to see the benefits of compression. This is especially true in disk and network subsystems. Operating hardware compression on in-memory data at memory bus speeds requires over an order of magnitude more speed than present-day state-of-the-art compression hardware provides.
Prior Art Computer System Architecture
FIG. 1 illustrates a block diagram example of a prior art computer hardware and software operating system hierarchy of present-day computing systems. The prior art memory and data storage hierarchy comprises the CPU subsystem 100, the main memory subsystem 200, and the disk subsystem 300. The CPU subsystem 100 comprises the L1 cache memory 120 and L2 cache memory 130 coupled to the CPU 110 and the CPU's local bus 135. The CPU subsystem 100 is coupled to the main memory subsystem 200 through the CPU local bus 135. The main memory subsystem 200 is also coupled to the disk subsystem 300. The main memory subsystem 200 comprises the memory controller 210, for controlling the main system memory banks, active pages of memory 220, inactive pages of memory 230, and a dynamically defined page fault boundary 232. The page fault boundary 232 is dynamically controlled by the virtual memory manager software 620 to optimize the balance between active and inactive pages in the system memory and “stale” pages stored on disk. The memory subsystem 200 is coupled to the I/O, or disk, subsystem 300 by the I/O peripheral bus interface 235, which may be one of multiple bus standards or server/workstation proprietary I/O bus interfaces, e.g., the PCI bus. For purposes of illustration, the I/O disk subsystem 300 comprises the disk controller 310, the optional disk cache memory 320, and the actual physical hard disk or disk array 330, which is used to store nonvolatile non-active pages. In alternate embodiments, multiple subsections of the CPU 100, memory 200, and disk 300 subsystems may be used for larger capacity and/or faster operation.
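As a rough illustration of why data placement in this hierarchy matters, the following Python sketch models the four storage levels of FIG. 1 with order-of-magnitude access latencies. The latency figures are assumptions chosen for illustration, not values from the text:

```python
# Illustrative model of the prior-art storage hierarchy of FIG. 1.
# Latencies are rough order-of-magnitude assumptions.
HIERARCHY = [
    ("L1 cache (120)",       1e-9),   # ~1 ns
    ("L2 cache (130)",       5e-9),   # ~5 ns
    ("main memory (200)",    1e-7),   # ~100 ns
    ("disk subsystem (300)", 1e-2),   # ~10 ms
]

def access_cost(level_hit: int) -> float:
    """Total latency when a request hits at level `level_hit`
    after missing every faster level above it."""
    return sum(latency for _, latency in HIERARCHY[: level_hit + 1])

for i, (name, _) in enumerate(HIERARCHY):
    print(f"hit in {name}: ~{access_cost(i):.1e} s")
```

The roughly million-fold gap between a main-memory hit and a disk access is what makes the placement of the page fault boundary, described next, so consequential for performance.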
The prior art drawing of FIG. 1 also illustrates the software operating system 600. The typical operating system (OS) comprises multiple blocks. FIG. 1 shows a few of the relevant OS blocks, including the virtual memory manager (VMM) 620, the file system 640, and the disk drivers 660.
The operation of prior art systems for storage and retrieval of active and non-active pages from either the system memory or the disk is now described for reference. Again referring to the prior art system of FIG. 1, the VMM 620 is responsible for the allocation of active pages and the reallocation of inactive pages. The VMM 620 defines page fault boundaries 232 separating the active pages 220 and the inactive pages 230 located in both the system memory subsystem 200 and the disk subsystem 300. An active page may be defined as an area or page of memory, typically 4096 bytes, which is actively used by the CPU during application execution; active pages reside within system memory or CPU cache memory. An inactive page may be defined as an area or page of memory, typically 4096 bytes, which is not directly accessed by the CPU for application execution; inactive pages may reside in the system memory, or may be stored locally or on networks on storage media such as disks. The page fault boundary 232 is dynamically allocated during run-time operation to provide the best performance and operation, as defined by many industry-standard algorithms such as the LRU/LFU lazy replacement algorithm for page swapping to disk. As applications grow, consuming more system memory than the actual available memory space, the page fault boundaries 232 are redefined to store more inactive pages 230 in the disk subsystem 300 or across networks. Thus, the VMM 620 is responsible for the placement of the page fault boundary 232 and the determination of which active pages 220 and inactive pages 230 reside in memory and on the disk subsystem 300.
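The LRU side of the replacement policy mentioned above can be sketched as a toy Python model. The class and method names here are hypothetical, and this is a simplification of the industry-standard policies a VMM uses, not an implementation from the text:

```python
from collections import OrderedDict

class LRUPageTable:
    """Toy LRU replacement: when all physical frames are occupied,
    the least recently used resident page is evicted ("swapped") to
    disk, analogous to the VMM moving pages across the page fault
    boundary."""

    def __init__(self, frames: int):
        self.frames = frames
        self.resident = OrderedDict()  # page -> data, LRU order first
        self.on_disk = {}              # evicted ("inactive") pages

    def touch(self, page, data=None):
        if page in self.resident:           # hit: mark most recent
            self.resident.move_to_end(page)
            return self.resident[page]
        # Page fault: evict the LRU victim if all frames are full,
        # then bring the requested page in (from disk if swapped out).
        if len(self.resident) >= self.frames:
            victim, victim_data = self.resident.popitem(last=False)
            self.on_disk[victim] = victim_data
        self.resident[page] = self.on_disk.pop(page, data)
        return self.resident[page]

vmm = LRUPageTable(frames=2)
vmm.touch("A", "dataA")
vmm.touch("B", "dataB")
vmm.touch("C", "dataC")        # evicts A, the least recently used
print(sorted(vmm.resident))    # ['B', 'C']
print(sorted(vmm.on_disk))     # ['A']
```

Touching "A" again at this point would fault, evict "B", and restore "A" from the simulated disk, mirroring the swap-in/swap-out traffic the passage describes.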
The file system software 640, among other tasks, along with the disk drivers 660, is responsible for the effective movement of inactive pages between the memory subsystem 200 and the disk subsystem 300. The file system software 640 may have an interface which is called by the VMM 620 software for the task of data movement to and from the computer disk and network subsystems. The file system 640 software maintains file allocation tables and bookkeeping to locate inactive pages that have been written to disk. In order for the file sys

Profile ID: LFUS-PAI-O-3165885

  Search
All data on this website is collected from public sources. Our data reflects the most accurate information available at the time of publication.