Predictive data compression system and methods

Image analysis – Image compression or coding – Predictive coding

Reexamination Certificate


Details

C382S232000, C382S243000

active

06349150

FIELD OF THE INVENTION
The invention relates to data storage, data retrieval from storage devices, and data compression. The invention more particularly relates to reducing latency associated with data compression through predictive data compression.
BACKGROUND OF THE INVENTION
Many storage devices and systems utilize data compression to increase virtual storage capacity. The prior art is familiar with Advanced Data Compression products which accompany such storage devices, including Direct Access Storage Devices (DASD) and tape systems.
These products utilize compression algorithms which provide compression ratios of up to 5:1 but which also introduce data transfer latency: the time from when a given byte of data enters the compression logic until the corresponding compressed data is available for writing to the storage device. A typical first-byte latency through a prior art compression product on a 20 M-byte per second ESCON channel is approximately twenty-six microseconds (ESCON defines IBM's fiber interface, as known in the art).
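The quoted figure can be checked with simple arithmetic, assuming (as the discussion below suggests) that the latency is dominated by buffering a full 512-byte chunk before any output can be selected:

```python
# First-byte latency estimate for a store-and-forward compressor:
# a full 512-byte chunk must arrive before either version can be selected.
CHUNK_BYTES = 512
HOST_RATE = 20e6  # bytes per second (20 M-byte/s ESCON channel)

latency_us = CHUNK_BYTES / HOST_RATE * 1e6
print(f"{latency_us:.1f} microseconds")  # 25.6, close to the ~26 quoted
```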
The major cause of the above-described latency is the way in which prior art Advanced Data Compression logic processes data, which can either expand or compress. Specifically, if a chunk of data expands rather than compresses, the uncompressed data is stored instead.
FIG. 1 schematically illustrates prior art Advanced Data Compression logic 8. In FIG. 1, the host processor 10 transmits data for compression to the compressor 12 on a data link 14, such as a serial fiber optic link defined by IBM's ESCON interface. This data is divided into 512-byte blocks, or “chunks,” of data. The compressor 12 compresses the first block and writes that compressed block onto data bus 16 and into “Chunk RAM 2,” corresponding to a first random access memory or FIFO within logic 8. An uncompressed version of the same data chunk is also written into “Chunk RAM 0,” corresponding to a second random access memory or FIFO within logic 8. After the chunk data is written into both the compressed and uncompressed chunk RAMs, i.e., Chunk RAM 2 and Chunk RAM 0, respectively, the number of bytes in these memory locations is compared by compression logic 8, and the one with the fewest bytes is selected and sent to the associated storage device 18.
If, for example, the chunk data within Chunk RAM 2 has the fewest bytes, that data is routed to device 18 along bus 20, through multiplexer 22, and onto a slower 12.5 M-byte per second data path 24. While the data from Chunk RAM 2 is being sent to device 18, Chunk RAM 0 and Chunk RAM 1 (a third random access memory or FIFO within logic 8) are allocated to receive the next 512-byte data chunks of uncompressed and compressed data, respectively, from the host 10. When Chunk RAMs 0 and 1 are filled, one of the following two scenarios can occur:
(1) If Chunk RAM 2 has had time to empty, Chunk RAM 0 or 1 (whichever has fewer bytes) will be selected, as above, and its data will begin transferring to device 18. The remaining two Chunk RAMs are then reallocated to receive more data from the host 10;
(2) If Chunk RAM 2 has not had sufficient time to empty, Chunk RAMs 0 and 1 will hold their data and no more data can be taken from the host 10. Eventually, when Chunk RAM 2 empties, the sequence of scenario (1) is followed.
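The prior-art select-after-full-chunk behavior can be sketched in Python; `zlib` here merely stands in for the hardware compression algorithm, and the function and variable names are illustrative rather than taken from the patent:

```python
import zlib  # stand-in for the hardware compression algorithm

CHUNK_BYTES = 512

def select_version(chunk: bytes):
    """Prior-art logic: buffer BOTH versions of a complete chunk
    (compressed in Chunk RAM 2, uncompressed in Chunk RAM 0), then pick
    whichever holds fewer bytes. No selection is possible until the
    entire 512-byte chunk has arrived -- the source of the latency."""
    compressed = zlib.compress(chunk)
    if len(compressed) < len(chunk):
        return "compressed", compressed
    return "uncompressed", chunk  # data expanded; store it raw

# Repetitive data compresses well, so the compressed version is selected.
kind, data = select_version(b"\x00" * CHUNK_BYTES)
print(kind, len(data) < CHUNK_BYTES)  # compressed True
```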
This process of allocating two 512-byte chunks of data into Chunk RAMs as compressed and uncompressed versions, and then comparing and selecting the version for storage into device 18, repeats sequentially through successive data chunks for the entire data transfer from the host 10. Data transfer latency persists because all 512 bytes of chunk data must be received before determining which version (compressed or uncompressed) of the chunk data should be transferred to storage device 18. The latency actually increases the likelihood that scenario (2) will occur. Furthermore, because most systems incorporating logic 8 provide a host-side data rate on path 14 that is faster than the device-side data rate on path 24, scenario (2) is even more likely. By way of example, certain prior art systems have a host-side data rate of 20 M-bytes per second and a device-side data rate of 12.5 M-bytes per second.
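The imbalance is easy to quantify from the example rates above; this quick check assumes the worst case of a chunk that must be drained uncompressed:

```python
# Why scenario (2) tends to recur: with the example rates, a chunk drains
# to the device more slowly than the next chunk arrives from the host.
CHUNK_BYTES = 512
HOST_RATE = 20e6      # bytes/s on host-side path 14
DEVICE_RATE = 12.5e6  # bytes/s on device-side path 24

fill_us = CHUNK_BYTES / HOST_RATE * 1e6     # ~25.6 us to receive a chunk
drain_us = CHUNK_BYTES / DEVICE_RATE * 1e6  # ~41.0 us to drain it uncompressed
print(f"fill {fill_us:.1f} us, drain {drain_us:.1f} us")
```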
Data compression systems would thus benefit from systems and methods which reduce data compression latency such as described above; and one object of the invention is to provide such systems and methods.
Another object of the invention is to provide an application specific integrated circuit (ASIC) which provides improvements to Advanced Data Compression logic to reduce data compression latency.
Yet another object of the invention is to provide a data compression system with predictive data compression to reduce data compression latency.
These and other objects will become apparent in the description that follows.
SUMMARY OF THE INVENTION
U.S. Pat. Nos. 5,602,764 and 5,247,638 relate to data compression for storage devices and provide useful background information for the invention. U.S. Pat. Nos. 5,602,764 and 5,247,638 are thus herein incorporated by reference.
In one aspect, the invention provides data compression logic to compress data from a host to a connected storage device. A compressor implements a compression algorithm on the data, and memory stores compressed and uncompressed data chunks from the compressor. Predictive digital logic assesses memory usage within the memory while uncompressed and compressed chunks are stored within the memory. The predictive digital logic drains either the uncompressed chunks or compressed chunks into the storage device based upon compression efficiency of the compressed data as compared to the uncompressed data and prior to complete loading of the chunks within the memory.
In one aspect, the memory includes Chunk RAMs, one RAM storing the compressed data chunks and one RAM storing the uncompressed data chunks. Preferably, there are three Chunk RAMs: two RAMs receive data chunks while the third RAM drains into the storage device.
In another aspect, a multiplexer routes selected chunk data from one of the RAMs to the storage device.
In still another aspect, the predictive digital logic includes calculation logic to dynamically calculate compression efficiency in selecting one RAM for draining. The calculation logic can dynamically calculate and compare compression efficiency as a linear function, for example.
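The patent does not spell out the linear function here, but the idea can be sketched as follows: while a chunk is still loading, compare the running compressed byte count against a linear function of the bytes received so far, and commit to a drain path as soon as the comparison is decisive. The `slope` and `min_loaded` parameters below are illustrative assumptions, not values from the patent:

```python
from typing import Optional

def predict_drain(compressed_len: int, loaded_len: int,
                  slope: float = 0.9, min_loaded: int = 128) -> Optional[str]:
    """Sketch of predictive selection against a linear threshold.
    compressed_len: compressed bytes produced so far for this chunk.
    loaded_len:     uncompressed bytes of the chunk received so far.
    Returns which version to start draining, or None to keep waiting."""
    if loaded_len < min_loaded:
        return None                  # too little data to predict from
    if compressed_len <= slope * loaded_len:
        return "compressed"          # compressing well: drain early
    if compressed_len >= loaded_len:
        return "uncompressed"        # expanding: drain the raw data early
    return None                      # inconclusive; continue loading

print(predict_drain(100, 256))  # compressed
print(predict_drain(300, 256))  # uncompressed
print(predict_drain(50, 64))    # None
```

Committing before the full 512-byte chunk has loaded is what removes the store-and-forward latency of the prior art logic.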
In another aspect, the invention provides a data compression system to compress data from a host to a cache memory. A bus interface communicates with a host bus connected to the host; and a DMA controller reformats the data into data chunks. Compressor logic compresses the reformatted data and stores and selects compressed or uncompressed data chunks for transfer to the cache memory. The compressor logic has predictive digital logic to compare the uncompressed and compressed data chunks and to select either the uncompressed or compressed data chunk to drain into the cache memory based upon compression efficiency of the compressed data chunk as compared to the uncompressed data chunk.
In one aspect, a FIFO is used as a buffer to store reformatted data prior to compression by the compressor.
The invention also provides a method of predicting data compression for early draining of a data buffer within compression logic, the logic of the type which compresses host data for storage into cache memory, including the steps of comparing a storage capacity of compressed and uncompressed data within the logic as the compressed and uncompressed data loads into logic memory; and selecting either the uncompressed or compressed data for transfer to the cache memory based upon compression efficiency of the compressed data as compared to the uncompressed data.
In another aspect, the step of selecting includes the step of dynamically calculating compression efficiency in comparison to a linear function.
The invention is next described further in connection with preferred embodiments.
