Method and apparatus for enhanced decompressor parsing

Coded data generation or conversion – Digital code to digital code converters

Reexamination Certificate


Details

C341S067000

Reexamination Certificate

active

06310563

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates generally to data compression systems, and more specifically to an improved method and apparatus for coding and parsing compressed data for the purpose of avoiding system bottlenecks that prevent optimum throughput.
2. Discussion of the Prior Art
Data compression has become increasingly vital in today's computer systems due to the high demand for data transmission and storage capacity. In particular, main memory compression is now both feasible and desirable with the advent of parallel compression using a cooperative dictionary, as described in commonly-owned U.S. Pat. No. 5,729,228 to Franaszek et al. entitled PARALLEL COMPRESSION AND DECOMPRESSION USING A COOPERATIVE DICTIONARY, incorporated herein by reference. Parallel compression is a relatively new art in the field of compression. Its main concept is to divide a block of uncompressed data into multiple sectors and then assign them to individual engines for both compression and decompression with all engines sharing a cooperative dictionary such that the compression ratio is close to that of a single-engine design. This results in much better latency and throughput than the previous single-engine designs, thus making main memory compression feasible.
Nevertheless, significant improvements are still needed, particularly in the decompression process, in order to keep pace with the rapid acceleration in today's processor speeds. In particular, a processor cannot tolerate high latency or low throughput while accessing data from the main memory through the decompressor. In the past, main memory decompression has often been limited in throughput performance primarily by the critical timing paths within the decompressor's parser. The main function of the decompressor parser is to extract consecutive data phrases from the incoming compressed data stream. These phrases comprise certain predetermined combinations of raw characters and variable-length strings, which are eventually decoded into uncompressed data bytes in the later stages of the decompressor. The parser must be able to parse phrases quickly enough to sustain the decompression engine pipeline. Specifically, referring to FIG. 2, within each clock cycle the parser utilizes an address pointer to extract a new phrase from the parser data input register, determines its type and bit length, and then calculates the address pointer for the next phrase. This process is cumbersome and usually results in critical paths running through multiple logic levels within the barrel shifters, adders, encoders and multiplexers. As a result, it limits the highest achievable decompression clock rate for a given technology and compression algorithm.
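The per-cycle parsing step described above can be sketched in software. This is a minimal illustration only: the one-bit flag, the 8-bit character width, and the 12-bit string-reference width are assumptions for the sake of the example, not the patent's actual coding scheme.

```python
# Hypothetical sketch of the parser's per-cycle work: read a phrase at the
# current bit pointer, classify it, and advance the pointer by its length.
# Phrase codes and field widths here are illustrative, not the patented scheme.

def parse_phrases(bits: str):
    """Parse a compressed bit string into (type, payload) phrases."""
    phrases = []
    ptr = 0  # address pointer into the compressed stream
    while ptr < len(bits):
        if bits[ptr] == '0':           # '0' prefix: raw 8-bit character
            payload = bits[ptr + 1:ptr + 9]
            phrases.append(('char', payload))
            ptr += 9                   # 1 flag bit + 8 data bits
        else:                          # '1' prefix: 12-bit string reference
            payload = bits[ptr + 1:ptr + 13]
            phrases.append(('string', payload))
            ptr += 13                  # 1 flag bit + 12 reference bits
    return phrases
```

In hardware, each loop iteration corresponds to one clock cycle, and the pointer update and payload extraction are the barrel-shifter and adder paths that form the critical timing path the invention seeks to shorten.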
It would thus be highly desirable to provide an enhanced method and apparatus which will improve the latency and throughput of the decompressor by simplifying the compression algorithm and its parsing mechanism, without sacrificing the overall compression ratio.
Moreover, the entire decompression process is controlled by a state machine having a certain number of states. These states transition from one to another in order to initiate or terminate various steps within the decompression pipeline, and they keep all decompression engines in a parallel configuration running synchronously with one another. Once the decompression process is initiated, any stall originating from the decompressor's input interface, any particular internal engine, or its output interface will also stall the entire pipeline for all engines. Thus, any stall downstream of the parser will also immediately stop the parser from parsing, degrading the overall decompressor's throughput performance. For example, if a cache controller is not ready to receive additional decompressed data, it will stop requesting data, which in turn stalls the entire decompressor's pipeline.
It would thus be additionally desirable to provide a method and apparatus which will improve the latency and throughput of the decompressor by isolating the operation of the parser stage in such a manner that a subsequent downstream stall will not stall the parser operation.
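One way to picture the desired isolation is a bounded buffer between the parser and the downstream engine: the parser keeps filling the buffer while the engine is stalled, and only pauses when the buffer itself is full. The following sketch is an assumption about the behavior, not the patent's hardware tokenizer design.

```python
# Minimal sketch of decoupling a producer (parser) from a stalling consumer
# (decompression engine) via a bounded buffer. This models the isolation
# behavior only; the patent's tokenizer is a hardware device.
from collections import deque

class TokenBuffer:
    def __init__(self, depth: int):
        self.depth = depth
        self.queue = deque()

    def parser_can_run(self) -> bool:
        # The parser stalls only when the buffer itself is full,
        # not whenever the downstream engine stalls.
        return len(self.queue) < self.depth

    def push(self, token) -> None:
        assert self.parser_can_run()
        self.queue.append(token)

    def pop(self):
        # Engine side: drains tokens when it is ready to accept them.
        return self.queue.popleft() if self.queue else None
```

With such a buffer, a downstream stall of a few cycles never reaches the parser as long as spare buffer depth remains.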
SUMMARY OF THE INVENTION
It is an object of the present invention to provide a method and apparatus for reducing and eliminating an operation bottleneck created by decompressor operations due to its engines' parser devices.
It is a further object of the invention to provide a method and apparatus for reducing and eliminating an operation bottleneck created by decompressor operations due to its engines' parser devices by employing a compression algorithm whose coding scheme generates an optimal set of coded phrases comprising characters, strings and combinations thereof, to facilitate simpler and faster parsing without sacrificing a good compression ratio.
It is a further object of the invention to provide a tokenizer device in between a parser device and its corresponding decompressor engine, that provides isolation of the parser and prevents any downstream stall occurring in the corresponding decompressor engine from stalling parser operation.
It is a further object of the invention to provide an enhanced parser device that is provisioned to match and sustain the bandwidth of the decompressor output by selecting an optimum set of compressed data “phrases” that meets certain performance enhancement criteria and prevents decompressor stalls that may be caused by the parser.
According to the invention, there is provided a data decompression system and methodology for decompressing information units received in compressed form, the data decompression system comprising: a parser device for extracting consecutive data phrases of compressed information units at a parser device engine rate, each data phrase comprising one of a predetermined set of characters, compressed strings or combinations thereof; a tokenizer device associated with the parser device for receiving the data phrases and converting each data phrase into fixed bit-length tokens of shortened bit-length, each token including an indicator for identifying its corresponding phrase as comprising one of a raw character or a string; and, a decompression engine for receiving the tokens and generating corresponding uncompressed information units for output thereof at a predetermined data bus rate, whereby the parser device, tokenizer device and decompression engine operate at engine rates equal to or greater than the predetermined data bus rate for maximizing decompression throughput and reducing decompression time.
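The summary above describes tokens of fixed bit-length, each carrying an indicator that marks its phrase as a raw character or a string. A fixed-width layout can be sketched as follows; the 16-bit token width and the single-bit indicator position are illustrative assumptions, not the claimed format.

```python
# Hypothetical fixed-width token layout: a 1-bit type indicator packed with
# the phrase payload into one fixed 16-bit token, so the downstream engine
# always consumes a constant-width unit. Field widths are illustrative.

TOKEN_BITS = 16
TYPE_CHAR, TYPE_STRING = 0, 1   # indicator: raw character vs. string phrase

def make_token(kind: int, payload: int) -> int:
    """Pack the type indicator and payload into one fixed-width token."""
    assert payload < (1 << (TOKEN_BITS - 1)), "payload exceeds token width"
    return (kind << (TOKEN_BITS - 1)) | payload

def token_kind(token: int) -> int:
    return token >> (TOKEN_BITS - 1)

def token_payload(token: int) -> int:
    return token & ((1 << (TOKEN_BITS - 1)) - 1)
```

Because every token has the same width, the engine's per-cycle consumption logic needs no variable-length shifting, which is what lets each stage run at a rate equal to or greater than the output data bus rate.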
Advantageously, such a method and system, by implementing a tokenizer between the parser device and its corresponding decompressor engine, enables the parser to concentrate on parsing a much smaller set of high-level phrases, which greatly simplifies the parser logic and helps tighten cell placement for better timing.
Furthermore, the system of the invention may be implemented in any type of compressed data transmission, storage and retrieval systems.


REFERENCES:
patent: 5325091 (1994-06-01), Kaplan et al.
patent: 5572206 (1996-11-01), Miller et al.
patent: 5608396 (1997-03-01), Cheng et al.
patent: 5729228 (1998-03-01), Franaszek et al.
patent: 6037983 (2000-03-01), Forbes
