Compression of buffered data
Reexamination Certificate
1999-05-03
2003-09-23
Gaffin, Jeffrey (Department: 2182)
Electrical computers and digital data processing systems: input/
Input/output data processing
Input/output data buffering
C710S004000, C710S020000, C710S033000, C710S053000, C710S054000, C710S055000, C710S056000, C710S057000, C710S068000, C710S310000
Reexamination Certificate
active
06625671
ABSTRACT:
FIELD OF THE INVENTION
The present invention relates in general to methods and apparatus for electronic data communication, and particularly to a method of compressing buffered data being transmitted between digital devices.
BACKGROUND OF THE INVENTION
In conventional networked systems, data is often transmitted between digital devices over a variety of protocols. Switching platforms exist that are capable of converting and switching data from one protocol to another. For instance, data can be transmitted by an IBM mainframe over an ESCON channel protocol. A switching platform can receive such an ESCON data stream and redirect the data over a different channel protocol such as SCSI, or even a network protocol such as ATM. Using such devices, IBM mainframes can communicate with otherwise incompatible devices, such as SCSI storage devices, using a known protocol like ESCON.
Switching platforms may also allow remote access to devices that are physically located beyond the limits of a particular communications protocol. For instance, local ESCON data streams can be converted to a Wide Area Network protocol such as ATM and easily be transmitted across the continent. A separate switching platform at the receiving site can receive the data stream and convert it back to the original or another protocol, whatever is appropriate for the receiving device.
When converting data streams between protocols and transmitting the data streams over large distances, it is important that the data flow between devices be as efficient as possible. To maximize transmission speeds, it is often useful to compress the data before transmission. It is well known, for instance, to compress all data in a data stream in order to speed transmission between two like devices. This type of transmission is well documented in protocols such as those used for modem-to-modem data compression.
Unfortunately, several difficulties prevent the use of the same type of data compression techniques over channel or network communications. First, data transmitted over a channel or network must contain destination information that can be understood by devices on the channel or network. This destination information cannot be compressed or encrypted, or else other devices on the channel or network will not recognize the data and the data cannot be properly routed.
Second, data is often transmitted in fragments that are so small that fragment-by-fragment compression would sometimes slow down data transmission. Thus, it is important to selectively compress only those fragments large enough that compression actually reduces the transmission time.
Third, the receiving device will usually expect to receive the data uncompressed. As a result, compression should only take place where a mechanism exists at the receiving end to decompress the data before the data is presented to the receiving device. If no such mechanism exists, the data stream should not be compressed.
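To make the first of these difficulties concrete, the sketch below compresses only the payload of a hypothetical frame while leaving the destination header in the clear, so that intermediate devices can still route it. The frame layout, field names, and the use of zlib's compress() are assumptions chosen purely for illustration; the patent text does not specify a frame format or a compression algorithm.

    /*
     * Illustrative sketch only: the frame layout and the use of zlib are
     * assumptions, not details taken from the patent. Link with -lz.
     */
    #include <stdint.h>
    #include <string.h>
    #include <zlib.h>

    /* Hypothetical frame header: routing information stays uncompressed so
     * that other devices on the channel or network can still read it. */
    struct frame_header {
        uint32_t dest_id;      /* destination address, left in the clear      */
        uint32_t payload_len;  /* length of the (possibly compressed) payload */
        uint8_t  compressed;   /* nonzero if the payload is compressed        */
    };

    /*
     * Build an outgoing frame: copy the header through untouched and compress
     * only the payload. The caller must size 'out' to hold at least
     * sizeof(struct frame_header) + compressBound(payload_len) bytes.
     */
    static int build_frame(const struct frame_header *hdr,
                           const uint8_t *payload, uint32_t payload_len,
                           uint8_t *out, uint32_t *out_len)
    {
        struct frame_header out_hdr = *hdr;
        uLongf comp_len = compressBound(payload_len);

        if (compress(out + sizeof(out_hdr), &comp_len,
                     payload, payload_len) != Z_OK)
            return -1;

        out_hdr.compressed  = 1;
        out_hdr.payload_len = (uint32_t)comp_len;
        memcpy(out, &out_hdr, sizeof(out_hdr));   /* header remains readable */
        *out_len = (uint32_t)(sizeof(out_hdr) + comp_len);
        return 0;
    }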
What is needed, then, is a compression mechanism that overcomes these difficulties by compressing message packets in such a way that header information remains readable by other devices, and in a way that compression can be selectively engaged both on a fragment-by-fragment basis and on a path-by-path basis.
SUMMARY OF THE INVENTION
The present invention meets this need by providing a mechanism for compressing data in a data stream without compressing destination information. The present invention is also capable of selectively compressing only those packets that are large enough that the time required for data compression will be offset by the decreased transmission time. Finally, the present invention incorporates this compression technique in a multi-port gateway switch that can selectively engage compression only for those destinations that have the capability to decompress the data.
Specifically, the present invention teaches a method and apparatus for implementing efficient data compression. According to one embodiment, a data compression module comprises compression control circuitry and a plurality of compression engines. The compression module receives instructions for compressing or decompressing data through a control block interface. FIFOs are provided for the compression engines to store unprocessed and processed data. The compression control circuitry manages the data flow between the FIFOs and external buffer memory. The status of the compression or decompression task is reported through the control block.
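The control block interface described above might be modeled roughly as shown below. This is a minimal sketch under assumed field names, widths, and status codes; the actual control block layout, FIFO depths, and number of compression engines are not disclosed in this text.

    /*
     * Hypothetical control block layout; field names and sizes are
     * assumptions made for illustration only.
     */
    #include <stdint.h>

    enum cb_opcode { CB_OP_COMPRESS = 1, CB_OP_DECOMPRESS = 2 };
    enum cb_status { CB_PENDING = 0, CB_DONE_OK = 1, CB_DONE_ERROR = 2 };

    /* One control block, written by the microprocessor into local memory and
     * read by the compression control circuitry, which moves data between
     * the external buffer memory and the compression engines' FIFOs. */
    struct compression_control_block {
        uint32_t opcode;     /* CB_OP_COMPRESS or CB_OP_DECOMPRESS              */
        uint32_t src_addr;   /* location of unprocessed data in buffer memory   */
        uint32_t src_len;    /* size of the unprocessed data                    */
        uint32_t dst_addr;   /* where processed data is to be stored            */
        uint32_t dst_len;    /* written back on completion: processed data size */
        uint32_t status;     /* written back on completion: CB_DONE_*           */
    };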
In a further embodiment, the compression module is incorporated in a switching apparatus for selectively encoding messages transmitted over a network or channel protocol. The apparatus includes a microprocessor, a local memory coupled to the microprocessor, a multi-ported data buffer memory coupled to the microprocessor, and a data compression module coupled to the buffer memory and the local memory. The microprocessor communicates with the compression module through control blocks stored in the local memory. Both unprocessed and processed data are stored in the multi-ported data buffer memory.
In a further embodiment, the switching apparatus supports multiple simultaneous network or channel interfaces. Each of these interfaces is able to convert data streams from a particular network or channel protocol to a common data protocol used for storing data in the buffer memory. The data being transmitted between the various interfaces can be compressed or left uncompressed, depending on the compressibility of the data, the capabilities of the receiving device, or the size of the data segments being transmitted.
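The selection criteria just described can be summarized in a small decision helper such as the one sketched below. The function name, the size threshold, and the per-path capability flag are assumptions for illustration; compressibility is generally only known after a compression attempt, so it appears here only as a comment about a fallback.

    /*
     * Illustrative decision helper; the names, the 512-byte threshold, and
     * the capability flag are assumptions, not values taken from the patent.
     */
    #include <stdbool.h>
    #include <stddef.h>

    /* Fragments below this size are assumed to cost more to compress than
     * the compression would save in transmission time. */
    #define MIN_COMPRESS_BYTES 512

    /* Per-destination path information kept by the switch (assumed layout). */
    struct path_info {
        bool peer_can_decompress;  /* remote end can decompress the data */
    };

    /* Decide whether a buffered fragment should be handed to the compression
     * module before transmission. If the compressed result turns out to be
     * no smaller than the original (poorly compressible data), the sender
     * can still fall back to transmitting the fragment uncompressed. */
    static bool should_compress(const struct path_info *path, size_t fragment_len)
    {
        if (!path->peer_can_decompress)         /* path-by-path selection     */
            return false;
        if (fragment_len < MIN_COMPRESS_BYTES)  /* fragment-by-fragment check */
            return false;
        return true;
    }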
A method is also described that allows data in the buffer memory to be selectively compressed. This method includes the steps of writing a control block to the local memory; reading the control block to determine the location and size of the uncompressed data, as well as the location at which the compressed data is to be stored; compressing the data and storing it at the appropriate location in the buffer memory; and reporting the status and success of the compression by writing over a portion of the control block.
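The sequence of steps just described might look roughly like the host-side sketch below, which reuses the assumed control block layout from the earlier sketch. Memory-mapped access to the control block and polling for completion are additional assumptions; the text does not specify how completion is signaled.

    /*
     * Host-side sketch of the described method, under the same assumed
     * control block layout as before; polling for completion is an assumption.
     */
    #include <stdint.h>

    enum cb_opcode { CB_OP_COMPRESS = 1 };
    enum cb_status { CB_PENDING = 0, CB_DONE_OK = 1, CB_DONE_ERROR = 2 };

    struct compression_control_block {
        uint32_t opcode, src_addr, src_len, dst_addr, dst_len, status;
    };

    /* 'cb' points at a control block in the local memory shared with the
     * compression module. Returns the compressed size, or -1 on failure. */
    static int compress_buffered_data(volatile struct compression_control_block *cb,
                                      uint32_t src_addr, uint32_t src_len,
                                      uint32_t dst_addr)
    {
        /* Step 1: write a control block describing the request. */
        cb->src_addr = src_addr;   /* uncompressed data location in buffer memory */
        cb->src_len  = src_len;    /* uncompressed data size                      */
        cb->dst_addr = dst_addr;   /* where the compressed data is to be stored   */
        cb->status   = CB_PENDING;
        cb->opcode   = CB_OP_COMPRESS;

        /* Steps 2-3: the module reads the block, moves the data through its
         * FIFOs and compression engines, and stores the result in buffer
         * memory at dst_addr. */

        /* Step 4: the module reports status and success by writing over a
         * portion of the control block; here the host simply polls for it. */
        while (cb->status == CB_PENDING)
            ;   /* busy-wait for illustration only */

        return cb->status == CB_DONE_OK ? (int)cb->dst_len : -1;
    }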
REFERENCES:
patent: 5627995 (1997-05-01), Miller et al.
patent: 5778255 (1998-07-01), Clark et al.
patent: 6145069 (2000-11-01), Dye
Jim Kunz, Channel Link Compression Module (CCMI), Hardware Specifications, Sep. 23, 1996.
CNT, Channelink Web Page.
Cain Richard L.
Collette William C.
Flattum Steve
Johnson Brian A.
Kunz Jim
Beck & Tysver P.L.L.C.
Computer Network Technology Corporation
Farooq Mohammad O.
Gaffin Jeffrey