Two dimensional context model obtained without a line buffer for

Image analysis – Pattern recognition – Context analysis or word recognition

Patent

Details

382/247, G06K 9/00

Patent

active

057456038

ABSTRACT:
An encoding method that allows the use of previous line data without requiring multiple accesses to the image or the use of a line buffer. The image to be compressed is divided into columns. Each column is traversed vertically, and each row within a column is compressed by moving horizontally across it. After a row is compressed, the next row is transferred to the compressor. Once a column is compressed, the compressor continues at the top of the next column, and this process repeats until all the columns have been compressed. Because the image is traversed vertically, the stored information about the previous row is limited to the width of the column. This limited storage of the previous row allows the use of a two dimensional context model, which greatly increases the compression ratio.
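
The traversal described in the abstract can be sketched in code. The Python sketch below is only an illustration of the idea under stated assumptions, not the patented implementation: the names compress, strip_width, and the (left, above) context are hypothetical, and the entropy coder that would consume the (context, pixel) pairs is omitted. It shows how processing the image in vertical column strips lets a two dimensional context be formed while buffering only one strip's width of the previous row instead of a full image-width line buffer.

# Minimal sketch (assumed names, not from the patent): column-strip traversal
# that exposes a two-dimensional context with only a strip-width buffer.

def compress(image, strip_width=16):
    """Yield (context, pixel) pairs for a downstream entropy coder.

    image: list of rows, each row a list of pixel values.
    Only strip_width pixels of the previous row are retained at any time.
    """
    height = len(image)
    width = len(image[0]) if height else 0

    for col_start in range(0, width, strip_width):       # next column strip
        col_end = min(col_start + strip_width, width)
        prev_row = [0] * (col_end - col_start)            # strip-width buffer only

        for y in range(height):                           # traverse the strip top to bottom
            cur_row = image[y][col_start:col_end]
            left = 0
            for x, pixel in enumerate(cur_row):           # move horizontally across the row
                above = prev_row[x]
                context = (left, above)                   # two dimensional context
                yield context, pixel
                left = pixel
            prev_row = cur_row                            # previous row limited to strip width


if __name__ == "__main__":
    img = [[(x + y) % 2 for x in range(8)] for y in range(4)]
    for ctx, px in compress(img, strip_width=4):
        pass  # feed (ctx, px) to an arithmetic/entropy coder here

The design point illustrated is that the memory cost of the vertical context is bounded by the strip width rather than the image width, which is what makes a two dimensional context model practical without a line buffer.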

REFERENCES:
patent: 4463380 (1984-07-01), Hooks, Jr.
patent: 4785349 (1988-11-01), Keith et al.
patent: 5299197 (1994-03-01), Schlafly
patent: 5381145 (1995-01-01), Allen et al.
patent: 5442458 (1995-08-01), Rabbani et al.

Profile ID: LFUS-PAI-O-1540879
