Method and apparatus of extracting text from document image...

Image analysis – Pattern recognition – Feature extraction

Reexamination Certificate


Details

C382S237000, C382S167000

active

07813554

ABSTRACT:
The present invention discloses an apparatus for extracting text from a document image with a complex background, a corresponding method, and a computer program and storage medium therefor. The preferred method comprises: a first edge-extracting step of extracting, from the image, edges whose contrast exceeds a first contrast threshold; a searching step of searching for connected edges among the extracted edges; and a second edge-extracting step of extracting edges whose contrast exceeds a second contrast threshold when the number of pixels in the searched connected edges exceeds a predetermined size, wherein the second contrast threshold is higher than the first contrast threshold.
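The two-pass flow in the abstract can be sketched as follows. This is a minimal illustrative reconstruction, not the patent's actual implementation: the gradient measure, function names, and parameters (`t_low`, `t_high`, `max_size`) are all assumptions made for clarity.

```python
# Hypothetical sketch of the abstract's two-threshold edge extraction:
# a low-threshold first pass, connected-edge search, and a high-threshold
# second pass applied only to components larger than a predetermined size.
from collections import deque

def gradient_magnitude(img):
    """Approximate per-pixel contrast as the larger absolute difference
    to the right and bottom neighbours (an illustrative choice)."""
    h, w = len(img), len(img[0])
    grad = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = abs(img[y][x] - img[y][x + 1]) if x + 1 < w else 0
            gy = abs(img[y][x] - img[y + 1][x]) if y + 1 < h else 0
            grad[y][x] = max(gx, gy)
    return grad

def connected_components(mask):
    """4-connected components of True pixels, found by BFS."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                comp, q = [], deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                comps.append(comp)
    return comps

def extract_text_edges(img, t_low, t_high, max_size):
    """First edge-extracting step with t_low; connected components whose
    pixel count exceeds max_size are re-thresholded with t_high > t_low."""
    grad = gradient_magnitude(img)
    h, w = len(img), len(img[0])
    mask = [[grad[y][x] > t_low for x in range(w)] for y in range(h)]
    edges = set()
    for comp in connected_components(mask):
        if len(comp) > max_size:
            # second edge-extracting step: keep only high-contrast pixels
            edges.update((y, x) for (y, x) in comp if grad[y][x] > t_high)
        else:
            edges.update(comp)
    return edges
```

The intended effect is that large low-contrast structures (typically background clutter) are filtered by the stricter second threshold, while small connected edges (typically text strokes) survive the first pass unchanged.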

REFERENCES:
patent: 7024043 (2006-04-01), Fujimoto et al.
patent: 7437002 (2008-10-01), Tanaka
patent: 2000020714 (2000-01-01), None


