Text image deblurring by high-probability word selection

Image analysis – Image enhancement or restoration – Focus measuring or adjusting

Reexamination Certificate


Details

Classification: C382S290000

Type: Reexamination Certificate, active

Patent Number: 06282324

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates to a computerized system for analyzing blurred printed text by comparing the text, on a word-by-word basis, with stored characteristics of text and fonts.
BACKGROUND OF THE INVENTION
In some areas of image recognition, blurred messages are encountered. The need to read these occasional anomalies is obvious, but present means often require time-consuming digital procedures using various algorithms such as Laplacian filtering, high-pass filtering, and others currently available.
One of the “standard” approaches, both optical and digital, is to use an inverse filter. That is, in an optical system or its digital equivalent, one takes a Fourier Transform of the blurred image and places a filter, whose character is to be determined, in the Fourier or spatial frequency plane. If properly designed, the filter, upon reimaging (taking another Fourier Transform), will bring a degree of restoration to the image, rendering it understandable; perfect restoration (in one or more operations) is not necessary, and sometimes not even possible. The basis of restoration is summarized in the following sequence of equations:
g(x₂, y₂) = complex amplitude of image
h(x₁, y₁; x₂, y₂) = impulse response
f(x₁, y₁) = complex amplitude of object
g = f * h
G = F·H
G·H⁻¹ = F·H·H⁻¹
G·H⁻¹ = F (Restored image)
where the capital letters refer to the Fourier Transforms of the corresponding functions and (*) denotes convolution. The result, in principle, is the inverse filter which, when inserted in the Fourier plane, should provide image restoration.
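As an illustration only (not taken from the patent), the sequence above can be sketched numerically as follows, assuming the impulse response h is known and adding a small regularizing constant so that division by near-zero values of H does not blow up:

```python
import numpy as np

def inverse_filter(blurred, psf, eps=1e-3):
    """Sketch of the restoration sequence above: G = F·H, so F ≈ G·H⁻¹.

    blurred : 2-D array, the recorded image g(x2, y2)
    psf     : 2-D array of the same shape, the impulse response h (centered)
    eps     : small constant; a regularized inverse is an assumption here,
              since a bare 1/H is unstable wherever H ≈ 0
    """
    G = np.fft.fft2(blurred)                          # Fourier Transform of the image
    H = np.fft.fft2(np.fft.ifftshift(psf))            # transfer function of the blur
    F_hat = G * np.conj(H) / (np.abs(H) ** 2 + eps)   # ≈ G·H⁻¹
    return np.real(np.fft.ifft2(F_hat))               # reimaging: restored f(x1, y1)
```

With eps taken to zero this reduces to the plain inverse filter G·H⁻¹ described above.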
In FIGS. 1A and 1B we can see pictorially what is done. In FIG. 1A we have the absolute value of amplitude for an image, with the modulus of the inverse filter shown in FIG. 1B. In the simplest case the first and third orders would have negative phase and the second and fourth, positive. In reality the spectrum amplitude and phase are much more complicated in distribution throughout the spatial frequency domain.
Much work has been and is being done, principally in the digital analysis world, with such techniques as contrast enhancement routines, constrained least squares filtering, extended filters, filters optimizing mean square error, and other extensions or alterations of the Wiener filter. The work also includes the standard digital fare: high-pass filtering with convolution matrices, median filters wherein each pixel is assigned the median of its eight neighbors (in a 3×3 matrix), and Kalman filtering with various kernels. In other approaches, adaptive filtering is performed, a technique of performing a large number of iterations of, in sequence, Fourier Transform, assessment, modification, inverse transform, assessment, Fourier Transform, modification, and so forth. A priori knowledge or good guessing drives the modifications in the sequence. In some iterative routines, the investigator assumes that the degradation must lie within a set of parameter bounds and uses these bounds to make the appropriate modifications.
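As one concrete instance of the “standard digital fare” mentioned above, here is a minimal sketch (illustrative, not the patent's implementation) of a median filter in which each interior pixel is replaced by the median of its eight 3×3 neighbors:

```python
import numpy as np

def median_of_neighbors(img):
    """Replace each interior pixel with the median of its eight neighbors
    in a 3x3 window; the center pixel itself is excluded, as in the text."""
    out = img.astype(float).copy()
    rows, cols = img.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            window = img[r - 1:r + 2, c - 1:c + 2].astype(float).ravel()
            neighbors = np.delete(window, 4)      # drop the center pixel
            out[r, c] = np.median(neighbors)
    return out
```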
Although the system of my co-pending application Ser. No. 08/351,707 is capable of restoring blurred images, it is believed that the present invention, directed to word processing, is more efficient. This is because larger segments generally require the use of Fourier plane processing, in which case we would process signals like:
(w₁ + w₂ + w₃ + …)(w₁ + w₂ + w₃ + …)* = |w₁|² + |w₂|² + … + w₁w₂* + w₂w₁* + …
i.e., we would have complex intraword and intrasentence cross terms in addition to the word and sentence terms themselves, making the process of sorting the amplitudes and phases of an inverse filter more demanding than most applications warrant.
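The point can be checked numerically: for a few hypothetical complex “word spectra” wᵢ, the conjugate product of their sum contains every pairwise cross term wᵢ·wⱼ*, not just the |wᵢ|² terms (an illustrative sketch, not part of the patent):

```python
import numpy as np

rng = np.random.default_rng(0)
# three hypothetical complex word spectra sampled at a few spatial frequencies
w = [rng.standard_normal(8) + 1j * rng.standard_normal(8) for _ in range(3)]

total = sum(w)
full_product = total * np.conj(total)              # (w1 + w2 + w3)(w1 + w2 + w3)*
diagonal = sum(wi * np.conj(wi) for wi in w)       # |w1|^2 + |w2|^2 + |w3|^2
cross = sum(w[i] * np.conj(w[j])
            for i in range(3) for j in range(3) if i != j)

# the full product equals the diagonal terms plus all the cross terms
assert np.allclose(full_product, diagonal + cross)
```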
BRIEF DESCRIPTION OF THE PRESENT INVENTION
This invention is a system for capturing and measuring the characteristics of blurred text imagery. Such measurements are used in conjunction with a priori information to enable blurred imagery to be interpreted with a high degree of correctness.
The a priori information is the type and point size of the fonts of interest. Information about the spacing of lines is also generally used, and optical parameters such as focal length, f/#, shutter characteristics, and film characteristics are generally known. The image is captured on a high-resolution CCD camera, and measurements are made by scanning the image in orthogonal directions with a series of scans. The vertical scan enables one to determine the line spacing as well as the degree of keystoning, or image tilt, at the time of recording. Using this developed information, the position of the defocused image when recorded can be determined.
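A minimal sketch of the vertical scan described above, under simple assumptions (a binarized image with 1 for ink, and an illustrative threshold): summing ink along each row gives a profile whose dark bands locate the text lines, and the gaps between successive band centers give the line spacing.

```python
import numpy as np

def line_spacing(binary_img, threshold=0.05):
    """Estimate text-line positions and spacing from a vertical scan.

    binary_img : 2-D array with 1 for ink and 0 for background
    threshold  : fraction of a row that must be ink to count as part of a line
    """
    row_profile = binary_img.mean(axis=1)          # vertical scan: ink per row
    in_line = row_profile > threshold
    centers = []                                   # center row of each text line
    start = None
    for r, flag in enumerate(in_line):
        if flag and start is None:
            start = r
        elif not flag and start is not None:
            centers.append((start + r - 1) / 2.0)
            start = None
    if start is not None:
        centers.append((start + len(in_line) - 1) / 2.0)
    spacings = np.diff(centers)                    # line-to-line spacing estimates
    return centers, spacings
```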
A horizontal scan through the lines of the imagery is used to obtain the following information (a minimal sketch of the word-isolation step appears after the list):
1—word length determination
2—word length location and isolation
3—sentence identification
4—paragraph identification
5—intercolumn location
6—single upper, center, and lower letter zone identification and location, and
7—digram and trigram identification and location.
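An illustrative sketch of items 1 and 2 above: a horizontal scan through one text line, with runs of blank columns longer than an assumed inter-word gap used to isolate words and measure their lengths (the gap size and the names used here are assumptions, not the patent's).

```python
import numpy as np

def isolate_words(line_img, min_gap=3):
    """Locate words along one text line from a horizontal scan.

    line_img : 2-D array covering a single text line, 1 for ink, 0 for background
    min_gap  : minimum run of blank columns treated as an inter-word space
    """
    col_profile = line_img.sum(axis=0)             # horizontal scan: ink per column
    inked = col_profile > 0
    words = []                                     # (start_col, end_col) per word
    start = None
    blank_run = 0
    for c, flag in enumerate(inked):
        if flag:
            if start is None:
                start = c
            blank_run = 0
        elif start is not None:
            blank_run += 1
            if blank_run >= min_gap:               # gap wide enough: close the word
                words.append((start, c - blank_run))
                start = None
                blank_run = 0
    if start is not None:                          # word running to the edge
        words.append((start, len(inked) - 1 - blank_run))
    return words                                   # word length = end - start + 1
```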
The letters of the alphabet can be divided into three groups according to whether their structure extends vertically upward or downward from a central band. Capitals all extend upward in standard and PC Multimate-like word processing fonts. Computer fonts like the “IBM” 5×7 pixel font and the Japanese matchstick font are a single-zone set of pixels occupying the full letter extent, so they do not fall within the fonts to be described. The letters of interest can be divided into three zones, designated upper, central, and lower, as shown in the table below. Note that the center zone is dimensionally twice the upper or lower zone; in some experiments the center had 10 scan lines through it while, of course, each of the other two then had five. It should be noted that all capitals are upper zone letters.
Letters Extending to Upper Zone
b d f h i j k l t
A B C D E F G H I J K L M
N O P Q R S T U V W X Y Z
Letters in Central Zone
a c e m n o r s u v w x z
Letters Extending to Lower Zone
g j p q y
Note that only “j” extends into the outer two zones. Thus, the detection of letter-presence in the upper and lower zones at the same position along a scan parallel to the word line immediately identifies the presence of a “j” in the word. Similarly, identification of a one-letter word as an upper zone element yields the narrow choice A or I.
In some of the frequency-of-occurrence discussions, the terminology “lcuu” is used to refer to the lower (l), center (c), and upper (u) zones, as in the word “yolk”.
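A minimal sketch of the zone bookkeeping implied by the table and the “lcuu” notation, using the letter groupings listed above; the function names and the single-code choice for “j” are illustrative assumptions, not the patent's:

```python
# Zone membership taken from the table above ("j" appears in both outer zones).
UPPER = set("bdfhijklt") | set("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
LOWER = set("gjpqy")

def letter_zone(ch):
    """Single-character zone code 'u', 'c', or 'l'.
    'j' reaches both outer zones; it is coded 'u' here (an arbitrary choice)
    and caught separately by the both-zones test below."""
    if ch in LOWER and ch not in UPPER:
        return "l"
    if ch in UPPER:
        return "u"
    return "c"    # central-zone letters: a c e m n o r s u v w x z

def zone_signature(word):
    """Zone code per letter, e.g. zone_signature('yolk') == 'lcuu'."""
    return "".join(letter_zone(ch) for ch in word)

def contains_j(upper_hits, lower_hits):
    """A position flagged in both the upper and lower zone scans can only be 'j'."""
    return any(u and l for u, l in zip(upper_hits, lower_hits))

# A one-letter word identified as an upper-zone element narrows to 'A' or 'I'.
single_letter_upper = [w for w in ("a", "A", "I") if letter_zone(w) == "u"]

assert zone_signature("yolk") == "lcuu"
assert single_letter_upper == ["A", "I"]
```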
In addition to zone structure, the invention relies upon known frequency of letters, average paragraph size, and average word size.
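Putting the pieces together, the high-probability word selection of the title can be sketched as a lookup: among dictionary words whose zone signature (and hence length) matches the measurement, choose the one with the highest known frequency of occurrence. The miniature lexicon and its frequencies below are made-up illustrations, not data from the patent:

```python
UPPER = set("bdfhijklt") | set("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
LOWER = set("gjpqy")

def zone_signature(word):
    """Per-letter zone code, as in the previous sketch ('yolk' -> 'lcuu')."""
    return "".join("l" if c in LOWER and c not in UPPER
                   else "u" if c in UPPER else "c"
                   for c in word)

def select_word(signature, lexicon):
    """Return the highest-frequency lexicon word matching the measured signature."""
    matches = [(freq, w) for w, freq in lexicon.items()
               if zone_signature(w) == signature]
    return max(matches)[1] if matches else None

# illustrative miniature lexicon: word -> made-up relative frequency
tiny_lexicon = {"then": 3100.0, "than": 1800.0, "yolk": 5.0, "onion": 15.0}

print(select_word("uucc", tiny_lexicon))   # 'then' and 'than' both match; 'then' wins on frequency
print(select_word("lcuu", tiny_lexicon))   # -> 'yolk'
```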


