Image analysis – Pattern recognition – Feature extraction
Reexamination Certificate
1999-09-09
2002-09-10
Au, Amelia M. (Department: 2623)
C382S105000, C382S199000, C382S267000, C358S462000, C358S463000
active
06449391
ABSTRACT:
FIELD OF THE INVENTION
The present invention relates to digital image pattern recognition and, more particularly, to a method for segmenting an image for accurate character recognition.
BACKGROUND OF THE INVENTION
Pattern recognition has been used for many industrial applications. Many statistical and syntactical techniques have been developed for the classification of patterns. Techniques from pattern recognition play an important role in machine vision for recognizing objects.
Numerical data and symbolic data can be recognized by a computer-based machine through pattern recognition, and machine vision applies such recognition to images.
Generally, an image may contain several objects and, in turn, each object may contain several regions corresponding to different parts of the object. For correct interpretation of an object, the object should be partitioned into multiple regions, each region corresponding to a divided part of the object.
In particular, an image containing characters, such as an automobile licence plate image, can be represented as a plurality of picture elements (hereinafter, “pixels”), each of which has a single intensity value. Characters of a licence plate image are typically formed by painting them against a light or dark colored background, so that pixels containing character information have a lower or higher intensity than those containing only background information.
In pattern recognition, all pixels are grouped in accordance with their corresponding regions. The respective grouped pixels are marked to indicate that they belong to a certain region. This process is called segmentation.
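As an illustration of grouping and marking pixels by region (one common approach, not necessarily the method of this patent), the sketch below labels the connected foreground regions of a small made-up binary image; the toy array is an assumption for demonstration only.

```python
import numpy as np
from scipy import ndimage

# Toy binary image: 1 = foreground (e.g., character strokes), 0 = background.
binary = np.array([
    [0, 1, 1, 0, 0, 0, 1],
    [0, 1, 1, 0, 0, 0, 1],
    [0, 0, 0, 0, 1, 0, 0],
    [1, 0, 0, 0, 1, 1, 0],
], dtype=np.uint8)

# Group connected foreground pixels and mark each with a region label (1, 2, ...).
labels, num_regions = ndimage.label(binary)

print(num_regions)   # number of regions found
print(labels)        # every pixel carries the label of the region it belongs to
```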
FIG. 1A illustrates an image of an automobile licence plate captured by a camera of a machine vision system used for recognizing objects. The image contains noise components 11 and 12. FIG. 1B illustrates a vertical projection result of a binary image of the automobile licence plate shown in FIG. 1A before performing an edge detection.
Referring to FIG. 1A, the machine vision system serves to digitize the captured image, i.e., to transform the analog signal provided from the camera into a plurality of digital words, each representing the intensity of a pixel of the captured image. Image intensities are most commonly represented with 256 different gray levels.
Generally, machine vision systems use a binary image rather than a gray-level image for segmentation. Because a binary image contains only two gray levels, machine vision systems using a binary image tend to be less expensive and faster than vision systems that operate on gray-level or color images.
Thus, prior to the vertical and/or horizontal projection process, the image represented with gray levels shown in FIG. 1A is converted into a binary image. Methods for making the binary image are disclosed in “THE POCKET HANDBOOK OF IMAGE PROCESSING ALGORITHMS IN C” by Harley R. Myler et al., Prentice Hall, pp. 239-240, published in 1995, and “MACHINE VISION” by Ramesh Jain et al., McGraw-Hill, Inc., pp. 25-31, published in 1995.
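For illustration only, a minimal binarization sketch in Python/NumPy: pixels darker than a threshold become character pixels (“1”s) and brighter pixels become background (“0”s), matching the dark-character-on-light-background case described here. The fixed threshold of 128 and the toy image are assumptions, not values taken from the cited handbooks.

```python
import numpy as np

def to_binary(gray: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Convert an 8-bit gray-level image (0-255) into a binary image.

    Pixels darker than `threshold` are treated as character pixels (1),
    brighter pixels as background (0) -- the dark-on-light case above.
    The fixed threshold of 128 is only an assumption for illustration;
    practical systems often choose it adaptively (e.g., Otsu's method).
    """
    return (gray < threshold).astype(np.uint8)

# Toy 4x6 gray-level image: low values are dark strokes, high values background.
gray = np.array([
    [250,  30,  30, 240, 250, 245],
    [248,  25,  28, 244, 251, 240],
    [252,  20,  26, 238,  40, 242],
    [249, 247, 250, 241,  35, 243],
], dtype=np.uint8)

print(to_binary(gray))
```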
Generally, image data are often corrupted by random variations in intensity values, which appear as noise (see regions 11 and 12 of FIG. 1A). Common types of noise 11 and 12 include salt-and-pepper noise, impulse noise, and Gaussian noise.
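To make such corruption concrete, the sketch below adds salt-and-pepper noise to a clean binary image by flipping a small random fraction of pixels; the 2% noise fraction and the toy image are arbitrary assumptions for illustration.

```python
import numpy as np

def add_salt_and_pepper(binary: np.ndarray, fraction: float = 0.02,
                        seed: int = 0) -> np.ndarray:
    """Flip a random fraction of pixels in a 0/1 binary image.

    Flipped background pixels become spurious foreground pixels (and vice
    versa); such isolated pixels are the kind of components 11 and 12
    that later get counted together with character pixels.
    """
    rng = np.random.default_rng(seed)
    noisy = binary.copy()
    mask = rng.random(binary.shape) < fraction
    noisy[mask] = 1 - noisy[mask]   # invert the selected pixels
    return noisy

clean = np.zeros((8, 32), dtype=np.uint8)   # all-background toy plate
noisy = add_salt_and_pepper(clean)
print(noisy.sum(), "noise pixels introduced")
```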
Regarding the licence plate image of FIG. 1A, the background image 13 may correspond to white pixels (for example, logic “0”s), while the character images 10 correspond to black pixels (for example, logic “1”s) in its binary image. In addition, the noise components 11 and 12 contained in the binary image may also correspond to black pixels (“1”s). For this reason, the pixels of noises 11 and 12 will be counted as black pixels (“1”s) together with the pixels of the respective characters 10 during the vertical projection process. In other words, the noise components are treated as character components. If the count value of any region is greater than a predetermined threshold Th, the region is classified as a character region. Accordingly, grossly erroneous segmentation will result from the noises 11 and 12 (see regions 14 and 15 of FIG. 1B). An example of image segmentation is disclosed in U.S. Pat. No. 5,253,304, issued to Yann A. LeCun et al.
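A minimal sketch of the projection-based classification described above: count the black pixels in each column and mark every run of columns whose count exceeds a threshold Th as a character region. The threshold value and the toy image are assumptions for illustration only; note how the isolated noise column is classified exactly like a character column.

```python
import numpy as np

def vertical_projection_segments(binary: np.ndarray, th: int):
    """Return (start, end) column ranges whose black-pixel count exceeds `th`.

    `binary` is a 0/1 image with 1 = black (character or noise) pixels.
    Noise pixels in a column are counted together with character pixels,
    which is why components such as 11 and 12 can push a background
    column above the threshold and produce erroneous segments.
    """
    counts = binary.sum(axis=0)             # vertical projection profile
    above = counts > th                      # columns classified as "character"
    segments, start = [], None
    for col, flag in enumerate(above):
        if flag and start is None:
            start = col
        elif not flag and start is not None:
            segments.append((start, col - 1))
            start = None
    if start is not None:
        segments.append((start, len(above) - 1))
    return segments

# Toy plate: two 3-column "characters" plus one isolated noisy column.
img = np.zeros((10, 20), dtype=np.uint8)
img[2:9, 3:6] = 1        # first character
img[2:9, 10:13] = 1      # second character
img[0:3, 17] = 1         # noise column (like component 11 or 12)
print(vertical_projection_segments(img, th=2))   # noise column is wrongly segmented
```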
Segmentation of an image can also be achieved by finding pixels that lie at the boundaries of regions. These pixels, called edges, can be found by looking at neighboring pixels. Most edge detectors use intensity characteristics as the basis for edge detection. Such an edge-detection-based method is illustrated in FIG. 2.
Referring to FIG. 2, the segmentation begins at step 20 by receiving grey-level image data. In step 21, edge detection is performed on the image data. Edges typically exist on the boundary between two different regions in an image. Detailed edge detection techniques are disclosed in, for example, “MACHINE VISION” by Ramesh Jain et al., McGraw-Hill, Inc., pp. 140-181, published in 1995. At step 22, the image is segmented through horizontal projection and/or vertical projection with thresholds.
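A rough Python sketch of this prior-art flow (step 20: receive a gray-level image; step 21: detect edges; step 22: project vertically for thresholding), using scipy.ndimage for the edge operator; the gradient-magnitude threshold of 100 is an illustrative assumption.

```python
import numpy as np
from scipy import ndimage

def edge_projection(gray: np.ndarray, edge_th: float = 100.0) -> np.ndarray:
    """Steps 21-22 of the FIG. 2 flow: detect edges, then project vertically.

    Edge pixels are taken where the Sobel gradient magnitude exceeds
    `edge_th` (an illustrative value); the returned profile is the
    per-column count of edge pixels, which step 22 compares against a
    threshold to pick candidate character regions.
    """
    g = gray.astype(float)
    gx = ndimage.sobel(g, axis=1)   # horizontal intensity derivative
    gy = ndimage.sobel(g, axis=0)   # vertical intensity derivative
    edges = np.hypot(gx, gy) > edge_th
    return edges.sum(axis=0)        # vertical projection of the edge map
```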
FIG. 3A illustrates detected edges of the automobile licence plate image of FIG. 1A, and FIG. 3B is a diagram illustrating a vertical projection result of the image of FIG. 3A. Referring to FIG. 3A, the edges are obtained by an edge detector, for example, a Sobel operator, a Prewitt operator, a Laplacian operator, or a Laplacian of Gaussian operator. The image of FIG. 3A has less noise than does the binary image of FIG. 1A. As shown in FIG. 3B, however, erroneous segmentation (see regions 34 and 35 of FIG. 3B) still occurs because of the remaining noise components 31 and 32.
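For reference, the Sobel operator named above can be written directly as a pair of 3x3 convolution kernels; the minimal NumPy/SciPy sketch below computes the gradient magnitude (the other operators mentioned differ only in their kernels). The helper name and the choice of SciPy's convolve are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

# Sobel kernels: horizontal and vertical intensity derivatives.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_magnitude(gray: np.ndarray) -> np.ndarray:
    """Gradient magnitude of a gray-level image using the Sobel operator."""
    g = gray.astype(float)
    gx = convolve(g, SOBEL_X)
    gy = convolve(g, SOBEL_Y)
    return np.hypot(gx, gy)
```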
SUMMARY OF THE INVENTION
It is an object of the present invention to provide a method for segmenting an image to obtain more accurate recognition of characters in the image.
In order to attain the above object as well as other objects, there is provided a method for segmenting an image formed of a plurality of pixels, comprising the steps of detecting edges of the image, scanning rows of the image to discriminate between real edges and noises, eliminating the noises of the image, projecting the image, and segmenting the image.
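The summary above does not specify how the row scan distinguishes real edges from noise. Purely as a hypothetical sketch, the version below assumes that noise appears as very short horizontal runs of edge pixels and removes runs shorter than a minimum length before projection; the function name and the min_run parameter are assumptions, not the patented method.

```python
import numpy as np

def remove_short_row_runs(edges: np.ndarray, min_run: int = 3) -> np.ndarray:
    """Hypothetical noise elimination: scan each row of the edge map and
    drop runs of edge pixels shorter than `min_run` columns.

    This is only one plausible reading of "scanning rows ... to
    discriminate between real edges and noises"; the actual criterion
    used by the invention is not given in this summary.
    """
    cleaned = edges.copy()
    rows, cols = edges.shape
    for r in range(rows):
        run_start = None
        for c in range(cols + 1):
            on = c < cols and edges[r, c]
            if on and run_start is None:
                run_start = c
            elif not on and run_start is not None:
                if c - run_start < min_run:
                    cleaned[r, run_start:c] = 0   # treat the short run as noise
                run_start = None
    return cleaned
```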
REFERENCES:
patent: 4747149 (1988-05-01), Umeda et al.
patent: 5253304 (1993-10-01), LeCun et al.
patent: 5754684 (1998-05-01), Kim
patent: 5999647 (1999-12-01), Nakao et al.
patent: 6115497 (2000-09-01), Vaezi et al.
Au Amelia M.
Dastouri Mehrdad
F. Chau & Associates LLP
Samsung Electronics Co., Ltd.