Automatic segmentation-based grass detection for real-time...

Image analysis – Color image processing – Pattern recognition or classification using color


Details

Type: Reexamination Certificate
Status: active
Patent number: 06832000
Classification: C382S164000

ABSTRACT:

FIELD OF THE INVENTION
This invention relates to segmenting pixels based upon selected criteria. More specifically, the invention relates to classifying pixels based upon their color and texture, to allow for subsequent processing of pixels that receive a common classification.
BACKGROUND OF THE INVENTION
Segmentation of television images is the process wherein each frame of a sequence of images is subdivided into regions or segments. Each segment includes a cluster of pixels that encompasses a region of the image with common properties. For example, a segment may be distinguished by a common color, texture, shape, amplitude range or temporal variation. Several known methods of image segmentation use a process in which a binary decision determines how the pixels will be segmented. According to such a process, all pixels in a region either satisfy a common criterion for a segment and are therefore included in the segment, or they fail to satisfy the criterion and are completely excluded. While segmentation methods such as these are satisfactory for some purposes, they are unacceptable for many others. In the case of moving image sequences, small changes in appearance, lighting or perspective may cause only small changes in the overall appearance of the image. However, application of a segmentation method such as that described above tends to allow regions of the image that should appear the same to satisfy the segmentation criterion in one frame while failing to satisfy it in another.
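For contrast with the approach introduced later, the following minimal sketch illustrates such a binary decision; the green-fraction criterion and threshold are hypothetical, chosen only to show the hard in-or-out classification described above:

```python
import numpy as np

def binary_segment(frame_rgb, green_threshold=0.4):
    """Hard (binary) segmentation: each pixel is either in the segment or out.

    frame_rgb is an H x W x 3 array with values in [0, 1]. A pixel is
    included only when its green fraction exceeds the threshold, so small
    frame-to-frame changes near the threshold make pixels flip in and out
    of the segment between frames.
    """
    total = frame_rgb.sum(axis=2) + 1e-6         # avoid division by zero
    green_fraction = frame_rgb[..., 1] / total   # share of green per pixel
    return green_fraction > green_threshold      # boolean mask: in or out
```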
One of the main reasons for segmenting images is to conduct enhancement operations on the segmented portions. When the image is segmented according to a binary segmentation method such as that previously described, the subsequently applied enhancement operations often produce random variations in image enhancement, usually at the edges of the segmented regions. Such random variations in moving sequences represent disturbing artifacts that are unacceptable to viewers. Image enhancement in the television setting includes both global and local methods. While local enhancement methods are known, they are currently controlled by global parameters. For example, an edge enhancement algorithm may adapt to the local edge characteristics, but the parameters that govern the algorithm (e.g., filter frequency characteristics) are global; the enhancement operations that are applied are the same for all regions of the image. The use of global parameters limits the effectiveness of the enhancement that can be applied to any given image. Improved enhancement would be available if the algorithm could be trained to recognize the features depicted in different segments of the image and could therefore allow the image enhancement algorithms and parameters that are optimum for each type of image feature to be chosen dynamically.
The present invention combines segmentation and local enhancement to provide new enhancement functionality that has not been available within the prior art.
SUMMARY OF THE INVENTION
In one embodiment of the invention, pixels in an image are segmented based upon selected criteria. A signal, such as a baseband video signal, is used to calculate a color probability function for the pixels in the image. This color probability function estimates, for each pixel in the image, the probability that the color value of the pixel will lie within a range of values that represents a designated color. The pixels are also used to calculate a texture probability function that estimates, for each pixel in the image, the probability that the pixel represents a designated texture. Pixels are then segmented based upon the product or other combination of the color probability function and the texture probability function.
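A minimal sketch of this segmentation step, assuming the two per-pixel probability maps have already been computed (the function name, threshold value, and use of a simple threshold are illustrative assumptions, not details from the patent):

```python
import numpy as np

def segment_by_probability(p_color, p_texture, threshold=0.5):
    """Combine per-pixel color and texture probabilities and segment.

    p_color and p_texture are H x W arrays of probabilities in [0, 1].
    The combined map is their product (one of the combinations named
    above); pixels whose combined probability exceeds the threshold are
    assigned to the segment.
    """
    p_combined = p_color * p_texture        # product of the two probability maps
    segment_mask = p_combined >= threshold  # pixels assigned to the segment
    return p_combined, segment_mask
```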
In another embodiment, the color probability function is defined as P_color = exp(−(((y−y0)/σy)² + ((u−u0)/σu)² + ((v−v0)/σv)²)) and the texture probability function is defined as P_texture = (t/√(m·t))·exp(−((t−m)/s)²), where y represents a pixel luminance value, u and v represent color coordinates in a YUV color space, t represents the root-mean-square variation of pixel luminance in a window surrounding the pixel, m is a value on a luminance scale that describes the location of the peak of the function, and s is a value on the same luminance scale that describes the width of the same function.
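As a rough illustration, the two functions above can be evaluated per pixel as in the following NumPy sketch; the window size, the local RMS estimator, and the parameter names are assumptions made here for illustration, not specifics taken from the patent.

```python
import numpy as np

def color_probability(y, u, v, y0, u0, v0, sigma_y, sigma_u, sigma_v):
    """P_color = exp(-(((y-y0)/sy)^2 + ((u-u0)/su)^2 + ((v-v0)/sv)^2)).

    y, u, v are per-pixel YUV components (equal-shape arrays); (y0, u0, v0)
    is the designated color and the sigmas set the width of the acceptance
    region along each axis.
    """
    d = (((y - y0) / sigma_y) ** 2
         + ((u - u0) / sigma_u) ** 2
         + ((v - v0) / sigma_v) ** 2)
    return np.exp(-d)

def texture_probability(y, m, s, window=5):
    """P_texture = (t / sqrt(m*t)) * exp(-((t-m)/s)^2).

    t is the root-mean-square variation of luminance in a window around each
    pixel; m locates the peak of the function on the luminance scale and s
    sets its width. The window size is an illustrative assumption.
    """
    y = y.astype(float)
    pad = window // 2
    yp = np.pad(y, pad, mode="edge")
    h, w = y.shape
    t = np.empty((h, w))
    for i in range(h):                      # local RMS variation per pixel
        for j in range(w):
            patch = yp[i:i + window, j:j + window]
            t[i, j] = np.sqrt(patch.var())
    t = np.maximum(t, 1e-6)                 # keep sqrt(m*t) well defined
    return (t / np.sqrt(m * t)) * np.exp(-(((t - m) / s) ** 2))
```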
Other embodiments of the present invention and features thereof will become apparent from the following detailed description, considered in conjunction with the accompanying drawing figures.


REFERENCES:
patent: 6243713 (2001-06-01), Nelson et al.
patent: 6434272 (2002-08-01), Saarelma
patent: 6549660 (2003-04-01), Lipson et al.
patent: 6560360 (2003-05-01), Neskovic et al.
patent: 6642940 (2003-11-01), Dakss et al.
US 010123—“System and Method for Performing Segmentation-Based Enhancements of A Video Image” by Stephen Herman et al., U.S. patent application Ser. No. 09/819,360.
