Method for determining a border in a complex scene with...

Image analysis – Pattern recognition – Feature extraction

Reexamination Certificate


Details

C382S205000, C382S164000, C382S261000

active

06337925

ABSTRACT:

TECHNICAL FIELD
This invention relates generally to computing systems and more specifically to systems and methods for detecting a border in a digital image.
BACKGROUND
A digital image is a collection of digital information that may be cast into the form of a visual image. Digital images may include, for example, photographs, artwork, documents, and web pages. Digital images may be obtained, for example, from digital cameras, digital video, scanners, and facsimile. The images may be two-dimensional or multi-dimensional. For example, three-dimensional images may include representations of three-dimensional space, or of two-dimensional movies, where the third dimension is time.
The fundamental element of a digital image is a pixel. Referring to FIG. 1, a digital image 100 is shown which is 10 pixels wide and 10 pixels high. A single pixel 105 in this image 100 is represented by a square. Generally, a pixel has a specific location (designated in two-dimensional space as r⃗ = (x, y)) in the digital image, and it contains color information for that location. Color information represents a vector of values, the vector characterizing all or a portion of the image intensity information. Color information could, for example, represent red (R), green (G), and blue (B) intensities in an RGB color space. Or, as shown in FIG. 1, color information may represent a single luminosity in a grayscale color space.
A color space is a multi-dimensional space in which each point in the space corresponds to a color. For example, RGB color space is a color space in which each point is a color formed of the additive amounts of red, green and blue colors. As another example, color information could represent information such as cyan-magenta-yellow (CMY), cyan-magenta-yellow-black (CMYK), Pantone, Hexachrome, x-ray, infrared, and gamma ray intensities from various spectral wavelength bands. Thus, for example, in CMY color space, each point is a color formed of the combination of cyan, magenta, and yellow colors. Color information may, in addition, represent other modalities of information, such as acoustic amplitudes (sonar, ultrasound) or magnetic resonance imaging (MRI) amplitudes.
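As an illustration of the relationship between the additive RGB and subtractive CMY color spaces described above, each CMY component is nominally the complement of the corresponding RGB component. The following sketch is not part of the patent text; the function name and the [0, 1] channel scaling are illustrative assumptions:

```python
# Illustrative sketch (not from the patent): nominal conversion from additive
# RGB to subtractive CMY, with each channel scaled to the range [0, 1].

def rgb_to_cmy(r: float, g: float, b: float) -> tuple:
    """Each CMY component is the complement of the corresponding RGB one."""
    return (1.0 - r, 1.0 - g, 1.0 - b)

print(rgb_to_cmy(1.0, 0.0, 0.0))  # pure red -> (0.0, 1.0, 1.0)
print(rgb_to_cmy(1.0, 1.0, 1.0))  # white -> (0.0, 0.0, 0.0), i.e. no ink
```

Practical CMY/CMYK conversions used in printing also involve black generation and device profiles; this sketch shows only the nominal complement relationship.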
In RGB color space, levels of red, green, and blue can each range from 0 to 100 percent of full intensity. Each level can be represented by a range of decimal numbers from, for example, 0 to 255 (256 levels for each color), with an equivalent range of binary numbers extending from 00000000 to 11111111. The total number of available colors would therefore be 256×256×256, or 16,777,216 possible colors.
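The 8-bit encoding described above can be sketched in a few lines; the function name below is an illustrative assumption, not from the patent:

```python
# Sketch of the 8-bit RGB level encoding described above: each channel's
# intensity percentage maps to one of 256 discrete levels (0-255).

def percent_to_level(percent: float) -> int:
    """Map an intensity percentage (0-100) to an 8-bit level (0-255)."""
    return round(percent / 100 * 255)

# With 256 levels per channel, the total number of representable colors is:
total_colors = 256 ** 3
print(total_colors)            # 16777216
print(percent_to_level(100))   # 255 (full intensity)
print(percent_to_level(0))     # 0 (no intensity)
```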
One way to express the color of a particular pixel relative to other pixels in an image is with a gradient vector (designated as G⃗). The gradient vector at the particular pixel indicates the direction and magnitude of change in the colors of pixels relative to the particular pixel, and it may be calculated using the color and position information inherently associated with the pixels in the image. Generally, the gradient vector for a pixel at a position r⃗ may be designated as G⃗(r⃗) = (g(r⃗) cos ω(r⃗), g(r⃗) sin ω(r⃗)), where g(r⃗) = |G⃗(r⃗)| is the magnitude of the gradient vector at the pixel located at position r⃗ and ω(r⃗) is the angle or direction of the gradient vector at that pixel. An example of a gradient vector is shown schematically by vector 110, which points in the direction of the greatest change in color, and whose magnitude indicates the amount of color change. In the case of a linear boundary that separates a first region that is white from a second region that is black, the gradient vector at each pixel along the boundary would have the same magnitude and angle (which would be perpendicular to the linear direction of the boundary). Moreover, the gradient magnitude at a pixel outside of the linear boundary and distinctly within one of the white or black regions would be zero because the surrounding pixels have the same color as that pixel.
It is common for one working with a digital image to cut or separate a foreground region of the image from a background region of the image. The foreground region often corresponds to an object or set of objects in the image. Alternatively, the foreground region may correspond to a region outside of an object or set of objects. In any case, regions of the image that are not part of the desired foreground region may be referred to as background regions.
Referring to the example of FIGS. 2a and 2b, digital image 200 includes a foreground region 202 (the chair) and a background region 204 (the hallway, doors, floor, windows, and walls). While foreground region 202 includes only a single object (the chair) that is highlighted in FIG. 2b, foreground region 202 can include plural objects, some of which may overlap. For example, in FIG. 2a, the user may have designated one of the doors as the foreground region. Or, the user may have designated the combination of the floor and the chair as the foreground region.
In a method for identifying the foreground region 202 in the digital image 200, the user can select, using a graphical interface device (or brush) 207, a boundary 210 (shown in FIG. 2b) in the digital image 200 that encompasses or traces the chair, and then designate the chair as the foreground region 202. The graphical interface device is a mechanism that enables the user to indicate or "paint" the boundary, much like a brush is used by a painter.
Boundary 210 bounds the chair and can also include portions of other objects. For example, boundary 210 may include portions of a door or the floor if the user wants to include those objects in the foreground region. FIG. 2b shows a highlighted portion of the boundary of the chair.
Defining a boundary 210 that encompasses only the chair can be difficult. For example, the user can trace with a relatively larger brush around the top of the chair, but a relatively smaller brush is required near the wheels and the armrests. The user may select different sized brushes depending on the region to be traced. However, to ensure that the highlighted region actually covers the boundary to be traced, a larger brush is typically selected. Moreover, even when the user traces with a relatively narrower brush around the wheels and armrests, the narrow brush may still cover many features of the chair and of the background region.
In addition to being time consuming, tracing along the boundary does not resolve how much of the color of a pixel in the boundary came from the object (for example, the chair) and how much came from the background region. Characterizing individual pixels in the boundary is difficult because their data is a blend of both the object data and the background region data.
A portion or object of a digital image may be identified for further processing using an identification or selection operation. An example of such an operation is a masking operation, in which an object in a digital image is cut so that the object can be manipulated (for example, blended into another region or otherwise manipulated). Masking typically includes defining an opacity (conventionally represented by alpha, α) for pixels in the masked and unmasked regions, where the opacity specifies the degree to which an associated pixel is selected (for example, identified or masked). A value of 1 can be used to indicate that the pixel belongs completely to the object or foreground region. A value of 0 can be used to indicate that the pixel belongs completely to the background region. Values between 0 and 1 indicate partial membership in both.
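The opacity convention above implies the standard blending model in which a boundary pixel's observed color is an alpha-weighted mix of foreground and background. A minimal single-channel sketch, with illustrative names not taken from the patent:

```python
# Hedged sketch of the alpha (opacity) convention described above: a pixel's
# observed color modeled as a blend of foreground and background colors,
# weighted by the pixel's alpha value in [0, 1].

def composite(alpha: float, foreground: float, background: float) -> float:
    """Blend one channel: alpha = 1 -> pure foreground, 0 -> pure background."""
    return alpha * foreground + (1.0 - alpha) * background

print(composite(1.0, 200, 50))   # 200.0: pixel belongs completely to the object
print(composite(0.0, 200, 50))   # 50.0: pixel belongs completely to the background
print(composite(0.25, 200, 50))  # 87.5: partial membership in both
```

For a full-color image, the same blend would be applied per channel (for example, to R, G, and B separately).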
Referring also to the digital image of FIG. 3a, a f

