Photoreceptor array for linear optical flow measurement

Radiant energy – Photocells; circuits and apparatus – Photocell controlled circuit

Reexamination Certificate


Details

C356S028000, C382S278000

Reexamination Certificate

active

06194695

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to the general field of optical flow computation in images, and more specifically, to the field of visual motion sensing chips.
2. Description of the Related Art
The term “optical flow” generally refers to the motion of texture seen by an agent (such as an animal or a robot) as a result of relative motion between the agent and other objects in the environment. It is well known that animals, especially insects, use information from the optical flow for depth perception and to move about in an environment without colliding with obstacles. Robotics and machine vision researchers have borrowed these ideas from biology to build machine vision systems that successfully use optical flow for depth perception and obstacle avoidance. These successes verify that optical flow can indeed be used for depth perception and obstacle avoidance in real systems.
In robotics and machine vision, optical flow is mathematically expressed as a vector field either in the two-dimensional visual field 105 or in an image focal plane created by a lens. In the subsequent discussion, no differentiation is made between these two variations. The value of the optic flow vector 103 (FIG. 1) at a location in the visual field 105 (FIG. 1) refers to the velocity at which texture 107 (FIG. 1) is moving in that location of the visual field 105. To measure optical flow, typical machine vision systems grab sets of two-dimensional pixel array images using a camera, and use these sequential images to measure the optical flow field. In general, algorithms for computing optical flow from a set of images are computationally intensive, resulting in the need for a large amount of computing power. This is because optical flow is a phenomenon with both spatial and temporal aspects, hence a large amount of data (two dimensions of image and one dimension of time) needs to be processed. This is especially true if the image is sensed with a charge coupled device (CCD) or another pixel-structured type of image sensor. Almost all modern machine vision systems use such pixel-structured cameras. A discussion of the variety of algorithms used to measure optical flow is beyond the scope of this invention. However, two of the more successful and popular approaches, the correlation method and the gradient method, are applicable to this sensor.
The correlation method of measuring optic flow will be qualitatively discussed by now referring to FIGS. 2a through 2c. FIGS. 2a and 2b show two successive frames or images of a video sequence. FIG. 2a shows the first frame 115, a location 117 in the first frame, and a block of texture 119 in the first frame. FIG. 2b shows the second frame 121, the search space 123 for a block that matches the first frame's block 119, and the best matching block 125. FIG. 2c shows a resulting optic flow vector 127. In the correlation method, to measure the optic flow at a location 117 in the visual field 105 generating the first frame 115, first a block of texture 119 is selected that contains the location 117. Next a search is conducted in the second frame 121 over a search space 123 for the block 125 that best matches the first image's block 119. A correlation function is typically used to measure how well two blocks 119 and 125 match, hence the name of the algorithm. A distance function between the two blocks 119 and 125 can also be used, where the best match is found by minimizing distance. The displacement between the first frame's block 119 and the second frame's best matching block 125 is used to form the resulting optic flow vector 127, shown in FIG. 2c. This process is repeated over the entire image 115 to compute a complete optic flow field. To summarize, this algorithm essentially tracks texture and picture details such as points and edges, and uses the resulting displacements to measure optic flow. In more advanced versions of this algorithm, ideas from signal processing and estimation theory can be used to improve performance. For example, Kalman filters can be used to aid the tracking of features over multiple sequential images. A discussion of these techniques is beyond the scope of this invention but can be found in the general open literature.
This method of computing optical flow is intuitive. However, it is brute force, and hence computationally intensive when performed on two-dimensional images. The computational complexity is greatly reduced when performed on one-dimensional images, for two reasons: first, block matching computations need to be performed with blocks of one dimension rather than two; and second, the range of potential displacements to be searched covers one dimension rather than two. It will be seen that the photoreceptor array of this invention provides a useful one-dimensional image.
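As a concrete illustration (not part of the patent disclosure), the one-dimensional block matching just described can be sketched in Python. The function name, block size, and search range below are illustrative assumptions; the sum-of-squared-differences criterion stands in for the distance function mentioned above:

```python
import numpy as np

def correlation_flow_1d(frame1, frame2, loc, block=5, search=4):
    """Estimate 1-D optic flow at index `loc` by block matching.

    A block of `block` pixels around `loc` in frame1 is compared against
    candidate blocks in frame2 displaced by -search..+search pixels; the
    displacement minimizing the sum of squared differences (a distance
    function) is returned as the flow, in pixels per frame.
    """
    half = block // 2
    ref = frame1[loc - half : loc + half + 1]        # block of texture
    best_d, best_err = 0, np.inf
    for d in range(-search, search + 1):             # search space
        cand = frame2[loc - half + d : loc + half + 1 + d]
        err = np.sum((ref - cand) ** 2)
        if err < best_err:
            best_err, best_d = err, d                # best matching block
    return best_d                                    # optic flow estimate

# Texture translating right at 2 pixels per frame:
x = np.arange(32, dtype=float)
frame1 = np.sin(0.5 * x)
frame2 = np.sin(0.5 * (x - 2.0))
print(correlation_flow_1d(frame1, frame2, loc=16))   # → 2
```

Because both the block and the search range are one-dimensional, the cost per location is linear in the search range, rather than quadratic as in the two-dimensional case.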
The gradient method of measuring optic flow will now be discussed. The gradient method of computing optical flow assumes that the local shape of the image remains constant while it is moving. Then, by comparing the spatial partial derivative of the image intensity, which contains information on the shape of the image if plotted as a curve, with the temporal partial derivative, which is how fast the intensity at a location is getting brighter or dimmer, knowledge about the image motion can be obtained.
FIG. 3 is used to explain the gradient method on a one-dimensional image. In FIG. 3 are shown a time varying image intensity curve I(x,t) 129, the velocity ν 131, the spatial intensity partial derivative I_x(x,t) 133, and the temporal intensity partial derivative I_t(x,t) 135. In the gradient method, just like the correlation method, it is assumed that the general shape of the image intensity I(x,t) 129 remains the same but translates at the constant velocity ν 131. This assumption is expressed by the mathematical equation
I(x, t+Δt) = I(x−νΔt, t).
The spatial intensity derivative I_x(x,t) 133, formally given as
I_x(x,t) = ∂I(x,t)/∂x,
expresses how fast the image intensity 129 changes when time is frozen and the variable x moves in the positive x direction. The temporal intensity derivative I_t(x,t) 135, formally given as
I_t(x,t) = ∂I(x,t)/∂t,
expresses how fast the image intensity at a single location changes as a result of texture motion.
Given the above assumption that the shape of the image remains the same and just translates in time, the spatial partial derivative I_x(x,t) 133 and the temporal partial derivative I_t(x,t) 135 follow the relationship
I_t(x,t) = −ν I_x(x,t).
Thus
ν = −I_t(x,t) / I_x(x,t).
Therefore, if both I_x(x,t) 133 and I_t(x,t) 135 are known, then the image motion ν 131 can be estimated.
The values I_x(x,t) and I_t(x,t) are easy to compute when using a one-dimensional image. I_x(x,t) can be approximated by
I_x(x,t) ≈ [I(x+Δx, t) − I(x, t)] / Δx,
where Δx is the distance between two nearby pixels on the image. Essentially, two nearby locations of the image are looked at simultaneously and their difference is computed to approximate the spatial intensity derivative I_x(x,t). Likewise, I_t(x,t) can be approximated by
I_t(x,t) ≈ [I(x, t+Δt) − I(x, t)] / Δt,
where Δt is the time interval between two samplings of an image location. Essentially, the intensity at the image location is looked at twice at two different times, and the change in intensity is used to compute the temporal intensity derivative I_t(x,t). These two methods of estimating the intensity derivatives 133 and 135 are derived straight from basic differential calculus.
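As an illustration (not part of the patent disclosure), the finite-difference approximations above, combined with ν = −I_t(x,t)/I_x(x,t), can be sketched in Python. The function name and the linear-ramp test signal are illustrative assumptions:

```python
import numpy as np

def gradient_flow_1d(frame1, frame2, loc, dx=1.0, dt=1.0):
    """Estimate 1-D optic flow at index `loc` by the gradient method.

    Finite-difference approximations of the derivatives:
      I_x ≈ [I(x+Δx, t) − I(x, t)] / Δx   (two nearby pixels, same frame)
      I_t ≈ [I(x, t+Δt) − I(x, t)] / Δt   (same pixel, two frames)
    Then the flow estimate is v = −I_t / I_x.
    """
    Ix = (frame1[loc + 1] - frame1[loc]) / dx   # spatial derivative
    It = (frame2[loc] - frame1[loc]) / dt       # temporal derivative
    return -It / Ix

# A linear intensity ramp translating right at 0.5 pixels per frame:
x = np.arange(16, dtype=float)
frame1 = 3.0 * x
frame2 = 3.0 * (x - 0.5)
print(gradient_flow_1d(frame1, frame2, loc=8))   # → 0.5
```

Note that the estimate is undefined where I_x vanishes (no local texture), which is why practical gradient-based sensors combine estimates from locations with sufficient spatial contrast.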
In two dimensions the gen
