Optics: measuring and testing – Range or remote distance finding – With photodetection
Reexamination Certificate
2000-11-28
2003-02-18
Buczinski, Stephen C. (Department: 3662)
C356S005040, C356S005080, C356S141100
active
06522395
ABSTRACT:
FIELD OF THE INVENTION
The invention relates generally to range finder type image sensors, and more particularly to such sensors as may be implemented on a single integrated circuit using CMOS fabrication.
BACKGROUND OF THE INVENTION
Electronic circuits that provide a measure of distance from the circuit to an object are known in the art, and may be exemplified by system 10 in FIG. 1. In the generalized system of FIG. 1, imaging circuitry within system 10 is used to approximate the distance (e.g., Z1, Z2, Z3) to an object 20, the top portion of which is shown more distant from system 10 than is the bottom portion. Typically system 10 will include a light source 30 whose light output is focused by a lens 40 and directed toward the object to be imaged, here object 20. Other prior art systems do not provide an active light source 30 and instead rely upon, and indeed require, ambient light reflected by the object of interest.
Various fractions of the light from source 30 may be reflected by surface portions of object 20, and focused by a lens 50. This return light falls upon various detector devices 60, e.g., photodiodes or the like, in an array on an integrated circuit (IC) 70. Devices 60 produce a rendering of the luminosity of an object (e.g., object 20) in the scene from which distance data is to be inferred. In some applications devices 60 might be charge coupled devices (CCDs) or arrays of CMOS devices.
CCDs typically are configured in a so-called bucket-brigade whereby light-detected charge from a first CCD is serial-coupled to an adjacent CCD, whose output in turn is coupled to a third CCD, and so on. This bucket-brigade configuration precludes fabricating processing circuitry on the same IC containing the CCD array. Further, CCDs provide a serial readout as opposed to a random readout. For example, if a CCD range finder system were used in a digital zoom lens application, even though most of the relevant data would be provided by a few of the CCDs in the array, it would nonetheless be necessary to read out the entire array to gain access to the relevant data, a time-consuming process. In still and some motion photography applications, CCD-based systems might still find utility.
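The readout-cost difference described above can be illustrated with a toy model: a serial (bucket-brigade) array must shift out every pixel to reach the few of interest, while a randomly addressable array touches only the pixels it needs. A minimal sketch, in which the array size and region of interest are illustrative assumptions:

```python
# Toy comparison of serial (CCD-style) vs random-access (CMOS-style) readout.
# Array geometry and region of interest are illustrative assumptions.

def serial_readout(array, wanted):
    """Bucket-brigade: every pixel must be shifted out to reach any pixel."""
    reads = 0
    result = {}
    for i, value in enumerate(array):   # forced to traverse the whole array
        reads += 1
        if i in wanted:
            result[i] = value
    return result, reads

def random_readout(array, wanted):
    """Random access: only the pixels of interest are touched."""
    reads = 0
    result = {}
    for i in wanted:
        reads += 1
        result[i] = array[i]
    return result, reads

array = list(range(10_000))             # 10,000-pixel toy sensor
wanted = {4_000, 4_001, 4_002}          # a small digital-zoom region of interest

_, serial_cost = serial_readout(array, wanted)
_, random_cost = random_readout(array, wanted)
print(serial_cost, random_cost)         # serial cost is the full array size
```

Both readouts recover the same region of interest; only the number of reads differs, which is why sequential readout penalizes applications that need a few pixels quickly.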
As noted, the upper portion of object 20 is intentionally shown more distant than the lower portion, which is to say distance Z3>Z2>Z1. In a range finder autofocus camera environment, devices 60 approximate average distance from the camera (e.g., from Z=0) to object 20 by examining relative luminosity data obtained from the object. In FIG. 1, the upper portion of object 20 is darker than the lower portion, and presumably is more distant than the lower portion. In a more complicated scene, focal distance to an object or subject standing against a background would be approximated by distinguishing the subject from the background by a change in luminosity. In a range finding binocular application, the field of view is sufficiently small such that all objects in focus are at substantially the same distance. In the various applications, circuits 80, 90, 100 within system 10 would assist in this signal processing. As noted, if IC 70 includes CCDs 60, other processing circuitry such as 80, 90, 100 is formed off-chip.
Unfortunately, reflected luminosity data does not provide a truly accurate rendering of distance because the reflectivity of the object is unknown. Thus, a distant object with a shiny surface may reflect as much light as (perhaps more than) a closer object with a dull finish.
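This ambiguity can be made concrete with a simple inverse-square sketch: received luminosity is proportional to reflectivity divided by distance squared, so a shiny distant surface and a dull nearby one can return identical readings. The model and the numbers below are illustrative assumptions, not taken from the patent:

```python
def received_luminosity(reflectivity, distance, source_intensity=1.0):
    """Toy inverse-square model: brightness alone cannot separate
    surface reflectivity from distance."""
    return source_intensity * reflectivity / distance ** 2

# A shiny surface twice as far away as a dull one...
shiny_far = received_luminosity(reflectivity=0.8, distance=2.0)  # 0.8 / 4
dull_near = received_luminosity(reflectivity=0.2, distance=1.0)  # 0.2 / 1

# ...returns exactly the same luminosity, despite very different distances.
print(shiny_far, dull_near)
```

Any luminosity-only range estimate must therefore assume a reflectivity, which is precisely the unknown the passage identifies.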
Other focusing systems are known in the art. Infrared (IR) autofocus systems for use in cameras or binoculars produce a single distance value that is an average or a minimum distance to all targets within the field of view. Other camera autofocus systems often require mechanical focusing of the lens onto the subject to determine distance. At best these prior art focus systems can focus a lens onto a single object in a field of view, but cannot simultaneously measure distance for all objects in the field of view.
In general, a reproduction or approximation of original luminosity values in a scene permits the human visual system to understand what objects were present in the scene and to estimate their relative locations stereoscopically. For non-stereoscopic images such as those rendered on an ordinary television screen, the human brain assesses apparent size, distance and shape of objects using past experience. Specialized computer programs can approximate object distance under special conditions.
Stereoscopic images allow a human observer to more accurately judge the distance of an object. However it is challenging for a computer program to judge object distance from a stereoscopic image. Errors are often present, and the required signal processing requires specialized hardware and computation. Stereoscopic images are at best an indirect way to produce a three-dimensional image suitable for direct computer use.
Many applications require directly obtaining a three-dimensional rendering of a scene. But in practice it is difficult to accurately extract distance and velocity data along a viewing axis from luminosity measurements. Nonetheless many applications require accurate distance and velocity tracking, for example an assembly line welding robot that must determine the precise distance and speed of the object to be welded. The necessary distance measurements may be erroneous due to varying lighting conditions and other shortcomings noted above. Such applications would benefit from a system that could directly capture three-dimensional imagery.
Although specialized three dimensional imaging systems exist in the nuclear magnetic resonance and scanning laser tomography fields, such systems require substantial equipment expenditures. Further, these systems are obtrusive, and are dedicated to specific tasks, e.g., imaging internal body organs.
In other applications, scanning laser range finding systems raster scan an image by using mirrors to deflect a laser beam in the x-axis and perhaps the y-axis plane. The angle of deflection of each mirror is used to determine the coordinate of the image pixel being sampled. Such systems require precision detection of the angle of each mirror to determine which pixel is currently being sampled. Understandably, having to provide precision moving mechanical parts adds bulk, complexity, and cost to such range finding systems. Further, because these systems sample each pixel sequentially, the number of complete image frames that can be sampled per unit time is limited.
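The geometry and the frame-rate bound mentioned above can be sketched briefly: with a mirror at deflection angle θ and a target plane at distance d, the sampled x-coordinate is x = d·tan(θ), and sequential pixel-by-pixel sampling caps the whole-frame rate. The sample rate and resolution below are illustrative assumptions:

```python
import math

def beam_position(mirror_angle_deg, plane_distance):
    """x-coordinate hit by the laser for a given mirror deflection angle,
    for a flat target plane perpendicular to the zero-deflection axis."""
    return plane_distance * math.tan(math.radians(mirror_angle_deg))

def max_frame_rate(width, height, samples_per_second):
    """Sequential sampling: one pixel per sample bounds the frame rate."""
    return samples_per_second / (width * height)

x = beam_position(mirror_angle_deg=45.0, plane_distance=1.0)  # tan(45°) = 1
fps = max_frame_rate(width=640, height=480, samples_per_second=1_000_000)
print(x, fps)
```

Even at a million range samples per second, a 640x480 raster yields only a few complete frames per second, which is the sequential-sampling limitation the passage describes.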
Attempts have been made in the prior art to incorporate some logic at each image sensor pixel to process at least some of the data acquired by the pixel. Such implementations are sometimes referred to as smart pixels. For example, El Gamal et al. have attempted to provide special circuitry within each pixel to carry out analog-to-digital conversion of all light sensed by the pixel. Carver Mead and others have configured pixels such that pixels can communicate with adjacent pixels, in an attempt to directly detect object contours by examining discontinuities in the sensed brightness pattern.
El Gamal, Carver Mead, and other prior art smart pixel approaches to imaging essentially process images based upon overall brightness patterns. But in an imaging system that seeks to acquire three-dimensional data, the performance requirements for the sensor pixels are quite different from what suffices for ordinary brightness acquisition and processing. Further, whereas prior art smart pixel approaches result in brightness-based data that is presented as an image viewable by humans, three-dimensional data should be in a format readily processed and used by digital computer or processor systems.
Thus, there is a need for a new type of smart pixel implementation for use in a direct three-dimensional imaging system, in which parameters of smart pixels in an array could advantageously be controlled dynamically by a processor or computer system.
Bamji Cyrus
Charbon Edoardo
Shivji Shiraz
Buczinski Stephen C.
Canesta, Inc.
Dorsey & Whitney LLP
Noise reduction techniques suitable for three-dimensional...