Three dimension imaging by dual wavelength triangulation

Television – Stereoscopic

Reexamination Certificate

Details

Classification: C356S004010, C250S201100
Type: Reexamination Certificate
Status: active
Patent number: 06765606

ABSTRACT:

FIELD OF THE INVENTION
The invention relates to three dimensional visual imaging, and in particular to recovering a three dimensional image of an object from a two dimensional image of the object.
BACKGROUND OF THE INVENTION
The real world has three spatial dimensions. The biological eye, cameras and other visual sensing devices, however, image reality by projecting light from three dimensional objects onto two dimensional surfaces (more accurately, surfaces defined by two independent variables). These imaging systems thereby “lose” one of the three dimensions of reality: the depth of view, or the distance of objects from the vision imaging system, is not registered. Nature has invested heavily in recovering the third dimension and has provided living systems with 3D visual imaging systems. Biological systems generally do this by coupling the eyes in a stereoscopic geometry and providing the brain with highly sophisticated pattern recognition capabilities that interpret what the eyes see.
There is considerable incentive for human technology to copy nature and provide man-made vision imaging systems with some of the same real time 3D capabilities as biological systems. As technology progresses and becomes more sophisticated, the need and demand for such 3D vision imaging systems become ever more intense.
Three dimensional visual imaging systems are needed for a rapidly growing list of applications, such as profile inspection of manufactured goods, thickness measurement, CAD verification and robot vision. Many of these applications require 3D visual imaging systems that provide a complete 3D “depth map” of the surface of the object in real time. A depth map is basically a topographical map, very much like a geographic topographical map, of the surface of the object as seen from the perspective of the imaging system. Real time imaging is considered to be imaging that can provide image frames at video frame rates of 25 Hz and up.
Different technical approaches are used to provide 3D visual imaging. Many of these approaches, at the present level of technology, are unable to provide satisfactory low cost 3D imaging in real time.
Attempts to copy nature and provide man-made vision systems with real time stereoscopic capability have proven extremely difficult. Stereo vision systems which are usable for non-quantitative depth sensing are computationally complicated and, at the present state of the art, are inaccurate and too slow for many applications and for real time 3D imaging.
Visual image systems using range from focus, which determine range by finding at what focal length features on an object are in focus, are often unsatisfactory, at least in part because they are slow. Range from focus systems require the acquisition and analysis of many image frames, taken at different focus settings of the system's optics, to determine the range to a particular feature or group of features of an imaged object. On the other hand, an imaging system using range from defocus, as reported in IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 18, No. 12, December 1996, by S. K. Nayar et al., is rapid and can produce a 512×480 depth map of an object at video frame rates (30 Hz). However, it is relatively inaccurate and limited in range.
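By way of illustration only, a minimal sketch of the range from focus idea follows: the lens is stepped through a set of calibrated focus settings, each captured frame is scored with a sharpness measure, and the calibrated focus distance of the sharpest frame is reported as the range to the feature. The focus measure, function names and use of the numpy library are assumptions made for this sketch, not details of the systems discussed above; the sketch also makes plain why the approach is slow, since every range estimate consumes an entire stack of frames.

import numpy as np

def focus_measure(image: np.ndarray) -> float:
    """Simple sharpness score: variance of a discrete Laplacian.
    Larger values indicate the imaged feature is closer to best focus."""
    lap = (np.roll(image, 1, axis=0) + np.roll(image, -1, axis=0) +
           np.roll(image, 1, axis=1) + np.roll(image, -1, axis=1) -
           4.0 * image)
    return float(lap.var())

def range_from_focus(frames: list[np.ndarray],
                     focus_distances_m: list[float]) -> float:
    """Return the calibrated focus distance of the sharpest frame.
    One range estimate requires acquiring and scoring every frame in the stack."""
    scores = [focus_measure(frame.astype(float)) for frame in frames]
    return focus_distances_m[int(np.argmax(scores))]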
Time of flight systems, using laser ranging, measure the range to many points on the surface of an object to produce a 3D map of the object. Points on the surface are sequentially illuminated by laser light and the traversal time of the light to each point and back to the imaging system is measured. While capable of providing accurate distance measurements, the sequential nature of the measurement process causes such systems to be slow.
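The relation underlying laser time of flight ranging is simply d = c·Δt/2, where Δt is the measured round-trip travel time of the light. The short sketch below, with made-up numbers and a hypothetical measurement rate, illustrates both the range calculation and why a sequential point-by-point scan of a full depth map is slow.

C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """Distance to a surface point from the measured round-trip travel time."""
    return C * round_trip_seconds / 2.0

# A round trip of about 66.7 ns corresponds to a point roughly 10 m away.
print(range_from_time_of_flight(66.7e-9))

# Sequential measurement: even at a (hypothetical) 100,000 points per second,
# a 512 x 480 depth map takes about 2.5 s per frame, far below video rates.
print((512 * 480) / 100_000)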
A type of 3D imaging system which is very rapid is described in PCT patent application PCT/IL/96/00020, filed through the Israel Patent Office by the applicant of the present application and published as International Publication WO97/01111, which is incorporated herein by reference.
A visual imaging system described in the article “Wavelength Scanning Profilometry for Real Time Surface Shape Measurement” by S. Kuwamura and Ichirou Yamaguchi, Applied Optics, Vol. 36, No. 19, July 1997, proposes 3D imaging by ranging using interference between two variable-wavelength beams. One of the beams is reflected off a reference mirror and the other off the surface of the object being visualized. The wavelength of the laser beams is varied over a continuous range and the intensity of the interfering beams at each pixel of a CCD detector array is monitored. The range to a point can be determined from the number of times the interference pattern, at the pixel at which the point is imaged, goes through a minimum for a given change in laser wavelength. The concept is in the experimental stage and, while accurate, it appears to be limited in field of view and in the maximum difference between distances to the surface that it can effectively measure.
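In a two-beam arrangement of this kind, a surface point offset by a distance d from the reference mirror produces a round-trip path difference of 2d, so the number of intensity minima counted at its pixel while the wavelength is swept from λ1 to λ2 is roughly N ≈ 2d·|1/λ1 − 1/λ2|, giving d ≈ N·λ1·λ2 / (2·|λ2 − λ1|). The sketch below is an illustrative calculation using that standard two-beam relation; the function name and example numbers are assumptions rather than values from the cited article, whose analysis may differ in detail.

def distance_from_fringe_count(n_minima: int,
                               lambda_start_nm: float,
                               lambda_end_nm: float) -> float:
    """Offset (in metres) of a surface point from the reference mirror,
    implied by counting n_minima interference minima while the laser
    wavelength is swept from lambda_start_nm to lambda_end_nm."""
    l1 = lambda_start_nm * 1e-9
    l2 = lambda_end_nm * 1e-9
    return n_minima * l1 * l2 / (2.0 * abs(l2 - l1))

# Example: 100 minima counted over a 632.8 nm -> 640.0 nm sweep imply an
# offset of roughly 2.8 mm; the narrower the available sweep, the smaller
# the range of surface heights that can be measured effectively.
print(distance_from_fringe_count(100, 632.8, 640.0))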
A particularly robust and simple method of 3D imaging is active triangulation. In active triangulation, generally, a thin fan beam of laser light illuminates the surface of an object along a thin stripe on the surface. Light reflected from the illuminated stripe is incident on pixels in a detector array, such as a detector array in a CCD camera. Each illuminated pixel in the array is illuminated by light reflected from a different, highly localized, spot on the stripe. The position of an illuminated pixel in the array and the angle, hereafter the “scan angle”, that the plane of the fan beam makes with a reference coordinate system are sufficient to determine the three spatial coordinates of the spot on the surface of the object which is the source of the reflected laser light illuminating the pixel. To produce a complete 3D map of the surface of the object, the scan angle is incremented so that the fan beam scans the surface of the object, illuminating it successively along different closely spaced stripes on the surface. For each of these closely spaced stripes, the 3D coordinates of the spots corresponding to illuminated pixels are calculated.
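The geometry can be sketched in two dimensions: the fan-beam source and the camera are separated by a known baseline, the scan angle fixes the direction of the illuminating ray, the illuminated pixel fixes the direction of the viewing ray, and the two rays intersect at the surface spot. The sketch below is a simplified single-point illustration with hypothetical names and numbers, not the formulation used in the patent; in practice the camera angle for each pixel would come from the camera calibration.

import math

def triangulate_spot(baseline_m: float,
                     scan_angle_rad: float,
                     camera_angle_rad: float) -> tuple[float, float]:
    """Intersect the laser ray and the camera ray in the plane containing
    the baseline.  The laser source is at the origin and the camera is
    baseline_m away along the x axis; both angles are measured from the
    baseline.  Returns (x, depth) of the illuminated spot."""
    a, b = scan_angle_rad, camera_angle_rad
    depth = baseline_m * math.sin(a) * math.sin(b) / math.sin(a + b)
    x = depth / math.tan(a)
    return x, depth

# Example: 0.5 m baseline, laser at 60 degrees, camera ray at 70 degrees.
x, z = triangulate_spot(0.5, math.radians(60.0), math.radians(70.0))
print(f"spot at x = {x:.3f} m, depth = {z:.3f} m")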
However, if all the illuminated pixels in the scan are recorded in the same CCD image frame, it is impossible to associate a particular pixel with the scan angle at which it was illuminated. Conventionally, therefore, a complete, single image frame is acquired and processed for each illuminated stripe in order to maintain the correspondence between illuminated pixels and the scan angles at which they are illuminated. Since only a small fraction of the pixels in a frame are illuminated for each stripe, this is an extremely wasteful way to acquire data. Furthermore, every pixel in each frame has to be processed in order to determine which pixels are illuminated, so the data collection method is not only wasteful but also very slow.
In an article entitled “A Very Compact Two-dimensional Triangulation-based Scanning System for Robot Vision” by Jurgen Klicker in SPIE Vol. 1822 (1992)/217, an active triangulation imaging system is presented that is fast enough for real time imaging. This system is however relatively expensive, and it requires integrating the system's camera with special circuits that translate data processing tasks into hardware.
Another attempt to increase the speed of 3D imaging using active triangulation is reported by Z. Jason Geng, in Opt. Eng. 35(2) 376-383 (February 1996). In this method, an object is illuminated by a wide solid angle cone of light. The color of light rays in the cone is varied as a function of position in the cone in such a way that light rays having the same color illuminate the object at the same angle. The angle is known from the geometry of the cone. The color of the light reflected from a point on the surface of the object therefore identifies the angle at which the point is illuminated by light from the light cone. The system, while appearing to be fast, has yet to be proven accurate. Accuracy will most probably be affected by
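The principle amounts to a colour-to-angle lookup: the hue measured at a pixel is mapped, through the known colour grading of the cone, back to the projection angle, and that angle is then triangulated against the camera ray exactly as in single-stripe active triangulation. The linear mapping, calibration bounds and function name in the sketch below are invented for illustration and are not taken from the cited report.

def projection_angle_from_hue(hue_deg: float,
                              hue_min: float = 0.0,
                              hue_max: float = 270.0,
                              angle_min_deg: float = 40.0,
                              angle_max_deg: float = 80.0) -> float:
    """Map a measured hue to the illumination angle of the colour-graded
    light cone, assuming a linear coding between the (hypothetical)
    calibration bounds given as defaults."""
    fraction = (hue_deg - hue_min) / (hue_max - hue_min)
    return angle_min_deg + fraction * (angle_max_deg - angle_min_deg)

# A reflected hue of 135 degrees maps to an illumination angle of 60 degrees,
# which can then be triangulated against the camera ray for that pixel.
print(projection_angle_from_hue(135.0))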
