Method and apparatus for single camera 3D vision guided...

Data processing: generic control systems or specific application – Specific application – apparatus or process – Robot control

Reexamination Certificate


Details

Classifications: C700S245000, C700S247000, C700S251000, C700S252000, C700S258000, C700S260000, C318S568110, C318S568130, C318S573000, C318S574000, C318S577000, C901S009000, C901S014000, C901S042000, C901S045000, C901S047000

Type: Reexamination Certificate

Status: active

Patent number: 06816755

ABSTRACT:

TECHNICAL FIELD
The invention relates to the field of vision guided robotics, and more particularly to a method and apparatus for single camera three dimensional vision guided robotics.
BACKGROUND
Robots have long been widely used in manufacturing for many applications. Many different types of sensors are used to guide robots, but machine vision is increasingly being used to guide robots in their tasks. Typically such machine vision is used in two-dimensional applications, wherein the target object need only be located in an x-y plane, using a single camera; see, for example, U.S. Pat. No. 4,437,114 (LaRussa). However, many robotic applications require the robot to locate and manipulate the target in three dimensions. In the past this has involved using two or more cameras; see, for example, U.S. Pat. No. 4,146,924 (Birk et al.) and U.S. Pat. No. 5,959,425 (Bieman et al.). To reduce hardware costs it is preferable to use a single camera. Prior single-camera systems, however, have relied on laser triangulation, which involves specialized sensors that must be rigidly packaged to maintain geometric relationships, require sophisticated inter-tool calibration methods, and tend to be susceptible to damage or misalignment when operating in industrial environments.
Target points on the object have also been used to help determine the location in space of the target object using single or multiple cameras; see U.S. Pat. No. 4,219,847 (Pinkney et al.) and U.S. Pat. Nos. 5,696,673; 5,956,417; 6,044,183 and 6,301,763 (all of Pryor). Typically these methods compute the position of the object relative to a previous position, which requires knowledge of the 3D location of the object at the starting point. These methods also tend not to provide the accuracy and repeatability required by industrial applications. There is therefore a need for a method of calculating the 3D position of objects using only standard video camera equipment that provides the level of accuracy and repeatability required for vision guidance of robots, as well as for other applications requiring 3D positional information about objects.
SUMMARY OF INVENTION
A method for three-dimensional handling of an object by a robot using a tool and one camera mounted on the robot end-effector is disclosed in which object features or landmarks are used to calculate the three-dimensional pose of the object. The process is performed in three main steps:
a) calibration of the camera;
b) selecting features on the object;
c) finding the three-dimensional pose of the object and using this information to guide the robot to the object to perform any operations (e.g. handling, cutting, etc.).
According to one aspect of the invention, there is provided a method of three-dimensional handling of an object by a robot using a tool and one camera mounted on the robot. The method involves first calibrating the camera by finding a) the camera intrinsic parameters; b) the position of the camera relative to the tool of the robot (“hand-eye” calibration); and c) the position of the camera in a space rigid to the place where the object will be trained (“Training Space”). Next the object features are taught by a) putting the object in the “Training Space” and capturing an image of the object with the robot in the calibration position where the “Camera to Training Space” transformation was calculated; b) selecting at least 6 visible features from the image; c) calculating the 3D position of each feature in “Training Space”; d) defining an “Object Space” aligned with the “Training Space” but connected to the object and transposing the 3D coordinates of the features into the “Object Space”; e) computing the “Object Space to Camera” transformation using the 3D position of the features inside the “Object Space” and the positions of the features in the image; f) defining an “Object Frame” inside “Object Space” to be used for teaching the intended operation path; g) computing the “Object Frame” position and orientation in “Tool Frame” using the transformations “Object Frame to Camera” and “Camera to Tool”; h) sending the “Object Frame” to the robot; and i) training the intended operation path relative to the “Object Frame” using the robot.
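The “Object Space to Camera” transformation above is, in effect, a pose-from-points computation: given the 3D positions of at least six features and their 2D projections in the image, the rotation and translation of the object relative to the camera can be recovered. The patent does not specify an algorithm; the sketch below uses a Direct Linear Transform in NumPy as an illustrative stand-in, and the function name is an assumption.

```python
import numpy as np

def object_to_camera_pose(K, obj_pts, img_pts):
    """Recover rotation R and translation t mapping "Object Space" points
    into the camera frame from >= 6 non-coplanar 3D/2D correspondences,
    via a Direct Linear Transform (illustrative; not the patent's method)."""
    A = []
    for (X, Y, Z), (u, v) in zip(obj_pts, img_pts):
        # each correspondence contributes two linear equations in the
        # twelve entries of the 3x4 projection matrix P
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    P = Vt[-1].reshape(3, 4)            # projection matrix, up to scale
    Rt = np.linalg.solve(K, P)          # strip the intrinsics: ~ s * [R | t]
    U, s, Vr = np.linalg.svd(Rt[:, :3]) # orthonormalise the rotation part
    scale = s.mean()
    R = U @ Vr
    if np.linalg.det(R) < 0:            # resolve the DLT sign ambiguity
        R, scale = -R, -scale
    t = Rt[:, 3] / scale
    return R, t
```

With noise-free correspondences this recovers the pose exactly up to numerical precision; with real image measurements the patent's requirement of "at least 6" features leaves room for a least-squares solution over more points.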
Next the object finding and positioning is carried out by a) positioning the robot in a predefined position above the bin containing the object and capturing an image of the object; b) if an insufficient number of selected features are in the field of view, moving the robot until at least 6 features can be located; c) with the positions of features from the image and their corresponding positions in “Object Space” as calculated in the training step, computing the object location as the transformation between the “Object Space” and “Camera Space”; d) using the transformation to calculate the movement of the robot to position the camera so that it appears orthogonal to the object; e) moving the robot to the position calculated in step d); f) finding the “Object Space to Camera Space” transformation in the same way as in step c); g) computing the object frame memorized at training using the found transformation and the “Camera to Tool” transformation; h) sending the computed “Object Frame” to the robot; and i) using the “Tool” position to define the frame in “Robot Space” and performing the intended operation path on the object inside the “Robot Space”.
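Sending the “Object Frame” to the robot amounts to chaining rigid-body transformations: the frame measured relative to the camera is re-expressed in the tool frame via the hand-eye calibration, and then in “Robot Space” via the robot's reported tool pose. A minimal sketch with 4x4 homogeneous matrices follows; the function names are illustrative assumptions.

```python
import numpy as np

def make_T(R, t):
    """Pack a 3x3 rotation and a translation into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def object_frame_in_robot(T_tool_in_robot, T_cam_in_tool, T_objframe_in_cam):
    """Express the "Object Frame" in "Robot Space" by chaining
    robot <- tool <- camera <- object-frame transforms."""
    return T_tool_in_robot @ T_cam_in_tool @ T_objframe_in_cam
```

Note the ordering: each matrix maps coordinates from the frame on its right into the frame on its left, so the composite maps object-frame coordinates directly into robot coordinates, which is what the taught operation path requires.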
According to a further aspect of the invention, there is provided a method of three-dimensional handling of an object by a robot using a tool and one camera mounted on the robot. The method involves first calibrating the camera by finding a) the camera intrinsic parameters; and b) the position of the camera relative to the tool of the robot (“hand-eye” calibration). Next the object features are taught by a) putting the object in the field of view of the camera and capturing an image of the object; b) selecting at least 6 visible features from the image; c) calculating the 3D position in real-world co-ordinates of the selected features inside a space connected to the object (“Object Space”); d) computing the “Object Space to Camera” transformation using the 3D position of the features inside this space and their positions in the image; e) defining an “Object Frame” inside “Object Space” to be used for teaching the handling path; f) computing the “Object Frame” position and orientation in “Tool Frame” using the transformations “Object Frame to Camera” and “Camera to Tool”; g) sending the computed “Object Frame” to the robot; and h) training the intended operation path inside the “Object Frame”.
Next the object finding and positioning is carried out by a) positioning the robot in a predefined position above the bin containing the target object; b) if an insufficient number of selected features are in the field of view, moving the robot until at least 6 features can be located; c) with the positions of features from the image and their corresponding positions in “Object Space” as calculated in the training session, computing the object location as the transformation between the “Object Space” and “Camera Space”; d) using said transformation to calculate the movement of the robot to position the camera so that it appears orthogonal to the object; e) finding the “Object Space to Camera Space” transformation in the same way as in step c); f) computing the object frame memorized at training using the found transformation and the “Camera to Tool” transformation; g) sending the computed “Object Frame” to the robot; and h) using the “Tool” position to define the frame in “Robot Space” and performing the intended operation path on the object inside the “Robot Space”. The invention also provides a system for carrying out the foregoing methods.
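In both procedures, step d) moves the camera so that it "appears orthogonal to the object". The patent does not spell out the geometry; one plausible reading (an assumption) is that the object frame should end up centred on the optical axis, axis-aligned with the camera, at a fixed standoff distance. Under that interpretation the correcting camera motion follows directly from the measured “Object Space to Camera” transform:

```python
import numpy as np

def camera_correction(T_obj_in_cam, standoff=0.5):
    """Motion from the current camera pose to a new pose that views the
    object head-on at `standoff` units. The "orthogonal view" geometry
    here is an interpretation, not taken from the patent text."""
    T_desired = np.eye(4)
    T_desired[2, 3] = standoff  # object centred, straight ahead on the z axis
    # We want new_cam <- obj == T_desired. Since
    # cur_cam <- obj = (cur_cam <- new_cam) @ (new_cam <- obj),
    # the correcting motion cur_cam <- new_cam is:
    return T_obj_in_cam @ np.linalg.inv(T_desired)
```

Applying the inverse of this motion to the current camera pose leaves the object exactly at the desired head-on view, after which the transformation is re-measured (step e) for a more accurate final pose.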


REFERENCES:
U.S. Pat. No. 3,986,007 (Oct. 1976), Ruoff, Jr.
U.S. Pat. No. 4,146,924 (Mar. 1979), Birk et al.
U.S. Pat. No. 4,219,847 (Aug. 1980), Pinkney et al.
U.S. Pat. No. 4,305,130 (Dec. 1981), Kelley et al.
U.S. Pat. No. 4,334,241 (Jun. 1982), Kashioka et al.
U.S. Pat. No. 4,437,114 (Mar. 1984), LaRussa
U.S. Pat. No. 4,578,561 (Mar. 1986), Corby et al.
patent: 4
