Mobile camera-space manipulation

Electricity: motive power systems – Positional servo systems – Vehicular guidance systems with single axis control


Details

U.S. Classifications: C318S586000, C901S001000, C901S047000
Type: Reexamination Certificate
Status: Active
Patent Number: 06194860


BACKGROUND OF THE INVENTION
1. Field of the Invention
The invention relates to a practical means of using computer vision to control systems that combine holonomic and nonholonomic degrees of freedom in order to perform user-designated operations on stationary objects. Examples of combination holonomic/nonholonomic systems include a wheeled rover equipped with a robotic arm, a forklift, earth-moving equipment such as a backhoe or a front-loader, and even an underwater vehicle with an attached robotic arm.
The present invention eliminates the need for direct, ongoing human participation in the control loop for completing a given task, such as engaging a pallet with a forklift. Although, depending upon the application, a human may supply high-level supervision such as the instruction "engage pallet," the effect of the new art is a fully autonomous response of the system, synchronized between control of the holonomic and nonholonomic degrees of freedom, that produces effective, precise, reliable and robust direction and control of the mechanism without any subsequent human intervention.
2. References
The remainder of this specification refers to various individual publications listed below by number by reciting, for example, “[1]”, or “[2]”, and so forth.
[1] E. Gonzalez-Galvan and S. B. Skaar, "Application of a precision enhancing measure in 3-D rigid-body positioning using camera-space manipulation," International Journal of Robotics Research, Vol. 16, No. 2, pp. 240-257, 1997.
[2] B. Horn, Robot Vision, MIT Press, Cambridge, 1986.
[3] M. Seelinger, S. B. Skaar, and M. Robinson, "An Alternative Approach for Image-Plane Control of Robots," Lecture Notes in Control and Information Sciences, Eds. D. Kriegman, G. Hager, and S. Morse, pp. 41-65, Springer, London, 1998.
[4] E. Gonzalez-Galvan and S. B. Skaar, "Servoable cameras for three dimensional positioning with camera-space manipulation," Proc. IASTED Robotics and Manufacturing, pp. 260-265, 1995.
[5] S. B. Skaar, I. Yalda-Mooshabad, and W. H. Brockman, "Nonholonomic camera-space manipulation," IEEE Trans. on Robotics and Automation, Vol. 8, No. 4, pp. 464-479, August 1992.
[6] R. K. Miller, D. G. Stewart, H. Brockman, and S. B. Skaar, "A camera space control system for an automated forklift," IEEE Trans. on Robotics and Automation, Vol. 10, No. 5, pp. 710-716, October 1994.
[7] Y. Hwang, "Motion Planning of a Robotic Arm on a Wheeled Vehicle on a Rugged Terrain," L. A. Demsetz, ed., Robotics for Challenging Environments, Proc. of RCE II, pp. 57-63, 1996.
[8] T. Lueth, U. Nassal, and U. Rembold, "Reliability and Integrated Capabilities of Locomotion and Manipulation for Autonomous Robot Assembly," Robotics and Autonomous Systems, Vol. 14, No. 2-3, pp. 185-198, May 1995.
[9] D. MacKenzie and R. Arkin, "Behavior-Based Mobile Manipulations for Drum Sampling," Proceedings of the 1996 IEEE Int. Conf. on Robotics and Automation, pp. 2389-2395, April 1996.
[10] C. Perrier, P. Dauchez, and F. Pierrot, "A Global Approach for Motion Generation of Non-Holonomic Mobile Manipulators," Proc. IEEE Int. Conference on Robotics and Automation, pp. 2971-2976, 1998.
[11] O. Khatib, "Mobile manipulation: The robotic assistant," Robotics and Autonomous Systems, Vol. 26, pp. 175-183, 1999.
3. Nomenclature
The following is a summary of notation used in this specification:
$C^j = [C_1^j, C_2^j, \ldots, C_6^j]^T$: view parameters for camera $j$
$\Theta = [\theta_1, \theta_2, \ldots, \theta_n]^T$: internal joint configuration of an $n$-degree-of-freedom system
$(x_{c_i}^j, y_{c_i}^j)$: camera-space location of point $i$ in camera $j$
$(f_x, f_y)$: orthographic camera model
$J_1, J_2, J_3$: scalar quantities minimized to estimate various parameters
$n_{\mathrm{cam}}$: number of cameras in the system
$n_c(j)$: number of visual features used in any given summation
$p$: number of poses in the pre-plan trajectory
$W_{ik}$: relative weight given to each visual sample
DESCRIPTION OF THE PRIOR ART
1. Camera-Space Manipulation
Camera-space manipulation, hereafter referred to as CSM, was developed as a means of achieving highly precise control of the positioning and orienting of robotic manipulators in the presence of uncertainties in the workspace of the robot. These uncertainties include kinematic errors, kinematic changes due to temperature variations or dynamic loads, and workpieces in unknown or varying positions. U.S. Pat. No. 4,833,383 to Skaar et al. describes CSM. CSM uses computer vision to enable a manipulator's load or tool to be positioned and oriented with high accuracy relative to an arbitrarily positioned and oriented workpiece. This high accuracy is extremely robust to uncertainties in the robot's workspace. CSM relies on calibration of neither the camera(s) nor the robot. Because CSM works in an open-loop fashion, real-time image processing is not required.
CSM can operate in a fully autonomous fashion or with supervisory control. A graphical user interface was developed for use with CSM. Through this interface, the user can view either a live or still-frame image of the workspace of the robot. By clicking on this image, the user selects the surface, or a region or juncture on the surface, upon which the operation will be performed. The user also selects the type of task to be performed by the robot from the interface program. Additionally, the user sets other operating parameters such as the speed of the manipulator.
2. Description of CSM
CSM works by establishing a relationship between the appearance of image-plane visual features located on the manipulator and the internal joint configuration of the robot. If the positioning task involves more than two dimensions, then at least two cameras must be used. The relationship, described by a set of view parameters $C^j = [C_1^j, C_2^j, \ldots, C_6^j]^T$, is determined for each participating camera. This relationship is based on the orthographic camera model:
$$x_c^j = \left( (C_1^j)^2 + (C_2^j)^2 - (C_3^j)^2 - (C_4^j)^2 \right) X + 2\left( C_2^j C_3^j + C_1^j C_4^j \right) Y + 2\left( C_2^j C_4^j - C_1^j C_3^j \right) Z + C_5^j$$

$$y_c^j = 2\left( C_2^j C_3^j - C_1^j C_4^j \right) X + \left( (C_1^j)^2 - (C_2^j)^2 + (C_3^j)^2 - (C_4^j)^2 \right) Y + 2\left( C_3^j C_4^j + C_1^j C_2^j \right) Z + C_6^j \tag{1}$$
where $(x_c^j, y_c^j)$ represents the estimated image-plane location of a feature on the robot's end effector in the $j$th participating camera. The position vector $(X, Y, Z)$ describes the location of the manipulator feature relative to a reference frame tied to the robot; it is a function of the internal joint configuration $\Theta = [\theta_1, \theta_2, \ldots, \theta_n]^T$ of an $n$-degree-of-freedom robot and of the model of the robot's forward kinematics. For convenience, Eq. (1) is rewritten as:
$$x_c^j = f_x(\Theta, C^j), \qquad y_c^j = f_y(\Theta, C^j)$$
The view parameters are initialized through a process called the pre-plan trajectory. During the pre-plan trajectory, the robot is driven to a set number of poses (between 10 and 20) spanning both a large region of the robot's joint space and wide regions of the camera spaces. At each of these poses, images are acquired in all participating cameras and the locations of the designated manipulator features are found in each of these images. Then the view parameters for the $j$th camera are estimated by minimizing, over all $C^j = [C_1^j, C_2^j, \ldots, C_6^j]^T$:
$$J_1 = \sum_{k=1}^{p} \left[ \sum_{i=1}^{n_c(k)} \left\{ \left[ x_{c_i}^j - f_x(\Theta, C^j) \right]^2 + \left[ y_{c_i}^j - f_y(\Theta, C^j) \right]^2 \right\} W_{ik} \right]$$
where $p$ is the number of poses in the pre-plan trajectory, $n_c(k)$ is the number of features found in the image corresponding to camera $j$ for pose number $k$, and $W_{ik}$ is the relative weight given to feature number $i$ at pose $k$.
