Augmented-reality tool employing scene-feature...

Computer graphics processing and selective visual display system – Computer graphics processing – Three-dimension

Reexamination Certificate

Details

US Classification: C345S629000, C345S632000, C345S633000, C382S103000, C382S154000, C348S169000

Type: Reexamination Certificate

Status: active

Patent number: 06765569

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a tool and method for producing an augmented image by combining computer-generated virtual images with a real-world view, and more particularly to a tool and method that use the autocalibration of scene features to produce the augmented image.
2. General Background and State of the Art
An Augmented Reality (AR) is a view of the world that contains a mixture of real and computer-generated (CG) objects. Computer generated objects can include text, images, video, 3-dimensional models, or animations. Augmented reality is distinguished from simple overlays by the fact that the combined real and computer generated objects are observed and behave as if they were in some defined spatial relationship to each other. For example, an augmented reality scene may contain a computer generated picture on a real wall, or a real picture that appears to be hanging on a virtual wall. In some cases the real world may be completely occluded by the computer generated objects, and in others the computer generated objects may not be visible in a particular view of the real-world scene.
As the viewing position and orientation (also known as the view pose) changes, the real and computer generated objects shift together to preserve the viewer's sense that their spatial relationships are maintained. For example, a computer generated cup positioned to appear on a real table-top will maintain the appearance of being on the table from as many viewing directions and positions as possible. To maintain the real and computer generated object relationships as the viewing pose changes, the system must have information about the view pose in order to produce an appropriate view of the computer generated object to merge with the real world. The process whereby this view pose information is obtained is known as Pose Tracking (PT).
Pose tracking is performed with measurement systems using a variety of sensors and signals including ultrasound, optical beacons, or inertial technologies. Relevant to this patent application are the methods using images from still or video cameras to determine viewing pose. Many approaches are known for computing where a camera must be (i.e., its pose) given a particular image or series of images.
Pose tracking with cameras often relies on the detection and tracking of features and their correspondences to calibrated positions or coordinates. These terms are further defined below:
Features are any identifiable parts of a scene that can be located in one or more images. Examples include points, corners, edges, lines, and curves. Regions with numerous intensity variations are called texture regions. Examples of texture regions include a text character, lines of text on a page, foliage on trees, or a photograph on a wall.
Feature detection is performed by a computer analysis of an image. The detection process searches for a particular type of feature or texture region and computes its 2D coordinates within the image.
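As an illustrative sketch only (not part of the patent), corner detection with OpenCV is one common way to obtain such 2D feature coordinates; the file name and parameter values below are placeholders.

    import cv2

    # Load one camera frame (placeholder file name) and detect corner features,
    # a common kind of trackable natural feature.
    gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
    corners = cv2.goodFeaturesToTrack(gray,
                                      maxCorners=200,     # limit on feature count
                                      qualityLevel=0.01,  # corner-strength threshold
                                      minDistance=10)     # minimum pixel spacing
    # Each detected feature is reported as its 2D coordinates within the image.
    for x, y in corners.reshape(-1, 2):
        print(f"feature at ({x:.1f}, {y:.1f})")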
Features are tracked between images by computer analysis of two images containing the same features. For example, as a camera pans to the right, the features in the first image appear shifted to the left in the second image. Feature tracking computes the change in the 2D position of a feature from one image to the next. Tracking matches, or corresponds, the features in one image to those in another image.
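A minimal sketch of such frame-to-frame tracking, assuming two consecutive frames on disk and using pyramidal Lucas-Kanade optical flow (one of many possible trackers, not the patent's specific method; file names and parameters are placeholders):

    import cv2

    # Two consecutive frames from the camera (placeholder file names).
    prev_gray = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
    next_gray = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

    # Detect features in the first frame, then track them into the second frame.
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                 qualityLevel=0.01, minDistance=10)
    p1, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, p0, None)

    # status[i] == 1 means feature i was found again; (p0[i], p1[i]) is a 2D
    # correspondence, and p1[i] - p0[i] is the feature's shift between images.
    for a, b, ok in zip(p0.reshape(-1, 2), p1.reshape(-1, 2), status.ravel()):
        if ok:
            print("feature shift:", b - a)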
Correspondences identify matching image features or their coordinates. There are two often-used forms of correspondences. 2D correspondences identify the same features detected in two or more images. 3D correspondences identify an image feature and its known 3D coordinates. Features with 3D correspondences are called calibrated features.
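The two forms can be illustrated with a short sketch (again, not the patent's method): descriptor matching yields 2D correspondences, while a lookup table of known world coordinates represents 3D correspondences. All names and values below are placeholders.

    import cv2

    # Detect ORB features and descriptors in two images of the same scene.
    img1 = cv2.imread("view_a.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("view_b.png", cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # 2D correspondences: pair each feature in img1 with the same physical
    # feature detected in img2 by matching descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches_2d = matcher.match(des1, des2)

    # 3D correspondences: a calibrated feature pairs an image detection with
    # known world coordinates (placeholder values, in metres).
    calibrated_3d = {0: (0.00, 0.00, 0.00),
                     1: (0.10, 0.00, 0.00),
                     2: (0.10, 0.10, 0.00)}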
Features can also be corresponded by recognizing their association with color, shape, or texture regions. For example, consider a series of images showing a single blue dot among many red dots on a white wall. In any image, the detected blue dot feature can be corresponded to the same blue dot appearing in another image (2D correspondence) because of its unique color. In any image the detected blue dot can also be corresponded to its known 3D coordinate (3D correspondence) because of its unique color. Just as color can distinguish between otherwise similar features, shape or texture regions can distinguish otherwise similar features. For example, a blue triangle can be distinguished from blue dots. In another example, a blue dot with a letter “T” in it is distinguishable from a blue dot with a letter “W”. Some features have recognizable attributes. Some augmented reality tools have recognition capabilities.
Camera pose (position and orientation) can be computed when three or more calibrated features in an image are detected. Given the camera pose, computer generated objects can be mixed into the scene image observed by the user to create an augmented reality.
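As one concrete illustration (an assumption-laden sketch, not the patent's algorithm), camera pose can be recovered from 3D-2D correspondences with OpenCV's solvePnP; note that this particular solver wants at least four correspondences, and the intrinsics, world coordinates, and pixel positions below are placeholders.

    import cv2
    import numpy as np

    # Placeholder camera intrinsics (focal lengths and principal point).
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])

    # Calibrated features: known world coordinates of the four corners of a
    # planar fiducial (metres) ...
    object_points = np.array([[0.0, 0.0, 0.0],
                              [0.1, 0.0, 0.0],
                              [0.1, 0.1, 0.0],
                              [0.0, 0.1, 0.0]])

    # ... and their detected 2D positions in the current image (pixels).
    image_points = np.array([[310.0, 228.0],
                             [388.0, 230.0],
                             [386.0, 305.0],
                             [312.0, 303.0]])

    # Solve for camera rotation and translation from the 3D-2D pairs.
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 rotation matrix
    print("rotation:\n", R, "\ntranslation:", tvec.ravel())

Given the recovered rotation and translation, the virtual camera used to render the computer generated objects is set to the same pose, so the rendered objects stay registered with the real scene.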
An augmented reality can be presented to an observer through an optical mixing of real and computer generated images or a mixing of real scene video with computer generated video. The augmented reality images may be still or moving images. Augmented reality images can be produced in real-time while a user is physically viewing a scene, or off-line from recorded images.
Prior art by the inventors of the present invention discloses the use of autocalibration for producing augmented reality images. The autocalibration method is used to increase the number of calibrated features in the scene from which to compute camera pose. Generally, with more calibrated features visible in an image, more reliable and accurate camera poses are computed. As the augmented reality system is used, autocalibration produces more and more calibrated features from which to compute pose.
Autocalibration is accomplished using natural features (NF) or intentional fiducials (IF) within the scenes being viewed by the camera. The natural features and intentional fiducials are detected as points with 2D image positions. Autocalibration can be accomplished as the camera moves as follows:
1) Initially, the scene must contain at least three calibrated natural features or intentional fiducials. Remaining natural features and intentional fiducials are uncalibrated.
2) A user begins an augmented reality session by pointing the camera at the calibrated natural features or intentional fiducials, enabling the system to compute the camera pose.
3) The user moves the camera around, always keeping the calibrated natural features or intentional fiducials in view. Autocalibration occurs during this and subsequent camera motions. As the camera moves, the system computes camera pose from the calibrated features and detects and tracks uncalibrated natural features and/or intentional fiducials from different camera positions. Each view of a tracked natural feature or intentional fiducial contributes to an improved estimate of the feature's 3D position.
4) Once a natural feature or intentional fiducial position estimate is known to an acceptable tolerance, it becomes an autocalibrated feature (AF) and is usable as an additional calibrated feature for estimating camera poses (a minimal sketch of this position-estimation step follows this list).
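As a minimal sketch of the position-estimation step, assuming two camera poses already known from pose tracking against calibrated features, two-view linear triangulation can stand in for whatever estimator the system actually uses; all numeric values are placeholders.

    import cv2
    import numpy as np

    # Placeholder intrinsics and two world-to-camera projection matrices,
    # assumed known from pose tracking against calibrated features.
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                  # pose at frame 1
    P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])  # pose at frame 2

    # The tracked 2D positions of one uncalibrated feature in the two frames.
    pt1 = np.array([[350.0], [260.0]])
    pt2 = np.array([[310.0], [260.0]])

    # Linear triangulation produces a homogeneous 3D estimate of the feature.
    Xh = cv2.triangulatePoints(P1, P2, pt1, pt2)
    X = (Xh[:3] / Xh[3]).ravel()
    print("estimated 3D position:", X)

Once repeated views drive the uncertainty of such an estimate below tolerance, the feature is promoted to an autocalibrated feature as described in step 4.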
Autocalibration can be computed during on-line real-time augmented reality system use, or it can be computed during off-line video processing. The intent of autocalibration is to increase the number of calibrated features for tracking. This purpose does not rely upon any particular mathematical method of autocalibration. The prior art by the inventors of the present invention describes the use of several variants of Extended Kalman Filters for performing autocalibration. Other methods, such as shape from motion, are suitable as well. The end result of any method of autocalibration is an increased number of calibrated features in the scene to support pose tracking.
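As an illustrative sketch only (assuming a pinhole camera, known per-frame poses, and a static feature; this is not the specific filter formulation described in the inventors' prior work), one EKF measurement update that refines a feature's 3D position estimate from a single 2D observation could look like this:

    import numpy as np

    def project(K, R, t, X):
        """Project world point X into the image of a camera with pose (R, t)."""
        Xc = R @ X + t                 # point in camera coordinates
        u = K @ (Xc / Xc[2])           # perspective division, then intrinsics
        return u[:2], Xc

    def projection_jacobian(K, R, Xc):
        """Jacobian of the 2D projection with respect to the 3D world point."""
        fx, fy = K[0, 0], K[1, 1]
        x, y, z = Xc
        J_cam = np.array([[fx / z, 0.0, -fx * x / z**2],
                          [0.0, fy / z, -fy * y / z**2]])
        return J_cam @ R               # chain rule through Xc = R @ X + t

    def ekf_update(X, P, z, K, R, t, meas_var=1.0):
        """One EKF update of a static feature's 3D estimate X (covariance P)
        from its observed 2D image position z in a frame with known pose."""
        z_pred, Xc = project(K, R, t, X)
        H = projection_jacobian(K, R, Xc)
        S = H @ P @ H.T + meas_var * np.eye(2)    # innovation covariance
        G = P @ H.T @ np.linalg.inv(S)            # Kalman gain
        X = X + G @ (z - z_pred)
        P = (np.eye(3) - G @ H) @ P
        return X, P

Repeating this update over many frames shrinks the covariance P; when it falls below an acceptable tolerance the feature can be treated as calibrated, which is the end result the patent attributes to any method of autocalibration.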
Autocalibration is computed by processing data from a start time forward for either on-line or off-line cases. Batch methods of autocalibration process data in time-forward and ti
