Method for extracting static and dynamic super-resolution...

Computer graphics processing and selective visual display system – Computer graphics processing – Attributes

Reexamination Certificate


Details

U.S. Class: 345/419
Type: Reexamination Certificate
Status: active
Patent number: 6650335

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates generally to computer graphics, and more particularly to a method for extracting textures from a sequence of images.
BACKGROUND OF THE INVENTION
Texture-mapping is a well-known method for adding detail to renderings of computer graphic scenes. During texture-mapping, textures are applied to a graphics model. The model typically includes a set of 3D points and a specification of the edges, surfaces, or volumes that connect the points. If the rendering is volumetric, i.e., it represents a solid object and not just its surface, then the textures can be in 3D. Texture-mapping gives the illusion of greater apparent detail than is present in the model's geometry. If the textures are extracted from photographs or images, the rendered images can appear quite realistic. Textures can be in the form of texture-maps, i.e., data, or texture-functions, i.e., procedures.
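For concreteness, the sketch below shows the core texture-map lookup in a renderer: a bilinear sample of a 2D texture-map stored as an array, taken at texture coordinates interpolated across the model's surface. The function name and array layout are assumptions made for illustration, not details taken from the patent.

    import numpy as np

    def sample_texture(texture, u, v):
        """Bilinearly sample `texture` (H x W x C array) at normalized (u, v) in [0, 1]."""
        h, w = texture.shape[:2]
        x, y = u * (w - 1), v * (h - 1)
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
        fx, fy = x - x0, y - y0
        top = (1 - fx) * texture[y0, x0] + fx * texture[y0, x1]
        bottom = (1 - fx) * texture[y1, x0] + fx * texture[y1, x1]
        return (1 - fy) * top + fy * bottom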
In general, textures and images are sparse samplings of a light field, typically, the reflected light from a surface. Digital images, including individual video frames, are fairly low-resolution samplings of reflectance. However, if there is motion in a sequence of images, then every image gives a slightly different sampling of the light field, and this information can be integrated over time to give a much denser sampling, also known as a super-resolution texture.
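A one-dimensional toy illustration of this idea, on purely synthetic data, is sketched below: each "frame" samples the same underlying signal on a coarse grid with a different sub-pixel shift, and pooling the samples at their true positions yields a denser sampling than any single frame provides. All names and values here are illustrative.

    import numpy as np

    signal = lambda x: np.sin(2 * np.pi * 3 * x)   # toy stand-in for the light field
    coarse = np.arange(0.0, 1.0, 0.1)              # low-resolution sampling grid
    shifts = [0.0, 0.025, 0.05, 0.075]             # sub-pixel motion of each frame

    positions, samples = [], []
    for s in shifts:
        positions.append(coarse + s)               # where this frame actually sampled
        samples.append(signal(coarse + s))         # what this frame recorded

    # Pooling all frames gives a sampling four times denser than any one frame.
    dense_x = np.concatenate(positions)
    dense_y = np.concatenate(samples)
    order = np.argsort(dense_x)
    dense_x, dense_y = dense_x[order], dense_y[order]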
It is desired to provide super-resolution textures from a low-resolution sequence of images.
SUMMARY OF THE INVENTION
The method according to the invention extracts texture-maps and texture-functions, generally “textures,” from a sequence of images and image-to-image correspondences. The extracted textures have a higher resolution and finer detail than the sequence of images from which they are extracted. If every image is annotated with control parameters, then the invention produces a procedure that generates super-resolution textures as a function of these parameters. This enables dynamic textures for surfaces or volumes that undergo appearance change, for example, for skin that smooths and pales when stretched and wrinkles and flushes when relaxed. These control parameters can be determined from the correspondences themselves.
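A minimal sketch of such a parameter-driven texture-function follows, under the assumption that the dynamic texture is formed as a weighted blend of stored exemplar textures; the Gaussian-kernel weighting, the function names, and the data layout are illustrative choices, not the patent's prescribed scheme.

    import numpy as np

    def make_texture_function(exemplar_textures, exemplar_params, bandwidth=1.0):
        """Return a procedure mapping a control-parameter vector to a texture.

        exemplar_textures: list of H x W x C arrays (one texture per annotated image)
        exemplar_params:   list of parameter vectors (e.g., deformation coefficients)
        """
        textures = np.stack(exemplar_textures)          # K x H x W x C
        params = np.stack(exemplar_params)              # K x D

        def texture_at(query_params):
            d2 = np.sum((params - np.asarray(query_params)) ** 2, axis=1)
            weights = np.exp(-d2 / (2.0 * bandwidth ** 2))
            weights /= weights.sum()
            return np.tensordot(weights, textures, axes=1)   # blended H x W x C texture

        return texture_at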
In the following text the term “image” will be used for 2D or 3D arrays of data and “video” will be used for time-series of such arrays.
More specifically, the invention provides a method that constructs a super-resolution texture from a sequence of images of a non-rigid three-dimensional object; the object can also be rigid. A shape of the object is represented as a matrix of 3D points. A basis set of possible deformations of the object is represented as a matrix of displacements of the points. The matrices of 3D points and displacements form a model of the object in the video and its possible deformations. The points in the model are connected to form a triangle or tetrahedron mesh, depending on whether the images are 2D or 3D. For each image, a set of correspondences between the points in the model and the object in the image is formed. The correspondences are used to map the mesh into each image as a texture mesh. Each mesh, and the image texture it covers, is warped to a common coordinate system, resulting in a texture that appears to be a deformation of the original image. The warp is done with super-sampling, so that the resulting texture has many more pixels than the original image. The warped and super-sampled textures are averaged to produce a static super-sampled texture of the object in the images. Various weighted averages of the warped and super-sampled textures produce dynamic textures that can vary according to the deformation and/or pose of the object.
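A condensed sketch of the warp-and-average step for the 2D case is given below. It assumes a triangle mesh, per-image vertex correspondences that have already been computed, and a piecewise-affine warp from scikit-image; the patent does not prescribe this library, and extract_static_texture, texture_points, image_points, and the super-sampling factor are illustrative names.

    import numpy as np
    from skimage.transform import PiecewiseAffineTransform, warp

    def extract_static_texture(images, image_points, texture_points, scale=4):
        """Warp every frame onto a super-sampled common grid and average.

        images:         list of H x W x C frames
        image_points:   per-frame (V, 2) arrays of mesh-vertex positions (x, y)
        texture_points: (V, 2) vertex positions in the common texture coordinates
        scale:          super-sampling factor of the output texture grid
        """
        tex_h = int(np.ceil(texture_points[:, 1].max() * scale)) + 1
        tex_w = int(np.ceil(texture_points[:, 0].max() * scale)) + 1
        acc = np.zeros((tex_h, tex_w, images[0].shape[2]))
        for image, pts in zip(images, image_points):
            tform = PiecewiseAffineTransform()
            # Map super-sampled texture coordinates (output) to image coordinates (input).
            tform.estimate(texture_points * scale, pts)
            acc += warp(image, tform, output_shape=(tex_h, tex_w))
        return acc / len(images)   # static super-sampled texture

Replacing the uniform average with per-image weights derived from the deformation or pose parameters would give the dynamic-texture variant described above.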


REFERENCES:
patent: 5969722 (1999-10-01), Palm
patent: 6047088 (2000-04-01), van Beek et al.
patent: 6064393 (2000-05-01), Lengyel et al.
patent: 6072496 (2000-06-01), Guenter et al.
patent: 6504546 (2003-01-01), Cosatto et al.
patent: 6525728 (2003-02-01), Kamen et al.
Barron et al., “The Feasibility of Motion and Structure from Noisy Time-Varying Image Velocity Information”; International Journal of Computer Vision, 5:3, pp. 239-269, 1990.
Heyden et al., “An Iterative Factorization Method for Projective Structure and Motion from Image Sequences”; Image and Vision Computing 17, pp. 981-991, 1999.
Stein et al., “Model-Based Brightness Constraints: On Direct Estimation of Structure and Motion”; IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, No. 9, pp. 992-1015, Sep. 2000.
Sugihara et al., “Recovery of Rigid Structure from Orthographically Projected Optical Flow”; Computer Vision, Graphics and Image Processing 27, pp. 309-320, 1984.
Waxman et al., “Surface Structure and Three-Dimensional Motion from Image Flow Kinematics”; The International Journal of Robotics Research, 4 (3), pp. 72-94, 1985.
