Video-based rendering
Classification: Computer graphics processing and selective visual display system – Computer graphics processing – Animation
U.S. Classes: C345S215000; C348S700000
Type: Reexamination Certificate
Status: Active
Filed: 2000-05-30
Issued: 2003-10-21
Patent Number: 06636220
Examiner: Padmanabhan, Mano (Department: 2671)
BACKGROUND
1. Technical Field
The invention is related to video techniques, and more particularly to a system and process for generating a new video sequence from the frames of a finite-length video clip.
2. Background Art
A picture is worth a thousand words. And yet there are many phenomena, both natural and man-made, that are not adequately captured by a single static photo. A waterfall, a flickering flame, a swinging pendulum, a flag flapping in the breeze—each of these phenomena has an inherently dynamic quality that a single image simply cannot portray.
The obvious alternative to static photography is video. But video has its own drawbacks. For example, if it is desired to store video on a computer or some other storage device, it is necessary to use a video clip of finite duration. Hence, the video has a beginning, a middle, and an end. Thus, the video becomes a very specific embodiment of a very specific sequence in time. Although it captures the time-varying behavior of the phenomenon at hand, it lacks the “timeless” quality of the photograph. A much better alternative would be to use the computer to generate new video sequences based on the input video clip.
There are current computer graphics methods employing image-based modeling and rendering techniques, where images captured from a scene or object are used as an integral part of the rendering process. To date, however, image-based rendering techniques have mostly been applied to still scenes such as architecture. These existing methods lack the ability to generate new video from images of the scene as would be needed to realize the aforementioned dynamic quality missing from single images.
The ability to generate a new video sequence from a finite video clip parallels somewhat an effort that occurred in music synthesis a decade ago, when sample-based synthesis replaced more algorithmic approaches like frequency modulation. However, to date such techniques have not been applied to video. It is a purpose of the present invention to fill this void with a technique that has been dubbed “video-based rendering”.
It is noted that in the remainder of this specification, the description refers to various individual publications identified by a numeric designator contained within a pair of brackets. For example, such a reference may be identified by reciting, “reference [1]” or simply “[1]”. Multiple references will be identified by a pair of brackets containing more than one designator, for example, [1, 2]. A listing of the publications corresponding to each designator can be found at the end of the Detailed Description section.
SUMMARY
The present invention involves a new type of medium, which is in many ways intermediate between a photograph and a video. This new medium, which is referred to as a video texture, can provide a continuous, infinitely varying stream of video images. The video texture is synthesized from a finite set of images by rearranging (and possibly blending) original frames from a source video. While individual frames of a video texture may be repeated from time to time, the video sequence as a whole should never be repeated exactly. Like a photograph, a video texture has no beginning, middle, or end. But like a video, it portrays motion explicitly.
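Where the summary mentions rearranging "and possibly blending" original frames, one simple reading is a linear cross-fade across a transition. The sketch below is only an illustrative assumption (the function name, array layout, and linear ramp are choices made here, not details from the patent) showing how the tail of one frame run could be blended into the head of the next:

```python
import numpy as np

def crossfade(run_a, run_b, overlap):
    """Blend the last `overlap` frames of run_a into the first
    `overlap` frames of run_b with linearly ramped weights.

    run_a, run_b: (N, H, W) float arrays of grayscale frames.
    """
    # Weight ramps from 0 (all run_a) to 1 (all run_b).
    w = np.linspace(0.0, 1.0, overlap)[:, None, None]
    blended = (1.0 - w) * run_a[-overlap:] + w * run_b[:overlap]
    return np.concatenate([run_a[:-overlap], blended, run_b[overlap:]])
```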
Video textures therefore occupy an interesting niche between the static and the dynamic realm. Whenever a photo is displayed on a computer screen, a video texture might be used instead to infuse the image with dynamic qualities. For example, a web page advertising a scenic destination could use a video texture of palm trees blowing in the wind rather than a static photograph. Or an actor could provide a dynamic “head shot” with continuous movement on his home page. Video textures could also find application as dynamic backdrops for scenes composited from live and synthetic elements.
The basic concept of a video texture can be extended in several different ways to further increase its applicability. For backward compatibility with existing video players and web browsers, finite duration video loops can be created to play back without any visible discontinuities. The original video can be split into independently moving regions and each region can be analyzed and rendered independently. It is also possible to use computer vision techniques to separate objects from the background and represent them as video sprites, which can be rendered in arbitrary image locations. Multiple video sprites or video texture regions can be combined into a complex scene.
It would also be possible to put video textures under interactive control—to drive them at a high level in real time. For instance, by judiciously choosing the transitions between frames of a source video, a jogger can be made to speed up and slow down according to the position of an interactive slider. Or an existing video clip can be shortened or lengthened by removing or adding some of the video texture in the middle.
Creating video textures and applying them in all of the foregoing ways requires solving a number of problems. The first difficulty is in locating potential transition points in the video sequences, i.e., places where the video can be looped back on itself in a minimally obtrusive way. A second challenge is in finding a sequence of transitions that respects the global structure of the video. Even though a given transition may, itself, have minimal artifacts, it could lead to a portion of the video from which there is no graceful exit, and therefore be a poor transition to take. A third challenge is in smoothing visual discontinuities at the transitions using morphing techniques. A fourth problem is in factoring video frames into different regions that can be analyzed and synthesized independently. Furthermore, various extensions involve additional challenges: the creation of good, fixed-length cycles; separating video texture elements from their backgrounds so that they can be used as video sprites; applying view morphing to video imagery; and generalizing the transition metrics to incorporate real-time user input.
The naïve approach to the problem of generating video would be to take the input video and loop it, restarting it whenever it has reached the end. Unfortunately, since the beginning and the end of the sequence almost never match, a visible motion discontinuity occurs. A simple way to avoid this problem is to search for a frame in the sequence that is similar to the last frame and to loop back to this similar frame to create a repeating single-loop video. For certain continually repeating motions, like a swinging pendulum, this approach might be satisfactory. However, for other scenes containing more random motion, the viewer may be able to detect that the motion is being repeated over and over. Accordingly, it would be desirable to generate more variety than just a single loop.
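As an illustration of this simple loop-back idea, the sketch below assumes the frames are equal-sized grayscale numpy arrays and uses a plain L2 pixel distance as the similarity measure; neither the helper name nor the metric comes from the patent text:

```python
import numpy as np

def find_loop_frame(frames):
    """Index of the earlier frame most similar to the last frame.

    frames: (N, H, W) float array of grayscale frames.  Playback can
    jump from the final frame back to the returned index, yielding a
    single repeating loop.
    """
    last = frames[-1].ravel()
    # L2 distance between the last frame and every earlier frame.
    dists = [np.linalg.norm(f.ravel() - last) for f in frames[:-1]]
    return int(np.argmin(dists))
```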
The desired variety can be achieved by producing a more random rearrangement of the frames taken from the input video so that the motion in the scene does not repeat itself over and over in a single loop. Essentially, the video sequence can be thought of as a network of frames linked by transitions. The goal is to find good places to jump from one sequence of frames to another so that the motion appears as smooth as possible to the viewer. One way to accomplish this task is to compute the similarity between each pair of frames of the input video. Preferably, these similarities are characterized by costs that are indicative of how smooth the transition from one frame to another would appear to a person viewing a video containing the frames played in sequence. Further, the cost of transitioning from a particular frame to another frame is computed using the similarity between that other frame and the next frame in the input video following the frame under consideration. In other words, rather than jumping to a frame that is similar to the current frame under consideration, which would result in a static segment, a jump would be made from the frame under consideration to a frame that is similar to the next frame in the input video, so that the motion appears to continue as it did in the original clip.
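The sketch below illustrates the shifted cost computation described above, again under assumed details (grayscale frames, L2 distance as the similarity measure): the cost of jumping from frame i to frame j is taken as the distance between frame j and frame i+1, the frame that originally followed i.

```python
import numpy as np

def transition_costs(frames):
    """Cost of displaying frame j immediately after frame i.

    frames: (N, H, W) float array of grayscale input frames.
    Returns an (N, N) matrix C where a low C[i, j] means the jump
    i -> j should look smooth to a viewer.
    """
    n = len(frames)
    flat = frames.reshape(n, -1).astype(np.float64)
    # D[i, j]: L2 distance between frames i and j.
    D = np.array([[np.linalg.norm(a - b) for b in flat] for a in flat])
    # A jump i -> j replaces the original step i -> i+1, so compare
    # frame j with the frame that originally followed frame i.
    C = np.full((n, n), np.inf)
    C[:-1] = D[1:]  # C[i, j] = D[i + 1, j]
    return C        # last row stays infinite: no successor to match
```

At synthesis time, one plausible policy (consistent with the "random rearrangement" goal described here, though not quoted from this passage) is to jump from frame i to frame j with probability proportional to exp(-C[i, j] / sigma), so that low-cost transitions dominate without the sequence ever repeating deterministically.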
Inventors: Salesin, David; Schödl, Arno; Szeliski, Richard S.
Attorney: Lyon, Richard T. (Lyon & Harr LLP)
Assignee: Microsoft Corporation