Video-based rendering with user-controlled movement

Computer graphics processing and selective visual display system – Computer graphics processing – Animation


Details

U.S. Classes: 345/474; 345/422; 345/421; 345/215; 348/700

Type: Reexamination Certificate

Status: active

Patent number: 06600491

ABSTRACT:

BACKGROUND
1. Technical Field
The invention is related to video techniques, and more particularly to a system and process for generating a video animation from the frames of a video sprite.
2. Background Art
A picture is worth a thousand words. And yet there are many phenomena, both natural and man-made, that are not adequately captured by a single static photo. A waterfall, a flickering flame, a swinging pendulum, a flag flapping in the breeze—each of these phenomena has an inherently dynamic quality that a single image simply cannot portray.
The obvious alternative to static photography is video. But video has its own drawbacks. For example, if it is desired to store video on a computer or some other storage device, it is necessary to use a video clip of finite duration. Hence, the video has a beginning, a middle, and an end. Thus, the video becomes a very specific embodiment of a very specific sequence in time. Although it captures the time-varying behavior of the phenomenon at hand, it lacks the “timeless” quality of the photograph. A much better alternative would be to use the computer to generate new video sequences based on the input video clip.
There are current computer graphics methods employing image-based modeling and rendering techniques, where images captured from a scene or object are used as an integral part of the rendering process. To date, however, image-based rendering techniques have mostly been applied to still scenes such as architecture. These existing methods lack the ability to generate new video from images of the scene as would be needed to realize the aforementioned dynamic quality missing from single images.
The ability to generate a new video sequence from a finite video clip somewhat parallels a development that occurred in music synthesis a decade ago, when sample-based synthesis replaced more algorithmic approaches like frequency modulation. However, to date such techniques have not been applied to video. It is a purpose of the present invention to fill this void with a technique that has been dubbed “video-based rendering”.
It is noted that in the remainder of this specification, the description refers to various individual publications identified by a numeric designator contained within a pair of brackets. For example, such a reference may be identified by reciting, “reference [1]” or simply “[1]”. Multiple references will be identified by a pair of brackets containing more than one designator, for example, [1, 2]. A listing of the publications corresponding to each designator can be found at the end of the Detailed Description section.
SUMMARY
The present invention is related to a new type of medium, which is in many ways intermediate between a photograph and a video. This new medium, which is referred to as a video texture, can provide a continuous, infinitely varying stream of video images. The video texture is synthesized from a finite set of images by rearranging (and possibly blending) original frames from a source video. While individual frames of a video texture may be repeated from time to time, the video sequence as a whole should never be repeated exactly. Like a photograph, a video texture has no beginning, middle, or end. But like a video, it portrays motion explicitly. Video textures therefore occupy an interesting niche between the static and the dynamic realm. Whenever a photo is displayed on a computer screen, a video texture might be used instead to infuse the image with dynamic qualities. For example, a web page advertising a scenic destination could use a video texture of palm trees blowing in the wind rather than a static photograph. Or an actor could provide a dynamic “head shot” with continuous movement on his home page. Video textures could also find application as dynamic backdrops for scenes composited from live and synthetic elements.
Further, the basic concept of a video texture can be extended in several different ways to further increase its applicability. For backward compatibility with existing video players and web browsers, finite-duration video loops can be created to play back without any visible discontinuities. The original video can be split into independently moving regions, and each region can be analyzed and rendered independently. It is also possible to use computer vision techniques to separate objects from the background and represent them as video sprites, which can be rendered in arbitrary image locations. Multiple video sprites or video texture regions can be combined into a complex scene. It is also possible to put video textures under interactive control—to drive them at a high level in real time. For instance, by judiciously choosing the transitions between frames of a source video, a jogger can be made to speed up and slow down according to the position of an interactive slider. Or an existing video clip can be shortened or lengthened by removing video texture frames from, or adding them to, the middle of the clip.
The basic concept of the video textures and the foregoing extensions are the subject of the above-identified parent patent application entitled “Video-Based Rendering”. However, the concept of video textures can be extended even further. For example, another application of the video sprite concept involves objects that move about the scene in the input video clip, such as an animal, a vehicle, or a person. These objects typically exhibit a generally repetitive motion that is independent of their position. Thus, the object could be extracted from the frames of the input video and processed to generate a new video sequence or video sprite of that object. This video sprite would depict the object as moving in place. Further, the frames of the video sprite could be inserted into a previously derived background image (or frames of a background video) at a location dictated by a prescribed path of the object in the scene. In this regard, a user of the system could be allowed to specify the path of the object, or alternately cause a path to be generated and input into the system. It is this extension of the basic video textures concept that the present invention is directed toward.
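To make the insertion step concrete, the following Python sketch (not taken from the patent; the function name, the alpha-matte representation, and the use of NumPy are illustrative assumptions) composites the frames of a moving-in-place video sprite over a background image at positions supplied by a user-specified path:

    import numpy as np

    def composite_sprite_on_path(background, sprite_frames, sprite_alphas, path):
        # background    : H x W x 3 float image (the previously derived background).
        # sprite_frames : list of h x w x 3 float images of the sprite moving in place.
        # sprite_alphas : list of h x w mattes in [0, 1], one per sprite frame.
        # path          : list of (x, y) top-left positions, one per output frame.
        # Assumes the sprite rectangle stays inside the background at every position.
        output = []
        for t, (x, y) in enumerate(path):
            frame = background.copy()
            sprite = sprite_frames[t % len(sprite_frames)]  # loop the sprite frames
            alpha = sprite_alphas[t % len(sprite_alphas)][..., None]
            h, w = sprite.shape[:2]
            region = frame[y:y + h, x:x + w]
            # Alpha-blend the sprite over the background at the path position.
            frame[y:y + h, x:x + w] = alpha * sprite + (1.0 - alpha) * region
            output.append(frame)
        return output

A longer path than the sprite has frames simply cycles the sprite, which is consistent with the sprite depicting a generally repetitive, position-independent motion.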
Before describing the particular embodiments of the present invention, it is useful to understand the basic concepts associated with video textures. The naive approach to the problem of generating video would be to take the input video and loop it, restarting it whenever it has reached the end. Unfortunately, since the beginning and the end of the sequence almost never match, a visible motion discontinuity occurs. A simple way to avoid this problem is to search for a frame in the sequence that is similar to the last frame and to loop back to this similar frame to create a repeating single-loop video. For certain continually repeating motions, like a swinging pendulum, this approach might be satisfactory. However, for other scenes containing more random motion, the viewer may be able to detect that the motion is being repeated over and over. Accordingly, it would be desirable to generate more variety than just a single loop.
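As a minimal sketch of this loop-back idea (the similarity metric is an assumption; the text above does not commit to one), the following Python fragment finds the earlier frame most similar to the last frame, using per-pixel L2 distance:

    import numpy as np

    def best_loop_frame(frames):
        # frames: list of H x W x 3 float images from the input video clip.
        # Returns the index of the earlier frame most similar to the last frame,
        # i.e., the frame to loop back to for a repeating single-loop video.
        last = frames[-1]
        distances = [np.sqrt(np.mean((f - last) ** 2)) for f in frames[:-1]]
        return int(np.argmin(distances))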
The desired variety can be achieved by producing a more random rearrangement of the frames taken from the input video so that the motion in the scene does not repeat itself over and over in a single loop. Essentially, the video sequence can be thought of as a network of frames linked by transitions. The goal is to find good places to jump from one sequence of frames to another so that the motion appears as smooth as possible to the viewer. One way to accomplish this task is to compute the similarity between each pair of frames of the input video. Preferably, these similarities are characterized by costs that are indicative of how smooth the transition from one frame to another would appear to a person viewing a video containing the frames played in sequence. Further, the cost of transitioning from a particular frame to another frame is computed using the similarity between the destination frame and the frame that immediately follows the frame under consideration in the input video. In other words, rather than jumping to a frame that resembles the current frame itself, the jump is made to a frame that resembles the frame that would have come next in the original sequence, so that playback can continue from the destination without a visible break.
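A minimal sketch of this cost computation follows (assumptions not in the text above: plain per-pixel L2 distance as the similarity measure, and NumPy array frames). The cost of a jump from frame i to frame j is taken to be the distance between frame i+1 and frame j; low-cost entries then mark good places to transition:

    import numpy as np

    def transition_costs(frames):
        # frames: list of H x W x 3 float images. Returns an n x n matrix where
        # cost[i, j] is the cost of jumping from frame i to frame j: the L2
        # distance between frame i+1 (the frame that would have been shown
        # next anyway) and frame j. The last frame has no successor, so its
        # row is left at infinity.
        flat = np.stack([f.ravel() for f in frames])  # n x (H*W*3)
        sq = np.sum(flat ** 2, axis=1)
        # Pairwise L2 distances D[a, b] between raw frames a and b.
        D = np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2.0 * (flat @ flat.T), 0.0))
        n = len(frames)
        cost = np.full((n, n), np.inf)
        cost[:-1, :] = D[1:, :]  # shift rows: cost of i -> j uses D[i+1, j]
        return cost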
