Interpolation of a sequence of images using motion analysis

Television – Format conversion – Progressive to interlace

Reexamination Certificate


Details

Classification: C382S276000

Type: Reexamination Certificate

Status: active

Patent number: 06570624

ABSTRACT:

BACKGROUND
For applications such as standards conversion and generation of slow and fast motion in film, television and other video productions, images in a sequence of images may be simply repeated or dropped to achieve a desired sampling rate. Such a technique, however, generally produces unwanted visible artifacts such as jerky motion. Analysis of motion in a sequence of images is commonly used to improve interpolation of the sequence of images.
Motion analysis generally is performed by determining a set of motion parameters that describe motion of pixels between a first image and a second image. For example, the motion parameters may describe forward motion of pixels from the first image to the second image, and/or backward motion of pixels from the second image to the first image. The motion parameters may be defined at a time associated with either or both of the first and second images or at a time between the first and second images. These motion parameters are then used to warp the first and second images to obtain an interpolated image between the first and second images. This process generally is called motion compensated interpolation.
SUMMARY
Two images are analyzed to compute a set of motion vectors that describes motion between the first and second images. A motion vector is computed for each pixel in an image at a time between the first and second images. This set of motion vectors may be defined at any time between the first and second images, such as the midpoint. The motion vectors may be computed using any of several techniques. An example technique is based on the constant brightness constraint, also referred to as optical flow. Each vector is specified at a pixel center in an image defined at the time between the first and second images. The vectors may point to points in the first and second images that are not on pixel centers.
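The constant brightness constraint mentioned above can be sketched as follows. This is a minimal, hypothetical illustration (NumPy-based, with an invented helper name `normal_flow`), not the patent's actual solver; it recovers only the component of motion along the image gradient:

```python
import numpy as np

def normal_flow(img1, img2):
    """Estimate per-pixel motion from the constant brightness constraint
    Ix*u + Iy*v + It = 0. Minimal sketch: solves for the 'normal' flow
    component along the gradient, one vector per pixel."""
    Ix = np.gradient(img1, axis=1)   # horizontal spatial gradient
    Iy = np.gradient(img1, axis=0)   # vertical spatial gradient
    It = img2 - img1                 # temporal gradient
    mag2 = Ix**2 + Iy**2 + 1e-8      # avoid division by zero in flat areas
    u = -It * Ix / mag2
    v = -It * Iy / mag2
    return u, v
```

Practical optical-flow solvers add smoothness terms and multi-scale iteration; this sketch only shows where the brightness-constancy equation enters.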
The motion vectors are used to warp the first and second images to a point in time of an output image between the first and second images using a factor that represents the time between the first and second images at which the output image occurs. The warped images are then blended using this factor to obtain the output image at the desired point in time between the first and second images. The point in time at which the output image occurs may be different from the time at which the motion vectors are determined. The same motion vectors may be used to determine two or more output images at different times between the first and second images.
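The warp-and-blend step can be illustrated with a small sketch, assuming NumPy, a single shared (u, v) vector map, and nearest-neighbour sampling in place of the patent's triangle-based warp; the function name `interpolate_frame` is invented for illustration:

```python
import numpy as np

def interpolate_frame(img1, img2, u, v, t):
    """Warp both source images toward the output time t in [0, 1] using
    one shared per-pixel vector map, then blend with weight t.
    Nearest-neighbour 'backward' sampling keeps the sketch short."""
    h, w = img1.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Sample img1 behind the motion and img2 ahead of it.
    x1 = np.clip(np.rint(xs - u * t), 0, w - 1).astype(int)
    y1 = np.clip(np.rint(ys - v * t), 0, h - 1).astype(int)
    x2 = np.clip(np.rint(xs + u * (1 - t)), 0, w - 1).astype(int)
    y2 = np.clip(np.rint(ys + v * (1 - t)), 0, h - 1).astype(int)
    warped1 = img1[y1, x1]
    warped2 = img2[y2, x2]
    # Blend the two warped images with the time factor.
    return (1.0 - t) * warped1 + t * warped2
```

With zero motion vectors this degenerates to a plain cross-dissolve, which is a useful sanity check.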
The images may be warped using a technique in which many small triangles are defined in an image corresponding in time to the point in time between the first and second images at which the motion vectors are determined. A transform for each small triangle from the point in time at which the motion vectors are determined to the desired interpolated image time is determined, e.g., the triangle is warped using the motion vectors associated with its vertices. For each pixel in each triangle in the output image, corresponding points in the first and second images are determined, and the first and second images are spatially sampled at these points. These samples for each pixel are combined to produce a value for that pixel in the output image.
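Per-triangle warping reduces to solving a small affine transform from each triangle's vertices at the map time to their positions at the output time (vertices moved by their motion vectors scaled by the time factor). A sketch, assuming NumPy and an invented helper name `triangle_affine`:

```python
import numpy as np

def triangle_affine(src_tri, dst_tri):
    """Solve the 2x3 affine transform mapping a triangle at the
    vector-map time onto its warped position at the output time.
    src_tri and dst_tri are (3, 2) arrays of vertex coordinates."""
    A = np.hstack([src_tri, np.ones((3, 1))])       # rows: [x, y, 1]
    X, *_ = np.linalg.lstsq(A, dst_tri, rcond=None)  # A @ X = dst_tri
    return X.T                                       # 2x3 affine matrix

def apply_affine(M, point):
    """Map one (x, y) point through a 2x3 affine matrix."""
    return M @ np.array([point[0], point[1], 1.0])
```

In the full method each pixel inside a warped triangle is mapped back through such a transform to find the corresponding sample points in the first and second images.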
Motion compensated interpolation also may be performed on two or more images that are dissimilar, or that are non-sequential, or that are not contiguous in any one sequence of images. Thus, motion analysis may be used to process transitions between different sequences of images, such as a dissolve or a jump cut. If two consecutive sequences of images have corresponding audio tracks, the audio tracks may be processed to identify a point in time at which motion compensated interpolation of the transition between the sequences should be performed.
Motion compensated interpolation of a sequence of images also may be performed in conjunction with audio processing. For example, if interpolation of the sequence of images changes the duration of the sequence, the duration of a corresponding audio track may be changed to retain synchronization between the audio and the sequence of images. Resampling of the audio may be used to change the duration of the audio, but results in a change in pitch. Time scaling of the audio also may be used to change the duration of the audio without changing the pitch.
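The duration/pitch trade-off above can be seen in a short sketch; `resample` below is a hypothetical linear-interpolation resampler (NumPy), not a production audio tool, and a pitch-preserving time-scaler (e.g. overlap-add) is deliberately omitted:

```python
import numpy as np

def resample(audio, factor):
    """Stretch the duration of a mono audio signal by `factor` via
    linear interpolation. Because the samples are simply spread over
    more (or fewer) positions, the pitch shifts along with the
    duration; pitch-preserving time scaling needs a different method."""
    n_out = int(round(len(audio) * factor))
    src = np.linspace(0, len(audio) - 1, n_out)  # fractional source positions
    return np.interp(src, np.arange(len(audio)), audio)
```

A factor of 2.0 doubles the duration and halves the pitch; a time-scaling algorithm would instead repeat or drop short overlapping windows to keep the pitch fixed.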
Occasionally, such interpolation creates visible artifacts in the resulting output images, particularly if there is a foreground object that occludes then reveals a background object, or if there is an object that appears or disappears in the images. In some cases, the foreground may appear to stretch or distort, or the background may appear to stretch or distort, or both. In such cases, a region in an image may be defined. The region may be segmented into foreground and background regions. A tracker then may be used to track either the foreground region or the background region or both as an object. A single motion vector or a parameterized motion model obtained from the tracker may be assigned to the tracked region. A combination map also may be defined to control which pixels of the input images are used to contribute to each pixel of an output image based on how a motion vector transforms a pixel from the input image to the output image.
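One way to picture a combination map is as a small per-pixel code derived from where the motion vectors land. This NumPy sketch uses frame-boundary reachability as a simple stand-in for the patent's transform-based test; the helper name `combination_map` and the 0/1/2 coding are invented for illustration:

```python
import numpy as np

def combination_map(u, v, shape):
    """Build a per-pixel map controlling which input images contribute
    to each output pixel. A pixel whose vector carries it outside the
    frame in one image is marked to use only the other image.
    Codes: 0 = blend both, 1 = first image only, 2 = second image only."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    out_of_2 = (xs + u < 0) | (xs + u > w - 1) | (ys + v < 0) | (ys + v > h - 1)
    out_of_1 = (xs - u < 0) | (xs - u > w - 1) | (ys - v < 0) | (ys - v > h - 1)
    cmap = np.zeros(shape, dtype=np.uint8)
    cmap[out_of_2] = 1  # second image unreachable: use first only
    cmap[out_of_1] = 2  # first image unreachable: use second only
    return cmap
```

The blending stage would then consult this map instead of always mixing both warped images, which suppresses the stretching artifacts near occlusions described above.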
For interlaced media, a vector map of motion also can be computed between fields of opposite sense, i.e., odd and even fields, by treating the two fields as if they are two images of the same type. The resulting vector map, when generated using two fields of opposite field sense, has a vertical offset of about one half of a line. This vector map is then modified by adjusting the vertical component of all of the vectors either up or down half a line. Warping operations then are performed using the modified vector map. However, when sampling a field of one field sense to generate a field of an opposite field sense, the sampling region is translated either up or down half a line.
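The half-line adjustment can be expressed directly on the vector map; `adjust_field_vectors` is an invented name and the (H, W, 2) array layout, with the vertical component last, is an assumption:

```python
import numpy as np

def adjust_field_vectors(v_map, direction):
    """Shift the vertical component of every motion vector by half a
    line to compensate for the spatial offset between fields of
    opposite sense. `v_map` is an (H, W, 2) array of (u, v) vectors;
    `direction` is +1 or -1 depending on which field sense the map is
    being aligned to. Returns a new map; the input is not modified."""
    adjusted = v_map.copy()
    adjusted[..., 1] += 0.5 * direction
    return adjusted
```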
Accordingly, in one aspect, an output image associated with a point in time between a first image and a second image is generated by determining a motion vector for each pixel in an image at a map time between the first image and the second image, wherein the map time is different from the point in time of the output image. Each motion vector describes motion of a pixel of the image at the map time to a first point in the first image and a second point in the second image. A factor that represents the point in time between the first image and the second image at which the output image occurs is calculated. The first image is warped according to the determined motion vectors and the factor. The second image is warped according to the determined motion vectors and the factor. The warped first image and the warped second image are blended according to the factor to obtain the output image.
In another aspect, a plurality of output images, wherein each output image is associated with a different point in time between a first image and a second image, is generated by determining a motion vector for each pixel in an image at a map time between the first image and the second image. Each motion vector describes motion of a pixel of the image at the map time to a first point in the first image and a second point in the second image. For each output image, a factor that represents the point in time between the first image and the second image at which the output image occurs is calculated. For each output image, the first image is warped according to the determined motion vectors and the factor for the output image. For each output image, the second image is warped according to the determined motion vectors and the factor for the output image. For each output image, the warped first image and the warped second image are blended according to the factor for the output image.
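Reuse of a single vector map across several output times, as in the aspect above, can be sketched as a loop over factors. As before, NumPy, nearest-neighbour sampling, and the name `interpolate_sequence` are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def interpolate_sequence(img1, img2, u, v, factors):
    """Generate several output images between img1 and img2 from ONE
    shared vector map: the vectors are computed once at the map time
    and reused for every output-time factor in `factors`."""
    h, w = img1.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    outputs = []
    for t in factors:
        # Warp each source image to time t, then blend with weight t.
        x1 = np.clip(np.rint(xs - u * t), 0, w - 1).astype(int)
        y1 = np.clip(np.rint(ys - v * t), 0, h - 1).astype(int)
        x2 = np.clip(np.rint(xs + u * (1 - t)), 0, w - 1).astype(int)
        y2 = np.clip(np.rint(ys + v * (1 - t)), 0, h - 1).astype(int)
        outputs.append((1 - t) * img1[y1, x1] + t * img2[y2, x2])
    return outputs
```

Evenly spaced factors between 0 and 1 yield smooth slow motion from a single motion analysis pass.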
In one embodiment, the first image is in a first sequence of images and the second image is in a second sequence of images such that the first image is not contiguous with the second image in a sequence of images.


Profile ID: LFUS-PAI-O-3038807
