Apparatus and method for describing the motion parameters of...

Image analysis – Image compression or coding – Interframe coding

Reexamination Certificate


Details

C382S107000

active

06366701

ABSTRACT:

The invention relates to image processing for describing the motion of object(s) in image sequences, e.g., video. More particularly, the invention relates to an efficient framework for object trajectory segmentation, which, in turn, can be employed to improve image processing functions, such as context-based indexing and retrieval of image sequences with emphasis on motion description.
BACKGROUND OF THE DISCLOSURE
With the explosion of available multimedia content, e.g., audiovisual content, the need to organize and manage this ever-growing and complex information becomes important. Specifically, as libraries of multimedia content continue to grow, indexing this highly complex information to facilitate efficient retrieval at a later time becomes unwieldy.
By standardizing a minimum set of descriptors that describe multimedia content, content stored in a wide variety of databases can be located, thereby making search and retrieval more efficient and powerful. International standards bodies such as the Moving Picture Experts Group (MPEG) have embarked on standardizing such an interface, which can be used by indexing engines, search engines, and filtering agents. This new member of the MPEG family of standards is named the Multimedia Content Description Interface and has been code-named “MPEG-7”.
For example, typical content description of a video sequence can be obtained by dividing the sequence into “shots”. A “shot” can be defined as a sequence of frames in a video clip that depicts an event and is preceded and followed by an abrupt scene change or a special effect scene change such as a blend, dissolve, wipe or fade. Detection of shot boundaries enables event-wise random access into a video clip and thus constitutes the first step towards content search and selective browsing. Once a shot is detected, representative frames called “key frames” are extracted to capture the evolution of the event, e.g., key frames can be identified to represent an explosion scene, an action chase scene, a romantic scene and so on. This reduces the complex problem of processing the many video frames of an image sequence to processing only a few key frames. The existing body of knowledge in low-level abstraction of scene content such as color, shape, and texture from still images can then be applied to extract the meta-data for the key frames.
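The shot-boundary detection described above is commonly implemented by thresholding the difference between consecutive frames' color histograms. The sketch below illustrates that general idea only; the bin counts, distance metric, and threshold are illustrative assumptions, not this patent's method:

```python
import numpy as np

def detect_shot_boundaries(frames, threshold=0.5):
    """Flag a shot boundary wherever the color-histogram difference
    between consecutive frames exceeds a threshold.

    frames: sequence of HxWx3 uint8 arrays.
    threshold: fraction of the per-frame pixel count (an illustrative
    choice, not taken from the patent).
    """
    boundaries = []
    prev_hist = None
    for i, frame in enumerate(frames):
        # Coarse 8-bins-per-channel RGB histogram of the frame.
        hist, _ = np.histogramdd(
            frame.reshape(-1, 3), bins=(8, 8, 8), range=((0, 256),) * 3)
        if prev_hist is not None:
            # L1 histogram distance, normalized by pixel count.
            diff = np.abs(hist - prev_hist).sum() / frame[..., 0].size
            if diff > threshold:
                boundaries.append(i)  # a new shot starts at frame i
        prev_hist = hist
    return boundaries
```

An abrupt cut (e.g., a dark scene followed by a bright one) produces a large normalized histogram difference and is flagged; frames within a shot differ little and are not.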
While offering a simple solution to extract meta-data, the above description carries no motion-related information. Motion information can considerably expand the scope of queries that can be made about content (e.g., queries can have “verbs” in addition to “nouns”). Namely, it is advantageous to correlate known information based on color, shape, and texture descriptors with motion information, thereby conveying a more intelligent description of the dynamics of the scene that can be used by a search engine. Instead of analyzing a scene from a single perspective and storing only the corresponding meta-data, it is advantageous to capture relative object motion information as a descriptor that will ultimately support fast analysis of scenes on the fly from different perspectives, thereby enabling the ability to support a wider range of unexpected queries. For example, this can be very important in application areas such as security and surveillance, where it is not always possible to anticipate the queries.
Therefore, there is a need in the art for an apparatus and method for extracting and describing motion information in an image sequence, thereby improving image processing functions such as content-based indexing and retrieval, and various encoding functions.
SUMMARY OF THE INVENTION
One embodiment of the present invention is an apparatus and method for implementing object trajectory segmentation for an image sequence, thereby improving or offering other image processing functions such as context-based indexing of the input image sequence by using motion-based information. More specifically, block-based motion vectors are used to derive optical flow motion parameters, e.g., affine motion parameters. These optical flow motion parameters are employed to develop a prediction that is used to effect object trajectory segmentation for an image sequence.
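The derivation of affine motion parameters from block-based motion vectors can be viewed as a least-squares fit of a six-parameter model to the per-block displacements. The function below is a generic sketch of that idea under assumed conventions (block-center coordinates as the model's domain, `numpy.linalg.lstsq` as the solver); it is not the patent's exact procedure:

```python
import numpy as np

def fit_affine_from_blocks(centers, motion_vectors):
    """Least-squares fit of a 6-parameter affine motion model to
    block-based motion vectors.

    centers:        (N, 2) array of block-center coordinates (x, y)
    motion_vectors: (N, 2) array of per-block displacements (vx, vy)
    Returns (a1..a6) such that
        vx = a1 + a2*x + a3*y
        vy = a4 + a5*x + a6*y
    """
    x, y = centers[:, 0], centers[:, 1]
    ones = np.ones_like(x)
    A = np.column_stack([ones, x, y])  # design matrix, shared by vx and vy
    ax, *_ = np.linalg.lstsq(A, motion_vectors[:, 0], rcond=None)
    ay, *_ = np.linalg.lstsq(A, motion_vectors[:, 1], rcond=None)
    return np.concatenate([ax, ay])    # a1..a6
```

Because each block contributes one displacement sample, even a noisy block-matching field over-determines the six parameters, and the least-squares solution averages out per-block estimation noise.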
Specifically, optical flow (e.g., affine) object motion segmentation is initially performed for a pair of adjacent frames. Namely, optical flow motion parameters between adjacent frames that describe the position of each point on a region at each time instant are made available to the present object trajectory segmenter. The present invention is not limited by the method or model that is employed to provide the initial optical flow motion parameters between adjacent frames.
The object trajectory segmenter applies the optical flow motion parameters to form a new prediction, or method for predicting, the positions of all the points on an object over time within an interval. For example, the optical flow motion parameters are curve fitted to form the new prediction. The new prediction is then applied, and the result is evaluated with an error metric. For example, the error metric measures the sum of deviations in distance at each point on the region at each time instant under the new prediction compared to the original predictions. The results from such comparison with the error metric dictate the proper intervals (temporal boundaries) of the image sequence at which the motion parameters are valid for various key objects. In other words, it is important to detect what the motion segments, or temporal boundaries, are for a key object. In doing so, the present object trajectory segmenter obtains two sets of important information: the motion parameter values that accurately describe the object's motion, and the frames for which those parameters are valid.
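The interval-validation idea above can be sketched as a greedy segmentation: grow an interval, curve-fit a low-order model over it, and close the segment once the summed deviation between the fitted prediction and the frame-by-frame positions exceeds a bound. This is a simplified one-dimensional sketch; the patent's exact fitting model and error metric may differ, and `order` and `max_error` are illustrative parameters:

```python
import numpy as np

def segment_trajectory(positions, order=1, max_error=1.0):
    """Greedy temporal segmentation of a 1-D trajectory.

    positions: (T,) array of a point's coordinate at each frame.
    Returns a list of (start, end, coeffs) segments (end exclusive),
    where coeffs are np.polyfit coefficients valid on [start, end).
    """
    segments, start = [], 0
    T = len(positions)
    while start < T:
        end = start + 1
        best = np.array([positions[start]])  # degree-0 fallback
        while end < T:
            t = np.arange(start, end + 1)
            deg = min(order, len(t) - 1)
            coeffs = np.polyfit(t, positions[start:end + 1], deg)
            # Error metric: sum of absolute deviations over the interval.
            err = np.abs(np.polyval(coeffs, t) - positions[start:end + 1]).sum()
            if err > max_error:
                break  # parameters no longer valid; close the segment here
            best = coeffs
            end += 1
        segments.append((start, end, best))
        start = end
    return segments
```

A trajectory that moves linearly and then stops, for instance, splits into two segments at the frame where the linear fit stops explaining the positions within the error budget.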
Namely, the optical flow (e.g., affine) motion parameters generated for each identified key object, for each adjacent pair of frames, are processed over an interval of the image sequence to effect object trajectory segmentation. Motion trajectory attributes such as direction, velocity and acceleration can then be deduced for each key object over some frame interval, thereby providing another aspect of motion information that can be exploited by queries.
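Once a segment's trajectory is described by polynomial coefficients, velocity and acceleration follow directly by differentiation. A minimal sketch, assuming the `np.polyfit` coefficient convention (highest power first); the patent does not prescribe this exact parameterization:

```python
import numpy as np

def motion_kinematics(coeffs, t):
    """Evaluate position, velocity, and acceleration at frame t from
    polynomial trajectory coefficients (highest power first)."""
    vel = np.polyder(coeffs)      # first derivative: velocity
    acc = np.polyder(coeffs, 2)   # second derivative: acceleration
    return (np.polyval(coeffs, t),
            np.polyval(vel, t),
            np.polyval(acc, t))
```

For a quadratic trajectory x(t) = 0.5t² + 2t + 1, this yields velocity t + 2 and constant acceleration 1, i.e., exactly the direction, speed, and acceleration attributes a motion-aware query could filter on.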


REFERENCES:
patent: 5521843 (1996-05-01), Arman et al.
patent: 5930379 (1999-07-01), Rehg et al.
patent: 0 805 405 (1997-05-01), None
International Search Report for Int'l Application No. PCT/US00/02364, dated Jun. 13, 2000.
Jacek Nieweglowski et al., “A Novel Video Coding Scheme Based on Temporal Prediction Using Digital Image Warping,” IEEE Transactions on Consumer Electronics, vol. 39, no. 3, Aug. 1993, New York, US.

