Apparatus and method for context-based indexing and...

Image analysis – Applications – Motion or velocity measuring

Reexamination Certificate


Details

U.S. Classification: C382S103000, C382S173000, C382S224000, C348S699000
Type: Reexamination Certificate
Status: active
Patent number: 06643387

ABSTRACT:

The invention relates to image processing for indexing and retrieval of image sequences, e.g., video. More particularly, the invention relates to an efficient framework for context-based indexing and retrieval of image sequences with emphasis on motion description.
BACKGROUND OF THE DISCLOSURE
With the explosion of available multimedia content, e.g., audiovisual content, the need to organize and manage this ever-growing and complex information becomes important. Specifically, as libraries of multimedia content continue to grow, indexing this highly complex information to facilitate efficient retrieval at a later time becomes unwieldy.
By standardizing a minimum set of descriptors that describe multimedia content, content held in a wide variety of databases can be located, thereby making search and retrieval more efficient and powerful. International standards bodies such as the Moving Picture Experts Group (MPEG) have embarked on standardizing such an interface, which can be used by indexing engines, search engines, and filtering agents. This new member of the MPEG family of standards is named the multimedia content description interface and has been code-named “MPEG-7”.
For example, a typical content description of a video sequence can be obtained by dividing the sequence into “shots”. A “shot” can be defined as a sequence of frames in a video clip that depicts an event and is preceded and followed by an abrupt scene change or a special-effect scene change such as a blend, dissolve, wipe, or fade. Detection of shot boundaries enables event-wise random access into a video clip and thus constitutes the first step towards content search and selective browsing. Once a shot is detected, representative frames called “key frames” are extracted to capture the evolution of the event, e.g., key frames can be identified to represent an explosion scene, an action chase scene, a romantic scene, and so on. This reduces the complex problem of processing the many frames of an image sequence to processing only a few key frames. The existing body of knowledge in low-level abstraction of scene content such as color, shape, and texture from still images can then be applied to extract the meta-data for the key frames.
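The shot-boundary detection described above can be sketched as a simple frame-difference test. The following is a minimal illustration, not the patent's method: it flags an abrupt scene change when the gray-level histograms of successive frames differ by more than a threshold (the bin count and threshold are illustrative choices).

```python
import numpy as np

def detect_shot_boundaries(frames, threshold=0.5):
    """Flag frame indices where the gray-level histogram changes abruptly.

    `frames` is a sequence of 2-D uint8 arrays; `threshold` is an
    illustrative fraction of the maximum L1 histogram distance (2.0).
    """
    boundaries = []
    prev_hist = None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=64, range=(0, 256))
        hist = hist / hist.sum()              # normalize to a probability mass
        if prev_hist is not None:
            diff = np.abs(hist - prev_hist).sum()   # L1 distance, in [0, 2]
            if diff > threshold:
                boundaries.append(i)          # frame i starts a new shot
        prev_hist = hist
    return boundaries
```

Gradual transitions (blends, dissolves, fades) generally require a more elaborate test, e.g., comparing cumulative differences over a window rather than a single frame pair.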
While offering a simple way to extract meta-data, the above description carries no motion-related information. Motion information can considerably expand the scope of queries that can be made about content (e.g., queries can have “verbs” in addition to “nouns”). Namely, it is advantageous to correlate the known information based on color, shape, and texture descriptors with motion information, thereby conveying a more intelligent description of the dynamics of the scene that can be used by a search engine. Instead of analyzing a scene from a single perspective and storing only the corresponding meta-data, it is advantageous to capture relative object motion information as a descriptor that will ultimately support fast analysis of scenes on the fly from different perspectives, thereby enabling a wider range of unexpected queries to be supported. This can be very important, for example, in application areas such as security and surveillance, where it is not always possible to anticipate the queries.
Therefore, there is a need in the art for an apparatus and method for extracting and describing motion information in an image sequence, thereby improving image processing functions such as content-based indexing and retrieval, and various encoding functions.
SUMMARY OF THE INVENTION
One embodiment of the present invention is an apparatus and method for implementing object motion segmentation and object trajectory segmentation for an image sequence, thereby improving or enabling other image processing functions such as context-based indexing of the input image sequence using motion-based information. More specifically, block-based motion vectors are used to derive optical flow motion parameters, e.g., affine motion parameters.
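Deriving affine motion parameters from block-based motion vectors can be sketched as a least-squares fit. The following is a minimal illustration under the usual 6-parameter affine model v = A·[x, y, 1]ᵀ, not the patent's specific procedure:

```python
import numpy as np

def fit_affine_motion(positions, vectors):
    """Least-squares fit of a 6-parameter affine motion model to
    block-based motion vectors.

    `positions` is an (N, 2) array of block centers (x, y);
    `vectors` is an (N, 2) array of their motion vectors (vx, vy).
    Returns the 2x3 affine matrix A such that v ~= A @ [x, y, 1].
    """
    x, y = positions[:, 0], positions[:, 1]
    D = np.column_stack([x, y, np.ones_like(x)])   # design matrix, (N, 3)
    A, *_ = np.linalg.lstsq(D, vectors, rcond=None)
    return A.T                                     # (2, 3)
```

In practice the fit is usually made robust to outlier blocks, e.g., by iteratively discarding vectors with large residuals before refitting.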
Specifically, optical flow (e.g., affine) object motion segmentation is initially performed for a pair of adjacent frames. The affine motion parameters are then used to determine or identify key objects within each frame. These key objects are then monitored over some interval of the image sequence (also known as a “shot”, comprising a number of frames of the input image sequence), and their motion information is extracted and tracked over that interval.
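One simple way to use fitted affine parameters to separate candidate key objects from the dominant motion is to threshold each block's residual against the affine prediction. This is an illustrative sketch (the threshold and the binary background/object split are assumptions, not details taken from the patent):

```python
import numpy as np

def segment_by_affine_residual(positions, vectors, A, tol=1.0):
    """Label each block as consistent with the dominant affine motion.

    `positions` (N, 2) are block centers, `vectors` (N, 2) their motion
    vectors, and `A` a 2x3 affine motion matrix (e.g., from a
    least-squares fit). Returns a boolean mask: True where the block
    follows the dominant motion, False where it deviates by more than
    `tol` pixels and is thus a candidate key-object block.
    """
    predicted = positions @ A[:, :2].T + A[:, 2]        # affine prediction
    residual = np.linalg.norm(vectors - predicted, axis=1)
    return residual <= tol
```

Grouping the False blocks into connected regions would then yield per-object masks that can be tracked from frame pair to frame pair.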
Next, optical flow (e.g., affine) trajectory segmentation is performed on the image sequence. Specifically, the affine motion parameters generated for each identified key object for each adjacent pair of frames are processed over an interval of the image sequence to effect object trajectory segmentation. Namely, motion trajectory attributes such as direction, velocity, and acceleration can be deduced for each key object over some frame interval, thereby providing another aspect of motion information that can be exploited by queries.
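The trajectory attributes named above (direction, velocity, acceleration) can be sketched as finite differences over an object's tracked centroid positions. This is a minimal illustration, assuming one centroid per frame, and is not the patent's specific trajectory-segmentation procedure:

```python
import numpy as np

def trajectory_descriptors(centroids, dt=1.0):
    """Derive per-frame motion descriptors for one key object.

    `centroids` is an (N, 2) sequence of the object's centroid
    positions; `dt` is the frame interval. Returns velocity (N-1, 2),
    acceleration (N-2, 2), speed (N-1,), and direction in radians (N-1,)
    computed by finite differences.
    """
    c = np.asarray(centroids, dtype=float)
    velocity = np.diff(c, axis=0) / dt              # first difference
    acceleration = np.diff(velocity, axis=0) / dt   # second difference
    speed = np.linalg.norm(velocity, axis=1)
    direction = np.arctan2(velocity[:, 1], velocity[:, 0])
    return velocity, acceleration, speed, direction
```

A query engine could then match, e.g., “objects moving left faster than v” by testing the stored speed and direction descriptors without reprocessing the frames.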


REFERENCES:
patent: 5557684 (1996-09-01), Wang et al.
patent: 5734737 (1998-03-01), Chang et al.
patent: 5787205 (1998-07-01), Hirabayashi
patent: 5802220 (1998-09-01), Black et al.
patent: 5909251 (1999-06-01), Guichard et al.
patent: 5930379 (1999-07-01), Rehg et al.
patent: 6097832 (2000-08-01), Guillotel et al.
patent: 6154578 (2000-11-01), Park et al.
patent: 6192156 (2001-02-01), Moorby
patent: 6236682 (2001-05-01), Ota et al.
patent: 6263089 (2001-07-01), Otsuka et al.
patent: 6400830 (2002-06-01), Christian et al.
patent: 6400831 (2002-06-01), Lee et al.
patent: 0 805 405 (1997-11-01), None
Bradshaw et al., “The active recovery of 3D motion trajectories and their use in prediction,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 3, pp. 219-234, Mar. 1997.*
Willersinn et al., “Robust obstacle detection and tracking by motion analysis,” IEEE Conference on Intelligent Transportation System (ITSC '97), Nov. 9-12, 1997, pp. 717-722.*
Lane et al., “Motion estimation and tracking of multiple objects in sector scan sonar using optical flow,” IEE Colloquium on Autonomous Underwater Vehicles and their Systems—Recent Developments and Future Prospects, 1996, pp. 6/1-6/11.*
Ardizzone et al., “Video indexing using optical flow field,” Proceedings, International Conference on Image Processing (ICIP 1996), Sep. 16-19, 1996, vol. 3, pp. 831-834.*
Lane et al., “Robust tracking of multiple objects in sector-scan sonar image sequences using optical flow motion estimation,” IEEE Journal of Oceanic Engineering, vol. 23, no. 1, pp. 31-46, Jan. 1998.*
Giachetti et al., “The use of optical flow for road navigation,” IEEE Transactions on Robotics and Automation, vol. 14, no. 1, pp. 43-48, Feb. 1998.*
Bors et al., “Motion and segmentation prediction in image sequences based on moving object tracking,” Proceedings, 1998 International Conference on Image Processing (ICIP 1998), Oct. 4-7, 1998, vol. 3, pp. 663-667.*
Mae et al., “Tracking moving object in 3-D space based on optical flow and edges,” Proceedings, Fourteenth International Conference on Pattern Recognition, Aug. 16-20, 1998, vol. 2, pp. 1439-1441.*
Gunsel et al., “Content-based access to video objects: Temporal segmentation, visual summarization, and feature extraction,” Signal Processing, vol. 66, 1998, pp. 261-280.
