Method for tracking a video object in a time-ordered...

Image analysis – Applications – Target tracking or detecting

Reexamination Certificate


Details

Classifications: C382S168000, C382S291000
Type: Reexamination Certificate
Status: active
Patent number: 06724915

ABSTRACT:

FIELD OF THE INVENTION
The present invention is related to the field of digital video processing and analysis, and more specifically, to a method and apparatus for tracking a video object in an ordered sequence of two-dimensional images, including obtaining the trajectory of the video object in the sequence of images.
BACKGROUND OF THE INVENTION
Reference is made to a patent application entitled Apparatus and Method For Collaborative Dynamic Video Annotation, filed on even date herewith and assigned to the same assignee as the present application, the disclosure of which is herein incorporated by reference to the extent it is not incompatible with the present application.
A situation can arise wherein two or more users wish to communicate in reference to a common object, for example, in reference to a video. An example of this could be where a soccer team coach wishes to consult with a colleague to seek advice. The soccer team coach might wish to show a taped video of a game and ask the colleague to explain, using the video, why one team failed to score in a given attack situation. In addition, the coach might wish to record this discussion and show it later to other coaches to get more opinions.
In another scenario, a student could be taking a training course given at a location remote from the course instructor. It may be that the student cannot understand a procedure being taught in the course. The student can then call the instructor over an Internet phone to find out how the procedure should be performed. The instructor can first browse through the training video together with the student to find the clip where the difficulty arises. The student may then ask the instructor various questions about that procedure, and the instructor may decide to show the student another video that offers more detailed information. The instructor may then annotate this video using collaborative video annotation tools to explain to the student how the procedure should be performed.
Existing methods for object tracking can be broadly classified as feature-point (or token) tracking, boundary (and thus shape) tracking, and region tracking. Kalman filtering and template matching techniques are commonly applied for tracking feature points. A review of other 2-D and 3-D feature-point tracking methods is found in Y.-S. Yao and R. Chellappa, “Tracking a dynamic set of feature points,” IEEE Trans. Image Processing, vol. 4, pp. 1382-1395, October 1995.
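Kalman filtering for feature-point tracking is typically set up with the point's position and velocity as the state, predicting at each frame and correcting with the detected point location. As a rough illustration only (not drawn from the cited works), the following Python sketch runs a constant-velocity Kalman filter over a list of 2-D point detections; the state layout, the noise covariances, and the track() helper are assumptions made for this example.

import numpy as np

# Minimal constant-velocity Kalman filter for one 2-D feature point.
# State x = [px, py, vx, vy]; measurements are (px, py) detections.
# All noise magnitudes below are illustrative assumptions.
dt = 1.0                                  # one frame between measurements
F = np.array([[1, 0, dt, 0],              # state transition (constant velocity)
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],               # position is observed, velocity is not
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01                      # process noise (assumed)
R = np.eye(2) * 1.0                       # measurement noise (assumed)

def track(measurements, x0):
    """Run predict/update over a list of (px, py) detections (None = occluded)."""
    x = np.array([x0[0], x0[1], 0.0, 0.0])
    P = np.eye(4)
    trajectory = []
    for z in measurements:
        # Predict: during occlusion the point is simply extrapolated linearly,
        # which is the limitation noted in the next paragraph.
        x = F @ x
        P = F @ P @ F.T + Q
        if z is not None:                 # update only when the point is visible
            z = np.asarray(z, dtype=float)
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(4) - K @ H) @ P
        trajectory.append(x[:2].copy())
    return trajectory

# Example: the point is occluded (None) for two frames and is extrapolated.
print(track([(10, 10), (12, 11), None, None, (18, 14)], x0=(10, 10)))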
However, all these feature-point tracking methods fail to track occluded points unless they move linearly. Boundary tracking has been studied using a locally deformable active contour model (snakes). See, for example, F. Leymarie and M. Levine, “Tracking deformable objects in the plane using an active contour model,” IEEE Trans. Pattern Anal. Mach. Intel., vol. 15, pp. 617-634, June 1993; K. Fujimura, N. Yokoya, and K. Yamamoto, “Motion tracking of deformable objects by active contour models using multiscale dynamic programming,” J. of Visual Comm. and Image Representation, vol. 4, pp. 382-391, December 1993; and M. Kass, A. Witkin, and D. Terzopoulos, “Snakes: active contour models,” Int. Journal of Comp. Vision, vol. 1, no. 4, pp. 321-331, 1988. In addition, boundary tracking has been studied using a locally deformable template model. See, for example, C. Kervrann and F. Heitz, “Robust tracking of stochastic deformable models in long image sequences,” in IEEE Int. Conf. Image Proc., (Austin, Tex.), November 1994.
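To make the snake idea concrete, the following Python sketch performs one greedy refinement pass over a closed contour: each control point moves within a small search window to the position that best trades smoothness (staying near the midpoint of its neighbours) against edge strength. This is an illustration under simplifying assumptions, not the algorithm of any of the cited papers; the function name, the energy weights, and the greedy search strategy are assumptions made for the example.

import numpy as np

def greedy_snake_step(contour, edge_map, alpha=0.5, search=1):
    """One greedy refinement pass over a closed contour.

    contour  : (N, 2) float array of (row, col) control points inside the image
    edge_map : 2-D array of edge strength (higher = stronger edge)
    alpha    : weight of the internal (smoothness) energy, an assumed value
    search   : half-width of the local search window in pixels
    """
    new_contour = contour.copy()
    n = len(contour)
    for i in range(n):
        prev_pt = new_contour[(i - 1) % n]
        next_pt = contour[(i + 1) % n]
        best_pt, best_e = new_contour[i].copy(), np.inf
        r0, c0 = contour[i]
        for dr in range(-search, search + 1):
            for dc in range(-search, search + 1):
                r, c = int(r0 + dr), int(c0 + dc)
                if not (0 <= r < edge_map.shape[0] and 0 <= c < edge_map.shape[1]):
                    continue
                # Internal energy: stay close to the midpoint of the neighbours.
                mid = (prev_pt + next_pt) / 2.0
                e_int = np.sum((np.array([r, c]) - mid) ** 2)
                # External energy: move toward strong edges.
                e_ext = -float(edge_map[r, c])
                e = alpha * e_int + e_ext
                if e < best_e:
                    best_e, best_pt = e, np.array([r, c], dtype=float)
        new_contour[i] = best_pt
    return new_contour

Iterating such a pass until the contour stops moving, and seeding each frame with the previous frame's converged contour, is the basic boundary-tracking loop; the absence of a motion prediction step before that seeding is the weakness discussed next.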
These boundary tracking methods, however, lack the ability to track rapidly moving objects because they have no prediction mechanism to initialize the snake. In order to handle large motion, region-based motion prediction has been employed to guide the snake into a subsequent frame. See, for example, B. Bascle et al., “Tracking complex primitives in an image sequence,” in Int. Conf. Pattern Recog., (Israel), pp. 426-431, October 1994; and B. Bascle and R. Deriche, “Region tracking through image sequences,” in Int. Conf. Computer Vision, pp. 302-307, 1995.
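A crude stand-in for such a prediction step, under the assumption of a purely translational motion rather than the richer region-based estimators of the cited work, is sketched below in Python: the bounding box of the previous contour is matched against the next frame by an exhaustive integer-shift search, and the winning shift is applied to every contour point before snake refinement. The function and parameter names are hypothetical.

import numpy as np

def predict_contour(prev_frame, next_frame, contour, search=8):
    """Shift the last frame's contour by a region-estimated translation.

    prev_frame, next_frame : 2-D grayscale images of equal size
    contour                : (N, 2) float array of (row, col) contour points
    search                 : maximum shift, in pixels, tried in each direction
    """
    rows, cols = contour[:, 0].astype(int), contour[:, 1].astype(int)
    r0, r1 = rows.min(), rows.max() + 1
    c0, c1 = cols.min(), cols.max() + 1
    patch = prev_frame[r0:r1, c0:c1].astype(float)   # region inside the bounding box

    best_shift, best_sad = (0, 0), np.inf
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr0, cc0 = r0 + dr, c0 + dc
            rr1, cc1 = rr0 + patch.shape[0], cc0 + patch.shape[1]
            if rr0 < 0 or cc0 < 0 or rr1 > next_frame.shape[0] or cc1 > next_frame.shape[1]:
                continue
            # Sum of absolute differences between the region and its shifted copy.
            sad = np.abs(next_frame[rr0:rr1, cc0:cc1].astype(float) - patch).sum()
            if sad < best_sad:
                best_sad, best_shift = sad, (dr, dc)
    return contour + np.array(best_shift, dtype=float)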
Nevertheless, the prediction relies on a global (that is, not locally varying) motion assumption, and thus may not be satisfactory when there are local deformations within the boundary and the image background is very busy. Region tracking methods can be categorized into those that employ global deformation models and those that allow for local deformations. See, for example, Y. Y. Tang and C. Y. Suen, “New algorithms for fixed and elastic geometric transformation models,” IP, vol. 3, pp. 355-366, July 1994.
A method for region tracking using a single affine motion within each object has been proposed that assigns a second-order temporal trajectory to each affine model parameter. See, for example, F. G. Meyer and P. Bouthemy, “Region-based tracking using affine motion models in long image sequences,” CVGIP: Image Understanding, vol. 60, pp. 119-140, September 1994. Bascle et al. propose to combine region and boundary tracking. See the aforementioned article by B. Bascle and R. Deriche, “Region tracking through image sequences,” in Int. Conf. Computer Vision, pp. 302-307, 1995.
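Under an assumed, simplified reading of that description, a second-order temporal trajectory per affine parameter amounts to fitting a quadratic to each parameter's history and extrapolating one frame ahead, as in the following Python sketch; the parameter ordering and the helper name are assumptions, not the published method's implementation.

import numpy as np

def predict_affine_params(history):
    """Extrapolate each affine parameter one frame ahead.

    history : (T, 6) array with one row per past frame, the columns being the
              six affine parameters (a11, a12, a21, a22, tx, ty).
    A quadratic (second-order) polynomial is fitted to each parameter's history
    and evaluated at the next time index.
    """
    history = np.asarray(history, dtype=float)
    t = np.arange(len(history))
    predicted = np.empty(history.shape[1])
    for j in range(history.shape[1]):
        coeffs = np.polyfit(t, history[:, j], deg=min(2, len(t) - 1))
        predicted[j] = np.polyval(coeffs, len(t))
    return predicted

# Example: a translation parameter accelerating over five frames.
hist = np.zeros((5, 6))
hist[:, 4] = [0.0, 1.0, 2.5, 4.5, 7.0]    # tx grows quadratically
print(predict_affine_params(hist)[4])      # extrapolated tx for the next frame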
Bascle and Deriche use a region-based deformable model whose optimization relies on texture matching, which allows the tracking approach to handle relatively large displacements, cluttered images, and occlusions. Moscheni et al. suggest using spatio-temporal segmentation for every frame pair, followed by temporal linkage of an object, to track coherently moving regions of the images. See F. Moscheni, F. Dufaux, and M. Kunt, “Object tracking based on temporal and spatial information,” in IEEE Int. Conf. Acoust., Speech, and Signal Proc., (Atlanta, Ga.), May 1996.
Additional background information is provided in the following:
U.S. Pat. No. 5,280,530, entitled METHOD AND APPARATUS FOR TRACKING A MOVING OBJECT and issued Jan. 18, 1994 in the name of Trew et al., discloses a method of tracking a moving object in a scene, for example the face of a person in videophone applications. The method comprises forming an initial template of the face; extracting a mask outlining the face; dividing the template into a plurality (for example, sixteen) of sub-templates; searching the next frame to find a match with the template; searching the next frame to find a match with each of the sub-templates; determining the displacement of each sub-template with respect to the template; using the displacements to determine affine transform coefficients; and performing an affine transform to produce an updated template and updated mask.
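The step of using the sub-template displacements to determine affine transform coefficients is essentially a least-squares fit. The following Python sketch shows one way such a fit could be set up from sub-template centres and their measured displacements; it is an illustrative reconstruction under assumed input conventions, not the patented implementation, and the function name is hypothetical.

import numpy as np

def affine_from_displacements(centers, displacements):
    """Least-squares affine fit from sub-template motion.

    centers       : (N, 2) sub-template centre coordinates (x, y) in the template
    displacements : (N, 2) measured displacement (dx, dy) of each sub-template
    Returns the 2x3 affine matrix A such that [x', y'] = A @ [x, y, 1].
    """
    centers = np.asarray(centers, dtype=float)
    targets = centers + np.asarray(displacements, dtype=float)
    ones = np.ones((len(centers), 1))
    X = np.hstack([centers, ones])                    # (N, 3) design matrix
    # Solve X @ A.T ~= targets in the least-squares sense.
    A_t, *_ = np.linalg.lstsq(X, targets, rcond=None)
    return A_t.T                                      # shape (2, 3)

# Example: all sub-templates shifted uniformly by (2, 3) -> pure translation.
c = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], dtype=float)
d = np.tile([2.0, 3.0], (4, 1))
print(affine_from_displacements(c, d))                # ~[[1, 0, 2], [0, 1, 3]]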
U.S. Pat. No. 5,625,715, entitled METHOD AND APPARATUS FOR ENCODING PICTURES INCLUDING A MOVING OBJECT, issued Apr. 29, 1997 in the name of Trew et al., discloses a method of encoding a sequence of images including a moving object. The method comprises forming an initial template; extracting a mask outlining the object; dividing the template into a plurality (for example, sixteen) of sub-templates; searching the next frame to find a match with the template; searching the next frame to find a match with each of the sub-templates; determining the displacement of each sub-template with respect to the template; using the displacements to determine affine transform coefficients; and performing an affine transform to produce an updated template and updated mask. Encoding is performed at a higher resolution for portions within the outline than for portions outside the outline.
U.S. Pat. No. 5,473,369, entitled OBJECT TRACKING APPARATUS, filed Feb. 23, 1994 in the name of Abe, discloses an object detecting and tracking apparatus which detects a tracking target object in a moving image photographed by a television camera and tracks it, wherein the movement of the object is detected reliably and with high accuracy so as to automatically track the target object. When tracking is started after putting the target object in a region designating frame “WAKU” displayed on a screen in such a way as to be variable in size and position, a video
