Systems and methods for tracking objects in video sequences

Pulse or digital communications – Bandwidth reduction or expansion – Television or motion video signal

Reexamination Certificate


Details

Classification codes: C375S240000, C375S240010, C375S240080, C375S240160
Type: Reexamination Certificate
Status: active
Patent number: 06901110

ABSTRACT:
A method for tracking one or more objects in an input video sequence lets a user select one or more regions containing the object(s) of interest in a first and a last frame of the user's choice. An initialization component selects the current frame and the search frame and divides the selected region into equal-sized macroblocks. An edge detection component computes the gradient of the current frame for each macroblock, and a threshold component then decides which macroblocks contain sufficient information for tracking the desired object. A motion estimation component computes, for each macroblock in the current frame, its position in the search frame. The motion estimation component uses a search component that executes a novel search algorithm to find the best match, with the mean absolute difference (MAD) between two macroblocks as the matching criterion. The motion estimation component returns the estimated displacement vector for each block. An output component collects the motion vectors of all the predicted blocks and calculates the new position of the object in the next frame.
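The abstract names the components but not their concrete parameters or the search pattern itself. The Python sketch below illustrates the described pipeline under assumed values (a 16-pixel macroblock, an 8-pixel search window, a fixed gradient threshold) and substitutes an exhaustive window search for the patent's unspecified search algorithm; it is an illustration of the general approach, not the claimed method.

```python
import numpy as np

BLOCK = 16           # assumed macroblock size (not specified in the abstract)
SEARCH_RANGE = 8     # assumed search window radius, in pixels
GRAD_THRESHOLD = 20  # assumed mean-gradient threshold for "sufficient information"

def mean_absolute_difference(block_a, block_b):
    """Matching criterion from the abstract: MAD between two macroblocks."""
    return np.mean(np.abs(block_a.astype(np.float64) - block_b.astype(np.float64)))

def has_sufficient_detail(block):
    """Edge-detection/threshold step: keep blocks whose mean gradient magnitude is large enough."""
    gy, gx = np.gradient(block.astype(np.float64))
    return np.mean(np.hypot(gx, gy)) >= GRAD_THRESHOLD

def estimate_block_motion(current, search, top, left):
    """Return the displacement (dy, dx) minimizing MAD for one macroblock.

    An exhaustive search over a small window stands in for the patent's search component.
    """
    ref = current[top:top + BLOCK, left:left + BLOCK]
    h, w = search.shape
    best_mad, best_vec = np.inf, (0, 0)
    for dy in range(-SEARCH_RANGE, SEARCH_RANGE + 1):
        for dx in range(-SEARCH_RANGE, SEARCH_RANGE + 1):
            y, x = top + dy, left + dx
            if 0 <= y and y + BLOCK <= h and 0 <= x and x + BLOCK <= w:
                mad = mean_absolute_difference(ref, search[y:y + BLOCK, x:x + BLOCK])
                if mad < best_mad:
                    best_mad, best_vec = mad, (dy, dx)
    return best_vec

def track_region(current, search, region):
    """Divide the selected region into macroblocks, estimate motion for the
    informative ones, and return the region's estimated position in the search frame."""
    top, left, height, width = region
    vectors = []
    for y in range(top, top + height - BLOCK + 1, BLOCK):
        for x in range(left, left + width - BLOCK + 1, BLOCK):
            if has_sufficient_detail(current[y:y + BLOCK, x:x + BLOCK]):
                vectors.append(estimate_block_motion(current, search, y, x))
    if not vectors:
        return region  # no informative blocks; assume the object did not move
    dy, dx = np.mean(vectors, axis=0)
    return (int(round(top + dy)), int(round(left + dx)), height, width)
```

The sketch averages the per-block displacement vectors to obtain the region's new position; the patent leaves the exact combination rule to its output component, so the averaging step here is an assumption.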

REFERENCES:
patent: 5072384 (1991-12-01), Doi et al.
patent: 5237648 (1993-08-01), Mills et al.
patent: 5598226 (1997-01-01), Kokaram
patent: 5729471 (1998-03-01), Jain et al.
patent: 5745710 (1998-04-01), Clanton, III et al.
patent: 5748789 (1998-05-01), Lee et al.
patent: 5956716 (1999-09-01), Kenner et al.
patent: 5963203 (1999-10-01), Goldberg et al.
patent: 5973733 (1999-10-01), Gove
patent: 6404936 (2002-06-01), Katayama et al.
patent: 0 555 028 (1993-08-01), None
patent: 0 590 759 (1994-04-01), None
patent: 97.00423 (1997-01-01), None
“Advanced Imaging Product Survey: Photo, Document and Video,” Advanced Imaging, Oct. 1994, pp. 66-72.
Akutsu et al., “Video Indexing Using Motion Vectors,” Proceedings of the SPIE, vol. 1818, Part 3, Nov. 18, 1992, pp. 1522-1530.
Ardizzone et al., “A Real-Time Neural Approach to Scene Cut Detection,” IS&T/SPIE Storage & Retrieval for Image and Video Databases IV, San Jose, California, Jan. 28-Feb. 2, 1996.
Ariki and Saito, “Extraction of TV News Articles Based on Scene Cut Detection using DCT Clustering,” International Conference on Image Processing, Sep. 1996, Lausanne, Switzerland, pp. 847-850.
Ding et al., “Previewing Video Data: Browsing Key Frames Rates Using a Video Slide Show Interface,” Proceedings of the International Symposium on Research, Development and Practice in Digital Libraries (ISDL '97), 1997, pp. 1-13.
Gelgon and Bouthemy, “A Region-Level Graph Labeling Approach to Motion-Based Segmentation,” INRIA Rennes, pp. 1-19.
Golland and Bruckstein, “Note: Motion from Color,” Computer Vision and Image Understanding, vol. 68, No. 3, Dec. 1997, pp. 347-362.
Gu and Lee, “Semi-Automatic Semantic Video Object Segmentation and Tracking,” Microsoft Corporation, pp. 1-33.
Hampapur et al., “Digital Video Segmentation,” ACM Multimedia '94 Proceedings, ACM Press, Oct. 1994, pp. 357-364.
Hoetter, M., “Differential Estimation of the Global Motion Parameters Zoom and Pan,” Signal Processing, vol. 16, 1989, pp. 249-265.
Huang, J. et al., “Image Indexing Using Color Correlograms,” IEEE Computer Vision and Pattern Recognition Conference, San Juan, Puerto Rico, Jun. 17-19, 1997, pp. 762-768.
Huang, T.S., “Modeling Analysis and Visualization of Nonrigid Object Motion,” International Conference on Pattern Recognition, vol. 1, Atlantic City, NJ, Jun. 1990, pp. 361-364.
Kasturi et al., “An Evaluation of Color Histogram Based Methods in Video Indexing,” Dept. of Computer Science and Engineering, The Pennsylvania State University, Technical Report CSE-96-053, 1996.
Kataoka et al., “Architecture and Storage Structure of an Interactive Multimedia Information System,” IEICE Transactions on Information and Systems, vol. E78-D, No. 11, Nov. 1, 1995, pp. 1354-1365.
Kim et al., “A Fast Motion Estimator for Real-Time System,” IEEE Transactions on Consumer Electronics, vol. 43, No. 1, Feb. 1997, pp. 24-29.
Kobla et al., “Archiving, Indexing, and Retrieval of Video in the Compressed Domain,” SPIE Conference on Multimedia Storage and Archiving Systems, Boston, Massachusetts, Nov. 18-19, 1996, vol. 2916, pp. 78-89.
Luo et al., “A New Prediction Search Algorithm for Block Motion Estimation in Video Coding,” IEEE Transactions on Consumer Electronics, vol. 43, No. 1, Feb. 1997, pp. 56-61.
Madrane et al., “Video Representation Tools Using a Unified Object and Perspective Based Approach,” Proceedings of the SPIE, vol. 2420, Feb. 9, 1995, pp. 152-163.
Massey and Bender, “Salient Stills: Process and Practice,” IBM Systems Journal, vol. 35, No. 3/4, 1996, pp. 557-573.
Meng, J. and Chang, S., “CVEPS - A Compressed Video Editing and Parsing System,” Proceedings of ACM Multimedia 1996, Boston, Massachusetts, Nov. 18-22, 1996, pp. 43-53.
Mills et al., “A Magnifier Tool for Video Data,” ACM CHI '92, May 7, 1992, pp. 93-98.
Pass et al., “Comparing Images Using Color Coherence Vectors,” Proceedings of ACM Multimedia 1996, Boston, Massachusetts, Nov. 18-22, 1996, pp. 65-73.
Shio and Sklansky, “Segmentation of People in Motion,” Proc. IEEE, vol. 79, 1991, pp. 325-332.
Smith and Kanade, “Video Skimming for Quick Browsing Based on Audio and Image Characterization,” Carnegie Mellon University, School of Computer Science, Technical Report CMU-95-186, 1995.
Taniguchi et al., “Panorama Excerpts: Extracting and Packing Panoramas for Video Browsing,” Proceedings of ACM Multimedia, Seattle, Washington, Nov. 9-13, 1997, pp. 427-436.
Teodosio and Bender, “Salient Video Stills: Content and Context Preserved,” Proc. ACM Multimedia 93, California, Aug. 1-6, 1993, pp. 39-46.
Tonomura et al., “Content Oriented Visual Interface Using Video Icons for Visual Database Systems,” Journal of Visual Languages and Computing, vol. 1, Jan. 1, 1990, pp. 183-198.
Tse and Baker, “Global Zoom/Pan Estimation and Compensation for Video Compression,” Proc. ICASSP, vol. 4, May 1991, pp. 2725-2728.
Ueda, Hirotada et al., “Automatic Structure Visualization for Video Editing,” Bridges Between Worlds, Amsterdam, Apr. 24-29, 1993, pp. 137-141.
Wang, Demin, “Unsupervised Video Segmentation Based on Watersheds and Temporal Tracking,” Communications Research Center (manuscript submitted to IEEE Trans. on Circuits & Systems for Video Technology, special issue on image and video processing for emerging interactive multimedia services, Oct. 10, 1997), pp. 1-20.
Yeo and Yeung, “Retrieving and Visualizing Video,” Communications of the ACM, vol. 40, No. 12, Dec. 1997, pp. 43-52.
Yeung et al., “Video Browsing Using Clustering and Scene Transitions on Compressed Sequences,” Proceedings, IS&T/SPIE Multimedia Computing and Networking, 1995, pp. 399-413.
Zhang and Smoliar, “Developing Power Tools for Video Indexing and Retrieval,” SPIE, vol. 2185, pp. 140-149.
Zhang et al., “Automatic Partitioning of Full-Motion Video,” Multimedia Systems (Springer-Verlag, 1993), vol. 1, pp. 10-28.
Zhang, H.J. et al., “Video Parsing, Retrieval and Browsing: An Integrated and Content-Based Solution,” The Third ACM International Multimedia Conference and Exhibition (Multimedia '95), New York, New York, 1996, pp. 1-19.

