Usual event detection in a video using object and frame...

Image analysis – Pattern recognition

Reexamination Certificate


Details

U.S. Classification: C382S190000, C382S224000, C382S225000

Type: Reexamination Certificate

Status: active

Patent number: 07426301

ABSTRACT:
The invention provides a method for detecting usual events in a video. The events are detected by first constructing an aggregate affinity matrix from features of associated items extracted from the video. The affinity matrix is decomposed into eigenvectors, and the eigenvectors are used to reconstruct approximate estimates of the aggregate affinity matrix. Each approximate matrix is clustered and scored, and the clustering that yields the highest score is used to detect the usual events.
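The pipeline the abstract describes — affinity matrix from item features, eigendecomposition, rank-k reconstructions, clustering each reconstruction, keeping the best-scoring clustering — can be sketched as follows. The Gaussian affinity, the farthest-point k-means initialization, and the within-minus-between scoring rule are illustrative assumptions, not the patented method.

```python
import numpy as np

def affinity_matrix(features, sigma=1.0):
    """Pairwise Gaussian affinity between item feature vectors (assumed kernel)."""
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def rank_k_estimate(A, k):
    """Approximate A from its top-k eigenvectors (eigh returns ascending order)."""
    w, v = np.linalg.eigh(A)
    vk = v[:, -k:]
    return vk @ np.diag(w[-k:]) @ vk.T

def kmeans(X, k, iters=50):
    """Minimal k-means on the rows of X, with farthest-point initialization."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def cluster_score(A, labels):
    """Assumed score: mean within-cluster affinity minus mean between-cluster affinity."""
    same = labels[:, None] == labels[None, :]
    if not (~same).any():
        return A[same].mean()
    return A[same].mean() - A[~same].mean()

def detect_clusters(features, k_candidates=(2, 3, 4), sigma=1.0):
    """Cluster each rank-k reconstruction; keep the clustering with the best score."""
    A = affinity_matrix(features, sigma)
    best_score, best_labels = -np.inf, None
    for k in k_candidates:
        Ak = rank_k_estimate(A, k)
        labels = kmeans(Ak, k)
        score = cluster_score(A, labels)
        if score > best_score:
            best_score, best_labels = score, labels
    return best_labels
```

On well-separated feature groups, the two-cluster reconstruction scores highest and the returned labels separate the groups; the score function that selects among the candidate clusterings is the part the abstract leaves unspecified, so the rule above is only one plausible choice.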

REFERENCES:
patent: 6774917 (2004-08-01), Foote et al.
patent: 7103225 (2006-09-01), Yang et al.
patent: 7158680 (2007-01-01), Pace
G. L. Scott and H. C. Longuet-Higgins, “Feature grouping by relocalisation of eigenvectors of the proximity matrix,” Proc. British Machine Vision Conference, pp. 103-108, 1990.
N. Johnson and D. Hogg, “Learning the distribution of object trajectories for event recognition,” Proc. British Machine Vision Conference, pp. 583-592, 1995.
S. Kamvar, D. Klein, and C. Manning, “Interpreting and Extending Classical Agglomerative Clustering Algorithms using a Model-Based Approach,” Proc. ICML, 2002.
V. Kettnaker, “Time-dependent HMMs for visual intrusion detection,” Proc. IEEE Workshop on Detection and Recognizing Events in Video, 2003.
Z. Marx, I. Dagan, and J. Buhmann, “Coupled Clustering: a Method for Detecting Structural Correspondence,” Proc. International Conference on Machine Learning, pp. 353-360, 2001.
G. Medioni, I. Cohen, F. Bremond, S. Hongeng, and R. Nevatia, “Event detection and analysis from video streams,” IEEE Trans. on Pattern Analysis and Machine Intelligence, 23(8), pp. 873-889, 2001.
M. Meila and J. Shi, “Learning Segmentation by Random Walks,” Proc. Advances in Neural Information Processing Systems, 2000.
A. Ng, M. Jordan, and Y. Weiss, “On spectral clustering: Analysis and an algorithm,” Proc. Neural Information Processing Systems, 2001.
T. Starner and A. Pentland, “Visual recognition of American Sign Language using hidden Markov models,” Proc. Int'l Workshop on Automatic Face- and Gesture-Recognition, 1995.
C. Stauffer and W. E. Grimson, “Learning patterns of activity using real-time tracking,” IEEE Trans. on Pattern Analysis and Machine Intelligence, 22(8), pp. 747-757, 2000.
Y. Weiss, “Segmentation using eigenvectors: a unifying view,” Proc. IEEE International Conference on Computer Vision, pp. 975-982, 1999.
C. C. Paige, B. N. Parlett, and H. A. van der Vorst, “Approximate solutions and eigenvalue bounds from Krylov subspaces,” Numer. Linear Algebra Appl., 2, pp. 115-133, 1995.
L. Zelnik-Manor and M. Irani, “Event-Based Video Analysis,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, Dec. 2001.
J. Davis and A. Bobick, “Representation and recognition of human movement using temporal templates,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1997.
A. Bobick and J. Davis, “The Recognition of Human Movement Using Temporal Templates,” IEEE Trans. on Pattern Analysis and Machine Intelligence, 23(3), pp. 257-267, 2001.
S. Kamvar, D. Klein, and C. Manning, “Spectral Learning,” Proc. International Joint Conference on Artificial Intelligence, Apr. 2003.
