Image analysis – Pattern recognition – Feature extraction
Reexamination Certificate
2006-01-17
2006-01-17
Couso, Jose L. (Department: 2621)
Image analysis
Pattern recognition
Feature extraction
C382S103000, C382S104000
Reexamination Certificate
active
06987885
ABSTRACT:
Systems, apparatuses, and methods are presented that determine the number of people in a crowd using visual hull information. In one embodiment, an image sensor generates a conventional image of a crowd. A silhouette image is then determined based on the conventional image. The intersection of the silhouette image cone and a working volume is determined. The projection of the intersection onto a plane is determined. Planar projections from several image sensors are aggregated by intersecting them, forming a subdivision pattern. Polygons that are actually empty are identified and removed. Upper and lower bounds of the number of people in each polygon are determined and stored in a tree data structure. This tree is updated as time passes and new information is received from image sensors. The number of people in the crowd is equal to the lower bound of the root node of the tree.
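As an illustration of the bound-keeping step described above, the following Python sketch (illustrative only; names such as BoundNode and aggregate are assumptions, not terminology from the patent) shows one way per-polygon lower and upper bounds could be stored in a tree and summed bottom-up, so that the lower bound at the root gives the reported crowd count.

class BoundNode:
    """Node of a bound tree: leaves correspond to polygons of the planar
    subdivision; internal nodes aggregate the bounds of their children."""

    def __init__(self, lower=0, upper=0, children=None):
        self.children = children or []
        # Leaf bounds come from sensor measurements for that polygon;
        # internal-node bounds are recomputed from the children.
        self.lower, self.upper = lower, upper

    def aggregate(self):
        """Recompute bounds bottom-up: counts in disjoint polygons add."""
        if self.children:
            for child in self.children:
                child.aggregate()
            self.lower = sum(c.lower for c in self.children)
            self.upper = sum(c.upper for c in self.children)
        return self.lower, self.upper


# Three polygons with measured bounds [1, 2], [0, 3], and [2, 2].
leaves = [BoundNode(1, 2), BoundNode(0, 3), BoundNode(2, 2)]
root = BoundNode(children=leaves)

# A later update shows the second polygon is actually empty, so its
# bounds collapse to zero and the totals tighten.
leaves[1].lower = leaves[1].upper = 0

low, high = root.aggregate()
print(f"between {low} and {high} people")  # between 3 and 4 people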
REFERENCES:
patent: 5298697 (1994-03-01), Suzuki et al.
patent: 5465115 (1995-11-01), Conrad et al.
patent: 5550928 (1996-08-01), Lu et al.
patent: 5866887 (1999-02-01), Hashimoto et al.
patent: 6633232 (2003-10-01), Trajkovic et al.
patent: 6697104 (2004-02-01), Yakobi et al.
Regazzoni, “A Real-Time Vision System for Crowding Monitoring”, IEEE 0-7803-0891-3, Mar. 1993, pp. 1860-1864.
Lin, “Estimation of Number of People in Crowded Scenes Using Perspective Transformation”, IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, vol. 31, No. 6, Nov. 2001, pp. 645-654.
Marana, “Estimating Crowd Density With Minkowski Fractal Dimension”, IEEE, 0-7803-5041-3, 1999, pp. 3521-3524.
Laurentini et al., “Introducing a New Problem: Shape-from-Silhouette When the Relative Positions of the Viewpoints is Unknown”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, No. 11, Nov. 2003, pp. 1484-1493.
International Search Report, PCT/US04/18842, Feb. 3, 2005.
Glassner, A., Graphics Gems, 1998, pp. 75-97, Morgan Kaufmann, USA.
Goodman, J. et al., Handbook of Discrete and Computational Geometry, 1997, pp. 599-630, CRC Press, USA.
Horprasert, T. et al., A Robust Background Subtraction and Shadow Detection, Proceedings of Asian Conference on Computer Vision, Jan. 2000.
Laurentini, A., The Visual Hull Concept for Silhouette-Based Image Understanding, IEEE Transactions on Pattern Analysis and Machine Intelligence, Feb. 1994, pp. 150-162, vol. 16, No. 2.
Yang, D. et al., Counting People in Crowds with a Real-Time Network of Simple Image Sensors, Proceedings of the Ninth IEEE International Conference on Computer Vision (ICCV), 2003, pp. 122-129.
Q. Cai, J.K. Aggarwal, “Automatic Tracking of Human Motion in Indoor Scenes Across Multiple Synchronized Video Streams,” in ICCV, 1998, pp. 356-362.
Xing Chen, “Design of Many Camera Tracking Systems for Scalability and Efficient Resource Allocation,” Ph.D. dissertation, Stanford University, Jun. 2002.
G. Cheung, T. Kanade, J. Bouguet, M. Holler, “A Real Time System for Robust 3D Voxel Reconstruction of Human Motions,” in CVPR, v. 2, 2000, pp. 714-720.
R. Collins, A. Lipton, T. Kanade, “A System for Video Surveillance and Monitoring,” American Nuclear Soc. 8th Int. Topical Meeting on Robotics and Remote Systems, 1999.
T. Darrell, G. Gordon, M. Harville, J. Woodfill, “Integrated person tracking using stereo, color, and pattern detection,” in CVPR, 1998, pp. 601-609.
L. Doherty, B.A. Warneke, B.E. Boser, K. Pister, “Energy and Performance Considerations for Smart Dust,” Int. J. of Parallel Distributed Systems, vol. 4, No. 3, 2001, pp. 121-133.
H.H. Gonzalez-Banos and J.C. Latombe, “A Randomized Art-Gallery Algorithm for Sensor Placement,” Proc. 17th ACM Symp. on Computational Geometry (SoCG'01), 2001, pp. 232-240.
I. Haritaoglu, D. Harwood, L.S. Davis, “W4S: A Real-Time System for Detecting and Tracking People in 2 1/2 D,” in European Conference on Computer Vision, 1998.
I. Haritaoglu, D. Harwood, L.S. Davis, “Hydra: Multiple People Detection and Tracking Using Silhouettes,” Int. Conf. on Image Analysis and Processing, 1999.
J. Hill, R. Szewczyk, A. Woo, S. Hollar, D. Culler, K. Pister, “System Architecture Directions for Networked Sensors,” ASPLOS, 2000.
T. Huang, S. Russell, “Object identification: a Bayesian analysis with application to traffic surveillance,” Artificial Intelligence, 1998, 103:1-21.
C. Intanagonwiwat, R. Govindan, D. Estrin, “Directed diffusion: a scalable and robust communication paradigm for sensor networks,” MobiCom, 2000.
S. Intille, J.W. Davis, A. Bobick, “Real-Time Closed-World Tracking,” in CVPR, 1997, pp. 697-703.
M. Isard, J. MacCormick, “BraMBLe: A Bayesian Multiple-Blob Tracker,” in ICCV, v. 2, 2001, pp. 34-41.
V. Kettnaker, R. Zabih, “Counting People from Multiple Cameras,” ICMCS, 1999, pp. 267-271.
J. Krumm, S. Harris, B. Meyers, B. Brumitt, M. Hale, S. Shafer, “Multi-camera Multi-person Tracking for EasyLiving,” IEEE Inter. Workshop on Visual Surveillance, 2000.
S.H. Lim, A. El Gamal, “Integration of Image Capture and Processing—Beyond Single Chip Digital Camera,” Proc. of SPIE, Sensors and Camera Systems for Scientific, Industrial, and Digital Photography Applications II, 2001, pp. 219-226.
W. Matusik, C. Buehler, L. McMillan, “Polyhedral Visual Hulls for Real-Time Rendering,” Eurographics Workshop on Rendering, 2001.
W.M. Merrill, K. Sohrabi, L. Girod, J. Elson, F. Newberg, W. Kaiser, “Open Standard Development Platforms for Distributed Sensor Networks,” Proc. of SPIE, Unattended Ground Sensor Technologies and Applications IV, 2002.
A. Mittal, L.S. Davis, “M2Tracker: A Multi-View Approach to Segmenting and Tracking People in a Cluttered Scene Using Region-Based Stereo,” in ECCV, 2002.
J. Orwell, P. Remagnino, G.A. Jones, “Multi-Camera Colour Tracking,” IEEE Workshop on Visual Surveillance, 1999.
G.J. Pottie, W.J. Kaiser, “Wireless integrated network sensors,” CACM, vol. 43, No. 5, 2000, pp. 51-58.
T. Sogo, H. Ishiguro, M. Trivedi, “Real-Time Target Localization and Tracking by N-Ocular Stereo,” IEEE Workshop on Omnidirectional Vision, 2000.
R. Szeliski, “Rapid Octree Construction from Image Sequences,” CVGIP: Image Understanding, vol. 58, No. 1, 1993, pp. 23-32.
T. Wada, X. Wu, S. Tokai, T. Matsuyama, “Homography Based Parallel Volume Intersection: Toward Real-Time Volume Reconstruction using Active Cameras,” IEEE Workshop on Comp. Arch. for Machine Perception, 2000, pp. 331-340.
T. Zhao, R. Nevatia, “Stochastic Human Segmentation from a Static Camera,” IEEE Workshop on Motion and Video Computing, 2002.
Gonzalez-Banos Hector H.
Guibas Leonidas J.
Yang Danny B.
Couso Jose L.
Duell Mark E.
Fenwick & West LLP
Honda Motor Co. Ltd.
Lu Tom Y.