Methods and apparatus for retrieving images from a large...

Image analysis – Pattern recognition – Classification

Reexamination Certificate


Details

Type: Reexamination Certificate

Status: active

Patent number: 07840076

Description

ABSTRACT:
An image retrieval program (IRP) may be used to query a collection of digital images. The IRP may include a mining module to use local and global feature descriptors to automatically rank the digital images in the collection with respect to similarity to a user-selected positive example. Each local feature descriptor may represent a portion of an image based on a division of that image into multiple portions. Each global feature descriptor may represent an image as a whole. A user interface module of the IRP may receive input that identifies an image as the positive example. The user interface module may also present images from the collection in a user interface in a ranked order with respect to similarity to the positive example, based on results of the mining module. Query concepts may be saved and reused. Other embodiments are described and claimed.
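
The ranking workflow the abstract describes (divide each image into portions for local feature descriptors, compute a global descriptor for the image as a whole, then rank the collection against a user-selected positive example) can be illustrated with a small sketch. This is only an illustration under assumptions of its own, not the claimed implementation: the grid division, the color-histogram descriptors, the histogram-intersection similarity, and the equal global/local weighting are hypothetical choices, and all function names below are invented for the example.

# Minimal illustrative sketch (not the patented method): rank a collection of
# images against one positive example using a global descriptor (whole-image
# color histogram) plus local descriptors (per-block histograms from a fixed
# grid division of the image). Descriptor and similarity choices are
# assumptions made for this example only.
import numpy as np

GRID = 4  # divide each image into GRID x GRID portions for local descriptors

def histogram(pixels, bins=8):
    """Normalized per-channel intensity histogram of an H x W x 3 uint8 array."""
    counts = [np.histogram(pixels[..., c], bins=bins, range=(0, 256))[0]
              for c in range(3)]
    h = np.concatenate(counts).astype(float)
    return h / (h.sum() + 1e-9)

def descriptors(image):
    """Return (global descriptor, list of local descriptors) for one image."""
    global_desc = histogram(image)
    h_step, w_step = image.shape[0] // GRID, image.shape[1] // GRID
    local_descs = [histogram(image[i * h_step:(i + 1) * h_step,
                                   j * w_step:(j + 1) * w_step])
                   for i in range(GRID) for j in range(GRID)]
    return global_desc, local_descs

def similarity(desc_a, desc_b):
    """Histogram-intersection similarity, combining global and local parts."""
    (ga, la), (gb, lb) = desc_a, desc_b
    global_sim = np.minimum(ga, gb).sum()
    local_sim = np.mean([np.minimum(x, y).sum() for x, y in zip(la, lb)])
    return 0.5 * global_sim + 0.5 * local_sim  # equal weights chosen arbitrarily

def rank_collection(collection, positive_example):
    """Return image names sorted by descending similarity to the positive example."""
    query = descriptors(positive_example)
    scores = {name: similarity(query, descriptors(img))
              for name, img in collection.items()}
    return sorted(scores, key=scores.get, reverse=True)

Given a dict mapping names to H x W x 3 numpy arrays, rank_collection(images, images["positive"]) would return the names in ranked order, corresponding to the ranked presentation the abstract attributes to the user interface module.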

REFERENCES:
patent: 5579471 (1996-11-01), Barber et al.
patent: 6285995 (2001-09-01), Abdel-Mottaleb et al.
patent: 6504571 (2003-01-01), Narayanaswami et al.
patent: 6801661 (2004-10-01), Sotak et al.
patent: 6901411 (2005-05-01), Li et al.
patent: 6947930 (2005-09-01), Anick et al.
patent: 2004/0267740 (2004-12-01), Liu et al.
patent: 2005/0010605 (2005-01-01), Conrad et al.
patent: 2005/0055344 (2005-03-01), Liu et al.
patent: 2005/0120006 (2005-06-01), Nye
patent: 2005/0144162 (2005-06-01), Liang
patent: 2008/0052262 (2008-02-01), Kosinov et al.
Wang, et al., “SIMPLIcity: Semantics-sensitive integrated matching for picture libraries,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 9, pp. 947-963, Sep. 2001.
Ke, et al., “An efficient parts-based near-duplicate and sub-image retrieval system,” Proceedings of the 12th annual ACM international conference on Multimedia, 2004, pp. 869-876 (IRP-TR-04-07).
Y. Wu, E. Y. Chang, K. C.-C. Chang, and J. R. Smith, “Optimal multimodal fusion for multimedia data analysis,” Proc. of the ACM International Conference on Multimedia (MM), 2004. Saved as p572-wu.pdf from http://delivery.acm.org/10.1145/1030000/1027665/p572-wu.pdf?key1=1027665&key2=2241968511&coll=&dl=ACM&CFID=15151515&CFTOKEN=6184618.
J. Sivic, F. Schaffalitzky, and A. Zisserman, “Efficient object retrieval from videos,” Proceedings of the 12th European Signal Processing Conference, Vienna, Austria, 2004. http://www.robots.ox.ac.uk/~vgg/publications/papers/sivic04c.pdf.
C. Carson, S. Belongie, H. Greenspan, and J. Malik, “Blobworld: Image segmentation using expectation-maximization and its application to image querying,” IEEE Trans. on Pattern Analysis and Machine Intelligence 24, pp. 1026-1038, Aug. 2002. http://www.cs.berkeley.edu/~malik/papers/CBGM-blobworld.pdf.
Intelligent Information Management Dept., IBM T. J. Watson Research Center, “MARVEL: MPEG-7 multimedia search engine.” http://www.research.ibm.com/marvel/, Jul. 21, 2006.
H. Mueller, W. Mueller, D. Squire, and T. Pun, “Performance evaluation in content-based image retrieval: Overview and proposals,” Technical Report 99.05, University of Geneva, 1999. http://vision.unige.ch/publications/postscript/99/VGTR99.05_HMuellerWMuellerSquirePun.pdf.
Bradshaw, “Introduction to Tversky similarity measure” (on Tversky, Psychological Review 84(4), 1977). http://www.daylight.com/meetings/mug97/Bradshaw/MUG97/tv_tversky.html; http://faculty.ucmerced.edu/eheit/simcat.pdf.
V. Vinay, K. Wood, N. Milic-Frayling, and I. J. Cox, “Comparing relevance feedback algorithms for web search,” Proc. of the World Wide Web Conference, 2005. http://www2005.org/cdrom/docs/p1052.pdf.
M. Crucianu, M. Ferecatu, and N. Boujemaa, “Relevance feedback for image retrieval: a short survey,” Report of the DELOS2 European Network of Excellence (FP6), 2004. http://www.vis.uky.edu/~cheung/courses/ee639_fall04/readings/ShortSurveyRF.pdf.
X. Zhou and T. Huang, “Relevance feedback for image retrieval: a comprehensive review,” Multimedia Systems 8(6), pp. 536-544, 2003. http://www.ifp.uiuc.edu/~xzhou2/Research/papers/Selected_papers/ACM_MSJ.pdf.
S. Tong and E. Chang, “Support vector machine active learning for image retrieval,” Proc. of the ninth ACM international conference on Multimedia, 2001. [Saved as p107-tong.pdf, from http://portal.acm.org/citation.cfm?id=500159].
Y. Rui, T. Huang, and S. Chang, “Image retrieval: Current techniques, promising directions and open issues,” Journal of Visual Communication and Image Representation 10, pp. 39-62, Apr. 1999. http://www.csee.umbc.edu/~pmundur/courses/CMSC691M-04/deep_rui99_cbir_survey.pdf.
H. Tamura and N. Yokoya, “Image database systems: A survey,” Pattern Recognition 17(1), pp. 29-43, 1984.
R. Veltkamp and M. Tanase, “Content-based image retrieval systems: A survey,” Technical Report UU-CS-2000-34, Utrecht University, http://give-lab.cs.uu.nl/cbirsurvey/cbir-survey.pdf, Oct. 28, 2002.
Bouguet, “Requirements for benchmarking personal image retrieval systems,” Intel Corporation, Proc. SPIE, vol. 6061, Jan. 16, 2006, 12 pages.
Wu et al., “Sampling strategies for active learning in personal photo retrieval,” ICME 2006, pp. 529-532, 4 pages.
