Representation and retrieval of images using context vectors...

Data processing: artificial intelligence – Adaptive system


Details

Type: Reexamination Certificate

Status: Active

Patent Number: 07072872

ABSTRACT:
Image features are generated by performing wavelet transformations at sample points on images stored in electronic form. Multiple wavelet transformations at a point are combined to form an image feature vector. A prototypical set of feature vectors, or atoms, is derived from the set of feature vectors to form an “atomic vocabulary.” The prototypical feature vectors are derived using a vector quantization method, e.g., using neural network self-organization techniques, in which a vector quantization network is also generated. The atomic vocabulary is used to define new images. Meaning is established between atoms in the atomic vocabulary. High-dimensional context vectors are assigned to each atom. The context vectors are then trained as a function of the proximity and co-occurrence of each atom to other atoms in the image. After training, the context vectors associated with the atoms that comprise an image are combined to form a summary vector for the image. Images are retrieved using a number of query methods, e.g., images, image portions, vocabulary atoms, index terms. The user's query is converted into a query context vector. A dot product is calculated between the query vector and the summary vectors to locate images having the closest meaning. The invention is also applicable to video or temporally related images, and can also be used in conjunction with other context vector data domains such as text or audio, thereby linking images to such data domains.
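The abstract describes a complete pipeline: wavelet features at sample points, vector quantization into an atomic vocabulary, context vectors trained from atom co-occurrence, one summary vector per image, and dot-product retrieval. The sketch below illustrates that flow in Python with NumPy. It is an illustration only, not the patented method: the patch statistics standing in for wavelet responses, the plain k-means standing in for neural-network self-organization, the simplified co-occurrence update, and every function name and parameter here are assumptions.

import numpy as np

rng = np.random.default_rng(0)

def extract_feature_vectors(image, points, scales=(2, 4, 8)):
    # Stand-in for the wavelet step: the patent combines several wavelet
    # responses at each sample point into one feature vector. Patch
    # mean/std at a few scales keeps this sketch self-contained.
    feats = []
    for y, x in points:
        v = []
        for s in scales:
            patch = image[max(0, y - s):y + s, max(0, x - s):x + s]
            v.extend([patch.mean(), patch.std()])
        feats.append(v)
    return np.asarray(feats)

def build_vocabulary(features, n_atoms=16, iters=20):
    # Vector quantization of all feature vectors into prototypical "atoms"
    # (plain k-means here; the patent names neural-network
    # self-organization as one option).
    atoms = features[rng.choice(len(features), n_atoms, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((features[:, None] - atoms) ** 2).sum(-1), axis=1)
        for k in range(n_atoms):
            if (labels == k).any():
                atoms[k] = features[labels == k].mean(axis=0)
    return atoms

def quantize(features, atoms):
    # Express an image as the nearest vocabulary atom for each feature vector.
    return np.argmin(((features[:, None] - atoms) ** 2).sum(-1), axis=1)

def train_context_vectors(images_atoms, n_atoms, dim=128, lr=0.05, epochs=5):
    # High-dimensional context vectors, one per atom, nudged together
    # whenever two atoms co-occur in an image (a crude version of the
    # proximity/co-occurrence training the abstract describes).
    C = rng.normal(size=(n_atoms, dim))
    C /= np.linalg.norm(C, axis=1, keepdims=True)
    for _ in range(epochs):
        for atoms_in_image in images_atoms:
            present = np.unique(atoms_in_image)
            for i in present:
                for j in present:
                    if i != j:
                        C[i] += lr * (C[j] - C[i])
        C /= np.linalg.norm(C, axis=1, keepdims=True)
    return C

def summary_vector(atoms_in_image, C):
    # An image's summary vector combines the context vectors of the
    # atoms that comprise it.
    v = C[atoms_in_image].sum(axis=0)
    return v / np.linalg.norm(v)

def retrieve(query_vec, summaries, top_k=3):
    # Dot product between the query context vector and each stored
    # summary vector; the highest scores are the closest-meaning images.
    scores = summaries @ query_vec
    return np.argsort(scores)[::-1][:top_k]

# Toy usage: index five random "images", then query with the first one.
images = [rng.random((64, 64)) for _ in range(5)]
points = [(y, x) for y in range(8, 64, 16) for x in range(8, 64, 16)]
all_feats = np.vstack([extract_feature_vectors(im, points) for im in images])
atoms = build_vocabulary(all_feats)
images_atoms = [quantize(extract_feature_vectors(im, points), atoms) for im in images]
C = train_context_vectors(images_atoms, n_atoms=len(atoms))
summaries = np.vstack([summary_vector(a, C) for a in images_atoms])
print(retrieve(summary_vector(images_atoms[0], C), summaries))

The other query types the abstract mentions (image portions, vocabulary atoms, index terms, or linked text/audio) would enter the same way: each is mapped onto a query context vector in the shared space before the dot-product comparison.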

REFERENCES:
patent: 5161204 (1992-11-01), Hutcheson et al.
patent: 5325298 (1994-06-01), Gallant
patent: 5893095 (1999-04-01), Jain et al.
Gallant, S.I., "A practical approach for representing context and for performing word sense disambiguation using neural networks," IJCNN-91-Seattle International Joint Conference on Neural Networks, vol. 2, Jul. 8-14, 1991, p. 1007.
Rushall, D.A.; Ilgen, M.R., "DEPICT: Documents Evaluated as Pictures, Visualizing information using context vectors and self-organizing maps," Proceedings IEEE Symposium on Information Visualization '96, Oct. 28-29, 1996, pp. 100-107, 131.
Farkas, J., "Improving the classification accuracy of automatic text processing systems using context vectors and back-propagation algorithms," Canadian Conference on Electrical and Computer Engineering, 1996, vol. 2, May 26-29, 1996, pp. 696-699.
Izmirli, O.; Bilgen, S., "Recognition of musical tonality from sound input," Proceedings of the 7th Mediterranean Electrotechnical Conference, Apr. 12-14, 1994, vol. 1, pp. 269-271.
Mortensen, E.N.; Hongli Deng; Shapiro, L., "A SIFT Descriptor with Global Context," IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), vol. 1, Jun. 20-26, 2005, pp. 184-190.
Abe, Y.; Nakajima, K., "Speech recognition using dynamic transformation of phoneme templates depending on acoustic/phonetic environments," International Conference on Acoustics, Speech, and Signal Processing (ICASSP-89), May 1989, vol. 1, pp. 326-329.
Gurney, K., An Introduction to Neural Networks, 1997, Chapter Three.


Profile ID: LFUS-PAI-O-3570520
