Method for associating semantic information with multiple...

Data processing: database and file management or data structures – Database design – Data structure types


Details

C707S793000, C382S190000

Reexamination Certificate

active

06804684

ABSTRACT:

FIELD OF THE INVENTION
The invention relates generally to the field of digital image processing, and in particular to a method for associating captions or semantic information with images in an image database environment.
BACKGROUND OF THE INVENTION
Currently, content-based image retrieval in an image database environment is commonly practiced by searching for images that are similar to the query based upon low-level features, or based upon semantic labels. Low-level features commonly include attributes such as colors or textures found within the image. Non-image based metadata associated with the image can also be used, such as time/date and location, but by far the most user-friendly method of determining similarity is by using caption information or semantic labels applied to the images. Potentially, of course, all of the images could be annotated with caption text or semantic labels, stored in a relational database and retrieved by keyword. However, until computer processing reaches the point where images can be automatically and effectively analyzed, most automatic image retrieval will depend on captions or semantic labels manually attached to specific images. Unfortunately, while friendly to the user for retrieval, the application of captions or semantic labels to an image is a labor-intensive process that is often not performed. Moreover, even if the images can be automatically interpreted, many salient features of images exist only in the mind of the user and need to be communicated somehow to the machine in order to index the image. Therefore, the application of captions or semantic labels to images, based on some degree of user involvement, will remain important for the foreseeable future.
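
To make the notion of low-level feature similarity concrete, the following is a minimal sketch in Python, assuming images are available as RGB arrays (for example, loaded via an imaging library). The histogram size and the histogram-intersection metric are illustrative choices, not anything specified by the patent.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Compute a normalized joint RGB histogram as a low-level feature vector.

    Assumes `image` is an H x W x 3 array of 8-bit RGB values.
    """
    hist, _ = np.histogramdd(image.reshape(-1, 3),
                             bins=(bins, bins, bins),
                             range=((0, 256), (0, 256), (0, 256)))
    hist = hist.flatten()
    return hist / hist.sum()

def similarity(query_image, candidate_image):
    """Histogram intersection: 1.0 means identical color distributions."""
    q = color_histogram(query_image)
    c = color_histogram(candidate_image)
    return np.minimum(q, c).sum()
```

A retrieval system of this kind would rank stored images by such a similarity score against the query; the point of the sketch is only that the comparison operates on pixel statistics, not on meaning.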
It has been recognized that the home of the future may be awash with digital media and that there will be major challenges in organizing and accessing this media information. In that connection, more effective information exploration tools could be built by blending cognitive and perceptual constructs. As observed by A. Kuchinsky in the article "Multimedia Information Exploration", CHI 98 Workshop on Information Exploration, FX Palo Alto Laboratory, Inc.: Palo Alto, Calif. (1998), if narrative and storytelling tools were treated not as standalone applications but rather embedded within a framework for information annotation and retrieval, such tools could be leveraged as vehicles for eliciting metadata from users. This article recognizes that there is an emerging set of technologies for content-based indexing and retrieval, which provide some degree of automation for this process by automatically extracting features, such as color or texture, directly from visual data. However, in many of these systems, the extracted features tend to be low-level syntactic features derived through image analysis, which may not be as personally meaningful to consumers as keyword-based attributes are.
Consequently, there has been work in developing a user interface agent to facilitate, rather than fully automate, the textual annotation and retrieval process in connection with typical uses of consumer picture-taking. The role of the agent lies not so much in automatically performing the annotation and retrieval as in detecting opportunities for annotation and retrieval and alerting the user to those opportunities. Preferably, the user interface agent would assist a user by proactively looking for opportunities for image annotation and image retrieval in the context of the user's everyday work. For instance, in commonly-assigned, co-pending U.S. patent application Ser. No. 09/685,112, entitled "An Agent for Integrated Annotation and Retrieval of Images" and filed Oct. 10, 2000, a method for integrated retrieval and annotation of stored images (in a database) involves running a user application (e.g., an e-mail application) in which text entered by a user is continuously monitored by an annotation and retrieval agent to isolate the context expressed by the text. The context is matched with metadata associated with the stored images, thereby providing one or more matched images, and the matched images are retrieved and displayed in proximity with the text.
While the intent in Ser. No. 09/685,112 is to insert selected ones of the matched images into the text, the context is also utilized to provide suggested annotations to the user for the matched images, together with the capability of selecting certain of the suggested annotations for subsequent association with the matched images. If the desired image has not yet been annotated, as would be the case if the images were new images being loaded for the first time, the annotation and retrieval agent proposes candidate keywords from the surrounding text so that the user can select one or more appropriate keywords to annotate, and store with, the new image. However, this technique depends upon user-entered text for its source material for annotations, and in the absence of such text cannot be used to provide annotations for collections of images.
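
The following hypothetical sketch illustrates the kind of agent behavior described above for Ser. No. 09/685,112: user-entered text is reduced to context keywords, matched against keyword metadata for stored images, and leftover context words are proposed as candidate annotations. The function names, the trivial tokenizer, and the stopword list are assumptions made purely for illustration.

```python
import re

# Illustrative stopword list; a real agent would use a fuller one.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "with", "for"}

def extract_context(text):
    """Isolate candidate context keywords from text the user is entering."""
    words = re.findall(r"[a-z']+", text.lower())
    return {w for w in words if w not in STOPWORDS}

def match_images(context, image_metadata):
    """Return names of images whose keyword metadata overlaps the context.

    `image_metadata` maps an image name to its set of keywords.
    """
    return [name for name, keywords in image_metadata.items()
            if context & keywords]

def propose_annotations(context, keywords):
    """Suggest context words not already attached to a matched image."""
    return sorted(context - keywords)
```

Note the limitation the text identifies: every suggestion here originates in the user's typed text, so with no surrounding text there is nothing to propose.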
In order to facilitate the retrieval of relevant material from a large database, it is known to annotate selected subdivisions of the database, such as paragraphs, columns, articles, chapters or illustrative material such as pictures, charts, drawings and the like. In U.S. Pat. No. 5,404,295 ("Method and Apparatus for Utilizing Annotations to Facilitate Computer Retrieval of Database Material"), these annotations may be generated manually, semiautomatically or automatically. Generally, these annotation techniques involve finding some keyword-based relationship between the current subdivision and a prior subdivision, and then either utilizing the annotations for the prior subdivision, as suitably modified for the current subdivision (automatic mode), or displaying the modified annotations to the annotator as proposed annotations from which the annotator makes selections (semiautomatic mode). As described in this patent, the annotations are in a natural language form, which may be translated into a structured form, such as a T-expression. This means that a large number of alternative natural language queries, once translated into structured forms, will match the structured forms produced by a small number of annotations, thus facilitating searching. Besides the usual requirement of a relatively time-consuming initial annotating process, the drawback of these techniques is the necessity of manually assigning some form of keyword to the current subdivisions in order to establish the needed relationship between the current subdivision and a prior subdivision.
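
As a rough illustration of the semiautomatic mode just described, the sketch below proposes annotations for a current subdivision from prior subdivisions that share keywords with it. It deliberately omits the natural-language-to-T-expression translation, which is specific to U.S. Pat. No. 5,404,295, and substitutes a plain keyword-overlap test; the overlap threshold is an assumption.

```python
def propose_from_prior(current_keywords, prior_subdivisions, min_overlap=2):
    """Propose annotations for the current subdivision.

    `prior_subdivisions` maps a subdivision id to a tuple of
    (keyword set, list of annotations). Any prior subdivision sharing at
    least `min_overlap` keywords contributes its annotations as proposals;
    in the semiautomatic mode the annotator then selects among them.
    """
    proposals = []
    for sub_id, (keywords, annotations) in prior_subdivisions.items():
        if len(current_keywords & keywords) >= min_overlap:
            proposals.extend(annotations)
    return proposals
```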
The prior art has been mainly directed to simplifying the annotation task for the purpose of image retrieval. It has not, however, addressed the problem of simplifying the task of organizing large groups of similar pictures, that is, the task of constructing picture albums in a database environment. What is needed is a method that would assign, or at least recommend, captions or semantic labels for several images simultaneously based upon low-level, objectively measurable similarities between the images. Ideally, these captions or semantic labels could independently apply to the whole image (global captions or labels) or to distinguishable parts of the image (regional captions or labels). This assignment should happen either automatically or with limited user interaction, thereby decreasing the effort required of the user. Notwithstanding their utility in the organization of picture albums, these captions or semantic labels can subsequently serve as powerful tools for the storage, retrieval, and management of digital images within a digital image database environment.
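
A minimal sketch of the desired capability might group images whose low-level feature vectors lie within a distance threshold and then apply one user-supplied caption to the whole group. The greedy grouping strategy, the Euclidean distance, and the threshold value are illustrative assumptions, not the method claimed by this patent.

```python
import numpy as np

def group_similar(features, threshold=0.1):
    """Greedily group images by low-level similarity.

    `features` maps an image name to a feature vector (e.g., a color
    histogram). Each image joins the first group whose seed vector is
    within `threshold`; otherwise it seeds a new group.
    """
    groups = []  # list of (seed_vector, [image names])
    for name, vec in features.items():
        for seed, members in groups:
            if np.linalg.norm(vec - seed) < threshold:
                members.append(name)
                break
        else:
            groups.append((vec, [name]))
    return [members for _, members in groups]

def label_group(members, caption):
    """Apply one caption to every image in a group at once."""
    return {name: caption for name in members}
```

The payoff of grouping first is that a single user decision (one caption) labels many images, which is exactly the effort reduction the paragraph above calls for.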
SUMMARY OF THE INVENTION
The present invention is directed to overcoming one or more of the problems set forth above. Briefly summarized, according to one aspect of the present invention, a method of generating captions or semantic labels for an acquired image is based upon similarity between the acquired image and one or more stored images that are maintained in an image database.
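
Read as pseudocode, one plausible rendering of this summary is a nearest-neighbor recommendation: extract a feature vector from the acquired image, rank the stored annotated images by similarity, and suggest the captions of the closest matches. The choice of feature, the distance metric, and the number of suggestions k are all assumptions for illustration, not the patent's specification.

```python
import numpy as np

def recommend_captions(acquired_vec, stored, k=3):
    """Recommend captions for a newly acquired image.

    `stored` is a list of (feature_vector, caption) pairs for previously
    annotated images. Returns the captions of the k nearest stored images
    as candidate labels for the acquired image.
    """
    ranked = sorted(stored,
                    key=lambda item: np.linalg.norm(acquired_vec - item[0]))
    return [caption for _, caption in ranked[:k]]
```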

