Embedded metadata engines in digital capture devices

Television – Camera – system and detail – Combined image signal generator and general image signal...


Details

Patent number: 06833865
Type: Reexamination Certificate
Status: active
Classifications: C348S239000, C382S218000

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to digital capture devices, and more particularly, to digital still cameras, digital video cameras, digital video encoders and other media capture devices.
2. Description of the Related Technology
The distinction between still devices and motion devices is becoming blurred as many of these devices can perform both functions, or combine audio capture with still image capture. The capture of digital content is expanding rapidly due to the proliferation of digital still cameras, digital video cameras, and digital television broadcasts. Users of this equipment generally also use digital production and authoring equipment. Storing, retrieving, and manipulating the digital content represent a significant problem in these environments. The use of various forms of metadata (data about the digital content) has emerged as a way to organize the digital content in databases and other storage means such that a specific piece of content may be easily found and used.
Digital media asset management systems (DMMSs) from several vendors are being used to perform the storage and management function in digital production environments. Examples include Cinebase, WebWare, EDS/MediaVault, Thomson Teams, and others. Each of these systems exploits metadata to allow constrained searches for specific digital content. The metadata is generated during a logging process when the digital content is entered into the DMMS. Metadata generally falls into two broad categories (a minimal data-structure sketch follows the list):
Collateral metadata: information such as date, time, camera properties, and user labels or annotations, and so forth;
Content-based metadata: information extracted automatically by analyzing the audiovisual signal and extracting properties from it, such as keyframes, speech-to-text, speaker ID, visual properties, face identification/recognition, optical character recognition (OCR), and so forth.
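A minimal sketch of how these two categories might be carried as a record attached to a captured asset is shown below; the field names and types are illustrative assumptions, not part of the patent.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List

@dataclass
class CollateralMetadata:
    """Device- and user-supplied information recorded at capture time."""
    capture_time: datetime
    camera_model: str
    exposure: Dict[str, float] = field(default_factory=dict)   # e.g. shutter, aperture settings
    annotations: List[str] = field(default_factory=list)        # user labels

@dataclass
class ContentMetadata:
    """Information derived by analyzing the audiovisual signal itself."""
    keyframes: List[int] = field(default_factory=list)          # frame indices chosen as keyframes
    transcript: str = ""                                        # speech-to-text output
    speaker_ids: List[str] = field(default_factory=list)
    feature_vectors: List[List[float]] = field(default_factory=list)
    ocr_text: List[str] = field(default_factory=list)

@dataclass
class AssetMetadata:
    """One record tying both metadata categories to a captured asset."""
    asset_id: str
    collateral: CollateralMetadata
    content: ContentMetadata
```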
Products such as the Virage VideoLogger perform the capture and logging of both of these types of metadata. The VideoLogger interfaces with the DMMS to provide the metadata to the storage system for later use in search and retrieval operations. These types of systems can operate with digital or analog sources of audiovisual content.
The capture of digital content offers an opportunity which is not present in analog capture devices. What is desired is a capability to embed a content-based analysis function in the capture device for extracting metadata from the digital signals in real-time as the content is captured. This metadata could then be later exploited by DMMSs and other systems for indexing, searching, browsing, and editing the digital media content. A central premise of this approach is that it is most valuable to capture this type of metadata as far “upstream” as possible. This would allow the metadata to be exploited throughout the lifecycle of the content, thereby reducing costs and improving access to and utilization of the content. Such an approach would be in contrast to the current practice of performing a separate logging process at some point in time after the capture of the content. Therefore, it would be desirable to capture the metadata at the point of content capture, and to perform the analysis in real-time by embedding metadata engines inside of the physical capture devices such as digital still cameras, digital audio/video cameras, and other media capture devices.
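One way to picture the desired arrangement is a capture loop in which analysis engines run on each frame as it is digitized, so content-based metadata is produced in step with the content. The sketch below is illustrative only; the engine interface, class names, and the trivial histogram extractor are assumptions, not the patent's implementation.

```python
# Illustrative capture loop: analysis engines run in-device as frames arrive,
# so content-based metadata is produced at the point of capture.

class MetadataEngine:
    """Interface an embedded analysis engine is assumed to implement."""
    def analyze(self, frame) -> dict:
        raise NotImplementedError

class HistogramEngine(MetadataEngine):
    """A trivial visual-property extractor: a coarse luminance histogram."""
    def analyze(self, frame) -> dict:
        bins = [0] * 8
        for pixel in frame:                 # frame is assumed to be an iterable of 0-255 luma values
            bins[min(pixel // 32, 7)] += 1
        return {"luma_histogram": bins}

def capture_loop(sensor_frames, engines, sink):
    """Run every engine on each captured frame and store the results with the content."""
    for index, frame in enumerate(sensor_frames):
        record = {"frame": index}
        for engine in engines:
            record.update(engine.analyze(frame))
        sink.append(record)                 # metadata travels with the content from capture onward
```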
Some previous efforts at capturing metadata at the point of content capture have focused on the capture of collateral metadata, such as date/time, or user annotations. Examples of these approaches can be found in U.S. Pat. No. 5,335,072 (sensor information attached to photographs), U.S. Pat. No. 4,574,319 (electronic memo for an electronic camera), U.S. Pat. No. 5,633,678 (camera allowing for user categorization of images), U.S. Pat. No. 5,682,458 (camera that records shot data on a magnetic recording area of the film), and U.S. Pat. No. 5,506,644 (camera that records GPS satellite position information on a magnetic recording area of the film). In addition, professional digital cameras being sold today offer certain features for annotating the digital content. An example of this is the Sony DXC-D30 (a Digital Video Cassette camera, or DVC) which has a ClipLink feature for marking video clips within the camera prior to transferring data to an editing station.
Many aspects of digital capture devices are well understood and practiced in the state of the art today. Capture sensors, digital conversion and sampling, compression algorithms, signal levels, filtering, and digital formats are common functions in these devices, and are not the object of the present invention. Much information can be found in the literature on these topics. For example, see Video Demystified by Keith Jack, published by Harris Semiconductor, for an in-depth description of digital composite video, digital component video, MPEG-1 and MPEG-2.
SUMMARY OF THE INVENTION
The present invention is based on technologies relating to the automatic extraction of metadata descriptions of digital multimedia content such as still images and video. The present invention also incorporates audio analysis engines that are available from third parties within an extensible metadata “engine” framework. These engines perform sophisticated analysis of multimedia content and generate metadata descriptions that can be effectively used to index the content for downstream applications such as search and browse. Metadata generated may include the following (an illustrative sketch of such an engine framework appears after the list):
Image Feature Vectors
Keyframe storyboards
Various text attributes (closed-captioned (CC) text, teletext, time/date, media properties such as frame-rates, bit-rates, annotations, and so forth)
Speech-to-text & keyword spotting
Speaker identification (ID)
Audio classifications & feature vectors
Face identification/recognition
Optical Character Recognition (OCR)
Other customized metadata via extensibility mechanisms: GPS data; camera position & properties; any external collateral data; and so forth.
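To make the extensibility point concrete, the sketch below shows one plausible registration pattern by which additional engines (GPS, camera properties, external collateral data) could be plugged into such a framework; the registry API and engine stubs are assumptions made for illustration, not the patent's mechanism.

```python
# Hypothetical engine registry: new metadata sources register themselves and
# are invoked alongside the built-in engines.

ENGINE_REGISTRY = {}

def register_engine(name):
    """Class decorator that adds an engine to the registry under the given name."""
    def decorator(cls):
        ENGINE_REGISTRY[name] = cls
        return cls
    return decorator

@register_engine("gps")
class GpsEngine:
    def analyze(self, frame) -> dict:
        # In a real device this would read the GPS receiver; here it is a stub.
        return {"gps": {"lat": 0.0, "lon": 0.0}}

@register_engine("camera_properties")
class CameraPropertiesEngine:
    def analyze(self, frame) -> dict:
        return {"camera": {"zoom": 1.0, "focus": "auto"}}

def run_all(frame):
    """Collect metadata from every registered engine for one frame."""
    results = {}
    for name, engine_cls in ENGINE_REGISTRY.items():
        results[name] = engine_cls().analyze(frame)
    return results
```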
A central theme of the technical approach is that it is most valuable to capture this type of metadata as far “upstream” as possible. This allows the metadata to be exploited throughout the lifecycle of the content, thereby reducing costs and improving access to and utilization of the content. The natural conclusion of this approach is to extract the metadata at the point of content capture. Thus, the present invention embeds metadata engines inside of the physical capture devices such as digital still cameras, digital audio/video cameras, and so forth.
Digital cameras are rapidly advancing in capabilities and market penetration. Megapixel cameras are commonplace. This results in an explosion of digital still content, and the associated problems of storage and retrieval. The visual information retrieval (VIR) image engine available from Virage, Inc. has been used effectively in database environments for several years to address these problems. The computation of image feature vectors used in search and retrieval has to date been part of the back-end processing of images. The present invention pushes that computation directly to the cameras, with the feature vectors naturally associated with the still image throughout its life. A practical “container” for this combined image+feature vector is the FlashPix image format, which is designed to carry various forms of metadata along with the image. Image feature vectors may also be stored separately from the image.
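As an illustration of the kind of feature vector that might be computed in-camera and carried with the image, the sketch below builds a simple normalized color histogram and bundles it with the image data in a generic dictionary; this is not the Virage VIR engine's actual algorithm, and the dictionary only stands in for a FlashPix-style container.

```python
# Illustrative in-camera feature vector: a normalized RGB color histogram.
# A stand-in for a real VIR-style feature computation, not Virage's algorithm.

def color_histogram_vector(pixels, bins_per_channel=4):
    """pixels: iterable of (r, g, b) tuples with 0-255 components."""
    size = bins_per_channel ** 3
    hist = [0] * size
    step = 256 // bins_per_channel
    count = 0
    for r, g, b in pixels:
        idx = (r // step) * bins_per_channel ** 2 + (g // step) * bins_per_channel + (b // step)
        hist[idx] += 1
        count += 1
    return [h / count for h in hist] if count else hist

def package_with_image(image_bytes, feature_vector):
    """Bundle the image and its feature vector, as a FlashPix-style container might."""
    return {
        "image": image_bytes,
        "metadata": {"feature_vector": feature_vector},
    }
```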
Digital video cameras are also advancing rapidly, and are being used in a number of high-end and critical applications. They are also appearing at the consumer level. Digital video itself suffers from the same problems that images do, to an even greater degree, since video data storage requirements are many times larger than still images. The search and retrieval problems are further compounded by the more complex and rich content contained in video (audio soundtracks, temporal properties, and motion properties, all of which are in addition to visual properties).
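Because keyframe storyboards are among the metadata types listed above, a minimal keyframe-selection sketch is included here: a new keyframe is chosen whenever successive frames differ by more than a threshold. The differencing scheme and threshold are illustrative assumptions, not the method of the present invention.

```python
# Illustrative keyframe selection by simple frame differencing.

def select_keyframes(frames, threshold=0.2):
    """frames: list of equal-length luma arrays (0-255 values). Returns keyframe indices."""
    keyframes = [0] if frames else []
    last = frames[0] if frames else None
    for i in range(1, len(frames)):
        # Mean absolute difference, normalized to 0-1.
        diff = sum(abs(a - b) for a, b in zip(frames[i], last)) / (255.0 * len(frames[i]))
        if diff > threshold:
            keyframes.append(i)
            last = frames[i]
    return keyframes
```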
