Automated video interpretation system

Image analysis – Image segmentation


Details

Classification: C358S538000
Type: Reexamination Certificate
Status: active
Patent number: 06516090

ABSTRACT:

FIELD OF INVENTION
The present invention relates to the statistical analysis of digital video signals, and in particular to the statistical analysis of digital video signals for automated content interpretation in terms of semantic labels. The labels can be subsequently used as a basis for tasks such as content-based retrieval and video abstract generation.
DESCRIPTION OF THE PRIOR ART
Digital video is generally assumed to be a signal representing the time evolution of a visual scene. This signal is typically encoded along with associated audio information (e.g., in the MPEG-2 audiovisual coding format). In some cases information about the scene, or the capture of the scene, might also be encoded with the video and audio signals. The digital video is typically represented by a sequence of still digital images, or frames, where each digital image usually consists of a set of pixel intensities for a multiplicity of colour channels (e.g., R, G, B). This representation is due, in large part, to the grid-based manner in which visual scenes are sensed.
The visual signal and any associated audio signal are often mutually correlated, in the sense that information about the content of the visual signal can be found in the audio signal and vice versa. This correlation is explicitly recognised in more recent digital audiovisual coding formats, such as MPEG-4, where the units of coding are audiovisual objects having spatial and temporal localisation in a scene. Although this representation of audiovisual information is more attuned to the usage of the digital material, the visual component of natural scenes is still typically captured using grid-based sensing techniques (i.e., digital images are sensed at a frame rate defined by the capture device). Thus the process of digital video interpretation remains typically based on that of digital image interpretation and is usually considered in isolation from the associated audio information.
Digital image signal interpretation is the process of understanding the content of an image through the identification of significant objects or regions in the image and the analysis of their spatial arrangement. Traditionally, the task of image interpretation has required human analysis. This is expensive and time consuming; consequently, considerable research has been directed towards constructing automated image interpretation systems.
Most existing image interpretation systems involve low-level and high-level processing. Typically, low-level processing transforms an image from an array of pixel intensities to a set of spatially related image primitives, such as edges and regions. Various features can then be extracted from the primitives (e.g., average pixel intensities). In high-level processing, image domain knowledge and feature measurements are used to assign object or region labels, or interpretations, to the primitives and hence construct a description of “what is present in the image”.
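As an illustration of the low-level stage described above, the following sketch (a minimal example assuming a NumPy image array and a precomputed segmentation label map; all names are illustrative and not taken from the patent) computes one simple feature, the average pixel intensity per colour channel, for every segmented region.

import numpy as np

def region_features(image, labels):
    # image  : H x W x C array of pixel intensities
    # labels : H x W array of integer region identifiers (segmentation map)
    # Returns a dict mapping region id -> mean intensity per colour channel.
    features = {}
    for region_id in np.unique(labels):
        mask = labels == region_id
        features[region_id] = image[mask].mean(axis=0)
    return features

# Toy example: a 4 x 4 RGB image split into a left and a right region
image = np.random.rand(4, 4, 3)
labels = np.zeros((4, 4), dtype=int)
labels[:, 2:] = 1
print(region_features(image, labels))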
Early attempts at image interpretation were based on classifying isolated primitives into a finite number of object classes according to their feature measurements. The success of this approach was limited by the erroneous or incomplete results often produced by low-level processing and by feature measurement errors arising from noise in the image. More recent techniques incorporate spatial constraints in the high-level processing, so that ambiguous regions or objects can often be recognised as a result of the successful recognition of neighbouring regions or objects.
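To make the weakness of isolated classification concrete, a minimal sketch (the class prototypes and feature values below are hypothetical, not part of the patent) labels each region by the nearest class prototype in feature space, with no regard for neighbouring regions; a noisy feature measurement can then flip the decision.

import numpy as np

# Hypothetical class prototypes: mean feature vectors (e.g., average R, G, B)
# that would be learned from training data for each object class.
PROTOTYPES = {
    "sky":     np.array([0.55, 0.70, 0.90]),
    "foliage": np.array([0.20, 0.45, 0.15]),
    "road":    np.array([0.50, 0.50, 0.50]),
}

def classify_region(feature_vector):
    # Assign the label of the nearest prototype, ignoring spatial context.
    return min(PROTOTYPES, key=lambda c: np.linalg.norm(PROTOTYPES[c] - feature_vector))

print(classify_region(np.array([0.52, 0.52, 0.55])))  # -> "road"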
More recently, the spatial dependence of region labels for an image has been modelled using statistical methods, such as Markov Random Fields (MRFs). The main advantages of the MRF model are that it provides a general and natural model for the interaction between spatially related random variables, and that relatively flexible optimisation algorithms exist for finding the (globally) optimal realisation of the field. Typically the MRF is defined on a graph of segmented regions, commonly called a Region Adjacency Graph (RAG). The segmented regions can be generated by one of many available region-based image segmentation methods. The MRF model provides a powerful mechanism for combining knowledge about the spatial dependence of semantic labels with the dependence of the labels on measurements (low-level features) from the image.
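The sketch below illustrates one common way such an MRF over a region adjacency graph can be optimised, using iterated conditional modes (ICM) with a Potts-style smoothness term; the energy terms, weights and data structures here are illustrative assumptions, not the patent's own formulation.

import numpy as np

def icm_label_rag(adjacency, unary_cost, beta=1.0, iterations=10):
    # adjacency  : dict mapping region id -> set of neighbouring region ids (the RAG)
    # unary_cost : dict mapping region id -> array of per-label data costs,
    #              e.g. derived from low-level feature measurements
    # beta       : weight of the pairwise term penalising label disagreement
    #              between adjacent regions
    num_labels = len(next(iter(unary_cost.values())))
    # Start from the cheapest data-term label for each region
    labels = {r: int(np.argmin(c)) for r, c in unary_cost.items()}
    for _ in range(iterations):
        changed = False
        for region, costs in unary_cost.items():
            energies = [
                costs[k] + beta * sum(1 for n in adjacency[region] if labels[n] != k)
                for k in range(num_labels)
            ]
            best = int(np.argmin(energies))
            if best != labels[region]:
                labels[region], changed = best, True
        if not changed:
            break
    return labels

# Toy RAG with three regions and two candidate labels
adjacency = {0: {1}, 1: {0, 2}, 2: {1}}
unary_cost = {0: np.array([0.1, 0.9]), 1: np.array([0.6, 0.5]), 2: np.array([0.2, 0.8])}
print(icm_label_rag(adjacency, unary_cost, beta=0.3))  # the middle region is pulled to its neighbours' label

Note that ICM only reaches a local minimum of the energy; when a (near-)globally optimal realisation of the field is required, stochastic methods such as simulated annealing are typically used instead.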
Digital audio signal interpretation is the process of understanding the content of an audio signal through the identification of words/phrases, or key sounds, and the analysis of their temporal arrangement. In general, investigations into digital audio analysis have concentrated on speech recognition because of the large number of potential applications for the resulting technology, e.g., natural language interfaces for computers and other electronic devices.
Hidden Markov Models (HMMs) are widely used for continuous speech recognition because of their inherent ability to incorporate the sequential and statistical character of a digital speech signal. They provide a probabilistic framework for modelling a time-varying process in which units of speech (phonemes or, in some cases, words) are represented as a time sequence through a set of states. Estimating the transition probabilities between the states requires the analysis of a set of example audio signals for the unit of speech (i.e., a training set). If the recognition process is required to be speaker independent, then the training set must contain example audio signals from a range of speakers.
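As a sketch of the probabilistic machinery involved, the forward algorithm below scores an observation sequence against a small discrete HMM; the two-state model, alphabet and probabilities are purely illustrative, not a trained speech model.

import numpy as np

def forward_log_likelihood(observations, start_prob, trans_prob, emit_prob):
    # observations : sequence of observation symbol indices
    # start_prob   : (S,)  initial state distribution
    # trans_prob   : (S, S) transition probabilities between hidden states
    # emit_prob    : (S, V) per-state emission probabilities over V symbols
    alpha = start_prob * emit_prob[:, observations[0]]
    for symbol in observations[1:]:
        # Propagate forward probabilities through the transitions, then
        # weight by the probability of emitting the next observed symbol.
        alpha = (alpha @ trans_prob) * emit_prob[:, symbol]
    return float(np.log(alpha.sum()))

# Toy two-state model over a three-symbol alphabet
start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3],
                  [0.2, 0.8]])
emit = np.array([[0.5, 0.4, 0.1],
                 [0.1, 0.3, 0.6]])
print(forward_log_likelihood([0, 1, 2], start, trans, emit))

In a recogniser, one such model is trained per unit of speech and an input utterance is assigned to the unit whose model gives it the highest likelihood.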
SUMMARY OF THE INVENTION
According to one aspect of the present invention there is provided a method of interpreting a digital video signal, wherein said digital video signal has contextual data, said method comprising the steps of:
segmenting said digital video signal into one or more video segments, each segment having a corresponding portion of said contextual data; and
analysing each video segment to provide a graph at one or more temporal instances in the respective video segment dependent upon said corresponding portion of said contextual data.
According to another aspect of the present invention there is provided an apparatus for interpreting a digital video signal, wherein said digital video signal has contextual data, said apparatus comprising:
means for segmenting said digital video signal into one or more video segments, each segment having a corresponding portion of said contextual data; and
means for analysing each video segment to provide an analysis token for one or more regions contained in the respective video segment dependent upon said corresponding portion of said contextual data.
According to still another aspect of the present invention there is provided a computer program product comprising a computer readable medium having recorded thereon a computer program for interpreting a digital video signal, wherein said digital video signal has contextual data, said computer program product comprising:
means for segmenting said digital video signal into one or more video segments, each segment having a corresponding portion of said contextual data; and
means for analysing each video segment to provide an analysis token for one or more regions contained in the respective video segment dependent upon said corresponding portion of said contextual data.
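A minimal sketch of the data flow implied by the summary above, assuming hypothetical segmenter and analyser callables (none of these names come from the patent): the video is cut into segments, each carrying its portion of the contextual data, and each segment is analysed into one or more graphs whose nodes carry semantic labels (analysis tokens).

from dataclasses import dataclass
from typing import Any, Callable, Dict, List, Tuple

@dataclass
class VideoSegment:
    frames: List[Any]          # still images belonging to this segment
    context: Dict[str, Any]    # corresponding portion of the contextual data

@dataclass
class RegionGraph:
    labels: Dict[int, str]           # region id -> semantic label (analysis token)
    edges: List[Tuple[int, int]]     # adjacency between regions

def interpret_video(frames: List[Any],
                    contextual_data: Dict[str, Any],
                    segmenter: Callable[[List[Any], Dict[str, Any]], List[VideoSegment]],
                    analyser: Callable[[VideoSegment], List[RegionGraph]]) -> List[List[RegionGraph]]:
    # Step 1: segment the digital video signal into one or more video segments,
    # each with the portion of contextual data that applies to it.
    segments = segmenter(frames, contextual_data)
    # Step 2: analyse each segment to provide a graph at one or more temporal
    # instances, dependent upon that segment's contextual data.
    return [analyser(segment) for segment in segments]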


REFERENCES:
patent: 3988715 (1976-10-01), Mullan et al.
patent: 5467441 (1995-11-01), Stone et al.
patent: 5708767 (1998-01-01), Yeo et al.
patent: 5821945 (1998-10-01), Yeo et al.
patent: 5831616 (1998-11-01), Lee
patent: 5870754 (1999-02-01), Dimitrova et al.
patent: 5969716 (1999-10-01), Davis et al.
patent: 6021213 (2000-02-01), Heltebrand et al.
patent: 6119135 (2000-09-01), Helfman
patent: 6137499 (2000-10-01), Tesler
patent: 6181332 (2001-01-01), Salahshour et al.
patent: 6211912 (2001-04-01), Shahraray
patent: 6281983 (2001-08-01), Form
patent: 0805405 (1997-11-01), None
patent: 99/05865 (1999-02-01), None
“Image Interpretation using Contextual Feedback”, G.C. Lai et al., Proceedings of the International Conference on Image Processing.
