Method for summarizing a video using motion and color...
Reexamination Certificate (active), patent number 06697523
Filed: 2000-08-09
Issued: 2004-02-24
Examiner: Chen, Wenpeng (Department: 2624)
Classification: Image analysis – Image segmentation
Cross-referenced classes: C382S224000, C382S305000, C345S215000, C725S041000
FIELD OF THE INVENTION
This invention relates generally to videos, and more particularly to summarizing a compressed video.
BACKGROUND OF THE INVENTION
It is desired to automatically generate a summary of video, and more particularly, to generate the summary from a compressed digital video.
Compressed Video Formats
Basic standards for compressing a video as a digital signal have been adopted by the Moving Picture Experts Group (MPEG). The MPEG standards achieve high data compression rates by developing information for a full frame of the image only at intervals. The full image frames, i.e., intra-coded frames, are often referred to as “I-frames” or “anchor frames,” and contain full-frame information independent of any other frames. Image difference frames, i.e., inter-coded frames, are often referred to as “B-frames” and “P-frames,” or as “predictive frames”; they are encoded between the I-frames and reflect only image differences, i.e., residues, with respect to a reference frame.
Typically, each frame of a video sequence is partitioned into smaller blocks of picture element, i.e., pixel, data. Each block is subjected to a discrete cosine transform (DCT) function to convert the statistically dependent spatial-domain pixels into independent frequency-domain DCT coefficients. Respective 8×8 or 16×16 blocks of pixels, referred to as “macro-blocks,” are subjected to the DCT function to provide the coded signal.
The DCT coefficients are usually energy-concentrated, so that only a few of the coefficients in a macro-block contain the main part of the picture information. For example, if a macro-block contains an edge boundary of an object, then the energy in that block, after transformation, as represented by the DCT coefficients, includes a relatively large DC coefficient and AC coefficients distributed randomly throughout the matrix of coefficients.
A non-edge macro-block, on the other hand, is usually characterized by a similarly large DC coefficient and a few adjacent AC coefficients which are substantially larger than the other coefficients associated with that block. The DCT coefficients are typically subjected to adaptive quantization, and then are run-length and variable-length encoded. Thus, the macro-blocks of transmitted data typically include fewer codewords than a full 8×8 matrix.
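The energy-concentration property can be illustrated directly. The following sketch (Python with NumPy; the luminance-ramp block and the orthonormal DCT-II scaling are illustrative assumptions, not normative MPEG details) transforms a smooth 8×8 block and counts how few coefficients carry its energy.

```python
import numpy as np

def dct2(block: np.ndarray) -> np.ndarray:
    """Orthonormal 2-D DCT-II of a square block, as used in block-based coding."""
    N = block.shape[0]
    n = np.arange(N)
    k = n.reshape(-1, 1)
    T = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    T[0, :] /= np.sqrt(2.0)          # DC row gets the extra 1/sqrt(2) factor
    return T @ block @ T.T

# A smooth, non-edge block: a horizontal luminance ramp.
ramp = np.tile(np.linspace(16.0, 235.0, 8), (8, 1))
coeffs = dct2(ramp)

# Nearly all of the energy collapses into the DC term and a few
# low-frequency AC terms, so run-length coding of the quantized
# matrix emits far fewer than 64 codewords.
negligible = (np.abs(coeffs) < 0.01 * np.abs(coeffs).max()).sum()
print(f"DC coefficient: {coeffs[0, 0]:.0f}")
print(f"{negligible} of 64 coefficients are below 1% of the peak magnitude")
```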
The macro-blocks of inter-coded frame data, i.e., encoded P- or B-frame data, include DCT coefficients which represent only the differences between the predicted pixels and the actual pixels in the macro-block. Macro-blocks of intra-coded and inter-coded frame data also include information such as the level of quantization employed, a macro-block address or location indicator, and a macro-block type. The latter information is often referred to as “header” or “overhead” information.
Each P-frame is predicted from the most recently occurring I- or P-frame. Each B-frame is predicted from the I- or P-frames between which it is disposed. The predictive coding process involves generating displacement vectors, often referred to as “motion vectors,” which indicate the magnitude of the displacement to the macro-block of an I-frame that most closely matches the macro-block of the B- or P-frame currently being coded. The pixel data of the matched block in the I-frame is subtracted, on a pixel-by-pixel basis, from the block of the P- or B-frame being encoded, to develop the residues. The transformed residues and the vectors form part of the encoded data for the P- and B-frames.
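A minimal sketch of this block-matching search follows. The exhaustive sum-of-absolute-differences (SAD) search, the 16×16 block size, and the ±7-pixel search range are illustrative choices; real encoders use faster strategies, and the MPEG standards specify the bitstream rather than the search itself.

```python
import numpy as np

def find_motion_vector(ref, block, top, left, search=7):
    """Exhaustive search: the displacement (dy, dx) minimizing SAD against `ref`."""
    n = block.shape[0]
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > ref.shape[0] or x + n > ref.shape[1]:
                continue  # candidate block falls outside the reference frame
            sad = np.abs(ref[y:y + n, x:x + n].astype(int) - block.astype(int)).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)  # stand-in I-frame
cur = np.roll(ref, shift=(2, -3), axis=(0, 1))        # same scene, shifted

top, left, n = 24, 24, 16
block = cur[top:top + n, left:left + n]
dy, dx = find_motion_vector(ref, block, top, left)

# The residue (what is DCT-coded for a P- or B-frame macro-block) is the
# pixel-by-pixel difference against the matched reference block.
residue = block.astype(int) - ref[top + dy:top + dy + n, left + dx:left + dx + n]
print((dy, dx), (residue ** 2).sum())  # -> (-2, 3) 0
```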
Video Analysis
Video analysis can be defined as processing a video with the intention of understanding its content. The understanding of a video can range from a “low-level” syntactic understanding to a “high-level” semantic understanding.
The low-level understanding can be achieved by analyzing low-level features, such as color, motion, texture, shape, and the like. The low-level features can be used to partition the video into “shots.” Herein, a shot is defined as a sequence of frames that begins when the camera is turned on and lasts until the camera is turned off. Typically, the sequence of frames in a shot captures a single “scene.” The low-level features can also be used to generate descriptors. The descriptors can then be used to index the video, e.g., an index of each shot in the video and perhaps its length.
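As a concrete sketch of shot partitioning from a low-level color feature, the fragment below declares a shot boundary wherever the L1 distance between consecutive frame color histograms spikes. The 16-bin histogram and the 0.4 threshold are hypothetical tuning choices, not values drawn from any particular method.

```python
import numpy as np

def color_histogram(frame: np.ndarray, bins: int = 16) -> np.ndarray:
    """Normalized per-channel histogram of an (H, W, 3) uint8 frame."""
    hist = np.concatenate([
        np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)
    return hist / hist.sum()

def detect_shot_boundaries(frames, threshold=0.4):
    """Return indices i where frame i starts a new shot."""
    hists = [color_histogram(f) for f in frames]
    cuts = []
    for i in range(1, len(hists)):
        # Large frame-to-frame jump in color distribution => cut.
        if np.abs(hists[i] - hists[i - 1]).sum() > threshold:
            cuts.append(i)
    return cuts

# Two synthetic "shots": a dark scene followed by a bright one.
rng = np.random.default_rng(1)
shot1 = [rng.integers(0, 80, (48, 64, 3), dtype=np.uint8) for _ in range(5)]
shot2 = [rng.integers(160, 256, (48, 64, 3), dtype=np.uint8) for _ in range(5)]
print(detect_shot_boundaries(shot1 + shot2))  # -> [5]
```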
A semantic understanding of the video is concerned with the genre of the content, and not its syntactic structure. For example, high-level features express whether a video is an action video, a music video, a “talking head” video, or the like.
Video Summarization
Video summarization can be defined as generating a compact representation of a video that still conveys the semantic essence of the video. The compact representation can include “key” frames or “key” segments, or a combination of key frames and segments. As an example, a video summary of a tennis match can include two frames, the first frame capturing both of the players, and the second frame capturing the winner with the trophy. A more detailed and longer summary could further include all frames that capture the match point. While it is certainly possible to generate such a summary manually, this is tedious and costly. Automatic summarization is therefore desired.
Automatic video summarization methods are well known; see S. Pfeiffer et al., “Abstracting Digital Movies Automatically,” J. Visual Comm. Image Representation, vol. 7, no. 4, pp. 345-353, December 1996, and Hanjalic et al., “An Integrated Scheme for Automated Video Abstraction Based on Unsupervised Cluster-Validity Analysis,” IEEE Trans. on Circuits and Systems for Video Technology, vol. 9, no. 8, December 1999.
Most known video summarization methods focus exclusively on color-based summarization. Only Pfeiffer et al. have used motion, in combination with other features, to generate video summaries. However, their approach merely uses a weighted combination that overlooks possible correlation between the combined features. Some summarization methods also use motion features to extract key frames.
As shown in FIG. 1, prior art video summarization methods have mostly emphasized clustering based on color features, because color features are easy to extract and are robust to noise. A typical method takes a video A 101 as input, and applies a color-based summarization process 100 to produce a video summary S(A) 102. The video summary consists of either a single summary of the entire video, or a set of interesting frames.
The method 100 typically includes the following steps. First, cluster the frames of the video according to color features. Second, arrange the clusters in an easy-to-access hierarchical data structure. Third, extract a key frame or a key sequence of frames from each of the clusters to generate the summary.
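A compact sketch of the first and third steps follows (the hierarchical arrangement of the second step is omitted). Mean RGB stands in here for a richer color feature, plain k-means for the unspecified clustering step, and the cluster count k is a hypothetical parameter.

```python
import numpy as np

def kmeans(features, k, iters=20):
    """Plain k-means, seeded with evenly spaced samples; returns (centers, labels)."""
    centers = features[np.linspace(0, len(features) - 1, k).astype(int)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = features[labels == j].mean(axis=0)
    return centers, labels

def color_summary(frames, k=3):
    """Pick one key frame per color cluster: the frame nearest each center."""
    # Mean RGB per frame stands in for a richer color histogram feature.
    feats = np.stack([f.reshape(-1, 3).mean(axis=0) for f in frames])
    centers, labels = kmeans(feats, k)
    keys = []
    for j in range(k):
        members = np.flatnonzero(labels == j)
        if members.size:
            d = np.linalg.norm(feats[members] - centers[j], axis=1)
            keys.append(int(members[d.argmin()]))
    return sorted(keys)

# Six frames drawn from three distinct color ranges => three clusters.
rng = np.random.default_rng(3)
frames = [rng.integers(lo, lo + 60, (48, 64, 3), dtype=np.uint8)
          for lo in (0, 0, 100, 100, 190, 190)]
print(color_summary(frames, k=3))  # one index per color cluster, e.g. [0, 2, 4]
```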
Motion Activity Descriptor
A video can also be intuitively perceived as having various levels of activity or intensity of action. An example of a relatively high level of activity is a scoring opportunity in a sporting event video; a news reader video, on the other hand, has a relatively low level of activity. The recently proposed MPEG-7 video standard provides for a descriptor related to the motion activity in a video.
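The intensity attribute of the MPEG-7 motion activity descriptor is derived from the spread of macro-block motion-vector magnitudes, quantized to five levels. The sketch below follows that idea; the quantization thresholds are illustrative placeholders, not the normative MPEG-7 values.

```python
import numpy as np

def motion_activity(motion_vectors: np.ndarray,
                    thresholds=(0.3, 0.8, 1.5, 2.5)) -> int:
    """Map a frame's (N, 2) motion-vector array to an activity level 1..5."""
    magnitudes = np.linalg.norm(motion_vectors, axis=1)
    sigma = magnitudes.std()
    # Higher spread of motion-vector magnitudes => higher perceived activity.
    return 1 + int(np.searchsorted(thresholds, sigma))

rng = np.random.default_rng(2)
talking_head = rng.normal(0.0, 0.2, (396, 2))   # near-static scene, CIF-sized MV field
sports = rng.normal(0.0, 4.0, (396, 2))         # fast, erratic motion
print(motion_activity(talking_head), motion_activity(sports))  # -> 1 5
```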
SUMMARY OF THE INVENTION
It is an objective of the present invention to provide an automatic video summarization method using motion features, specifically motion activity features, both by themselves and in conjunction with other low-level features such as color and texture features.
The main intuition behind the present invention is based on the following hypotheses. The motion activity of a video is a good indication of the relative difficulty of summarizing the video. The greater the amount of motion, the more difficult it is to summarize the video. A video summary can be quantitatively described by the number of frames it contains, for example, the number of key frames, or the number of frames of key segments.
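The first hypothesis suggests letting motion activity set the size of the summary. The fragment below is one illustrative reading of it, allotting a fixed key-frame budget across shots in proportion to their activity levels; the allocation rule is an assumption for illustration, not the procedure claimed by the invention.

```python
import numpy as np

def allocate_key_frames(activity_per_shot, budget):
    """Split a total key-frame budget across shots, weighted by activity level."""
    weights = np.asarray(activity_per_shot, dtype=float)
    shares = weights / weights.sum()
    # Every shot keeps at least one frame; rounding may drift a frame or
    # two from the exact budget, which this sketch does not correct.
    return np.maximum(1, np.round(shares * budget).astype(int))

# Five shots with activity levels 1..5 and a 12-frame summary budget:
print(allocate_key_frames([1, 4, 2, 5, 3], budget=12))  # -> [1 3 2 4 2]
```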
The relative intensity of motion activity of a video is strongly correlated to changes in color characteristics. In other words, if the intensity
Inventors: Divakaran, Ajay; Peker, Kadir A.; Sun, Huifang
Attorney: Brinkman, Dirk
Examiner: Chen, Wenpeng
Assignee: Mitsubishi Electric Research Laboratories, Inc.