Scheme for extraction and recognition of telop characters...

Image analysis – Pattern recognition – Feature extraction

Details

C382S173000, C382S177000, C382S181000, C382S190000, C382S197000, C382S200000, C382S245000, C382S254000, C358S426130

Reexamination Certificate

active

06501856

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to image data processing techniques relevant to a video editing system for editing video data by attaching various information to video data, and to a video-database system or a video image providing system for managing and retrieving video data, and more particularly, to image data processing techniques for extracting, processing, editing, recording, and displaying telop (caption) information contained in video data so as to enhance the utility of video data at video input, recording, and displaying devices such as TV, VTR, DVD, etc.
2. Description of the Background Art
A technique for detecting a frame that contains characters from among the plurality of frames constituting a video image has been studied actively in recent years, and many methods based on an intensity difference between frames have been proposed. Such a method is suitable for the purpose of detecting the first telop character displaying frame among successive frames in which the identical characters are displayed.
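As an illustration of this conventional approach (not the method of the present invention), a minimal sketch of an intensity-difference detector is given below; the grayscale input, the mean-absolute-difference measure, and the threshold value are assumptions made for the example.

import numpy as np

def detect_candidate_frames(frames, threshold=12.0):
    """Flag frames whose intensity differs strongly from the previous frame.

    frames: list of 2-D uint8 grayscale arrays of identical size.
    Returns indices of candidate first telop-displaying frames.
    """
    candidates = []
    for i in range(1, len(frames)):
        prev = frames[i - 1].astype(np.float32)
        curr = frames[i].astype(np.float32)
        # Mean absolute intensity difference between consecutive frames.
        if np.mean(np.abs(curr - prev)) > threshold:
            candidates.append(i)
    return candidates
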
However, a video image can also contain telop characters that are displayed in motion (which will be referred to as rolling telop characters hereafter), as in the video image of a talk show in which a brief introduction of a person on the show is rolled from left to right on a lower portion of a display screen. In such a case, the intensity difference between successive frames hardly changes immediately after the telop character series starts to appear on the display screen, so that it has been difficult to detect a frame that contains the rolling telop characters by the conventional method.
In addition, the conventional method is also associated with a problem of over-detection, in which a plurality of frames displaying the same telop characters are redundantly detected; this is caused when the intensity of the background portion around the characters changes abruptly during the successive frames in which the identical characters are displayed.
On the other hand, a method based on edge pair feature points, as disclosed in Japanese Patent Application No. 9-129075 (1997), accounts only for the gradient directions of two neighboring edges and not for the change of intensity value between the edges, so that it can erroneously detect a frame with a large intensity change between edges even when there is no character displayed in that frame.
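For clarity, the following rough sketch (an interpretation for illustration only, not the cited application's actual algorithm) treats an edge pair along one scan line as a rising edge followed shortly by a falling edge, as typically produced by a bright character stroke on a dark background; as noted above, such a test uses only the two gradient directions and ignores how the intensity behaves between the edges.

import numpy as np

def edge_pair_points(row, grad_thresh=40, max_gap=10):
    """row: 1-D uint8 intensity values along one scanning line."""
    grad = np.diff(row.astype(np.int32))
    rising = np.where(grad > grad_thresh)[0]
    falling = np.where(grad < -grad_thresh)[0]
    pairs = []
    for r in rising:
        # Nearest falling edge within max_gap pixels to the right of the rising edge.
        later = falling[(falling > r) & (falling <= r + max_gap)]
        if later.size:
            pairs.append((int(r), int(later[0])))
    return pairs
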
As for a technique for extracting information contained in video data, a telop character detection method has been conventionally known. The telop character detection method proposed so far detects the appearance of telop characters using a spatial distribution of feature points that appear characteristically at character portions, and extracts a series of telop characters by utilizing the property that many telop characters remain static on a display screen for some period of time.
However, such a conventional telop character detection method cannot deal with rolling telop characters that are displayed in motion, because of its reliance on the property that many telop characters remain static on a display screen for some period of time.
In order to detect rolling telop characters as a series of telop characters, there is a need to estimate a moving distance of the rolling telop characters, and establish correspondences of telop characters that are commonly displayed over consecutive image frames. Moreover, in order to detect a telop character image (an image of characters themselves) from a video image accurately, there is a need to accurately superpose corresponding character image portions that are commonly displayed over consecutive image frames.
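As an illustration of the moving-distance estimation problem stated above (not the solution provided by the present invention), the following sketch estimates a purely horizontal shift between the telop bands of two consecutive frames by exhaustive search; the horizontal-only motion, the known telop band, and the search range are assumptions.

import numpy as np

def estimate_horizontal_shift(band_prev, band_curr, max_shift=30):
    """band_prev, band_curr: 2-D uint8 arrays covering the rows of the telop band."""
    a = band_prev.astype(np.float32)
    b = band_curr.astype(np.float32)
    best_shift, best_cost = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        # Overlap the two bands at horizontal offset s and measure the mismatch.
        if s > 0:
            diff = a[:, s:] - b[:, :-s]
        elif s < 0:
            diff = a[:, :s] - b[:, -s:]
        else:
            diff = a - b
        cost = float(np.mean(np.abs(diff)))
        if cost < best_cost:
            best_cost, best_shift = cost, s
    return best_shift  # pixels per frame; the sign indicates the rolling direction
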
However, the rolling telop characters are often subject to slant or extension/contraction, so that sufficient accuracy cannot be obtained by merely superposing corresponding character image portions using a moving distance of the telop characters as a whole. Consequently, there is also a need to correct local displacement or distortion in addition to calculating the moving distance of the telop characters. However, there has been no established technique for carrying out the moving distance calculation and the local correction accurately within a practically feasible processing time.
As for a character region extraction technique that can stably extract character portions as connected pixel regions, with a small amount of computation, from character-displaying frame images among the plurality of frames constituting a color video image, or from a still color image in which characters are displayed, many studies have been made conventionally, including the character region extraction method proposed in H. Kuwano, S. Kurakake, K. Okada, "Telop Character Extraction from Video Data", Proc. of IEEE International Workshop on Document Image Analysis, pp. 82-88, June 1997. See also A. Shio, "An Automatic Thresholding Algorithm Based on an Illumination-Independent Contrast Measure", Proc. of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 632-637, San Diego, Calif., Jun. 4-8, 1989.
This method forms connected pixel regions which are adjacent to each other in the image space and which have resembling intensity, saturation, and hue, by carrying out the division in the one-dimensional color spaces of intensity, saturation, and hue, sequentially in this order, with respect to the input color image, and then removes from the formed connected pixel regions those regions which do not satisfy the character region criteria.
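A simplified sketch of this kind of processing is given below (an approximation for illustration, not a faithful reproduction of the cited method): pixels are coarsely quantized in intensity, saturation, and hue, spatially connected pixel groups are labelled, and components failing a crude size criterion are discarded; the bin widths and area limits are assumptions.

import numpy as np
import cv2

def candidate_character_regions(bgr, min_area=20, max_area=5000):
    """bgr: H x W x 3 uint8 color image (OpenCV BGR order)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h = hsv[..., 0].astype(np.int32)   # hue, 0..179 in OpenCV
    s = hsv[..., 1].astype(np.int32)   # saturation, 0..255
    v = hsv[..., 2].astype(np.int32)   # intensity (value), 0..255
    # Coarse quantization applied in the order intensity -> saturation -> hue,
    # combined into one code so that pixels with resembling values share a code.
    code = (v // 32) * 100 + (s // 64) * 10 + (h // 23)
    masks = []
    for c in np.unique(code):
        same_code = (code == c).astype(np.uint8)
        # Connected pixel regions that are adjacent in the image space.
        n, labels = cv2.connectedComponents(same_code)
        for lab in range(1, n):
            region = labels == lab
            area = int(region.sum())
            if min_area <= area <= max_area:   # crude character-region criterion
                masks.append(region)
    return masks
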
In this conventional method, the division processing in the intensity space is carried out within a local rectangular region in the image, using a threshold obtained within that rectangular region, so that there is the advantage that a good character region extraction result can be obtained even in the case of a local intensity variation within the image.
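The local-threshold idea can be sketched as follows (the block size and the use of Otsu's method inside each rectangle are assumptions; the cited method's actual criterion may differ): each rectangular region is binarized with a threshold computed from that region alone, so that a slow intensity variation across the image does not spoil the result.

import numpy as np
import cv2

def blockwise_binarize(gray, block=32):
    """gray: 2-D uint8 image; returns a binary image thresholded per block."""
    out = np.zeros_like(gray)
    h, w = gray.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = gray[y:y + block, x:x + block]
            # Threshold chosen from the statistics of this rectangular region only.
            _, binary = cv2.threshold(tile, 0, 255,
                                      cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            out[y:y + block, x:x + block] = binary
    return out
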
However, in this conventional method, when the input character-displaying color image is a video image in the NTSC signal format used in TV broadcasting, there has been a problem that the character region is extracted with the degraded character portions missing.
Usually, a video image in the NTSC signal format has the feature that the original colors are degraded: the color of each pixel is blurred along each scanning line in the image, and the color of the background portion is blurred into the characters at the left and right boundaries between the characters and the background portion in the image. For the horizontal components within a character, the degradation occurs only at the left and right edges and the central portion is unaffected, but for the vertical components, the entire character portion can be degraded when the character width is narrow, in which case the intensity is lowered such that the intensity contrast between the horizontal components and the vertical components within the character becomes high (see FIG. 26 and FIG. 27).
For this reason, in the above described conventional method, when the threshold is determined within the rectangular region that contains a connecting portion of the horizontal components and the vertical components of a degraded character portion in the video image of the NTSC signal format, the degraded vertical components will be regarded as background, so that an incomplete character region will be extracted (see FIG. 28).
Namely, FIG. 26 shows an exemplary case of the degradation that occurs within characters displayed in a video image of the NTSC signal format, where the black background color is blurred into the alphabetic characters "Acoustic Echo Canceller" and the corresponding Japanese characters shown above them, such that the vertical components within the characters are degraded into gray.
FIG. 27 is a diagram illustrating the degradation within the character, where the black background color is blurred into an interior of the white telop character
