Subtitle generation and retrieval combining document with...

Data processing: speech signal processing – linguistics – language – Application

Reexamination Certificate


Details

C704S009000, C704S246000, C382S321000, C707S793000


active

07739116

ABSTRACT:
Subtitle generation methods and apparatus are provided that recognize voice in a presentation to generate subtitles thereof, together with retrieval apparatus for retrieving character strings by use of the subtitles. An apparatus of the present invention includes: an extraction unit for extracting text from presentation documents; an analysis unit for morphologically analyzing the text to decompose it into words; a generation unit for generating common keywords by assigning weights to the words; a registration unit for adding the common keywords to a voice recognition dictionary; a recognition unit for recognizing voice in a presentation; a record unit for recording the correspondence between page and time by detecting page-switching events; a regeneration unit for regenerating the common keywords by further referring to the correspondence between page and time; a control unit for controlling the display of subtitles, common keywords, text, and master subtitles; and a note generation unit for generating speaker notes from the subtitles.
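The keyword-generation step described above (decompose slide text into words, assign weights, select common keywords for the recognition dictionary) can be illustrated with a minimal sketch. This is not the patented method: the function name, frequency-based weighting, and whitespace tokenization (standing in for morphological analysis) are all illustrative assumptions.

```python
from collections import Counter

def extract_common_keywords(pages, top_n=5):
    """Illustrative sketch: weight words by frequency across presentation
    pages and return the top candidates for a speech-recognition
    dictionary. Whitespace tokenization stands in for the morphological
    analysis described in the abstract; the weighting scheme is assumed."""
    counts = Counter()
    for text in pages:
        for word in text.lower().split():
            word = word.strip(".,;:!?")
            if len(word) > 3:  # crudely skip short function words
                counts[word] += 1
    return [word for word, _ in counts.most_common(top_n)]

pages = [
    "Speech recognition converts voice to subtitles.",
    "Subtitles are generated from recognition output.",
    "Keywords from slides improve speech recognition accuracy.",
]
print(extract_common_keywords(pages, top_n=3))
```

In the apparatus described, these keywords would then be registered in the voice recognition dictionary, biasing recognition toward the presentation's vocabulary.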

REFERENCES:
patent: 5572728 (1996-11-01), Tada et al.
patent: 5680511 (1997-10-01), Baker et al.
patent: 6438523 (2002-08-01), Oberteuffer et al.
patent: 6823308 (2004-11-01), Keiller et al.
patent: 7013273 (2006-03-01), Kahn
patent: 7117231 (2006-10-01), Fischer et al.
patent: 7191117 (2007-03-01), Kirby et al.
patent: 7298930 (2007-11-01), Erol et al.
patent: 7490092 (2009-02-01), Sibley et al.
patent: 2002/0143531 (2002-10-01), Kahn
patent: 2007/0011012 (2007-01-01), Yurick et al.
patent: 2007/0126926 (2007-06-01), Miyamoto et al.
patent: 2007/0186147 (2007-08-01), Dittrich
patent: HEI07-182365 (1995-07-01), None
patent: 2002-268667 (2002-09-01), None
Sharon Oviatt, Taming recognition errors with a multimodal interface, Communications of the ACM, v.43 n.9, p. 45-51, Sep. 2000.
Michael Bett, Ralph Gross, Hua Yu, Xiaojin Zhu, Yue Pan, Jie Yang, and Alex Waibel, “Multimodal meeting tracker,” in Proceedings of RIAO2000, Paris, France, Apr. 2000.
Alex Waibel, Michael Bett, Florian Metze, Klaus Ries, Thomas Schaaf, Tanja Schultz, Hagen Soltau, Hua Yu, and Klaus Zechner, "Advances in automatic meeting record creation and access," in Proceedings of ICASSP 2001, May 2001.
Stelios Piperidis, Iason Demiros, Prokopis Prokopidis, Peter Vanroose, Anja Hoethker, Walter Daelemans, Elsa Sklavounou, Manos Konstantinou, and Yannis Karavidas, "Multimodal multilingual resources in the subtitling process," in Proceedings of the 4th International Language Resources and Evaluation Conference, May 26-28, 2004.
Hürst, W., Müller, R. & Mayer, C. (2000). Multimedia Information Retrieval from Recorded Presentations. ACM SIGIR 2000, Athens, Greece.
He, L., Sanocki, E., Gupta, A., and Grudin, J., "Comparing Presentation Summaries: Slides vs. Reading vs. Listening," in Proceedings of CHI 2000, Apr. 2000.
He, L., Sanocki, E., Gupta, A., and Grudin, J. “Auto-Summarization of Audio-Video Presentations,” in Proceedings of ACM Multimedia'99, 1999.
