Using automated content analysis for audio/video content...

Data processing: database and file management or data structures – Database design – Data structure types

Reexamination Certificate


Details

Classification: C725S136000, C725S149000

Type: Reexamination Certificate

Status: active

Patent number: 07640272

ABSTRACT:
Audio/video (A/V) content is analyzed using speech and language analysis components. Metadata is automatically generated based upon the analysis. The metadata is used in generating user interface interaction components which allow a user to view subject matter in various segments of the A/V content and to interact with the A/V content based on the automatically generated metadata.
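The pipeline the abstract describes — speech/language analysis yielding time-stamped segments, metadata generated from those segments, and UI components that let a user jump to subject matter within the content — can be sketched as follows. This is a minimal illustration, not the patented implementation; the `Segment` structure, field names, and `build_metadata` function are all hypothetical.

```python
# Hypothetical sketch of the metadata pipeline described in the abstract:
# speech/language analysis produces time-stamped segments tagged with
# subject-matter keywords; indexing those tags lets a UI offer
# per-segment navigation of the A/V content.
from dataclasses import dataclass, field

@dataclass
class Segment:
    start_s: float   # segment start time in seconds
    end_s: float     # segment end time in seconds
    transcript: str  # speech-recognition output for this span
    keywords: list = field(default_factory=list)  # subject-matter tags

def build_metadata(segments):
    """Index segments by keyword so a UI can jump to matching spans."""
    index = {}
    for seg in segments:
        for kw in seg.keywords:
            index.setdefault(kw, []).append((seg.start_s, seg.end_s))
    return index

# Example: two analyzed segments of a recorded meeting
segs = [
    Segment(0.0, 42.5, "welcome and agenda", ["agenda"]),
    Segment(42.5, 180.0, "budget discussion", ["budget", "finance"]),
]
meta = build_metadata(segs)
# meta["budget"] → [(42.5, 180.0)]
```

A front end could render `meta` as a clickable topic list, seeking the player to each segment's `start_s` — the kind of interaction component the abstract attributes to the automatically generated metadata.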

REFERENCES:
patent: 6553345 (2003-04-01), Kuhn et al.
patent: 6643620 (2003-11-01), Contolini et al.
patent: 7047554 (2006-05-01), Lortz
patent: 7508535 (2009-03-01), Hart et al.
patent: 2003/0018475 (2003-01-01), Basu et al.
patent: 2003/0225825 (2003-12-01), Healey et al.
patent: 2005/0010409 (2005-01-01), Hull et al.
patent: 2005/0243166 (2005-11-01), Cutler
patent: 2005/0243168 (2005-11-01), Cutler
patent: 2007/0118873 (2007-05-01), Houh et al.
Konstantinos Koumpis and Steven Renals; “Content-Based Access to Spoken Audio”; Sep. 2005; IEEE Signal Processing Magazine.

