Automatic multi-camera video composition

Television – Two-way video and voice communication – Conferencing

Reexamination Certificate


Details

348/14.09, 348/14.01

Reexamination Certificate

active

06577333

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates generally to multi-camera video systems, and more particularly to an automatic multi-camera video composition system and a method for its operation.
BACKGROUND OF THE INVENTION
In the general field of video transmission and recording, it is common to concurrently capture video from multiple viewpoints or locations. One common example is sports broadcasting: a baseball game, for example, may use five or more cameras to capture the action from multiple viewing angles. One or more technicians switch between the cameras to provide a television signal that, ideally, shows the best view of whatever is happening in the game at that moment. Another example is movie production. Movie editing, however, takes place long after the events are recorded, with most scenes using a variety of camera shots in a selected composition sequence.
Although perhaps less exciting than a sports contest or a movie, many other applications of multi-camera video data exist. For instance, a selection of camera angles can provide a much richer record of almost any taped or broadcast event, whether that event is a meeting, a presentation, a videoconference, or an electronic classroom, to mention a few examples.
One pair of researchers has proposed an automated camera switching strategy for a videoconferencing application, based on speaker behavioural patterns. See F. Canavesio & G. Castagneri, “Strategies for Automated Camera Switching Versus Behavioural Patterns in Videoconferencing”, in
Proc. IEEE Global Telecommunications Conf.,
pp. 313-18, Nov. 26-29 1984. The system described in this paper has one microphone and one camera for each of six videoconference participants. Two additional cameras provide input for a split-screen overview that shows all participants. A microprocessor periodically performs an “activity talker identification process” that detects who among all of the participants is talking and creates a binary activity pattern consisting of six “talk/no-talk” values.
A number of time-based thresholds are entered into the system. The microprocessor implements a voice-switching algorithm that decides which of the seven camera views (six individual plus one overview) will be used for each binary activity pattern. In essence, the algorithm decides which camera view to use for a new evaluation interval based on who is speaking, which camera is currently selected, and whether the currently-selected camera view has been held for a minimum amount of time. If more than one simultaneous speaker is detected or no one speaks, the system will switch to the conference overview after a preset amount of time. And generally, when one speaker is detected, the system will continuously select the close-up view of that speaker as long as they continue to talk or take only short pauses.
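The voice-switching rule described above can be sketched in code. The sketch below is an illustrative reconstruction under stated assumptions, not the paper's actual algorithm: the camera indices, threshold constants, and function name are all hypothetical, chosen only to show the shape of the decision logic (single speaker gets a close-up, minimum hold time is honored, and silence or overlapping speech falls back to the overview after a delay).

```python
# Illustrative sketch of voice-switched camera selection, loosely modeled on
# the strategy described in the Canavesio & Castagneri paper. All constants
# and names here are assumptions for the sake of the example.

OVERVIEW = 6            # index of the split-screen overview camera
MIN_HOLD = 3            # minimum evaluation intervals a view must stay on air
OVERVIEW_DELAY = 2      # intervals of silence/overlap before cutting to overview

def next_camera(activity, current, held_for, ambiguous_for):
    """Choose the camera view for the next evaluation interval.

    activity      -- binary talk/no-talk pattern, one flag per participant
    current       -- index of the currently selected camera
    held_for      -- intervals the current view has already been held
    ambiguous_for -- consecutive intervals with zero or multiple speakers
    """
    speakers = [i for i, talking in enumerate(activity) if talking]

    # Honor the minimum hold time before allowing any cut.
    if held_for < MIN_HOLD:
        return current

    if len(speakers) == 1:
        # A single detected speaker keeps (or gets) their close-up view.
        return speakers[0]

    # Silence or simultaneous speech: switch to the conference overview
    # once the ambiguity has persisted for the preset amount of time.
    if ambiguous_for >= OVERVIEW_DELAY:
        return OVERVIEW
    return current
```

For instance, with participant 1 as the sole speaker and the hold time satisfied, `next_camera([0, 1, 0, 0, 0, 0], current=6, held_for=5, ambiguous_for=0)` selects camera 1, while a sufficiently long silence returns the overview.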


REFERENCES:
patent: 5686957 (1997-11-01), Baker
patent: 5844599 (1998-12-01), Hildin
patent: 6346963 (2002-02-01), Katsuni
patent: 0523617 (1993-01-01), None
patent: 363003589 (1988-01-01), None
patent: 07-015711 (1995-01-01), None
patent: 408130723 (1996-05-01), None
patent: 408163526 (1996-06-01), None
patent: WO9607177 (1996-03-01), None
patent: WO9960788 (1999-11-01), None
Canavesio, Franco and Castagneri, Giuseppe; “Strategies for Automated Camera Switching Versus Behavioural Patterns in Videoconferencing”; Proc. IEEE Global Telecommunications Conf.; 1984; pp. 313-318.
Goodridge, Steven George; “Multimedia Sensor Fusion for Intelligent Camera Control and Human-Computer Interaction”; printed from the NCSU website at http://www.ie.ncsu.edu/kay/msf on Oct. 9, 2000; sections include abstract, introduction, related work, sound localization, primitive vision, audio-visual sensor fusion for face detection, target tracking, behavior fusion, generic camera behaviors, applications, conclusions, and references.
Kelly, Patrick H.; “An Architecture for Multiple Perspective Interactive Video”; printed from http://www.acm.org/pubs/articles/proceedings/multimedia/217279/p201-kelly.htm on Oct. 9, 2000; pp. 1-16.
