Assembling verbal narration for digital display images

Computer graphics processing and selective visual display system – Display driving control circuitry – Controlling the condition of display elements

Reexamination Certificate


Details

C345S215000, C715S252000

active

06803925

ABSTRACT:

TECHNICAL FIELD
The present invention relates to providing verbal explanations for digital display images and, in particular, to combining verbal narration with digital display images and automatic cinematic manipulations.
BACKGROUND AND SUMMARY
A person often shares a display image (e.g., a still photograph) with another person by telling a story (i.e., a verbal narration) about what is shown in the image. With conventional print photographs, the story is typically told in person, in a comfortable and convenient setting that encourages a spontaneous telling of the story. The personal presence of the story-teller and the spontaneous flow of the story increase the interest of the listener.
With the increasingly wide use of digital cameras and other tools for creating digital media, still photographs and other display images can be distributed and shared widely without the direct personal presence of the story-teller. The problem is that computer-based sharing of such digital images typically results in the posting of static images, perhaps with brief text captions. Given the effort of typing and the greater formality commonly expected of written text compared with spoken language, written captions rarely capture the context, mood, and details of a spoken narrative.
Also, an in-person explanation of still images typically includes gestures or pointing toward significant portions of the image, which highlights the relevant parts and helps the story-teller recall and tell the story. A static display of still images includes no such highlights or story-telling aids. Passively viewing an image while telling its story often hinders the story-teller's spontaneity, and the static display of images is less interesting for the viewer.
Accordingly, a goal is to recreate or simulate in a computer system the experience of sharing photographs in person. The present system provides a computer-based environment analogous to in-person sharing of photographs by utilizing spontaneous verbal narration or story-telling, together with manual indications by the story-teller of significant or relevant image portions. The system combines the verbal narration with automatic cinematic display manipulations that relate to the story-teller's manual indications to form a multimedia production or “movie” from the display images. The cinematic display manipulations may include pans, zooms, fades, and the like that animate the display of the images and the transitions between them.
In one implementation, a narration assembly method combines narration with digital display media components to simplify production of a narrated sequence or “video” from multiple separate components. The digital display media components or images may be, for example, still digital graphics or photographs, as well as video segments or computer display screen images or pages (e.g., Web pages, or office application pages such as slides from Microsoft's PowerPoint® software or pages from word processing software).
The narration assembly method includes selecting a digital display image within a set of images and recording in a computer system a verbal narration by the user (i.e., the story-teller) relating to the image. While telling the story, the user is prompted to indicate or point to relevant portions of the image, such as with a computer input device like a mouse. The locations or regions indicated during the narration are also recorded in the computer system.
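A minimal sketch of this recording step, in Python. The names and structures here (Indication, NarrationRecord, normalized image coordinates) are illustrative assumptions rather than the patent's actual implementation; the point is simply that the narration audio and the timestamped pointer indications are captured together so they can later be synchronized.

```python
import time
from dataclasses import dataclass, field


@dataclass
class Indication:
    """A region of the image the narrator pointed to, with its narration-relative time."""
    x: float          # normalized horizontal position (0..1), assumed coordinate convention
    y: float          # normalized vertical position (0..1)
    timestamp: float  # seconds from the start of the narration


@dataclass
class NarrationRecord:
    """Verbal narration for one image plus the regions indicated while speaking."""
    image_path: str
    audio_path: str
    indications: list[Indication] = field(default_factory=list)

    def add_indication(self, x: float, y: float, narration_start: float) -> None:
        """Record a pointer indication at the current moment in the narration."""
        self.indications.append(Indication(x, y, time.time() - narration_start))
```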
A digital multimedia production is formed in accordance with the narration and the display image locations or regions indicated by the user. The digital multimedia production may be in the form of, or include, a video, slide show, web tour, or any other series of images arranged in time and synchronized with audio or textual commentary. The multimedia production is formed using cinematic image manipulations and predefined cinematic rules that are applied automatically, without user input. The cinematic image manipulations provide a dynamic image display that relates to the story-teller's manual indications and improve the viewing of the images. The predefined cinematic rules ensure that the resulting multimedia production conforms to conventional cinematic practices, avoiding unconventional image manipulations that would distract from the verbal narration.
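Continuing the sketch above, a hypothetical planner that maps the recorded indications to pan-and-zoom keyframes. The specific rules here (a fixed settle time, a fixed zoom factor, returning to the full frame at the end) are assumptions chosen only to illustrate how predefined cinematic rules might be applied automatically; they are not the rule set claimed in the patent.

```python
from dataclasses import dataclass


@dataclass
class Keyframe:
    """Virtual-camera state over a still image at a point on the narration timeline."""
    time: float      # seconds from the start of the narration
    center_x: float  # normalized pan target (0..1)
    center_y: float
    zoom: float      # 1.0 = full frame; larger values zoom toward the target


def plan_keyframes(record: NarrationRecord, audio_duration: float,
                   settle_time: float = 1.5, zoom_in: float = 2.0) -> list[Keyframe]:
    """Turn timestamped indications into pan/zoom keyframes.

    Illustrative rules (assumptions): start on the full frame, move toward
    each indicated region so the camera settles shortly after the narrator
    points to it, and return to the full frame by the end of the narration.
    """
    keyframes = [Keyframe(0.0, 0.5, 0.5, 1.0)]
    for ind in sorted(record.indications, key=lambda i: i.timestamp):
        keyframes.append(Keyframe(ind.timestamp + settle_time, ind.x, ind.y, zoom_in))
    keyframes.append(Keyframe(audio_duration, 0.5, 0.5, 1.0))
    return keyframes
```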
The present system facilitates the spontaneous telling of stories about display images and yields a production analogous to a professional documentary film built from narrated still images. As in a documentary film, the present system can pan and zoom over the images in an aesthetically pleasing manner. Because the cinematic image manipulations are applied automatically, the story-telling spontaneity is preserved and the user is spared the technical difficulty of producing a cinematically pleasing sequence.
Additional objects and advantages of the present invention will be apparent from the detailed description of the preferred embodiment thereof, which proceeds with reference to the accompanying drawings.


REFERENCES:
patent: 6084590 (2000-07-01), Robotham et al.
patent: 6108001 (2000-08-01), Tuttle
patent: 6121963 (2000-09-01), Ange
patent: 6333753 (2001-12-01), Hinckley
patent: 6369835 (2002-04-01), Lin
patent: 6480191 (2002-11-01), Balabanovic
patent: 6624826 (2003-09-01), Balabanovic
patent: 6665835 (2003-12-01), Gutfrund et al.
patent: 2002/0109712 (2002-08-01), Yacovone et al.
patent: 2003/0085913 (2003-05-01), Ahmad et al.
Minos N. Garofalakis, et al., Resource Scheduling for Composite Multimedia Objects, Proceedings of the 24th VLDB Conference, 1998, 12 pages, New York, USA.
Gultekin Ozsoyoglu, et al., Automating the Assembly of Presentations from Multimedia Databases, Case Western Reserve University, 1996, 20 pages, Cleveland, USA.
