Video superposition system and method

Computer graphics processing and selective visual display system – Computer graphics processing – Graphic manipulation

Reexamination Certificate


Details

Classification: C347S007000
Type: Reexamination Certificate
Status: active
Patent number: 06400374

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates to the field of video superposition devices, and more particularly to multiple image source windowed display generation systems.
BACKGROUND OF THE INVENTION
A well-known video superposition technique, "chroma keying," employs a foreground image which is separated from an actual background by detection of a background screen chrominance value. Thus, for example, a person is presented in front of a blue screen. A video processing circuit detects the chrominance level, producing a signal when the key color is detected. This color is generally a deep blue, for two reasons. First, this color is uncommon in natural foreground scenes, so that artifacts are minimized. Second, this color represents an extreme of the chrominance range, so that a single-ended comparator may be used to produce the key signal.
When the key signal occurs, a video switcher substitutes a synchronized (genlocked) background video signal at the output. Thus, where the key color is not detected, the foreground signal is output, while where the key color is detected, the background signal is output. This technology is well established, and many variations and modifications exist. U.S. Pat. Nos. 4,200,890 and 4,409,618 relate to digital video effects systems employing a chroma key tracking technique. U.S. Pat. No. 4,319,266 relates to a chroma keying system. U.S. Pat. No. 5,251,016 relates to a chroma keyer with secondary hue selector for reduced artifacts. U.S. Pat. No. 5,313,275 relates to a chroma processor including a look-up table or memory, permitting chroma key operation. U.S. Pat. No. 5,398,075 relates to the use of analog chroma key technology in a computer graphics environment. U.S. Pat. No. 5,469,536 relates to an image editing system including masking capability, which employs a computerized hue analysis of the image to separate a foreground object from the background.
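The key-and-switch logic described above can be sketched in software. The following minimal illustration is not taken from the patent; the function name and the per-channel tolerance test (standing in for the analog single-ended comparator) are assumptions:

```python
def chroma_key_composite(foreground, background, key, tolerance):
    """Composite two equally sized images (lists of rows of (r, g, b)
    tuples): wherever a foreground pixel is within `tolerance` of the
    key color on every channel, the background pixel shows through."""
    out = []
    for fg_row, bg_row in zip(foreground, background):
        row = []
        for fg_px, bg_px in zip(fg_row, bg_row):
            is_key = all(abs(c - k) <= tolerance for c, k in zip(fg_px, key))
            row.append(bg_px if is_key else fg_px)
        out.append(row)
    return out

# A 1x2 frame: left pixel is deep blue (the key), right pixel is skin-toned.
KEY_BLUE = (0, 0, 255)
fg = [[(0, 0, 255), (200, 150, 120)]]
bg = [[(10, 10, 10), (10, 10, 10)]]
print(chroma_key_composite(fg, bg, KEY_BLUE, tolerance=30))
# -> [[(10, 10, 10), (200, 150, 120)]]
```

A nonzero tolerance mirrors the hardware comparator's threshold: lighting variation across the blue screen would otherwise leave unkeyed fringes.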
Computer generated graphics are well known, as are live video windows within computer graphics screens. U.S. Pat. No. 3,899,848 relates to the use of a chroma key system for generating animated graphics. U.S. Pat. No. 5,384,912 relates to a computer animated graphics system employing a chroma key superposition technique. U.S. Pat. No. 5,345,313 relates to an image editing system for taking a background and inserting part of an image therein, relying on image analysis of the foreground image. U.S. Pat. No. 5,394,517 relates to a virtual reality, integrated real and virtual environment display system employing chroma key technology to merge the two environments.
A number of spatial position sensor types are known. These include electromagnetic, acoustic, infrared, optical, gyroscopic, accelerometer, electromechanical, and other types. In particular, systems are available from Polhemus and Ascension which accurately measure position and orientation over large areas, using electromagnetic fields.
Rangefinder systems are known, which allow the determination of a distance to an object. Known systems include optical focus zone, optical parallax, infrared, and acoustic methods. Also known are non-contact depth mapping systems which determine a depth profile of an object without physical contact with a surface of the object. U.S. Pat. No. 5,521,373 relates to a position tracking system having a position sensitive radiation detector. U.S. Pat. No. 4,988,981 relates to a glove-type computer input device. U.S. Pat. No. 5,227,985 relates to a computer vision system for position monitoring in three dimensions using non-coplanar light sources attached to a monitored object. U.S. Pat. No. 5,423,554 relates to a virtual reality game method and apparatus employing image chroma analysis for tracking a colored glove as an input to a computer system.
U.S. Pat. No. 5,502,482 relates to a system for deriving a studio camera position and motion from the camera image by image analysis. U.S. Pat. No. 5,513,129 relates to a method and system for controlling a computer-generated virtual environment with audio signals.
SUMMARY OF THE INVENTION
The present invention employs a live video source, a background image source, a mask region generator, and an overlay device which merges the foreground with the background image based on the output of the mask region generator. Two classes of mask region generators are provided: first, an "in-band" system, which acquires the necessary mask region boundaries through the foreground image acquisition system itself; and second, an "out-of-band" system, which provides a separate sensory input to determine the mask region boundary.
A preferred embodiment of the "in-band" system is a rangefinder system which operates through the video camera system to distinguish the foreground object in the live video source from its native background based on differences in distance from the camera lens. Thus, rather than relying on an analysis of the image content per se to extract the foreground object, this preferred embodiment defines the boundary of the object through its focal plane or parallax.
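As a rough software analogue of this in-band approach, a foreground mask can be thresholded from per-pixel distance estimates. This is only a sketch under stated assumptions: the patent's embodiment derives distance through the camera's focal plane or parallax, whereas here an explicit depth map and cutoff distance are simply assumed as inputs:

```python
def depth_mask(depth_map, max_distance):
    """Derive a binary mask from a per-pixel depth map (in meters):
    True where the pixel is nearer than `max_distance`, i.e. part of
    the foreground object rather than its native background."""
    return [[d < max_distance for d in row] for row in depth_map]

# Person at ~1.5 m in front of a wall at ~4 m.
depth = [[1.4, 1.6, 4.1],
         [1.5, 4.0, 4.2]]
print(depth_mask(depth, max_distance=2.5))
# -> [[True, True, False], [True, False, False]]
```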
A preferred embodiment of the “out-of-band” system includes an absolute position and orientation sensor physically associated with the foreground object with a predetermined relationship of the sensor to the desired portion of the foreground object. Thus, where the foreground object is a person, the sensor may be an electromagnetic position sensor mounted centrally on top of the head with the mask region defined by an oval boundary below and in front of the position and orientation sensor.
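A minimal sketch of how such an oval mask region might be computed from the head-mounted sensor reading follows; all names, the pixel-space geometry, and the choice of ellipse radii are illustrative assumptions, not specifics from the patent:

```python
def oval_mask(width, height, sensor_x, sensor_y, rx, ry):
    """Binary mask for a width-by-height frame: an ellipse of radii
    (rx, ry) whose center sits a distance ry below the sensor point,
    approximating the face region beneath a sensor mounted centrally
    on top of the head. All quantities are in pixels."""
    cx, cy = sensor_x, sensor_y + ry  # oval center below the sensor
    mask = []
    for y in range(height):
        mask.append([((x - cx) / rx) ** 2 + ((y - cy) / ry) ** 2 <= 1.0
                     for x in range(width)])
    return mask

m = oval_mask(width=9, height=9, sensor_x=4, sensor_y=1, rx=3, ry=3)
print(m[4][4])  # center of the oval -> True
print(m[0][0])  # top-left corner   -> False
```

Because the sensor also reports orientation, a fuller implementation would rotate and foreshorten the ellipse to track head tilt.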
In a preferred embodiment, the foreground image is a portrait of a person, while the background image is a computer generated image of a figure. A position sensor tracks a head position in the portrait, which is used to estimate a facial area. The image of the facial area is then merged in an anatomically appropriate fashion with the background figure.
The background image is, for example, an animated "character" with a masked facial portion. The live video signal in this case includes, as the foreground image, a face, with the face generally having a defined spatial relation to the position sensor. The masked region of the character is generated at the appropriate position, based on the output of the position sensor, so that the face may be superimposed within the masked region. As seen in the resulting composite video image, the live image of the face is presented within a mask of an animated character, presenting a suitable foundation for a consumer entertainment system. The mask may obscure portions of the face, as desired. Manual inputs or secondary position sensors for the arms or legs of the individual may be used as further control inputs, allowing the user both to control the computer-generated animation and to become a part of the resultant image. This system may therefore be incorporated into larger virtual reality systems to allow an increased level of interaction, while minimizing the need for specialized environments.
In practice, it is generally desired to mask a margin of the face so that no portion of the live video background appears in the composite image. Thus, the actual video background is completely obscured and irrelevant. In order to produce an aesthetically pleasing and natural-appearing result, the region around the face is preferably provided with an image which appears as a mask. Thus, the background image may appear as a masked character, with the foreground image as a video image of a face within the mask region. The mask region may be independent of the video image data, or developed based on an image processing algorithm applied to the video image data. In the latter case, where processing latencies are substantial, the composite output may initially be provided with a video-image-data-independent mask which is modified over time, when the image is relatively static, for greater correspondence with the actual image. Thus, such a progressive rendering system will allow operation on platforms having various available processing power for image processing, while yielding acceptable results on systems havin
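The progressive refinement described above might be sketched as a per-frame blend of the data-independent mask toward an image-derived mask while the scene stays static. This is a hypothetical illustration; the soft 0..1 mask representation and the fixed blend weight are assumptions, not details from the patent:

```python
def refine_mask(prior, measured, weight):
    """One step of progressive mask refinement: move each cell of a
    soft (0..1) mask toward a newly measured value by `weight`.
    With a static scene, repeated steps converge on the measured
    (image-derived) mask; weight=0 keeps the data-independent prior."""
    return [[p + weight * (m - p) for p, m in zip(pr, mr)]
            for pr, mr in zip(prior, measured)]

prior    = [[1.0, 1.0, 0.0]]  # initial sensor-derived oval guess
measured = [[1.0, 0.0, 0.0]]  # slower image-analysis result
step = refine_mask(prior, measured, weight=0.5)
print(step)  # -> [[1.0, 0.5, 0.0]]
```

A slow platform would simply take more frames to converge, which matches the stated goal of acceptable results across a range of available processing power.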
