Method and apparatus for rendering video data

Computer graphics processing and selective visual display system – Computer graphics processing – Attributes

Reexamination Certificate


Details

Type: Reexamination Certificate
Status: active
Patent number: 06791561

ABSTRACT:

FIELD OF THE INVENTION
This invention relates to computer-generated graphics and in particular to efficiently rendering video data.
BACKGROUND OF THE INVENTION
Currently, in order to render video data, a source video signal is received by an interface port and sent to a video decoder that digitizes the incoming data. Thereafter Mip Map data may be created in real time and written into a memory buffer (static texture memory) as texture data. Static texture memory stores texture data that is used by a polygon rasterizer (“rasterizer”).
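The real-time Mip Map creation described above can be sketched as repeated 2x2 box-filter downsampling of each incoming video frame. The following is a minimal illustration only; the grayscale frame layout and the box filter are assumptions for clarity, not details taken from the patent:

```python
def build_mip_chain(frame):
    """Build a Mip Map pyramid from one grayscale video frame.

    `frame` is a list of rows (each row a list of pixel values) whose
    width and height are powers of two.  Each successive level averages
    2x2 blocks of the level above (a simple box filter).
    """
    levels = [frame]
    while len(levels[-1]) > 1 and len(levels[-1][0]) > 1:
        src = levels[-1]
        dst = []
        for y in range(0, len(src), 2):
            row = []
            for x in range(0, len(src[0]), 2):
                # Average the 2x2 block into one texel of the next level.
                row.append((src[y][x] + src[y][x + 1] +
                            src[y + 1][x] + src[y + 1][x + 1]) // 4)
            dst.append(row)
        levels.append(dst)
    return levels

# A 4x4 frame produces three levels: 4x4, 2x2, 1x1.
chain = build_mip_chain([[0, 0, 4, 4],
                         [0, 0, 4, 4],
                         [8, 8, 12, 12],
                         [8, 8, 12, 12]])
```

In a conventional system this pyramid would then be written into static texture memory for every frame, which is the source of the contention discussed next.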
Conventional rendering systems are slow when polygon data is textured with live video data because video Mip Map data is continuously written into texture memory while the rasterizer attempts to read data from the same texture memory for rendering. Since the same memory location cannot be read and written simultaneously, a memory conflict occurs. The problem worsens when additional video channels are added, because more time is spent updating the texture memory than using the video texture for polygon rendering. The following describes existing techniques for rendering live video data.
FIG. 1 shows a conventional system for using video data as texture for rendering polygons. FIG. 1 shows an application software module 101 in a host computer system 101B that generates polygon descriptors for displaying a source image on a display device 107. Application software module 101 sends polygon descriptors 101A to a Rasterizer 102. Data is rasterized and polygon data is converted into fragment data. If polygon data is textured with video data, then the color information related to each fragment generated from polygon data is read from static texture memory 103 and sent to a Z buffer 105, and thereafter sent to a frame buffer/video signal generator 106 that sends image data to display device 107.
Z buffer 105 sorts the fragment data from a rendered polygon relative to fragments from other polygons to maintain spatial order from a user's perspective. A typical Z buffer 105 includes an array of memory locations (Z buffer memory), where each location contains a color value (U and V coordinates) and a Z value, which is the distance of a polygon fragment from a view plane.
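A Z buffer of the kind just described can be sketched as an array of per-pixel records, each holding the fragment's color value (here its U, V texture coordinates, as in the passage above) and its Z distance; a new fragment replaces the stored record only when it is closer to the view plane. The field layout and function names below are illustrative assumptions:

```python
# Each screen location stores (u, v, z): the fragment's texture
# coordinates (its "color value" in the passage above) and its
# distance from the view plane.  Entries start infinitely far away.
WIDTH, HEIGHT = 4, 4
zbuf = [[(0.0, 0.0, float("inf")) for _ in range(WIDTH)]
        for _ in range(HEIGHT)]

def write_fragment(x, y, u, v, z):
    """Depth-test a fragment: keep it only if it is nearer than the
    fragment already stored at screen location (x, y)."""
    if z < zbuf[y][x][2]:
        zbuf[y][x] = (u, v, z)

write_fragment(1, 1, 0.25, 0.75, 5.0)   # first fragment is stored
write_fragment(1, 1, 0.50, 0.50, 9.0)   # farther away: rejected
write_fragment(1, 1, 0.10, 0.20, 2.0)   # nearer: replaces the entry
```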
For rendering video data, video data is sent directly from a video source 104 to video capture unit 108, which sends digitized video data to texture memory 103. Video capture unit 108 may include a video decoder to decode incoming video data, or a separate video decoder may be connected to video capture unit 108.
FIG. 1 also shows three cubes, C1, C2 and C3, displayed at any instance. C1, C2 and C3 are textured with video data. However, only cube C1 changes position and/or rotates over a time period “t”. But in order to render video data as polygon texture, rasterizer 102 must re-rasterize all three cubes over time period t. Texture data derived from the video source for the three cubes is stored in static texture memory 103, read from static texture memory 103, and used to color polygon fragments that are then sent to display device 107 via Z buffer 105 and video signal generator 106. However, these operations are redundant: although only one cube is changing position, the stationary cubes must also be constantly re-rendered. This redundant rendering of static objects slows down the overall rendering process and is inefficient.
Hence, what is needed is a method and system that reduces the amount of data processing and efficiently displays video data without the foregoing continuous re-rendering operations.
SUMMARY
The present invention addresses the foregoing drawbacks by providing a method and apparatus that efficiently displays input video data as animated textures without redundant rasterizing. In one embodiment, the process steps receive the input digitized video data in a Mip Map generator, wherein the Mip Map generator converts the digitized video data to Mip Map data and stores the Mip Map data in a V buffer memory. The method further includes sending a data set from a Z buffer to a V buffer and converting the data set to a texel address in the V buffer. The data set includes U, V and Z coordinates, Mip Map level data and channel identification data. Also, the data set is mapped to texel RGB data by the V buffer memory and then transferred back to the Z buffer.
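The data-set-to-texel mapping in this embodiment can be sketched roughly as follows: the V buffer takes the U and V coordinates, Mip Map level, and channel identification from the Z buffer's data set, converts them into an address within its texel memory, and returns the RGB data stored there. The addressing scheme and names below are illustrative assumptions, not the patent's actual circuit:

```python
def texel_lookup(u, v, level, channel, chains):
    """Map a Z-buffer data set (U, V, Mip Map level, channel id) to a
    texel address in V buffer memory and return its stored RGB value.

    `chains` maps a channel id to that channel's Mip Map pyramid:
    a list of levels, each level a list of rows of (r, g, b) texels.
    U and V are normalized to [0, 1) and scaled to the chosen level.
    """
    mip = chains[channel][level]
    height, width = len(mip), len(mip[0])
    x = min(int(u * width), width - 1)    # clamp to the level's edge
    y = min(int(v * height), height - 1)
    return mip[y][x]

# One video channel with a 2-level pyramid of RGB texels.
chains = {
    0: [
        [[(255, 0, 0), (0, 255, 0)],
         [(0, 0, 255), (255, 255, 0)]],   # level 0: 2x2
        [[(128, 128, 64)]],               # level 1: 1x1
    ],
}
rgb = texel_lookup(0.9, 0.1, 0, 0, chains)  # top-right texel of level 0
```

Because the lookup happens per fragment at display time against the continuously refreshed V buffer, the rasterizer need not regenerate polygon fragments just because the video texture changed.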
In another aspect, the present invention provides an apparatus for rendering the input video data. The apparatus includes the Mip Map generator that receives the input video stream and converts input digitized video data to Mip Map data; and a V buffer that receives the Mip Map data associated with the input video data and a data set from the Z buffer. The V buffer includes a V buffer fetch module that receives the data set from the Z buffer and maps it to a texel address containing RGB data within the V buffer memory.
By virtue of the foregoing aspects of the present invention, digitized video data is sent to the Mip Map generator and then the Mip Map data is sent to the V buffer. The V buffer maps data from the Z buffer to a texel address containing RGB data for display, and the rasterizer does not have to re-render every polygon that has an applied video texture.
This brief summary has been provided so that the nature of the invention may be understood quickly. A more complete understanding of the invention can be obtained by reference to the following detailed description of the preferred embodiments thereof in connection with the attached drawings.


REFERENCES:
patent: 4935879 (1990-06-01), Ueda
patent: 5481669 (1996-01-01), Poulton et al.
patent: 5621867 (1997-04-01), Murata et al.
patent: 5696892 (1997-12-01), Redmann et al.
patent: 5774132 (1998-06-01), Uchiyama
patent: 5796407 (1998-08-01), Rebiai et al.
patent: 5798770 (1998-08-01), Baldwin
patent: 5877771 (1999-03-01), Drebin et al.
patent: 6064407 (2000-05-01), Rogers
patent: 6195122 (2001-02-01), Vincent
patent: 6236405 (2001-05-01), Schilling et al.
patent: 6331852 (2001-12-01), Gould et al.
patent: 6348917 (2002-02-01), Vaswani
patent: 6421067 (2002-07-01), Kamen et al.
patent: 6499060 (2002-12-01), Wang et al.
patent: 6621932 (2003-09-01), Hagai et al.
“Rendering CSG Models with a ZZ-Buffer”, Computer Graphics, vol. 24, No. 4, Aug. 1990; by David Salesin and Jorge Stolfi, Stanford University, Stanford, CA; pp. 67-76.
“Hardware Accelerated Rendering of Antialiasing Using a Modified A-Buffer Algorithm”, Computer Graphics, Annual Conference Series 1997; Los Angeles, CA, by Stephanie Winner, Mike Kelley, Brent Pease, Bill Rivard, and Alex Yen; pp. 307-316.

