Method and apparatus for rapidly rendering an image in...

Computer graphics processing and selective visual display systems – Computer graphics processing – Three-dimension

Reexamination Certificate


Status: active

Patent number: 06184888

ABSTRACT:

FIELD OF THE INVENTION
The invention relates to a method and apparatus for rendering an image in response to three-dimensional graphics data, and especially to such a method and apparatus for use in an environment, such as a network, in which the data rate is limited.
BACKGROUND OF THE INVENTION
Due to advances in the processing power of personal computers and other graphics processing devices such as video games, three-dimensional computer graphics (3D graphics) are becoming increasingly popular. The Internet is increasingly being used to deliver content that includes 3D graphics. Home personal computer (PC) users can download 3D graphics data from the Internet and can use browsers to render, display, and interact with the contents.
Rendering techniques for 3D graphics have been developed over a long time. The most recent techniques have been embodied in extension hardware that can be plugged into the graphics adaptors used in home PCs. Increases in the performance of PCs and graphics hardware permit increases in the complexity of the graphics that can be rendered. However, when the source of the 3D graphics data is a network that has a limited data transfer rate, a long loading time is required for the 3D graphics data when the image represented by the graphical data is complex. The Internet is an example of such a network. Known browser programs that run on client computers connected to the network require that substantially all of the 3D graphics data be loaded from the server before they can start performing the graphics rendering calculations necessary to display an image in response to the 3D graphics data. As a result, the user must wait for most of the loading time before the image starts to display. The processing structure of conventional 3D graphics rendering programs imposes long delays before such programs begin to display an image.
In 3D graphics, a three-dimensional space populated by one or more three-dimensional objects is defined. The position of a reference point on each object in the three-dimensional space is also defined. The shape of each object is defined in terms of a set of polygons that covers the surface of the object. For example, an equilateral cube may be defined in terms of six square polygons (squares), each of which is congruent with one of the faces of the cube. A size, a shape and coordinates are defined for each polygon. Additionally, a color and reflectivity may be defined. However, if all or part of a surface of the object is patterned, the polygon representing the surface must be divided into multiple sub-polygons so that the pattern can be represented by assigning different colors and reflectivities to the sub-polygons. This substantially increases the amount of data required to represent the object.
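As an illustration, the following C++ sketch shows one plausible encoding of this polygon-based representation, including a cube built from square faces. All type and field names here are assumptions for illustration, not taken from the patent.

```cpp
#include <vector>

struct Vec3  { float x, y, z; };
struct Color { float r, g, b; };

// One polygon covering part of an object's surface: vertex coordinates
// defining its size, shape and position, plus an optional direct color
// and reflectivity.
struct Polygon {
    std::vector<Vec3> vertices;
    Color color{1.0f, 1.0f, 1.0f};  // direct color when no texture is used
    float reflectivity = 0.0f;
};

// A three-dimensional object: a reference point locating it in the
// three-dimensional space, and a set of polygons covering its surface.
struct Object3D {
    Vec3 reference_point;
    std::vector<Polygon> polygons;
};

// An equilateral cube of edge length s built from six square polygons,
// one congruent with each face (two faces shown; the rest are analogous).
Object3D make_cube(float s) {
    Object3D cube{{0.0f, 0.0f, 0.0f}, {}};
    cube.polygons.push_back({{{0, 0, 0}, {s, 0, 0}, {s, s, 0}, {0, s, 0}}});  // front face
    cube.polygons.push_back({{{0, 0, s}, {s, 0, s}, {s, s, s}, {0, s, s}}});  // back face
    // ... four more faces in the same manner ...
    return cube;
}
```

Representing a patterned surface in this scheme would require splitting each face into many small sub-polygons, each carrying its own color, which is exactly the data growth the paragraph above describes.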
Consequently, if the surface of the object is to be rendered with more than rudimentary patterning, it is preferred to define the appearance of the surface using a texture. A texture is a set of data that represent a pattern of arbitrary complexity, ranging from a rudimentary bitmap to a complex image. The set of data may include image map data, for example, defining the pattern. The polygons representing parts of the surface of the object whose appearance is defined by the texture are tiled using the texture. When textures are used, an object having a complex surface appearance can be represented using fewer data than are required to represent the surface using an increased number of polygons.
When the three-dimensional space populated by three-dimensional objects is displayed as an image on a two-dimensional screen, such as the screen of a computer monitor, a view point is first chosen by the user or is defined by the graphical data. One or more sources of illumination are also defined, or may be chosen. An imaginary plane, called a rendering screen, located between the view point and the three-dimensional space is also defined. The rendering screen is divided into picture elements (pixels) in an arrangement that preferably corresponds to the number of pixels that will be used to display the image. For example, if the image is to be displayed on the screen of an NTSC television or a VGA computer display, the rendering screen is preferably divided into 640×480 pixels. The rendering program then calculates color values for each of the pixels constituting the rendering screen. The color values for each of the pixels normally include a red color value, a green color value and a blue color value. However, different color values such as luminance and color differences may be used. The image is then displayed in response to the color values of the pixels. This may be done, for example, by transferring the color values from the 3D-graphics extension hardware to the video memory of the graphics adaptor of the computer.
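A minimal sketch of such a rendering screen, assuming a simple row-major buffer of red, green and blue color values (the class and member names are illustrative, not the patent's):

```cpp
#include <cstddef>
#include <vector>

struct Color { float r, g, b; };  // red, green and blue color values

// The rendering screen: a grid of pixels for which the renderer
// calculates color values. Luminance/color-difference representations
// mentioned above would work equally well.
class RenderingScreen {
public:
    RenderingScreen(int width, int height)
        : width_(width), height_(height),
          pixels_(static_cast<std::size_t>(width) * height) {}

    void set_pixel(int x, int y, Color c) {
        pixels_[static_cast<std::size_t>(y) * width_ + x] = c;
    }

    // The finished color values would be handed to the display, e.g. by
    // copying them into the video memory of the graphics adaptor.
    const std::vector<Color>& color_values() const { return pixels_; }

private:
    int width_, height_;
    std::vector<Color> pixels_;
};

// A rendering screen matching an NTSC/VGA display, as in the example above.
RenderingScreen screen(640, 480);
```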
A conventional 3D-graphics rendering program that uses so-called Z-buffering may operate as follows:
1. Loading Phase: a complete set of graphics data is loaded from a server via a network or from local files.
2. Geometry Phase: the projections of the polygons onto the rendering screen are calculated. The results of these calculations include a depth value indicating the depth of each vertex of each polygon from the view point.
3. Rasterizing Phase: shading and hidden surface removal operations are performed. A rasterizing operation is performed that converts the projected polygons into object pixels corresponding to the pixels of the rendering screen, and calculates a set of color values for each object pixel. Because the objects are three-dimensional, more than one polygon may project onto a given pixel of the rendering screen. Consequently, color and depth values for more than one object pixel are generated where polygons overlap on the rendering screen. The color and depth values are registered in pixel and depth buffers having storage locations corresponding to the pixels of the rendering screen. In this registration process, when more than one object pixel has been generated for a given rendering screen pixel, the depth values are compared to decide which of the object pixels is visible. Only the color and depth values corresponding to the smallest depth value, i.e., those of the visible object pixel, are stored in the pixel and depth buffers. This operation effectively removes hidden surfaces, i.e., surfaces that cannot be seen from the view point. (A minimal sketch of this depth-test registration appears after this list.)
4. Display Phase: the contents of the pixel buffer are output for display after all polygons are processed by the geometry and rasterizing phases.
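The depth comparison at the heart of the rasterizing phase can be sketched as follows. The buffer layout, initialization values and names are illustrative assumptions, not the patent's implementation:

```cpp
#include <cstddef>
#include <limits>
#include <vector>

struct Color { float r, g, b; };

// Pixel and depth buffers with one storage location per rendering-screen
// pixel, as described in the rasterizing phase above.
struct ZBuffer {
    int width, height;
    std::vector<Color> pixel_buffer;  // color values of visible object pixels
    std::vector<float> depth_buffer;  // depth values of visible object pixels

    ZBuffer(int w, int h)
        : width(w), height(h),
          pixel_buffer(static_cast<std::size_t>(w) * h, Color{0, 0, 0}),
          depth_buffer(static_cast<std::size_t>(w) * h,
                       std::numeric_limits<float>::infinity()) {}

    // Register one object pixel. When several object pixels map to the
    // same rendering-screen pixel, only the one with the smallest depth
    // (the visible one) survives; hidden surfaces are removed implicitly.
    void register_object_pixel(int x, int y, float depth, Color color) {
        const std::size_t i = static_cast<std::size_t>(y) * width + x;
        if (depth < depth_buffer[i]) {
            depth_buffer[i] = depth;
            pixel_buffer[i] = color;
        }
    }
};
```

Initializing the depth buffer to infinity means the first object pixel generated for any screen pixel always wins the comparison; after all polygons are rasterized, the pixel buffer holds exactly the visible surface, ready for the display phase.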
In the rasterizing phase, a shading operation calculates the color values for each object pixel. This step calculates a set of intensity coefficients for each pixel and multiplies the original color values of the pixel by the respective intensity coefficients to obtain the pixel color values. The set of intensity coefficients usually includes a red, a green and a blue intensity coefficient. The intensity coefficients define the effect of lighting, and depend on a material property of the polygon in which the object pixel is located, on the color, position and direction of each light source, and on the position and direction of the view point. Various methods exist for calculating intensity coefficients. The polygon's original color is part of the graphics data defining the properties of the polygon. The original color of the polygon may be defined as a direct color or as a texture, as described above. When a direct color is used, the direct color itself serves as the original color. When a texture is mapped onto the polygon, the original color of each object pixel of the polygon is calculated from the pattern data representing the texture.
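As a concrete illustration of this step, the sketch below derives intensity coefficients from a single light source using a Lambertian (cosine) term. The patent does not prescribe a particular lighting model, so this choice, like all names here, is an assumption:

```cpp
#include <algorithm>

struct Color { float r, g, b; };
struct Vec3  { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Intensity coefficients for one light source: the light's color values
// scaled by the cosine of the angle between the surface normal and the
// direction to the light. Both vectors are assumed to be unit length.
Color intensity_coefficients(Vec3 normal, Vec3 to_light, Color light_color) {
    const float lambert = std::max(0.0f, dot(normal, to_light));
    return {light_color.r * lambert,
            light_color.g * lambert,
            light_color.b * lambert};
}

// Pixel color values: the original color values (direct color or texel
// color) multiplied by the respective intensity coefficients.
Color shade(Color original, Color coefficients) {
    return {original.r * coefficients.r,
            original.g * coefficients.g,
            original.b * coefficients.b};
}
```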
The pattern data of the texture defining the original color of each object pixel are determined as follows:
1. For each pixel coordinate (X, Y) of the rendering screen, associated texture coordinates (s, t) are calculated; and
2. A pixel color is accessed or interpolated from the pattern data of one or more texels (a sketch of such a lookup follows this list).
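A minimal sketch of step 2, assuming normalized texture coordinates and bilinear interpolation over row-major pattern data. Both choices are assumptions; the patent leaves the access and interpolation method open:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Color { float r, g, b; };

// Pattern data of a texture: a row-major grid of texels.
struct Texture {
    int width, height;
    std::vector<Color> texels;

    Color texel(int u, int v) const {
        return texels[static_cast<std::size_t>(v) * width + u];
    }
};

static Color lerp(Color a, Color b, float t) {
    return {a.r + (b.r - a.r) * t,
            a.g + (b.g - a.g) * t,
            a.b + (b.b - a.b) * t};
}

// Interpolate a pixel color from the four texels surrounding the texture
// coordinates (s, t), both assumed to lie in [0, 1].
Color sample(const Texture& tex, float s, float t) {
    const float u = s * (tex.width - 1);
    const float v = t * (tex.height - 1);
    const int u0 = static_cast<int>(u);
    const int v0 = static_cast<int>(v);
    const int u1 = std::min(u0 + 1, tex.width - 1);
    const int v1 = std::min(v0 + 1, tex.height - 1);
    const float fu = u - u0;
    const float fv = v - v0;
    const Color top    = lerp(tex.texel(u0, v0), tex.texel(u1, v0), fu);
    const Color bottom = lerp(tex.texel(u0, v1), tex.texel(u1, v1), fu);
    return lerp(top, bottom, fv);
}
```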
