Architectural extensions to 3D texturing units for...

Computer graphics processing and selective visual display system – Computer graphics processing – Three-dimension

Reexamination Certificate


Details

U.S. Classification: C345S424000, C345S582000

Type: Reexamination Certificate

Status: active

Patent Number: 06542154

ABSTRACT:

BACKGROUND OF THE INVENTION
The present invention relates to computer graphics. More specifically, the present invention relates to volume rendering using 3-D texturing units.
Surface-oriented graphics and volume graphics are two important fields within computer graphics. They differ in the way objects are represented in the computer and the way they are displayed.
A. Surface-Oriented Graphics
In surface-oriented graphics, only the surface of a given object is stored inside the computer. In practice, a curved surface is approximated by a potentially large number of triangles or other polygons. The triangles in turn are defined by the properties of their vertices. Thus, objects are defined by a structured list of vertices, wherein each vertex in turn is defined by a multitude of data items, comprising at least its geometric coordinates. The objects, such as the body of a car and its tires, are usually defined in their own coordinate systems.
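As a loose illustration only (the type and field names below are assumptions for this sketch, not taken from the patent), such a structured vertex list might be represented in C++ as follows:

// Minimal sketch of a surface description as a structured list of
// vertices referenced by triangles. Field names are illustrative.
#include <vector>

struct Vertex {
    float x, y, z;      // geometric coordinates in the object's own space
    float r, g, b;      // per-vertex color
    float u, v;         // texture coordinates assigned during modeling
};

struct Triangle {
    int v0, v1, v2;     // indices into the vertex list
};

struct Object {
    std::vector<Vertex>   vertices;
    std::vector<Triangle> triangles;
};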
A database containing all the objects is usually stored in main memory and maintained by a host CPU.
For the display, the objects are transformed into screen space according to the desired arrangement in the scene. These geometric transformations, which usually include a perspective transformation, finally give the projection of each triangle on the display area of a computer graphics output device. These transformations are usually performed by the host CPU or a specialized geometry processor.
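As a rough sketch of this stage (the focal length and viewport size below are illustrative assumptions; real pipelines use full 4×4 matrix transformations), a perspective projection into screen space might look like:

// Sketch of projecting a view-space vertex onto a raster display area.
// The focal length and viewport size are illustrative assumptions.
#include <cstdio>

struct ScreenPos { float x, y; };

// Simple pinhole perspective: divide by depth, then map the normalized
// coordinates onto a width x height pixel grid.
ScreenPos project(float x, float y, float z,
                  float focal, int width, int height) {
    float px = focal * x / z;
    float py = focal * y / z;
    return { (px * 0.5f + 0.5f) * width,
             (py * 0.5f + 0.5f) * height };
}

int main() {
    ScreenPos p = project(0.3f, -0.2f, 2.0f, 1.0f, 640, 480);
    std::printf("projected to pixel (%.1f, %.1f)\n", p.x, p.y);
}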
Typically, computer graphics output devices are raster devices, i.e., the display area (or screen) comprises a set of discrete picture elements, or pixels for short. Each triangle's projection on the screen is therefore decomposed into the set of pixels it covers, a process called scan-conversion or rasterization. Commonly, square pixels are assumed.
For each pixel inside a projected triangle, the local color of the surface is computed by applying illumination models. This process is called shading. As an example, the colors of the vertices are computed prior to rasterization in accordance with the location and physical behavior of the light sources in the scene. To determine an individual pixel color, the values at the vertices are then interpolated at the pixel location.
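The following sketch combines scan-conversion with per-pixel color interpolation using barycentric weights; this is one common software formulation, assumed here purely for illustration, not necessarily the hardware scheme discussed later:

// Sketch: scan-convert one screen-space triangle and interpolate the
// vertex colors at each covered pixel via barycentric weights.
#include <algorithm>
#include <cstdio>

struct Vtx { float x, y, r, g, b; };

// Signed area measure used for coverage tests and barycentric weights.
float edge(const Vtx& a, const Vtx& b, float px, float py) {
    return (px - a.x) * (b.y - a.y) - (py - a.y) * (b.x - a.x);
}

void rasterize(const Vtx& a, const Vtx& b, const Vtx& c, int w, int h) {
    float area = edge(a, b, c.x, c.y);
    if (area == 0.0f) return;                      // degenerate triangle
    int x0 = std::max(0, (int)std::min({a.x, b.x, c.x}));
    int x1 = std::min(w - 1, (int)std::max({a.x, b.x, c.x}));
    int y0 = std::max(0, (int)std::min({a.y, b.y, c.y}));
    int y1 = std::min(h - 1, (int)std::max({a.y, b.y, c.y}));
    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x) {
            float px = x + 0.5f, py = y + 0.5f;    // sample at pixel center
            float wa = edge(b, c, px, py) / area;  // barycentric weights
            float wb = edge(c, a, px, py) / area;
            float wc = edge(a, b, px, py) / area;
            if (wa < 0 || wb < 0 || wc < 0) continue;  // outside triangle
            float r  = wa * a.r + wb * b.r + wc * c.r;
            float g  = wa * a.g + wb * b.g + wc * c.g;
            float bl = wa * a.b + wb * b.b + wc * c.b;
            std::printf("pixel (%d,%d): %.2f %.2f %.2f\n", x, y, r, g, bl);
        }
}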
For greater realism a technique called texture mapping is widely used. In the two-dimensional case, a texture is any form of discrete image, consisting of a two-dimensional array of texture elements, or texels for short. As an example, a texture can be a scanned photograph, wherein a texel consists of one numerical value for each color component (RGB). Additionally, a texture can be translucent. In this case, a texel has a fourth parameter to define the local opacity, commonly denoted by “α”.
During the design of an object (called the modeling stage), a texture is mapped on its surface by assigning texture coordinates to the vertices of the triangles.
After projecting a triangle on the screen for display, texture mapping raises the problem of determining which portions of the texture are covered by the individual pixels, and which color is assigned to a pixel accordingly. To determine the location of the pixel projection on the texture, the texture coordinates given at the vertices are interpolated at the pixel center. However, using the RGB-triple or RGBα-quadruple at or nearest to these coordinates would result in strong resampling artifacts (i.e., aliasing).
For example, in a hypothetical image of a sphere textured as a globe (i.e., oceans, continents, ice caps, etc.), the globe might cover the entire screen or just a few pixels, depending on the distance from the observer. Thus, a filter operation is performed for each pixel according to the size and shape of its projection on the texture.
For hardware systems, this is most often done using bi- or tri-linear interpolation within a “Mipmap.” A Mipmap is a collection of pre-filtered textures, each of them denoted a level. Level 0 holds the original texture. 2×2 texels in level 0 are averaged, and the resulting value is one texel in level 1. This is continued towards the top level, which in the case of square textures holds exactly one entry, the mean color of the entire texture.
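A minimal sketch of building one Mipmap level from the previous one by 2×2 averaging (gray-scale texels are assumed here for brevity; an RGB or RGBα texture would be filtered per component):

// Sketch of computing the next Mipmap level by averaging 2x2 texels.
#include <vector>

// 'src' is a square level of size n x n (n a power of two, n >= 2),
// stored row by row. The result is the next level of size n/2 x n/2.
std::vector<float> nextLevel(const std::vector<float>& src, int n) {
    int m = n / 2;
    std::vector<float> dst(m * m);
    for (int y = 0; y < m; ++y)
        for (int x = 0; x < m; ++x) {
            float sum = src[(2 * y)     * n + 2 * x]
                      + src[(2 * y)     * n + 2 * x + 1]
                      + src[(2 * y + 1) * n + 2 * x]
                      + src[(2 * y + 1) * n + 2 * x + 1];
            dst[y * m + x] = sum / 4.0f;   // mean of the 2x2 block
        }
    return dst;
}
// Repeating this until n == 1 yields the full pyramid; the single texel
// of the top level is the mean of the entire original texture.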
For bi-linear interpolation, a level is chosen with texels best matching the size of the pixel projection. The pixel color is bi-linearly interpolated from the four texels surrounding the projection of the pixel center.
For tri-linear interpolation, two adjacent levels are chosen: one level holding texels smaller than the pixel projection, and the other holding larger texels. In each level, a bi-linear interpolation is performed. Finally, the two results are linearly interpolated according to the size of the pixel projection.
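A sketch of both filtering modes, assuming square gray-scale levels and simple clamping at the texture border:

// Sketch of bi-linear filtering inside one Mipmap level and tri-linear
// filtering across two adjacent levels.
#include <vector>
#include <cmath>
#include <algorithm>

struct Level {                 // one square Mipmap level, gray-scale
    int n;                     // edge length in texels
    std::vector<float> texel;  // row-major, n*n entries
};

float fetch(const Level& L, int x, int y) {
    x = std::clamp(x, 0, L.n - 1);
    y = std::clamp(y, 0, L.n - 1);
    return L.texel[y * L.n + x];
}

// u, v in [0,1]: bi-linear blend of the four texels around the sample.
float bilinear(const Level& L, float u, float v) {
    float fx = u * L.n - 0.5f, fy = v * L.n - 0.5f;
    int x = (int)std::floor(fx), y = (int)std::floor(fy);
    float ax = fx - x, ay = fy - y;
    float top = (1 - ax) * fetch(L, x, y)     + ax * fetch(L, x + 1, y);
    float bot = (1 - ax) * fetch(L, x, y + 1) + ax * fetch(L, x + 1, y + 1);
    return (1 - ay) * top + ay * bot;
}

// Tri-linear: bi-linear in two adjacent levels, then a linear blend
// according to the fractional level-of-detail 'lod'.
float trilinear(const Level& fine, const Level& coarse,
                float u, float v, float lod) {
    float f = lod - std::floor(lod);           // blend factor between levels
    return (1 - f) * bilinear(fine, u, v) + f * bilinear(coarse, u, v);
}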
A typical embodiment of this method would therefore incorporate a memory system holding the Mipmap and a processor for interpolating texture coordinates across the screen triangle, accessing the texture memory and performing the interpolations.
Rasterization and texturing are typically carried out by a combined rasterizer/texturing unit, connected to a local memory system which stores the textures.
Interpolation of the different quantities (i.e., color components RGB, texture coordinates) across a screen triangle is done by computing these values at a given vertex, hereinafter called the “starting vertex,” computing the derivatives of these quantities with respect to the screen coordinates, and adding the derivatives while stepping from one pixel to the next or from one line to the next.
Computing the derivatives is part of the setup stage and usually carried out by the host CPU or a dedicated setup processor.
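A small sketch of this incremental scheme (the starting value and derivatives below are illustrative numbers that a setup stage would normally compute from the triangle's vertices):

// Sketch of incremental interpolation: a quantity q (e.g., one color
// component or a texture coordinate) is evaluated once at the starting
// vertex, and its screen-space derivatives dq/dx and dq/dy are then
// simply added while stepping across the pixels.
#include <cstdio>

int main() {
    // Values produced by the setup stage (illustrative numbers).
    float qStart = 0.10f;   // value of q at the starting vertex
    float dqdx   = 0.02f;   // change of q per pixel step in x
    float dqdy   = 0.05f;   // change of q per line step in y

    for (int y = 0; y < 4; ++y) {            // step from line to line
        float q = qStart + y * dqdy;
        for (int x = 0; x < 4; ++x) {        // step from pixel to pixel
            std::printf("q(%d,%d) = %.2f\n", x, y, q);
            q += dqdx;                        // one addition per pixel
        }
    }
}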
Recently, extensions to three-dimensional textures have been introduced. A 3D-texture is simply a three-dimensional array of texels. A Mipmap in this case is a collection of pre-filtered 3D-textures. An entry in level n+1 is computed from 2×2×2 entries in level n. For each pixel, three texture coordinates are interpolated. In the case of filtering in only one Mipmap level, a tri-linear interpolation is performed. Using two levels, two tri-linear interpolations are followed by a final linear interpolation as explained above.
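A sketch of tri-linear interpolation within a single 3D-texture level (gray-scale texels and border clamping are simplifying assumptions); with two Mipmap levels this would be performed twice and followed by one further linear interpolation:

// Sketch: blend the eight texels surrounding a sample point in a
// 3D-texture along the three texture coordinates.
#include <vector>
#include <cmath>
#include <algorithm>

struct Texture3D {
    int n;                     // edge length; n*n*n texels, gray-scale
    std::vector<float> texel;  // index = (z*n + y)*n + x
};

float fetch(const Texture3D& t, int x, int y, int z) {
    x = std::clamp(x, 0, t.n - 1);
    y = std::clamp(y, 0, t.n - 1);
    z = std::clamp(z, 0, t.n - 1);
    return t.texel[(z * t.n + y) * t.n + x];
}

float trilinear(const Texture3D& t, float u, float v, float w) {
    float fx = u * t.n - 0.5f, fy = v * t.n - 0.5f, fz = w * t.n - 0.5f;
    int x = (int)std::floor(fx), y = (int)std::floor(fy), z = (int)std::floor(fz);
    float ax = fx - x, ay = fy - y, az = fz - z;
    float c[2];                                  // blend two z-slices
    for (int k = 0; k < 2; ++k) {
        float top = (1 - ax) * fetch(t, x, y,     z + k) + ax * fetch(t, x + 1, y,     z + k);
        float bot = (1 - ax) * fetch(t, x, y + 1, z + k) + ax * fetch(t, x + 1, y + 1, z + k);
        c[k] = (1 - ay) * top + ay * bot;
    }
    return (1 - az) * c[0] + az * c[1];
}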
3D-texture mapping has advantages whenever a 2D-texture cannot be mapped consistently on a 3D-surface, as is the case for complex-shaped wooden objects. Hardware accelerators for 3D-texture mapping are available today.
B. Volume Graphics
Volume graphics, as opposed to surface-oriented graphics, is used whenever the interior of objects is important or when there is no clearly defined surface. Typical applications include medical diagnosis, non-destructive material testing and geology.
For processing within a computer, the properties of interest (e.g., the density or the temperature) of the objects are sampled on a three-dimensional grid to give a discrete scalar field, the so-called “volume data set.” The elements of a volume data set are called voxels. Usually the grid is rectangular and isotropic in at least two dimensions. Typical dimensions are 128³ up to 1024³.
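A minimal sketch of such a volume data set as a data structure (dimensions and field names are illustrative assumptions):

// Sketch of a volume data set: a scalar value sampled on a regular
// three-dimensional grid.
#include <vector>

struct Volume {
    int nx, ny, nz;              // grid dimensions, e.g. 128 to 1024 per axis
    std::vector<float> voxel;    // nx*ny*nz samples (density, temperature, ...)

    float& at(int x, int y, int z) {
        return voxel[(z * ny + y) * nx + x];   // row-major 3D indexing
    }
};

// Example: Volume vol{256, 256, 256, std::vector<float>(256 * 256 * 256)};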
The visualization of these data sets consists of two parts: the segmentation, or classification, during which it is determined which subset of the data set should be visible (e.g., only the bones in a medical data set), and the meaningful display of these structures on the output device.
It may be assumed that the result of the classification is a parameter per voxel defining its visibility or opacity, hereinafter called “α”. Usually α runs from 0 to 1, where α=0 means that the voxel is completely transparent, and α=1 defines a completely opaque voxel.
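A sketch of a very simple classification step, assuming a linear threshold window as the mapping from scalar value to opacity (the window bounds are illustrative, e.g., to retain only bone-like densities):

// Sketch: map each voxel's scalar value to an opacity alpha in [0,1].
#include <vector>
#include <algorithm>
#include <cstddef>

std::vector<float> classify(const std::vector<float>& density,
                            float lo, float hi) {
    std::vector<float> alpha(density.size());
    for (std::size_t i = 0; i < density.size(); ++i) {
        // Linear ramp from transparent (alpha = 0) below 'lo'
        // to fully opaque (alpha = 1) above 'hi'.
        float a = (density[i] - lo) / (hi - lo);
        alpha[i] = std::clamp(a, 0.0f, 1.0f);
    }
    return alpha;
}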
Furthermore, it is assumed that a shading operation is performed per voxel. Each voxel is assumed to be part of a surface element, oriented according to the local gradient, and thus illumination models from surface-oriented graphics can be applied. After classification and shading, the voxels of the data set are defined by RGBα-quadruples.
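A sketch of such a per-voxel shading step, estimating the gradient by central differences and applying a Lambertian (diffuse) term; the light direction is an illustrative assumption:

// Sketch: estimate the local gradient of the volume by central
// differences, treat it as a surface normal, and compute a diffuse term.
#include <vector>
#include <cmath>
#include <algorithm>

struct Volume {
    int nx, ny, nz;
    std::vector<float> voxel;
    float at(int x, int y, int z) const {
        x = std::clamp(x, 0, nx - 1);
        y = std::clamp(y, 0, ny - 1);
        z = std::clamp(z, 0, nz - 1);
        return voxel[(z * ny + y) * nx + x];
    }
};

// Diffuse intensity at voxel (x,y,z) for a unit light direction (lx,ly,lz).
float shade(const Volume& v, int x, int y, int z,
            float lx, float ly, float lz) {
    float gx = v.at(x + 1, y, z) - v.at(x - 1, y, z);   // central differences
    float gy = v.at(x, y + 1, z) - v.at(x, y - 1, z);
    float gz = v.at(x, y, z + 1) - v.at(x, y, z - 1);
    float len = std::sqrt(gx * gx + gy * gy + gz * gz);
    if (len == 0.0f) return 0.0f;                       // no local surface
    float ndotl = (gx * lx + gy * ly + gz * lz) / len;
    return std::max(ndotl, 0.0f);                       // Lambertian term
}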
For the display, or rendering, a number of methods have been developed.
