Method and apparatus for supporting non-power-two texture...

Computer graphics processing and selective visual display system – Computer graphics processing – Three-dimension

Reexamination Certificate


active

06184889

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Technical Field
The present invention relates in general to data processing systems and in particular to data processing systems with graphics displays. Still more particularly, the present invention relates to accurate rendering and display of three dimensional volumes utilizing a data processing system having a three dimensional graphics adapter and a high resolution display.
2. Description of the Related Art
Graphics image displays have progressed from depicting a flat, two dimension (“2D”) representation of an object to depicting a 2D simulation of a three dimension (“3D”) solid. Construction of 3D objects may be done through the utilization of so-called “wireframe modeling,” where a wireframe model is a construct of the object utilizing lines to render the edges of an object. A surface can be constructed by shading or filling in the wireframe representation to give the appearance of a solid, three dimensional object.
Surface detail is improved by utilizing a technique called texture mapping. Texture mapping is the process of adding patterns and photo-realistic images to displayed objects. Typically, applications store descriptions of primitives (points, lines, polygons, etc.) in memory that define components of an object. When a primitive is rasterized (converted from a mathematical element to an equivalent image composed of pixels on a display screen), a texture coordinate is computed for each “pixel” (short for picture element). Texture coordinates assigned to the vertices of the primitives are interpolated to calculate texture coordinates for each pixel utilized to fill the polygon. A texture coordinate is utilized by the texture mapping engine to look up “texel” (short for texture element, representing a stored texture value) values from an enabled texture map. At each rendered pixel, several texels may be utilized to define one or more surface properties, such as shading or color, at that pixel.
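The per-pixel interpolation of vertex texture coordinates described above can be sketched as follows. This is a minimal illustration; the function names are ours for the example, not part of any particular graphics API:

```python
def lerp(a, b, t):
    """Linear interpolation between scalars a and b, t in [0, 1]."""
    return a + (b - a) * t

def interpolate_texcoord(v0, v1, t):
    """Interpolate a 2-D texture coordinate between two vertices.

    v0 and v1 are the (s, t) texture coordinates assigned at the
    primitive's vertices; t is the pixel's normalized position
    along the edge being filled.
    """
    return (lerp(v0[0], v1[0], t), lerp(v0[1], v1[1], t))

# A pixel halfway along an edge whose endpoints carry texture
# coordinates (0.0, 0.0) and (1.0, 0.5):
print(interpolate_texcoord((0.0, 0.0), (1.0, 0.5), 0.5))  # (0.5, 0.25)
```

Real rasterizers interpolate across the whole polygon (typically with perspective correction), but the principle per edge is the same.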
Texture mapping hardware typically supports width, height and depth sizes that are power-of-two (a term describing data measured in units of 2^n, where n is an integer in a range from zero to infinity). A texture map is basically a one, two or three dimensional image composed of elements (texels) that can have one, two, three or four components—R, G, B and A. Texture coordinates are floating point numbers between 0 and 1 and are utilized to determine the proper coordinates to begin adding texture to an object. Also, texture coordinates determine how much texture to add when moving pieces of the texture from texture memory to the computer display.
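The power-of-two constraint is straightforward to express in code. The sketch below (helper names are our own, not from any graphics library) tests whether a dimension is a power of two and finds the next power of two at or above it:

```python
def is_power_of_two(n):
    """True when n equals 2**k for some integer k >= 0.

    A power of two has exactly one bit set, so n & (n - 1) clears
    that bit and leaves zero.
    """
    return n > 0 and (n & (n - 1)) == 0

def next_power_of_two(n):
    """Smallest power of two greater than or equal to n."""
    p = 1
    while p < n:
        p <<= 1
    return p

print(is_power_of_two(64))    # True
print(is_power_of_two(57))    # False
print(next_power_of_two(57))  # 64
```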
A texture mapping engine utilizes special, high speed dedicated memory and maps data contained in the dedicated memory to the graphics frame buffer. Texture memory is usually contained on a removable graphics card in the data processing system. The graphics card is limited in size since it has to fit within the data processing system housing. This limits the amount of texture memory available on the adapter due to space and cost limitations. A typical medical imaging system such as a Computed Tomography (“CT”) scanner, which generates high definition three dimensional data of hard tissue in patients using multiple X-ray exposures, usually generates 32 megabytes or more of data. Magnetic Resonance Imaging (“MRI”) devices, which excel at soft tissue scans, may generate the same amount of data or more per MRI scan. Because of limited fast memory, an application may break the data into eight portions of four megabytes each and load each portion, one at a time, into texture memory to process the data. Processed data is then sent to a frame buffer where the data is assembled and finally scanned by a digital-to-analog converter and sent to a high-resolution (1280×1024 or more) computer display.
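The portion-by-portion loading described above amounts to slicing the raw buffer into fixed-size pieces. A hypothetical illustration follows; in practice the upload of each portion goes through a graphics API rather than Python byte slicing:

```python
def split_into_portions(data, portion_bytes=4 * 1024 * 1024):
    """Split a byte buffer into fixed-size portions, each small
    enough to fit in the adapter's texture memory."""
    return [data[i:i + portion_bytes]
            for i in range(0, len(data), portion_bytes)]

volume = bytes(32 * 1024 * 1024)   # stand-in for a 32 MB scan
portions = split_into_portions(volume)
print(len(portions))       # 8
print(len(portions[0]))    # 4194304 (4 MB)
```

Each portion would then be loaded into texture memory, processed, and its result written to the frame buffer before the next portion is loaded.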
There are drawbacks to traditional rendering, where only the surface of an object can be displayed. The interior of the rendered object, if displayed in a sectional view, is homogeneous; interior details are not shown. Volume rendering (capable of displaying all contents of a displayed volume) and a 3D texture map are utilized to display information derived from the interior of the object.
Volume rendering, utilizing the proper algorithms, may be utilized to reveal interior details of a 3D image. For example, a human head may be displayed on a computer screen as a two dimensional photograph. The head may also be reproduced utilizing a wireframe representation and texture mapping to produce a simulated three dimensional surface. The photograph, as well as the simulated three dimensional solid, would reveal surface features of the head, such as hair, nose, ears, eyes, etc. However, a volume rendering of the head may be manipulated to display surface features as translucent, then reveal bone, brain, blood vessels, etc., as solid and simulated in three dimensions. The resulting image has the quality of a volume composed of a mixture of materials with varying translucence and opacity.
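The mixing of translucent and opaque materials along each line of sight is conventionally done with front-to-back alpha compositing. A minimal single-channel sketch of that accumulation (our own formulation for illustration, not taken from the source):

```python
def composite_front_to_back(samples):
    """Composite (intensity, opacity) samples taken front to back
    along a viewing ray through the volume."""
    color = 0.0   # accumulated intensity
    alpha = 0.0   # accumulated opacity
    for c, a in samples:
        color += (1.0 - alpha) * a * c   # attenuate by what is in front
        alpha += (1.0 - alpha) * a
        if alpha >= 0.999:               # early ray termination
            break
    return color, alpha

# A translucent sample (opacity 0.5) in front of an opaque one:
print(composite_front_to_back([(1.0, 0.5), (0.5, 1.0)]))  # (0.75, 1.0)
```

A translucent skull sample attenuates, but does not hide, the opaque bone or vessel samples behind it, which is exactly the effect described above.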
In order to implement manipulation of a rendered simulated 3D image, each volume element (“voxel,” which is similar to a pixel, with a third, depth, dimension displayed) in the volume rendering display is assigned a numerical value based on its location within the volume. A numerical value may be associated with a color and an opacity at that particular point. The numerical value may be assigned an arbitrary value between zero and one, e.g., 0.1. In the case of opacity, if the opacity scale ranks 1 as totally opaque and 0 as transparent, a point with the value 0.1 would appear cloudy, or nearly transparent. The set of points with equal numerical values in the volume is termed an iso surface. The iso surface value may define a specific structure in the volume, such as the cornea of the eye or a bone. Additionally, an opacity level may be arbitrarily attached to all scalar fields. If opacity for the skull was set to a low value, it would appear translucent and objects within the skull having higher opacities would be more visible. Volume rendering of the head, with operator determined opacity values, depicts boundaries where differing opacities form level surfaces depicting various objects within the head.
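The mapping from a voxel's numerical value to a color and an opacity is commonly called a transfer function. The sketch below uses purely illustrative thresholds; the tissue labels and cut-off values are assumptions made for the example, not taken from the source:

```python
def transfer_function(value):
    """Map a normalized voxel value in [0, 1] to (r, g, b, opacity).

    Thresholds are illustrative only: low values render as nearly
    transparent soft tissue, high values as opaque bone.
    """
    if value < 0.3:      # soft tissue: cloudy, nearly transparent
        return (1.0, 0.8, 0.7, 0.1)
    elif value < 0.7:    # vessels: semi-opaque
        return (0.8, 0.1, 0.1, 0.5)
    else:                # bone: solid
        return (1.0, 1.0, 0.9, 1.0)

print(transfer_function(0.1)[3])  # 0.1 -> nearly transparent
print(transfer_function(0.9)[3])  # 1.0 -> fully opaque
```

An operator adjusts such a table interactively, e.g. lowering the skull's opacity to make the structures inside it visible.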
Volume rendering is especially useful in CT, MRI (Magnetic Resonance Imaging) and seismic scanners. Rendering engines, for handling data received from these devices, require data with dimensions that are pre-defined and usually expressed as a power-of-two (see above). This is a problem because texture data derived from the aforementioned CTs, MRIs and seismic scanners is usually not power-of-two in one or more dimensions.
In most three dimensional rendering engines, the width, height and depth of the raw data are processed utilizing dimensions with power-of-two limitations (historical, rather than hardware or software, limitations). Generally, width and height coordinates of data are supplied with power-of-two dimensions, but the depth is seldom available in power-of-two dimensions. Data may be acquired from a scan and processed for utilization by a computer in either one large block of three dimensional data or multiple slices of two dimensional data (for our purposes, three dimensional data will be discussed here). For example, sensor data received by a rendering engine may be 128 by 128 by 57 units. Utilization of the sensor data will present a problem because of the non-power-of-two dimension of the depth. The usual method for rendering an image from data having non-power-of-two measurements is to re-sample the data so that it measures 128 by 128 by 32 or 64—a pre-processing step that forces data to fit limitations of the graphics rendering engine. However, forcing data to fit limits prescribed by the graphics engine may cause faulty representation of the raw data and artifacts.
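The re-sampling step described above, forcing 57 slices into a power-of-two depth such as 64, can be sketched with nearest-neighbor selection along the depth axis. The function name and the nearest-neighbor choice are illustrative assumptions; real pipelines may filter instead, but either way the raw data is distorted:

```python
def resample_depth_nearest(slices, new_depth):
    """Resample a stack of 2-D slices to new_depth slices by
    nearest-neighbor selection along the depth axis."""
    old_depth = len(slices)
    return [slices[min(old_depth - 1, int(i * old_depth / new_depth))]
            for i in range(new_depth)]

stack = list(range(57))   # stand-ins for 128x128 slices, depth 57
padded = resample_depth_nearest(stack, 64)
print(len(padded))              # 64
print(padded[0], padded[63])    # 0 56
```

Several source slices are duplicated to reach depth 64 (or dropped to reach 32), which is precisely the source of the faulty representation and artifacts the passage warns about.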
FIGS. 3A-3C depict an existing method of rendering non-power-of-two segments. Perspective 300, in FIG. 3A, represents a volume of raw data to be rendered. A digital data segment, comprising five subvolume elements (“voxels”), and illu
