Super-sampling and gradient estimation in a ray-casting...

Classification: Computer graphics processing and selective visual display system – Computer graphics processing – Three-dimension


Details

U.S. classes: 345/422; 345/424
Type: Reexamination Certificate
Status: active
Patent number: 6,483,507

ABSTRACT:

BACKGROUND OF THE INVENTION
The present invention is related to the field of computer graphics, and in particular to volume graphics.
Volume graphics is the subfield of computer graphics that deals with the visualization of objects or phenomena represented as sampled data in three or more dimensions. These samples are called volume elements, or “voxels,” and contain digital information representing physical characteristics of the objects or phenomena being studied. For example, voxel values for a particular object or system may represent density, type of material, temperature, velocity, or some other property at discrete points in space throughout the interior and in the vicinity of that object or system.
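As a minimal sketch of the kind of data set described above (the VolumeData type, its field names, and the 16-bit sample format are illustrative assumptions, not taken from the patent), a regular voxel grid might be represented as follows:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Illustrative sketch (not from the patent): a regular 3-D voxel grid in
    // which each voxel holds one scalar sample, e.g. a density value from a
    // tomographic scan.
    struct VolumeData {
        int nx, ny, nz;                 // grid dimensions along X, Y and Z
        std::vector<uint16_t> samples;  // nx * ny * nz scalar voxel values

        // Voxel value at integer grid coordinates (x, y, z), stored X-fastest.
        uint16_t at(int x, int y, int z) const {
            return samples[(static_cast<std::size_t>(z) * ny + y) * nx + x];
        }
    };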
Volume rendering is the part of volume graphics concerned with the projection of volume data as two-dimensional images for purposes of printing, display on computer terminals, and other forms of visualization. By assigning colors and transparency to particular voxel data values, different views of the exterior and interior of an object or system can be displayed. For example, a surgeon needing to examine the ligaments, tendons, and bones of a human knee in preparation for surgery can utilize a tomographic scan of the knee and cause voxel data values corresponding to blood, skin, and muscle to appear to be completely transparent. The resulting image then reveals the condition of the ligaments, tendons, bones, etc. which are hidden from view prior to surgery, thereby allowing for better surgical planning, shorter surgical operations, less surgical exploration and faster recoveries. In another example, a mechanic using a tomographic scan of a turbine blade or welded joint in a jet engine can cause voxel data values representing solid metal to appear to be transparent while causing those representing air to be opaque. This allows the viewing of internal flaws in the metal that would otherwise be hidden from the human eye.
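The classification step described above, in which a color and an opacity are assigned to each voxel value, can be sketched as follows (the TransferFunction and Rgba names and the 8-bit value range are illustrative assumptions, not the patent's design):

    #include <array>
    #include <cstdint>

    // Illustrative sketch (not from the patent): an RGBA transfer function with
    // one color/opacity entry per possible 8-bit voxel value. Setting alpha to 0
    // makes a material fully transparent; alpha near 1 makes it opaque.
    struct Rgba { float r, g, b, a; };

    struct TransferFunction {
        std::array<Rgba, 256> table{};

        Rgba classify(uint8_t voxelValue) const { return table[voxelValue]; }
    };

For the knee example, entries corresponding to blood, skin, and muscle values would be given zero opacity, while entries for bone, ligament, and tendon values would receive distinct colors and non-zero opacity.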
Real-time volume rendering is the projection and display of volume data as a series of images in rapid succession, typically at 30 frames per second or faster. This makes it possible to create the appearance of moving pictures of the object, phenomenon, or system of interest. It also enables a human operator to interactively control the parameters of the projection and to manipulate the image, while providing to the user immediate visual feedback. It will be appreciated that projecting tens of millions or hundreds of millions of voxel values to an image requires enormous amounts of computing power. Doing so in real time requires substantially more computational power.
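For a sense of scale (illustrative arithmetic, not figures from the patent): a 256×256×256 volume contains roughly 16.8 million voxels, so rendering it at 30 frames per second implies on the order of 500 million voxel values processed per second, and a 512×512×512 volume raises the figure to about 4 billion per second.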
Further background on volume rendering is included in a Doctoral Dissertation entitled “Architectures for Real-Time Volume Rendering” submitted by Hanspeter Pfister to the Department of Computer Science at the State University of New York at Stony Brook in December 1996, and in U.S. Pat. No. 5,594,842, “Apparatus and Method for Real-time Volume Visualization.” Additional background on volume rendering is presented in a book entitled “Introduction to Volume Rendering” by Barthold Lichtenbelt, Randy Crane, and Shaz Naqvi, published in 1998 by Prentice Hall PTR of Upper Saddle River, N.J.
The reconstruction of images from sampled data is the domain of a branch of mathematics known as “sampling theory.” By the well-known Nyquist sampling theorem, the frequency at which data are sampled must equal or exceed twice the highest spatial frequency present in the data in order to obtain a faithful reconstruction of the information in the data. This constraint holds in multiple dimensions just as it does in one dimension.
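Expressed as a formula, with f_s denoting the sampling frequency and f_max the highest spatial frequency present in the data:

    f_s \ge 2\, f_{\max}

or, equivalently, the spacing between successive samples must not exceed half the period of the finest detail in the volume.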
Volume data sets are typically organized as samples on regular three-dimensional grids, and the grid spacing defines the spatial frequency inherent in the data. In order to project three-dimensional volume data onto a two-dimensional image plane by ray-casting, the data must be re-sampled at a sampling frequency equal to or greater than the Nyquist frequency. If the sampling frequency is not sufficiently high, undesirable visual artifacts caused by aliasing appear in the rendered image, especially in moving images such as those being manipulated by a human viewer in real time. Thus one of the challenges in real-time volume rendering is the efficient re-sampling of volume data in support of high-quality rendering from an arbitrary and changing view direction.
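As a minimal illustration of the re-sampling step (a generic trilinear-interpolation sketch using the hypothetical VolumeData structure from the earlier example, not the interpolation hardware of this patent):

    // Illustrative sketch (not from the patent): trilinearly interpolate the
    // volume at a non-integer position (x, y, z). A ray caster evaluates this at
    // regularly spaced points along each ray; the spacing must be fine enough to
    // satisfy the Nyquist criterion. Boundary handling is omitted for brevity.
    float sampleTrilinear(const VolumeData& v, float x, float y, float z) {
        const int x0 = static_cast<int>(x), y0 = static_cast<int>(y), z0 = static_cast<int>(z);
        const float fx = x - x0, fy = y - y0, fz = z - z0;
        auto lerp = [](float a, float b, float t) { return a + t * (b - a); };

        const float c00 = lerp(v.at(x0, y0,     z0    ), v.at(x0 + 1, y0,     z0    ), fx);
        const float c10 = lerp(v.at(x0, y0 + 1, z0    ), v.at(x0 + 1, y0 + 1, z0    ), fx);
        const float c01 = lerp(v.at(x0, y0,     z0 + 1), v.at(x0 + 1, y0,     z0 + 1), fx);
        const float c11 = lerp(v.at(x0, y0 + 1, z0 + 1), v.at(x0 + 1, y0 + 1, z0 + 1), fx);
        return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz);
    }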
Another aspect of volume rendering is the application of artificial “illumination” or “lighting” to the rendered image, which is the creation of highlights and shadows that are essential to a realistic two-dimensional representation of a three-dimensional object. Lighting techniques are well-known in the computer graphics art and are described, for example, in the textbook “Computer Graphics: Principles and Practice,” 2nd edition, by J. Foley, A. van Dam, S. Feiner, and J. Hughes, published by Addison-Wesley of Reading, Mass., in 1990.
One illumination technique that generates very high quality images is known as “Phong illumination” or “Phong shading”. The Phong illumination algorithm requires knowledge of the orientation of surfaces appearing in the rendered image. Surface orientation is indicated by a vector referred to as a “normal”. In volume rendering, one way to obtain the normal is to estimate gradients for the samples of the volume data. Various techniques can be used to calculate gradients. According to one commonly-used technique, the gradients are estimated by calculating “central differences”. That is, the gradient along a given axis at a sample point is estimated from the difference between the values of the two neighboring samples that surround the point along that axis.
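A minimal sketch of the central-difference estimate described above, again using the hypothetical VolumeData structure (this is the generic grid-aligned case, not the view-dependent situation this patent addresses):

    struct Vec3 { float x, y, z; };

    // Illustrative sketch (not from the patent): estimate the gradient at an
    // interior grid point (x, y, z) by central differences -- the difference of
    // the two neighboring samples along each axis, divided by their spacing of
    // two grid units. Boundary handling is omitted.
    Vec3 centralDifferenceGradient(const VolumeData& v, int x, int y, int z) {
        return {
            (v.at(x + 1, y, z) - v.at(x - 1, y, z)) * 0.5f,
            (v.at(x, y + 1, z) - v.at(x, y - 1, z)) * 0.5f,
            (v.at(x, y, z + 1) - v.at(x, y, z - 1)) * 0.5f,
        };
    }

The negated, normalized gradient then serves as the surface normal in the Phong illumination computation.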
The performance of illumination algorithms in general is very sensitive to the accuracy of the gradient calculation. To obtain the best-quality rendered image, it is important that gradients be calculated very accurately.
In a ray-casting system in which sample planes are normal to the rays, it is fairly straightforward to calculate gradients from samples of the volume data using central differences. There are, however, systems in which sample planes are parallel to the planes of voxels, so that the angle between sample planes and rays depends on the view angle. One example of such a system is shown in the aforementioned Doctoral Dissertation. In these systems, the calculation of gradients is substantially more difficult, because the weight to be given to neighboring samples depends on the viewing angle.
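To see why a view-dependent factor enters at all (a generic illustration, not the prior technique or the method of this patent): if object-space coordinates x are related to the view-dependent sample-space coordinates s by x = A(θ)s, where the matrix A depends on the viewing angle θ, then a gradient computed by finite differences in sample space must be mapped back to object space by the inverse transpose of A:

    \nabla_x f \;=\; A(\theta)^{-T}\, \nabla_s f

so the effective weights applied to neighboring samples change whenever the view direction changes.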
One prior technique for calculating gradient values has been to calculate an intermediate value using unity weights, and then to apply a correction factor that is a function of viewing angle. Unfortunately, this technique is both complicated and inaccurate. It would be desirable to improve the performance of illumination algorithms such as Phong illumination by enabling the calculation of accurate gradients in a manner that lends itself to efficient hardware implementation in a volume rendering processor.
BRIEF SUMMARY OF THE INVENTION
In accordance with the present invention, a ray-casting volume rendering processor is disclosed in which volume data is efficiently re-sampled as an image is rendered from an arbitrary and changing view direction. Accurate gradients are calculated for use in a Phong illumination algorithm, and high rendering throughput is maintained while the appearance of sampling-induced artifacts in the rendered image is minimized.
The disclosed volume rendering processor includes voxel memory interface logic operative to continually retrieve voxels from a voxel memory in which a volume data set is stored. The voxel memory is scanned in order with respect to a Cartesian coordinate system having mutually perpendicular X, Y and Z coordinate axes, the Z axis being the axis more nearly parallel to a predefined viewing direction than either the X or Y axis. Interpolation logic coupled to the voxel memory interface logic continually receives the retrieved voxels and calculates samples such that (i) each sample lies along a corresponding imaginary ray extending …
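The slice-ordered memory scan described above can be sketched roughly as follows (a software stand-in for a hardware pipeline; the processVoxel placeholder for the downstream interpolation stage is purely hypothetical):

    // Illustrative sketch (not from the patent): traverse the volume slice by
    // slice along Z -- the axis most nearly parallel to the view direction --
    // then row by row and voxel by voxel, so voxel memory is read in storage
    // order.
    void processVoxel(uint16_t value);  // hypothetical downstream interpolation stage

    void scanVolume(const VolumeData& v) {
        for (int z = 0; z < v.nz; ++z)           // Z slices
            for (int y = 0; y < v.ny; ++y)       // Y rows within a slice
                for (int x = 0; x < v.nx; ++x)   // X voxels within a row
                    processVoxel(v.at(x, y, z));
    }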
