Temporal and spatial coherent ray tracing for rendering...

Computer graphics processing and selective visual display system – Computer graphics processing – Three-dimension


Details

Class: C345S557000
Type: Reexamination Certificate
Status: active
Patent number: 06556200


FIELD OF THE INVENTION
This invention relates generally to ray tracing, and more particularly to coherent ray tracing.
BACKGROUND OF THE INVENTION
Visualization systems must handle many graphical components to accurately represent complex scenes. The scene may need to be segmented so that the viewer can focus on areas of interest. Programmable shading and texture maps are required for complex surfaces, and realistic lighting models are needed for convincing illumination. A number of prior art techniques have been developed to reduce the time it takes to render complex scenes at high quality. These techniques include culling, lazy evaluation, reordering, and caching.
Depending on the specific visualization task at hand, these techniques may be implemented in hardware or software. Software solutions are tractable, but do not lend themselves to real-time visualization tasks. Designing an efficient hardware architecture for programmable volume visualization is extremely difficult because of the complexities involved; therefore, most hardware solutions are application specific.
For example, ray tracing has been widely used as a global illumination technique to generate realistic images in the computer graphics field. In ray tracing, rays are generated from a single point of view and traced through the scene. As the rays encounter scene components, the rays are realistically reflected and refracted. Reflected and refracted rays can themselves be reflected and refracted further, and so on. Needless to say, even in simple scenes the number of rays to be processed grows exponentially. For this reason, ray tracing has been confined to scenes defined only by geometry, e.g., polygons and parametric patches. Ray tracing of volumetric data has universally been recognized as a difficult problem.
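To make the recursive structure concrete, the following is a minimal sketch in Python; the one-sphere scene, light direction, and material constant are illustrative assumptions, not from the patent, and refraction is omitted for brevity (it would add a second recursive call in the same way). The max_depth cutoff is what bounds the otherwise exponential ray tree.

```python
import numpy as np

# Minimal recursive ray-tracing sketch. A primary ray is generated from a
# single viewpoint; each reflective hit spawns a secondary ray, so the ray
# tree grows exponentially with depth and must be cut off. The one-sphere
# scene, light, and material constants below are illustrative.

SPHERE_C = np.array([0.0, 0.0, -3.0])        # sphere center
SPHERE_R = 1.0                               # sphere radius
LIGHT_DIR = np.array([0.577, 0.577, 0.577])  # unit vector toward the light
REFLECTIVITY = 0.4                           # illustrative material constant

def intersect_sphere(origin, direction):
    """Distance to the sphere along the ray, or None on a miss."""
    oc = origin - SPHERE_C
    b = np.dot(oc, direction)
    disc = b * b - (np.dot(oc, oc) - SPHERE_R ** 2)
    if disc < 0.0:
        return None
    t = -b - np.sqrt(disc)
    return t if t > 1e-4 else None

def trace(origin, direction, depth, max_depth=3):
    if depth > max_depth:                    # bound the exponential ray tree
        return 0.0                           # background radiance
    t = intersect_sphere(origin, direction)
    if t is None:
        return 0.0
    point = origin + t * direction
    normal = (point - SPHERE_C) / SPHERE_R
    local = max(np.dot(normal, LIGHT_DIR), 0.0)              # diffuse term
    reflected = direction - 2.0 * np.dot(direction, normal) * normal
    return (1.0 - REFLECTIVITY) * local + REFLECTIVITY * trace(
        point, reflected, depth + 1, max_depth)

# One primary ray from the eye straight down the view axis:
print(trace(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, -1.0]), 0))
```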
For volume visualization, simpler ray casting is generally used. Ray casting is ray tracing without reflected or refracted rays. In ray casting, the effects of reflected and refracted rays are ignored, and attempts to provide realistic illumination are handled by other techniques. Yet even relatively simple ray casting is computationally expensive for visualizing volume data. For this reason, prior art solutions have generally proposed special-purpose volume rendering architectures.
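By contrast with the recursive example above, a ray caster only marches a single primary ray through the volume and composites samples front to back. A minimal sketch follows; the random synthetic volume and the linear opacity transfer function are illustrative, and nearest-neighbor sampling stands in for trilinear filtering.

```python
import numpy as np

# Ray-casting sketch: samples along a single primary ray are composited
# front to back; no reflected or refracted rays are spawned. The random
# volume and the linear opacity transfer function are illustrative.

rng = np.random.default_rng(0)
volume = rng.random((64, 64, 64))            # scalar field in [0, 1]

def cast_ray(origin, direction, step=0.5, n_steps=128):
    color, alpha = 0.0, 0.0
    pos = origin.astype(float)
    for _ in range(n_steps):
        idx = tuple(np.clip(pos.astype(int), 0, 63))
        s = volume[idx]                      # sample the volume
        a = 0.05 * s                         # opacity transfer function
        color += (1.0 - alpha) * a * s       # front-to-back compositing
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                     # early ray termination
            break
        pos += step * direction
    return color

print(cast_ray(np.array([0.0, 32.0, 32.0]), np.array([1.0, 0.0, 0.0])))
```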
Recently, hardware acceleration of ray tracing geometric models has been proposed, see ART at “www.artrender.com/technology/ar250.html.” The ART design included parallel ray tracing engines which trace bundles of rays all the way to completion before moving on to the next bundle of rays. The input scene data were stored in the host main memory and broadcast to the processor elements. While the shading sub-system included a programmable co-processor, the ray tracing engines were ASIC implementations.
Gunther et al., in “VIRIM: A Massively Parallel Processor for Real-Time Volume Visualization in Medicine,” Proceedings of the 9th Eurographics Workshop on Graphics Hardware, pp. 103-108, 1994, described parallel hardware. Their VIRIM architecture was a hardware realization of the Heidelberg ray casting algorithm. The volume data were replicated in each module. The VIRIM system could achieve 10 Hz for a 256×256×128 volume with four modules; however, each module used three boards, for a total of twelve boards.
Doggett et al., in “A Low-Cost Memory Architecture for PCI-based Interactive Volume Rendering,” Proceedings of the SIGGRAPH-Eurographics Workshop on Graphics Hardware, pp. 7-14, 1999, described an architecture that implemented image-order volume rendering. The volume was stored in DIMMs on the rendering board, and each sample re-read the voxel neighborhood required for that sample; no buffering of data occurred. While the system included a programmable DSP for ray generation, the rest of the pipeline was FPGA or ASIC.
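The memory traffic implied by re-reading each sample's neighborhood can be seen in ordinary trilinear resampling: every sample touches the 8 voxels around it, so without buffering, adjacent samples repeatedly re-fetch the same voxels. A sketch, with an illustrative tiny volume:

```python
import numpy as np

# Trilinear resampling touches the 8 voxels around each sample point, so a
# design with no buffering re-reads mostly the same voxels for adjacent
# samples, multiplying memory traffic. The tiny volume is illustrative.

def trilinear(volume, p):
    """Trilinearly interpolate the volume at continuous position p."""
    x0, y0, z0 = (int(np.floor(c)) for c in p)
    fx, fy, fz = p[0] - x0, p[1] - y0, p[2] - z0
    acc = 0.0
    for dx in (0, 1):                        # 8 memory reads per sample
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((fx if dx else 1.0 - fx) *
                     (fy if dy else 1.0 - fy) *
                     (fz if dz else 1.0 - fz))
                acc += w * volume[x0 + dx, y0 + dy, z0 + dz]
    return acc

vol = np.arange(64.0).reshape(4, 4, 4)
print(trilinear(vol, (1.5, 1.5, 1.5)))       # mean of the 8 surrounding voxels
```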
Pfister et al., in “The VolumePro Real-Time Ray-Casting System,” Proceedings of SIGGRAPH 99, pp. 251-260, described a pipelined rendering system that achieved real-time volume rendering using ASIC pipelines which processed samples along rays cast through the volume. The underlying Cube-4 design utilized a novel memory skewing scheme to provide contention-free access to neighboring voxels. The volume data were buffered on the chip in FIFO queues for later reuse.
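One commonly described form of such skewing is a linear address map: voxel (x, y, z) is assigned to memory module (x + y + z) mod M, so any M consecutive voxels along an axis fall into M distinct modules and can be fetched in parallel without conflicts. A sketch, with the module count M = 4 and the start voxel chosen arbitrarily:

```python
# Linearly skewed volume memory: voxel (x, y, z) is stored in module
# (x + y + z) mod M, so any M consecutive voxels along the x, y, or z axis
# map to M distinct modules and can be fetched without conflicts.
# M = 4 and the start voxel are illustrative.

M = 4  # number of parallel memory modules

def module_of(x, y, z):
    return (x + y + z) % M

# Check: a beam of M neighboring voxels along each axis hits every module.
for axis in range(3):
    beam = []
    for i in range(M):
        v = [5, 9, 2]                        # arbitrary start voxel
        v[axis] += i
        beam.append(module_of(*v))
    assert sorted(beam) == list(range(M))
    print(f"axis {axis}: modules {beam}")
```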
All these designs utilized ASIC pipelines to process the great number of volume samples required to render at high frame rates. The cost-performance of these systems surpassed state-of-the-art volume rendering on supercomputers, special-purpose graphics systems, and general-purpose graphics workstations.
A different visualization problem deals with segmentation. In a typical medical application, each slice of data is hand segmented and then reconstructed into a 3D model of the object. Current commercial software provides tools and interfaces for segmenting slices, but still only in 2D. Examining 3D results requires a model-building step that currently takes a few minutes to complete, which is clearly not useful for real-time rendering. To reduce this time, the segmentation and rendering should be performed directly on the volume data, using direct 3D segmentation functions and direct volume rendering (DVR), rather than by hand.
However, 3D segmentation is still too complex and dynamic to be fully automated and, thus, requires some amount of user input. The idea would be to utilize the computer for the computationally expensive task of segmentation processing and rendering, while tapping the natural and complex cognitive skills of the human by allowing the user to steer the segmentation to ultimately extract the desired objects.
Some prior art segmentation techniques use complex object recognition procedures; others provide low-level 3D morphological functions that can be concatenated into a sequence to achieve the desired segmentation. This sequence of low-level functions is called a segmentation “process.” These low-level functions commonly include morphological operations such as threshold, erode, dilate, and flood-fill. For the typical users of medical segmentation systems, this method has been shown to be intuitive and simple to use, and the user is given a sense of confidence about the result since the user has control over the process.
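Such a process can be sketched as a short chain of these operators. The example below uses scipy.ndimage as a stand-in for the systems' native functions; the synthetic ball volume, the threshold value, and the seed voxel are all illustrative.

```python
import numpy as np
from scipy import ndimage

# A segmentation "process" as a chain of low-level morphological functions
# (threshold, erode, dilate, flood-fill) applied directly to the 3D volume.
# The synthetic ball volume, threshold, and seed voxel are illustrative
# stand-ins for real medical data and parameters.

rng = np.random.default_rng(0)
x, y, z = np.mgrid[0:32, 0:32, 0:32]
volume = np.exp(-((x - 16) ** 2 + (y - 16) ** 2 + (z - 16) ** 2) / 50.0)
volume += 0.1 * rng.random(volume.shape)     # measurement noise

mask = volume > 0.5                          # threshold
mask = ndimage.binary_erosion(mask)          # erode: cut thin noise bridges
mask = ndimage.binary_dilation(mask)         # dilate: restore the extent

# Flood-fill from a seed: keep only the connected component containing it.
labels, _ = ndimage.label(mask)
seed = (16, 16, 16)
mask = labels == labels[seed]

print("segmented voxels:", int(mask.sum()))
```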
In another system, the user is provided with interactive feedback while segmenting. After a low-level function was applied, the resulting segmented volume was displayed, and the user chose which function to perform next; the result of one operation assisted the user in choosing the next. Interactivity was therefore limited to one low-level function at a time. If the user had created a long sequence of steps for a particular segmentation problem and wanted to see the effect of changing a parameter of a low-level function in the middle of the sequence, the feedback was not interactive in 3D. Instead, the user was forced to step through each stage of the process repeatedly, changing the parameter each time. Additionally, because general-purpose processors were used, the functions took between 5 and 90 seconds to perform, plus up to 10 seconds to render the results.
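The dependency structure behind this limitation is easy to model: a process is an ordered list of stages, and editing the parameter of stage k invalidates every output from stage k onward. The sketch below, with hypothetical stage functions, caches intermediate results so that only the affected suffix is replayed; the systems described above instead made the user step through the stages by hand.

```python
# A process as an ordered list of stages; editing the parameter of stage k
# invalidates every output from stage k onward. Caching intermediate
# results lets only the affected suffix be replayed. Stage functions and
# parameters here are hypothetical.

def threshold(data, lo):
    return [v for v in data if v >= lo]

def dilate(data, amount):
    return [v + amount for v in data]

process = [(threshold, {"lo": 3}), (dilate, {"amount": 1})]
cache = []                                   # cache[i] = output of stage i

def replay(data, from_stage=0):
    out = data if from_stage == 0 else cache[from_stage - 1]
    del cache[from_stage:]
    for fn, params in process[from_stage:]:
        out = fn(out, **params)
        cache.append(out)
    return out

data = list(range(8))
print(replay(data))                          # full run: [4, 5, 6, 7, 8]
process[1] = (dilate, {"amount": 10})        # tweak a mid-sequence parameter
print(replay(data, from_stage=1))            # only stage 1 re-runs
```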
In an alternative system, segmentation could only be performed on the three orthogonal slices of the volume currently displayed. Because the segmentation was limited to three 2D slices, the entire segmentation “process” could be re-run from the start each time, so the user could achieve interactive feedback while sliding controls to adjust parameters for functions in the middle of the process. Unfortunately, generating a 3D projection of the volume could take up to a few minutes to complete. Additionally, there was no analogous 2D approach to connected component processing, since regions can grow into the third dimension and return to the original slice. Therefore, connected component processing was limited to slow feedback.
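The connected-component issue can be demonstrated directly: a U-shaped region that is a single component in 3D appears as two separate islands in one of its 2D slices. A sketch, again using scipy.ndimage, with an illustrative shape:

```python
import numpy as np
from scipy import ndimage

# Why per-slice connected components fail: a U-shaped region is a single
# component in 3D, but the slice z = 0 sees two separate islands because
# the bridge passes through z = 2. The shape is illustrative.

vol = np.zeros((3, 5, 3), dtype=bool)        # axes: (x, y, z)
vol[0, :, 0] = True                          # left arm in slice z = 0
vol[2, :, 0] = True                          # right arm in slice z = 0
vol[:, 2, 2] = True                          # bridge through z = 2
vol[0, 2, 1] = vol[2, 2, 1] = True           # connect the arms to the bridge

_, n3d = ndimage.label(vol)                  # 3D labeling
_, n2d = ndimage.label(vol[:, :, 0])         # 2D labeling of slice z = 0
print(n3d, n2d)                              # -> 1 component in 3D, 2 in 2D
```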
Recently, a distributed processing environment for performing sequences of the same low-level functions has
