Computer graphics processing and selective visual display system – Computer graphics processing – Three-dimension
Reexamination Certificate
1998-10-16
2001-05-08
Nguyen, Phu K. (Department: 2772)
active
06229547
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to computer graphics, and more particularly to an improved system and method for rendering arbitrarily oriented three dimensional cross sections of a volumetric data set comprising stacks of two dimensional images.
2. Related Art
The increased use of volume rendering of three-dimensional (3D) images by graphics computers is apparent in today's society. In one example, the medical industry uses this technique to diagnose patients by viewing internal images of the human body generated from magnetic resonance imaging (MRI) devices, computerized tomography (CT) scanning devices, and the like. In another example, geologists and seismologists use this technique to view internal images of our planet for locating untapped natural resources, and for predicting earthquakes and other phenomena. Many other scientific disciplines use this technology for performing scientific visualizations that were not possible before the advent of this important technology.
Computer generated volume rendering of three-dimensional (3D) images is generally formed from a plurality of stacks of two-dimensional (2D) images. For example, in the medical discipline, MRI devices are used to view images of various internal organs of the human body. 3D images of the internal organs are created from a series of 2D images, wherein each image is a cross-section at a particular depth. As the term is used herein, “3D images” are defined as arbitrarily oriented three-dimensional cross-sectional images.
The images are stacked on top of one another and are aligned in a plane coincident with the axis used to acquire the images (i.e., the image plane, or the acquisition plane). MRI images, for example, are acquired by performing a series of scans in a fixed direction along the X, Y, or Z axis. Thus, for example, scans made along the Z axis produce a series of images in the X-Y image plane. Scans made along the X axis produce a series of images in the Y-Z image plane. Scans made along the Y axis produce a series of images in the X-Z plane.
Generally, computer systems with 3D hardware accelerators are used to render 3D textured images in real-time from the acquired images. The 3D images are generated from user-defined view ports within the volumetric data set which comprises the stack of 2D images. These 3D images are formed using texture mapping techniques that are well known in the art. Specifically, texture maps representing each image are stored in a computer memory device. Once these texture maps are stored, users can selectively display a 3D image rendered from one or more of the texture maps by defining a particular view port for the image. A particular view port is defined by specifying the size and orientation of a slice that intersects the series of stacked images.
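As an illustration only (the patent does not prescribe a particular representation), such a view port can be captured by a small data structure holding an origin and two in-plane axes in volume coordinates; all of the names below are hypothetical:

    // Hypothetical view-port description: an oblique slice through the volume
    // is given by a corner point and two orthogonal in-plane direction vectors,
    // together with the desired extent along each direction.
    struct Vec3 { float x, y, z; };

    struct ViewPort {
        Vec3  origin;   // corner of the slice, in volume coordinates
        Vec3  uAxis;    // in-plane direction spanning the slice width
        Vec3  vAxis;    // in-plane direction spanning the slice height
        float width;    // extent along uAxis
        float height;   // extent along vAxis
    };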
When a requested slice coincides with any one of the acquired images, a properly filtered 3D image can be rendered using only bi-linear interpolation techniques. However, when a requested slice is oblique (i.e., the slice does not coincide with the image plane), 3D texture mapping using tri-linear interpolation is required. Rendering oblique slices is referred to as multi-planar reformation.
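For reference, and in notation not taken from the patent, the tri-linear sample at a point lying between acquired images k and k+1 can be written as a depth-weighted blend of two bi-linear samples:

    T(x, y, z) = (1 - f_z) \, B_k(x, y) + f_z \, B_{k+1}(x, y)

where B_k(x, y) is the bi-linearly interpolated value within image k and f_z \in [0, 1] is the fractional distance of z from image k toward image k+1. This decomposition is what the approach described below exploits.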
On computer systems that support 3D texture mapping, rendering an image associated with an oblique slice is a rather straightforward proposition. The slicing plane is clipped to the volume's geometry and the resulting polygons are drawn with 3D texturing enabled. However, on computer systems that do not support 3D texture mapping, a 3D image associated with an oblique slice cannot be rendered in an efficient manner.
The problem is that many general purpose computer systems do not support hardware accelerated 3D texture mapping using tri-linear interpolation. Hardware graphic accelerator devices that do support tri-linear interpolation are still considered specialty, high-end devices and are very expensive.
However, it would be desirable to use the wide variety of general purpose computer systems readily available today for viewing these 3D images. For example, doctors and other scientists would be able to more easily disseminate and share such images with other colleagues and other individuals for consultation purposes and the like.
One way to overcome this problem is to use software rendering techniques for performing the necessary tri-linear interpolation on general purpose computer systems. However, using even the fastest microprocessors available today, tri-linear interpolation software rendering techniques tend to be too slow to be of any practical use in real-time applications.
Recently, a wide variety of affordable consumer level graphic hardware acceleration devices have become available for use in general purpose computer systems. However, these devices are somewhat limited. Specifically, these devices typically support, among other features, 2D texture mapping and bi-linear interpolation, but do not generally support 3D texture mapping and tri-linear interpolation.
Thus, what is needed is a system and method for rendering 3D images associated with oblique slices in volumetric data sets that can be performed efficiently using affordable graphic accelerators that do not support 3D texture mapping and tri-linear interpolation.
SUMMARY OF THE INVENTION
Accordingly, the present invention is directed toward an improved system and method for rendering arbitrarily oriented three dimensional cross sections of a volumetric data set comprising axially aligned parallel stacks of two dimensional textured images. A view port comprising an oblique slice intersecting the stack of textured images is defined. The polygon associated with the view port slice is divided into multiple polygons, wherein each polygon is clipped to the surface of each intersecting texture.
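A minimal sketch of this subdivision step is given below, assuming the acquisition axis is z, the slice polygon is convex and expressed in volume coordinates, and clipping is done Sutherland-Hodgman style against the pair of planes bounding each slab; the type and function names are illustrative, not taken from the patent.

    #include <cstddef>
    #include <vector>

    struct Vertex { float x, y, z; };

    // Point where edge (a, b) crosses the plane z = zc (only called when the
    // endpoints lie on opposite sides of the plane).
    static Vertex intersectZ(const Vertex& a, const Vertex& b, float zc) {
        float t = (zc - a.z) / (b.z - a.z);
        return { a.x + t * (b.x - a.x), a.y + t * (b.y - a.y), zc };
    }

    // Clip a convex polygon against one half-space: keep z <= zc when
    // keepBelow is true, otherwise keep z >= zc.
    static std::vector<Vertex> clipHalfSpace(const std::vector<Vertex>& poly,
                                             float zc, bool keepBelow) {
        std::vector<Vertex> out;
        for (std::size_t i = 0; i < poly.size(); ++i) {
            const Vertex& cur  = poly[i];
            const Vertex& next = poly[(i + 1) % poly.size()];
            bool curIn  = keepBelow ? (cur.z  <= zc) : (cur.z  >= zc);
            bool nextIn = keepBelow ? (next.z <= zc) : (next.z >= zc);
            if (curIn) out.push_back(cur);
            if (curIn != nextIn) out.push_back(intersectZ(cur, next, zc));
        }
        return out;
    }

    // Clip the oblique slice polygon to the slab bounded by two adjacent
    // acquired images at z = zFront and z = zBack (zFront < zBack).
    std::vector<Vertex> clipToSlab(const std::vector<Vertex>& slicePoly,
                                   float zFront, float zBack) {
        return clipHalfSpace(clipHalfSpace(slicePoly, zFront, /*keepBelow=*/false),
                             zBack, /*keepBelow=*/true);
    }

Repeating clipToSlab for every pair of adjacent images yields one sub-polygon per intersected slab, each bounded by the two textures that will be used to draw it.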
Each intersecting texture is then enabled, one at a time. When each texture is enabled, each polygon intersecting with the enabled texture is drawn. The colors of the vertices that fall within the active texture are maintained according to the color of the active texture, and the colors of the vertices that fall within the inactive texture are set to zero. This process is repeated until all of the intersecting textures have been enabled, which causes each polygon to be drawn exactly twice. Additive blending is enabled so that the first and second polygons are blended together.
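A hedged sketch of this two-pass draw is shown below, using classic fixed-function OpenGL purely as an assumed API (the patent does not name one). The ramp weight generalizes the vertex-coloring rule above by using each vertex's fractional distance from the active image plane; the framebuffer is assumed to have been cleared to black before the additive passes, and texture-coordinate generation is simplified.

    #include <GL/gl.h>
    #include <cmath>
    #include <vector>

    struct Vertex { float x, y, z; };

    // Draw one clipped sub-polygon twice, once for each of the two acquired
    // images bounding its slab. 'texFront'/'texBack' are hypothetical names
    // for the 2D texture objects of the images at z = zFront and z = zBack.
    void drawSubPolygon(const std::vector<Vertex>& poly,
                        GLuint texFront, float zFront,
                        GLuint texBack,  float zBack)
    {
        glEnable(GL_TEXTURE_2D);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);        // additive blending of the two passes

        const GLuint tex[2]     = { texFront, texBack };
        const float  zActive[2] = { zFront,   zBack  };
        const float  slab       = zBack - zFront;

        for (int pass = 0; pass < 2; ++pass) {
            glBindTexture(GL_TEXTURE_2D, tex[pass]);
            glBegin(GL_POLYGON);
            for (const Vertex& v : poly) {
                // Ramp weight: 1 on the active image plane, 0 on the other;
                // Gouraud shading interpolates it across the polygon.
                float w = 1.0f - std::fabs(v.z - zActive[pass]) / slab;
                glColor3f(w, w, w);
                // Texture coordinates: for brevity, the in-plane (x, y) position
                // is assumed to be pre-normalized to [0, 1]; a real implementation
                // would scale by the image extent.
                glTexCoord2f(v.x, v.y);
                glVertex3f(v.x, v.y, v.z);
            }
            glEnd();
        }

        glDisable(GL_BLEND);
        glDisable(GL_TEXTURE_2D);
    }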
When each polygon is drawn, bi-linear interpolation techniques are used to fill in the colors of the texels that lie in between the vertices of the polygon. Accordingly, the active texture that is mapped onto the polygon is effectively multiplied by a linear ramp that increases in intensity as parts of the polygon approach the active texture and decreases in intensity as parts of the polygon move away from it.
The second time the polygon is drawn, the texture that was inactive during the first pass becomes the active texture. Thus, the texture that is mapped onto the second polygon is effectively multiplied by a linear ramp that is the reverse of the first linear ramp associated with the first polygon. The first and second polygons are then blended together, resulting in a properly filtered, averaged three dimensional image that is rendered using only bi-linear interpolation techniques.
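In the notation introduced earlier (again, not the patent's own), the additive blend of the two ramped passes reproduces the tri-linear result exactly:

    (1 - f_z) \, B_k(x, y) + f_z \, B_{k+1}(x, y) = T(x, y, z)

so each sub-polygon receives a properly filtered cross-section even though only 2D texturing and bi-linear filtering were used.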
An article that describes the techniques presented herein can be found in “SIGGRAPH 98 Course Notes, Advanced Techniques for Ray Casting Volumes, Section 3.9, Polygonizing Arbitrary Cross Sections (MPRs),” published by ACM SIGGRAPH, Jul. 19, 1998. This article was written by the inventor of the present invention, Robert Grzeszczuk, and is incorporated herein by reference.
REFERENCES:
patent: 5237650 (1993-08-01), Priem et al.
patent: 5404431 (1995-04-01), Kumazaki et al.
Grzeszczuk, R. et al., SIGGRAPH 98 Course Notes, Advanced Geometric Techniques for Ray Casting Volumes, ACM SIGGRAPH, 239 pages, Apr. 20, 1998.
Silicon Graphics Inc.
Sterne Kessler Goldstein & Fox P.L.L.C.