Methods and arrangements for compressing image based...

Pulse or digital communications – Bandwidth reduction or expansion – Television or motion video signal

Reexamination Certificate


Details

C375S240140

active

06693964

ABSTRACT:

TECHNICAL FIELD
This invention relates generally to computers and, more particularly, to methods and arrangements that can be implemented to compress image-based rendering (IBR) information, transport the compressed IBR information, and subsequently provide selective and/or just-in-time (JIT) rendering of an IBR scene based on a portion of the compressed IBR information.
BACKGROUND
There is a continuing interest, within the computer graphics community, in image-based rendering (IBR) systems. These systems are fundamentally different from traditional geometry-based rendering systems, in that the underlying information (i.e., data representation) is composed of a set of photometric observations (e.g., digitized images/photographs) rather than being either mathematical descriptions of boundary regions or discretely sampled space functions.
An IBR system uses the set of photometric observations to generate or render different views of the environment and/or object(s) recorded therein. There are several advantages to this approach. First, the display algorithms for IBR systems tend to be less complex and may therefore be used to support real-time rendering in certain situations. Second, the amount of processing required to view a scene is independent of the scene's complexity. Third, the final rendered image may include both real photometric objects and virtual objects.
IBR systems can be complex, however, depending upon the level of detail required and the processing time constraints. For example, Adelson et al., in their article entitled “The Plenoptic Function And The Elements Of Early Vision”, published in Computational Models of Visual Processing by The MIT Press, Cambridge, Mass., 1991, stated that a 7-dimensional plenoptic function can be implemented in an IBR system to completely represent a 3-dimensional dynamic scene. The 7-dimensional plenoptic function is generated by observing and recording the intensity of light rays passing through every space location, as seen in every possible direction, for every wavelength, and at any time. Thus, imagine an idealized camera that can be placed at any point in space (V_x, V_y, V_z). This idealized camera can then be used to select any of the viewable rays by choosing an azimuth angle (θ) and an elevation angle (φ), as well as a band of wavelengths (λ). Adding an additional parameter (t) for time produces a 7-dimensional plenoptic function:
p = P(θ, φ, λ, V_x, V_y, V_z, t)
Thus, given the function P, to generate a view from a specific point in a particular direction, one need only plug in the values for (V_x, V_y, V_z) and select from a range of (θ, φ), at some constant t, for each desired band of wavelengths (λ).
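To make this sampling step concrete, the following is a minimal Python sketch, not taken from the patent: it treats the plenoptic function as a pre-recorded 7-dimensional array and reads back the nearest sample. The array shape, index order, and the names samples and P are illustrative assumptions.

```python
import numpy as np

# Tiny, hypothetical discretization chosen only so the sketch runs:
# 8 azimuth angles, 8 elevation angles, 3 wavelength bands,
# 4 samples per spatial axis, and 2 time steps.
rng = np.random.default_rng(0)
samples = rng.random((8, 8, 3, 4, 4, 4, 2))

def P(theta_i, phi_i, lam_i, vx_i, vy_i, vz_i, t_i):
    """Nearest-sample evaluation of the discretized plenoptic function
    p = P(theta, phi, lambda, V_x, V_y, V_z, t)."""
    return samples[theta_i, phi_i, lam_i, vx_i, vy_i, vz_i, t_i]

# Generating a view from a fixed viewpoint at a constant time amounts to
# sweeping (theta, phi) across the field of view per wavelength band:
vx_i, vy_i, vz_i, t_i = 1, 2, 3, 0
view = np.array([[[P(th, ph, lam, vx_i, vy_i, vz_i, t_i)
                   for ph in range(8)]
                  for th in range(8)]
                 for lam in range(3)])
print(view.shape)  # (3, 8, 8): one 8x8 intensity image per wavelength band
```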
Accomplishing this in real-time, especially for a full spherical map or a large portion thereof, is typically beyond most computers' processing capabilities. Thus, there has been a need to reduce the complexity of such an IBR system to make it more practical.
By ignoring the time (t) and the wavelength (&lgr;) parameters, McMillan and Bishop in their article entitled “Plenoptic Modeling: An Image-Based Rendering System” published in Computer Graphics (SIGGRAPH'95) August 1995, disclosed a plenoptic modeling scheme that generates a continuous 5-dimensional plenoptic function from a set of discrete samples.
Further research and development by Gortler et al. led to the Lumigraph, as disclosed in an article entitled “The Lumigraph” published in Computer Graphics (SIGGRAPH'96) in August 1996. Similarly, Levoy et al. developed the Lightfield, as disclosed in an article entitled “Light Field Rendering” that was also published in Computer Graphics (SIGGRAPH'96) in August 1996.
The Lumigraph and the Lightfield presented a clever 4-dimensional parameterization of the plenoptic function, provided that the object (or, conversely, the camera view) is constrained, for example, to within a bounding box. As used herein, the term “Lumigraph” is used generically to refer to the Lumigraph, the Lightfield, and other like applicable plenoptic-function-based techniques.
By placing the object in its bounding box (e.g., a six-sided cube), which is surrounded by a larger box (e.g., a larger six-sided cube), the Lumigraph indexes all possible light rays from the object by the coordinates at which the rays enter and exit one of the parallel planes of the double bounding boxes. In the case of a six-sided cube, the resulting Lumigraph data is thus composed of six 4-dimensional functions that can be discretized more precisely for the inner bounding box closest to the object, and more coarsely for the outer bounding box.
In the examples that follow, the bounding box and the larger box are assumed to be six-sided cubes, wherein the plane of the inner box under consideration is indexed with coordinates (u, v) and the corresponding plane of the outer box is indexed with coordinates (s, t).
Alternatively, the Lumigraph could be considered as six 2-dimensional image arrays, with all the light rays coming from a fixed (s, t) coordinate forming one image, which is equivalent to setting a camera at coordinate (s, t) and taking a picture of the object where the imaging plane is the (u, v) plane.
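As an illustration only, one face of such a Lumigraph might be held as a 4-dimensional array, where fixing (s, t) slices out exactly one such image. The sizes and names below are assumptions for the sketch, not values taken from the patent:

```python
import numpy as np

# Hypothetical discretization of a single face: 32x32 camera positions on
# the outer (s, t) plane and 256x256 pixels on the inner (u, v) plane,
# stored as one grayscale sample per ray to keep the sketch small.
S = T = 32
U = V = 256
face = np.zeros((S, T, U, V), dtype=np.uint8)  # one of the six 4-D functions

# face[s, t, u, v] indexes the single light ray that enters the outer
# plane at (s, t) and exits the inner plane at (u, v).
ray = face[3, 7, 100, 200]

# Fixing (s, t) slices out one complete Lumigraph image: the picture a
# camera at (s, t) would take of the object with (u, v) as its image plane.
image = face[3, 7]
print(image.shape)  # (256, 256)
```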
In either case, a plurality of Lumigraph images can be taken to produce a Lumigraph image array. Since neighboring Lumigraph images within the array tend to be very similar to one another, the IBR system can create a new view of the object simply by splitting the view into its constituent light rays and interpolating each from nearby existing light rays in the Lumigraph image arrays, as sketched below.
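One way that interpolation might look, assuming a face stored as a 4-D array as above: quadrilinear blending of the 16 nearest recorded rays is the approach commonly used in Lightfield/Lumigraph rendering, though the patent does not commit to a particular scheme here.

```python
import numpy as np

def sample_ray(face, s, t, u, v):
    """Quadrilinear interpolation of the 4-D Lumigraph at a fractional
    (s, t, u, v): blend the 16 nearest recorded rays, weighting each by
    its proximity along all four axes. Boundary clamping is omitted, so
    the coordinates must lie at least one sample inside the array."""
    s0, t0, u0, v0 = int(s), int(t), int(u), int(v)
    result = 0.0
    for ds in (0, 1):
        for dt in (0, 1):
            for du in (0, 1):
                for dv in (0, 1):
                    # Linear weight along each axis: 1 at the sample, 0 one step away.
                    weight = ((1.0 - abs(s - (s0 + ds))) *
                              (1.0 - abs(t - (t0 + dt))) *
                              (1.0 - abs(u - (u0 + du))) *
                              (1.0 - abs(v - (v0 + dv))))
                    result += weight * float(face[s0 + ds, t0 + dt,
                                                  u0 + du, v0 + dv])
    return result

# Usage on a tiny synthetic face:
face = np.arange(3 * 3 * 3 * 3, dtype=np.float64).reshape(3, 3, 3, 3)
print(sample_ray(face, 0.5, 0.5, 1.25, 1.75))
```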
In this manner, the Lumigraph is attractive because it contains information for all views of the objects/scenes. With the Lumigraph, a scene can be rendered realistically regardless of the scene's complexity, and quickly when compared with high-quality graphics rendering algorithms such as ray tracing.
Unfortunately, the Lumigraph typically requires a very large amount of data. For example, a typical Lumigraph scene may include 32 sample points in each axis on the (s, t) plane, and 256 sample points in each axis on the (u, v) plane, with 3 color samples per light ray (e.g., 8 bits of red data, 8 bits of green data, and 8 bits of blue data), and 6 parallel image planes of the object. Thus, for such a relatively low-resolution Lumigraph (note that the object resolution is that of the (u, v) plane, which is only 256×256 sample points), the total raw data amount is:
Total Lumigraph Data = 32 × 32 × 256 × 256 × 3 × 6 bytes = 1,207,959,552 bytes ≈ 1.125 GB.
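As a quick check of this arithmetic:

```python
s = t = 32          # sample points per axis on the (s, t) plane
u = v = 256         # sample points per axis on the (u, v) plane
bytes_per_ray = 3   # 8 bits each of red, green, and blue data
planes = 6          # six parallel image planes of the object

total_bytes = s * t * u * v * bytes_per_ray * planes
print(total_bytes)             # 1207959552
print(total_bytes / 2**30)     # 1.125 (GB, using binary gigabytes)
```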
Such a large Lumigraph data file would be impracticable for storage on a hard drive, optical disc, etc., or for transmission over a communication network, such as, for example, the Internet. Moreover, practical Lumigraph applications will likely require better resolution through a higher sampling density, which would result in even larger Lumigraph data files.
Consequently, there is an on-going need to reduce the size of the Lumigraph data file. One method is to compress the Lumigraph data. Since the Lumigraph data consists of an array of images, one might think that compression techniques that have been successfully applied to video coding would also provide effective Lumigraph data compression. Unfortunately, this is not necessarily so, because there are distinct differences between video and the Lumigraph. For example, the Lumigraph is a 2-dimensional image array, while video is a 1-dimensional array (i.e., a sequence of frames); there thus tends to be more correlation in the Lumigraph than in video sequences. Furthermore, unlike video, views rendered using the Lumigraph tend to be presented more statically to the viewer. As is well known, for most viewers, distortion is more noticeable in static images than in moving images. Since a rendered view of the Lumigraph is a combination of the image arrays, certain human visual system (HVS) properties, such as spatial and temporal masking, may not be exploited.
Another difference
