Computer graphics processing and selective visual display system – Computer graphics processing – Attributes
Reexamination Certificate
2001-10-04
2003-09-09
Zimmerman, Mark (Department: 2671)
C345S421000, C345S422000, C345S589000, C345S597000, C345S008000
active
06618054
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates generally to the field of 3-D graphics and, more particularly, to a system and method for rendering and displaying 3-D graphical objects.
2. Description of the Related Art
The human eye is subject to many of the same optical phenomena as inanimate optical systems. In particular, for any given state of the crystalline lens, there exists a unique distance d_f at which objects appear maximally sharp (i.e., minimally blurred) and an interval of distances around d_f where objects have sufficient clarity. More precisely, the blurriness of objects as a function of distance from the lens varies smoothly and has a minimum at distance d_f. The interval of distances over which objects are sufficiently clear is commonly referred to as the depth of field. The depth of field typically increases with increasing focus distance d_f.
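The blur-versus-distance behavior described above can be sketched with the standard thin-lens circle-of-confusion formula. This is a textbook approximation, not a formula from the patent; the aperture and focal-length defaults are rough human-eye values chosen for illustration.

```python
def blur_diameter(d, d_f, aperture=0.004, focal_len=0.017):
    """Thin-lens circle-of-confusion diameter (meters) for an object at
    distance d when the lens is focused at distance d_f.

    aperture and focal_len default to rough human-eye values (assumptions,
    not values from the patent text)."""
    if d <= focal_len or d_f <= focal_len:
        raise ValueError("distances must exceed the focal length")
    # Blur is zero at d == d_f and grows smoothly as d moves away from d_f.
    return aperture * focal_len * abs(d - d_f) / (d * (d_f - focal_len))
```

Note that the formula reproduces both properties stated above: the blur has its minimum (zero) at d = d_f, and the interval of acceptably small blur widens as the focus distance d_f increases.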
Muscles (the ciliary body) connected to the crystalline lens may exert pressure on the crystalline lens. The induced deformation of the lens changes the focus distance d_f. The extent of lens deformation in response to muscular pressure depends on the elasticity of the lens. The elasticity generally decreases with age. (By age 45, many people will have lost most of this elasticity: hence bifocals.) Thus, the range of focus distances d_f which the human eye can achieve varies with age.
The human visual system has two directionally-controllable eyes located at the front of the head, as suggested by FIG. 1. The direction of gaze of an eye may be characterized by a ray that emanates from the center of the corresponding macula (the most sensitive portion of the retina) and passes through the center of the corresponding lens. Because each eye gathers a different view of the external world, the brain is able to create a three-dimensional model of the world.
There are brain control systems which control the orientation angles and the focus distances d_f of each eye. These control systems may be responsive to various sources of information, including clarity and positional fusion of the images perceived by the right and left eyes. In FIG. 1A, the ocular rays intersect at point P. Thus, the image of point P will fall on the center of the perceived visual field of each eye, and the two views on the neighborhood of point P will be fused by the visual cortex into an integrated 3D entity. In contrast, because point Q lies inside the two ocular rays, the right eye perceives the point Q as being to the left of center and the left eye perceives the point Q as being to the right of center. Thus, the brain perceives two images of point Q. Similarly, because point R lies outside the two ocular rays, the right eye perceives the point R as being to the right of center and the left eye perceives the point R as being to the left of center. So the brain perceives two images of point R also.
Let d_t1 be the distance of the right eye to the intersection point P, and d_t2 be the distance of the left eye to the intersection point P, as illustrated by FIG. 1B. If the brain control systems set the focus distance d_f1 of the right eye equal to distance d_t1 and the focus distance d_f2 of the left eye equal to distance d_t2, the fused image in the center of the field of view will appear maximally clear, and objects closer than and farther than the intersection point will appear increasingly blurry.
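The geometry of FIG. 1B can be sketched in code: given each eye's position and gaze direction in a top-down 2-D view, find the fixation point P where the ocular rays intersect, and from it the distances d_t1 and d_t2 that the brain's default behavior assigns as focus distances. The function names and coordinate conventions here are my own, not the patent's.

```python
import math

def ray_intersection(o1, d1, o2, d2):
    """Intersect rays o1 + t*d1 and o2 + s*d2 in 2-D (top-down view)."""
    det = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(det) < 1e-12:
        raise ValueError("gaze rays are parallel: no fixation point")
    t = ((o2[0] - o1[0]) * d2[1] - (o2[1] - o1[1]) * d2[0]) / det
    return (o1[0] + t * d1[0], o1[1] + t * d1[1])

def fixation_distances(right_eye, left_eye, p):
    """d_t1 (right eye to P) and d_t2 (left eye to P); the brain strongly
    favors setting the focus distances d_f1, d_f2 equal to these."""
    return math.dist(right_eye, p), math.dist(left_eye, p)
```

For example, with eyes at (±0.03, 0) meters both aimed at a point half a meter ahead, the rays intersect at (0, 0.5) and the two fixation distances are equal by symmetry.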
The brain control systems are programmed to strongly favor an assignment of focus distances that correspond respectively to the distances to the intersection point. For most people, it is somewhat difficult, even intentionally, to achieve focus distances d_f1 and d_f2 that are significantly larger or smaller than the distances to the intersection point. However, this is exactly the trick that is required for proper perception of stereo video, as suggested by FIG. 2A. To create the perception of a three-dimensional object at point P in front of a display screen SCR, the viewer must direct his/her eyes so that the ocular rays intersect at point P. The right ocular ray passes through P and hits the screen at position X_1, and the left ocular ray passes through P and hits the screen at position X_2. The screen pixels in the neighborhood of position X_1 give the right eye's view of the 3D object, and the screen pixels in the neighborhood of position X_2 give the left eye's view of the 3D object. The clearest perception of the 3D object is obtained if the viewer can focus her eyes beyond the intersection point P to the screen positions X_1 and X_2. In other words, the right eye should achieve a focus distance d_f1 equal to the distance of the right eye to screen contact position X_1, and the left eye should achieve a focus distance d_f2 equal to the distance of the left eye to screen contact position X_2. Many viewers find it difficult (or impossible) to override the brain's tendency to focus at the intersection point. Focusing at the intersection point P implies that the pixelated images in the neighborhoods of X_1 and X_2 will appear blurry, and thus the 3D object generated at point P will appear blurry.
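The screen contact positions X_1 and X_2 can be computed by extending each ocular ray from the eye through the virtual point P until it meets the screen plane. The following is a small geometric sketch with a hypothetical helper name; the eye separation, screen distance, and coordinate frame are illustrative assumptions, not values from the patent.

```python
def screen_hit(eye, p, z_screen):
    """Intersect the ray eye -> P with the screen plane z = z_screen.

    eye, p: (x, y, z) positions; returns the (x, y) screen contact point
    (X_1 for the right eye, X_2 for the left eye)."""
    ex, ey, ez = eye
    px, py, pz = p
    if pz == ez:
        raise ValueError("ocular ray is parallel to the screen plane")
    t = (z_screen - ez) / (pz - ez)
    return (ex + t * (px - ex), ey + t * (py - ey))
```

This reproduces the crossed/uncrossed behavior of FIGS. 2A and 2B: for a point P in front of the screen, the right eye's contact X_1 lands left of center and the left eye's contact X_2 lands right of center; for a point behind the screen, the contacts are uncrossed.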
FIG. 2B illustrates the complementary situation, where an object is to be perceived at a point P behind the screen. Again the viewer directs her gaze so the ocular rays intersect at point P. In this case, the clearest perception of the object is obtained if the viewer can achieve focus distances smaller than the distances to the intersection point P, i.e., focusing at screen positions X_1 and X_2 respectively. Again, if the viewer cannot overcome the tendency to focus (i.e., optically focus) at the intersection point P, the object will appear blurred.
When the viewer looks at some object which resides at the plane of the screen, the ocular rays intersect at some point on the screen, and the brain can do what it is accustomed to doing: i.e., setting the optical focus distances so they correspond to the intersection point. Thus, objects at (or near) the plane of the screen should appear sharp.
In the real world, the brain's tendency to focus at the intersection point is beneficial and implies the following. As the viewer moves his/her eyes and the ocular intersection point approaches a physical object, the object becomes increasingly fused and increasingly clear at the same time. Thus, the brain is trained to interpret increasing clarity as a clue that the eyes are moving appropriately so as to lock onto an object, and decreasing clarity as a clue that the eyes are moving away from locking onto an object.
When the viewer is observing artificially generated objects in response to stereo video, the tendency to focus at the intersection point is disadvantageous. For example, if the user attempts to lock his eyes onto a virtual object in front of screen SCR, the object may become increasingly blurry as the ocular intersection point approaches the spatial position of the virtual object (assuming the eyes are initially directed at some point on the screen). This increasing blur may actually discourage the brain control system from converging the eyes towards the virtual object to the extent where image fusion can occur. Thus, the eyes may stop short of the place where the viewer could begin to see a unified object.
Thus, there exists a need for a graphics system and method capable of generating stereo video which allows users to more easily perceive virtual objects (or portions of objects) in front of and behind the screen surface.
SUMMARY OF THE INVENTION
A graphics system may, in some embodiments, comprise a rendering engine, a sample buffer and a filtering engine. The rendering engine may receive a stream of graphics primitives, render the primitives in terms of samples, and store the samples into the sample buffer. The filtering engine may read the samples from the sample buffer, generate video output pixels from the samples, and transmit the video output pixels to a display device. The display device presents the video output to a viewer on a two-dimensional screen surface.
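The sample-to-pixel step of the pipeline above can be sketched as follows. This is a deliberate simplification of my own (a plain box filter over per-pixel samples), not the patent's filtering engine: the "sample buffer" holds several color samples per pixel location, and the "filtering engine" reduces them to one output pixel each.

```python
def filter_samples(sample_buffer):
    """Box-filter a supersampled buffer down to output pixels.

    sample_buffer: dict mapping (px, py) -> list of (r, g, b) samples.
    Returns a dict mapping (px, py) -> averaged (r, g, b) output pixel."""
    frame = {}
    for pos, samples in sample_buffer.items():
        n = len(samples)
        # Average each color channel over the samples stored for this pixel.
        frame[pos] = tuple(sum(s[i] for s in samples) / n for i in range(3))
    return frame
```

A real filtering engine would typically weight samples by a filter kernel whose support spans neighboring pixels; the uniform average here is the simplest instance of that idea.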
In one set of embodiments,
Hood Jeffrey C.
Meyertons Hood Kivlin Kowert & Goetzel P.C.
Nguyen Kimbinh T.
Sun Microsystems Inc.
Zimmerman Mark
Dynamic depth-of-field emulation based on eye-tracking