Title: Computer-resident mechanism for manipulating, navigating...
Type: Reexamination Certificate
Filed: 1999-04-27
Issued: 2002-02-12
Examiner: Nguyen, Phu K. (Department: 2671)
Classification: Computer graphics processing and selective visual display system – Computer graphics processing – Three-dimension
Status: active
Patent No.: 06346938
FIELD OF THE INVENTION
The present invention relates to digital image processing systems for rendering and displaying views of three-dimensional (3D) geometric models, and is particularly directed to a computer-resident mechanism that enables a system user to view images generated from 3D geometric information, and to use a convenient user interface, such as a joystick or a mouse device, to navigate within animated perspective views of the image as displayed on a high-resolution raster display.
BACKGROUND OF THE INVENTION
Conventional image processing systems for generating and displaying three-dimensional models customarily offer the user the ability to conduct a limited degree of navigation through a scene, including walk/drive/fly navigation with a mouse, or through the use of a joystick device. In such applications, navigation is typically constrained to predefined locations, and may or may not employ an interpolation technique to provide smooth motion from a current location to a new location. For an illustration of literature relating to model display and image processing technology, attention may be directed to the Oka et al, U.S. Pat. No. 4,600,200; Pica, U.S. Pat. No. 4,631,691; Andrews et al, U.S. Pat. No. 4,646,075; Bunker et al, U.S. Pat. No. 4,727,365; Mackinlay et al, U.S. Pat. No. 5,276,785; Amano et al, U.S. Pat. No. 5,287,093; Robertson et al, U.S. Pat. No. 5,359,703; Robertson, U.S. Pat. No. 5,608,850; Robertson, U.S. Pat. No. 5,689,628; and Marrin et al, U.S. Pat. No. 5,808,613.
SUMMARY OF THE INVENTION
In accordance with the present invention, the constrained capabilities of conventional 3D model display systems, such as those referenced above, are effectively obviated by a new and improved, versatile digital image processing system that controllably renders and displays multiple views, of differing aspect and size, of a three-dimensional (3D) geometric model, such as, but not limited to, an urban scene. One of these views is a map or ‘bird's eye’ view of the scene, and the other displays a relatively close, or ‘in-scene’, view of the 3D model image. By the use of a user interface, such as a joystick or mouse, the user is not only able to toggle which of the two scene perspectives is displayed as a map view and which is displayed up close for in-scene navigation, but may also readily navigate within the virtual world of the in-scene view of the displayed 3D image.
For this purpose, the image processing system architecture of the present invention includes a host computer having an associated high-resolution display, and one or more associated user interface devices (e.g., mouse and/or joystick), through which the user controls manipulation of and navigation through images generated on the display. To facilitate manipulation of and navigation through a 3D model, such as ‘street level’ movement through an urban scene, the viewing screen of the display is divided into two viewports, which display views of the 3D model at respectively different magnifications, from points in three-dimensional space that are respectively ‘down inside’ and ‘away from’ the scene.
A first or main viewport may comprise a relatively large region of the display screen, while a second or inset viewport may comprise a relatively small region that overlaps or is superimposed on the main viewport portion. As noted above, one of the viewports will display a map or ‘bird's eye’ view of the scene, while the other viewport will display a relatively close, or ‘in-scene’ view of the 3D model image, and the user may toggle between the two scene perspectives. Operation of the user interface produces input parameters that control navigation within the in-scene view of the displayed 3D image.
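By way of illustration only (this sketch is not part of the patent text), the main/inset viewport arrangement described above might be modeled as follows; the Viewport class, its fields, the one-quarter inset size, and both function names are hypothetical:

    # Hypothetical sketch (Python) of the main/inset viewport layout.
    from dataclasses import dataclass

    @dataclass
    class Viewport:
        x: int          # left edge, in screen pixels
        y: int          # bottom edge, in screen pixels
        width: int
        height: int
        role: str       # 'overview' (bird's-eye) or 'in-scene'

    def make_viewports(screen_w: int, screen_h: int):
        """Main viewport fills the screen; the inset overlaps one corner."""
        main = Viewport(0, 0, screen_w, screen_h, role='in-scene')
        inset_w, inset_h = screen_w // 4, screen_h // 4
        inset = Viewport(screen_w - inset_w, screen_h - inset_h,
                         inset_w, inset_h, role='overview')
        return main, inset

    def toggle_roles(main: Viewport, inset: Viewport) -> None:
        """Swap which scene perspective each viewport displays."""
        main.role, inset.role = inset.role, main.role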
The user may change the 3D location and orientation of a user virtual representation icon. The user icon is a software-generated object that is superimposed on the displayed image to represent the user's location and orientation within the displayed 3D model. As the user manipulates the icon, an interpolation mechanism supplies control information to a ‘view camera’ operator, whose outputs are employed to create the 3D scene and to render the view; each view is coupled to a viewport, thereby defining the overview and in-scene presentations of the image in the respective viewports.
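This camera-follows-icon step can be sketched as follows (again purely illustrative and not from the patent; Pose, lerp, and follow_icon are assumed names, and simple linear blending stands in for whatever interpolation function is assigned to the camera):

    # Hypothetical pose type and one interpolation step for the view camera.
    from dataclasses import dataclass

    @dataclass
    class Pose:
        x: float
        y: float
        z: float
        azimuth: float    # compass heading, degrees
        elevation: float  # tilt above/below horizontal, degrees

    def lerp(a: float, b: float, t: float) -> float:
        return a + (b - a) * t

    def follow_icon(camera: Pose, icon: Pose, t: float) -> Pose:
        """Move the view camera toward the user icon's pose; t in [0, 1]
        is the blend factor supplied by the interpolation mechanism."""
        return Pose(lerp(camera.x, icon.x, t),
                    lerp(camera.y, icon.y, t),
                    lerp(camera.z, icon.z, t),
                    lerp(camera.azimuth, icon.azimuth, t),
                    lerp(camera.elevation, icon.elevation, t))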
The overview view of a geographic scene is preferably displayed in a fixed (e.g., North-up) orientation, and is rendered as a 3D perspective map view of the virtual world. Navigation within this ‘far away’ plan view is generally limited: operation of the user interface moves the user location horizontally and vertically in the display, i.e., at a constant altitude within the image. The overview image maintains the user location in the center of the viewport, so that changing the user location causes the overview image to pan in the direction opposite the user's motion. The user may change the altitude of the overview viewpoint to provide a magnification and demagnification action, which essentially allows the user to ‘zoom into’ or ‘zoom out of’ the scene.
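A minimal sketch of such an overview camera, reusing the hypothetical Pose type from the previous sketch (the altitude clamp limits are invented for illustration):

    def overview_camera(user_x: float, user_y: float, altitude: float) -> Pose:
        """Bird's-eye camera: fixed North-up heading (azimuth 0), looking
        straight down (elevation -90), kept centered over the user so the
        map appears to pan opposite the user's motion."""
        return Pose(user_x, user_y, altitude, azimuth=0.0, elevation=-90.0)

    def zoom(altitude: float, factor: float,
             min_alt: float = 50.0, max_alt: float = 5000.0) -> float:
        """Scale the overview altitude; factor > 1 zooms out, factor < 1
        zooms in. The clamp limits are illustrative assumptions."""
        return max(min_alt, min(max_alt, altitude * factor))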
The in-scene view is a less constrained 3D perspective view, and directly represents what would be seen by a user located in the virtual world of the 3D model. The user's line-of-sight or viewing direction may be directed at, or close to, horizontal, or it may be inclined upwardly or downwardly at any elevation angle, and may face any azimuth angle (compass direction). This closely approximates how an upright human is able to view his/her surroundings within the 3D scene. The in-scene view will typically be from near ground level, as in the case of an urban street scene, for example, but is not constrained to be so; it may be from any aspect to which the user navigates.
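Converting such an azimuth/elevation pair into a line-of-sight vector is standard trigonometry; a sketch (not from the patent, assuming an East/North/up world frame):

    import math

    def view_direction(azimuth_deg: float, elevation_deg: float):
        """Unit line-of-sight vector: azimuth is the compass angle
        (0 = North, 90 = East); elevation tilts the gaze above (positive)
        or below (negative) the horizontal."""
        az = math.radians(azimuth_deg)
        el = math.radians(elevation_deg)
        return (math.sin(az) * math.cos(el),   # East component
                math.cos(az) * math.cos(el),   # North component
                math.sin(el))                  # vertical (up) component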
There are two modes of navigation that the user can perform in the in-scene view: 1) joystick mode and 2) mouse mode. In joystick mode, the user moves the joystick handle and operates buttons on the handle to control user location within the virtual 3D world. In mouse mode, the user clicks on a new vantage or ‘look-from’ point, and then an appropriate target or ‘look-at’ point. The look-at point is employed to orient the user as though standing at the new vantage point. To select a new vantage point, the mouse pointer is placed within the main viewport, and the left mouse button is actuated. The new look-at point is selected while the user continues to hold the left mouse button down, and moves the mouse pointer. In both of these steps the mouse pointer is positioned on the actual 3D model displayed in the main viewport.
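The press/drag/release sequence of the mouse mode might be handled as in the following sketch (illustrative only; the pick_3d_point callback, which maps a screen position to a point on the displayed 3D model, is assumed and not shown):

    # Hypothetical handler for mouse-mode navigation.
    class MouseNavigator:
        def __init__(self, pick_3d_point):
            # pick_3d_point(screen_x, screen_y) -> (x, y, z) on the model
            self.pick = pick_3d_point
            self.look_from = None
            self.look_at = None

        def on_left_press(self, sx, sy):
            self.look_from = self.pick(sx, sy)    # new vantage point

        def on_drag(self, sx, sy):
            if self.look_from is not None:
                self.look_at = self.pick(sx, sy)  # target to face

        def on_left_release(self):
            """Commit: stand at look-from, oriented toward look-at."""
            result = (self.look_from, self.look_at)
            self.look_from = self.look_at = None
            return result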
In the course of navigating through or otherwise manipulating the image, an interpolation operator is used to effect a gradual or smooth transition of the image between its starting and target locations. In addition, each viewing camera location/orientation is updated by the interpolation function assigned to it. Different interpolation functions may be assigned to a camera at different times.
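One common choice for such a function is an ease-in/ease-out ramp; the following sketch (an assumption, since the text does not name a specific function) shows how a swappable easing function might drive one pose component:

    def smoothstep(t: float) -> float:
        """Ease-in/ease-out ramp on [0, 1] for gradual transitions."""
        t = max(0.0, min(1.0, t))
        return t * t * (3.0 - 2.0 * t)

    def blend(start: float, target: float, t: float, ease=smoothstep) -> float:
        """Blend one pose component; a different `ease` function may be
        assigned to a camera at different times."""
        return start + (target - start) * ease(t)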
As pointed out above, the user may operate a pushbutton on the user interface to ‘toggle’ between the two scene perspectives displayed by the respective main and inset viewports. The two views are mutually synchronized, so that the viewing cameras move in a coordinated fashion to swap view locations, as the two viewports exchange their respective display roles. The user may toggle between the two views at any time, even during the transition initiated by a previous toggle state change. In this latter case, the motion simply reverses and the viewpoints begin returning to their immediately previous locations. Since the toggling mechanism of the invention causes the two views to be swapped simply and smoothly, user distraction is minimized, which helps the user retain a mental picture of his/her location in the virtual world.
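The reversible swap can be captured by a single transition parameter whose direction of travel flips on each toggle, as in this hypothetical sketch (the class name and duration are assumptions):

    class ViewSwap:
        """Drives the coordinated camera swap; t runs from 0 (original
        roles) to 1 (swapped roles). Toggling mid-transition reverses the
        direction, returning the viewpoints toward their previous places."""
        def __init__(self, duration_s: float = 1.5):
            self.duration = duration_s
            self.t = 0.0
            self.direction = 0    # +1 swapping, -1 reversing, 0 at rest

        def toggle(self):
            if self.direction != 0:
                self.direction = -self.direction      # reverse in flight
            else:
                self.direction = 1 if self.t < 0.5 else -1

        def step(self, dt: float) -> float:
            self.t = max(0.0, min(1.0,
                         self.t + self.direction * dt / self.duration))
            if self.t in (0.0, 1.0):
                self.direction = 0                    # reached an endpoint
            return self.t   # feed through an easing into the camera blend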
A principal benefit of this feature of the invention is that it allows a user at street level in an urban scene to readily and easily ascend or rise to a higher vantage point, in order to realize a better sense of location within the 3D scene.
Inventors: Chan, Ellery Y.; Faulkner, Timothy B.
Attorneys: Allen, Dyer, Doppelt, Milbrath & Gilchrist, P.A.
Assignee: Harris Corporation