Three-dimensional models with markup documents as texture

Data processing: database and file management or data structures – Database design – Data structure types

Reexamination Certificate


Details

C345S582000, C345S215000

Reexamination Certificate

active

06363404

ABSTRACT:

FIELD OF THE INVENTION
This invention relates to computer-generated composite documents and, more particularly, to computer-generated dynamic composite documents with three-dimensional models.
BACKGROUND OF THE INVENTION
The design of computer-generated composite documents is continually changing to provide more interactive and dynamic features that increase the ease of user interaction. Composite documents include components such as images, text, and animations. Hyperlinks can be associated with any of the images, text, or animations. The components may be stored at different locations, such as on a host computer's hard drive, or at a server remotely located from the host computer but accessible via an intranet or the Internet. Composite documents delivered over an intranet or the Internet are called Web pages.
Dynamic composite document features that are presently provided to users include manipulable three-dimensional models. Virtual Reality Modeling Language (VRML) is an example of a programming language that can be used to provide manipulable three-dimensional models, such as boxes, cylinders, cones, and spheres, as well as several advanced shape geometries, in composite documents. Three-dimensional geometries created from VRML can be manipulated in a composite document by a viewer. The viewer can rotate the models and perform other space transformation functions relative to the models. The surfaces of the created three-dimensional models are covered with a texture image, which is a two-dimensional grid, like a piece of graph paper. Each grid square of the two-dimensional grid can be colored a different color. The grid squares of the texture image are called texture pixels, or texels. The texels of a texture image are typically stored in an image file. The designer of a composite document can select an image file to use as a texture map with a Uniform Resource Locator (URL). Using a URL to specify a texture image file enables the designer to select texture images from anywhere. With respect to VRML, the texture image files can store a single texture image or a movie containing a series of texture images, like the frames in a film. Typically, JPEG (Joint Photographic Experts Group) and GIF (Graphics Interchange Format) file formats are used for non-movie texture images, and the MPEG (Moving Pictures Experts Group) format is used for movie textures. A block with six sides may have a different image retrieved from a different URL for each side. See A. L. Ames et al., VRML 2.0 Source Book, 1997. Unfortunately, VRML and other three-dimensional modeling languages do not support user-interface features, such as hyperlinking of information contained in the texture images applied to a three-dimensional shape. The present invention is directed to overcoming this deficiency.
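The URL-based texture selection described above can be illustrated with a minimal VRML 2.0 fragment; the file name is a placeholder, not one taken from the patent:

```
#VRML V2.0 utf8
Shape {
  appearance Appearance {
    texture ImageTexture {
      # Texture image fetched from a URL; a single JPEG or GIF frame
      # is mapped onto every face of the box.
      url "http://example.com/label.jpg"
    }
  }
  geometry Box { size 2 2 2 }
}
```

In standard VRML 2.0, the image behind such a URL is static pixels only; nothing in the format marks a region of the texture as a clickable link, which is the deficiency the invention addresses.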
SUMMARY OF THE INVENTION
In accordance with this invention, a method, system and computer-readable medium for providing hyperlinking within textures of three-dimensional models is provided. A processor with hardware and software components stores one or more markup documents in one or more texture image files of predefined three-dimensional models.
A markup document may include user-interface elements, such as various types of link elements. The processor generates a three-dimensional model for display on a display device based on predefined three-dimensional model information, predefined viewpoint information and the texture image files. The texture image files and, thus, the markup documents are mapped to predetermined locations on the predefined three-dimensional model. The displayed markup documents may include user-interface elements that may be hyperlinked to another document, file or script. As with user-interface elements included in two-dimensional markup documents, the user-interface elements included in the displayed three-dimensional model are selected by activating a cursor on the user-interface element, e.g., by placing a mouse-controlled cursor over the user-interface element and activating the appropriate mouse button, or by tabbing from link to link with use of the keyboard.
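The selection step above amounts to mapping a cursor position on the textured surface back to a texel and then to the link element rendered at that texel. The following is a minimal sketch of that hit test, assuming the renderer records a bounding box per link; the names and data layout are illustrative, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class LinkRegion:
    # Bounding box of one rendered link element, in texel coordinates.
    x0: int
    y0: int
    x1: int
    y1: int
    href: str

def hit_test(u, v, tex_w, tex_h, regions):
    """Map (u, v) texture coordinates in [0, 1] to a texel and return the
    href of the link region containing that texel, or None on a miss."""
    tx = int(u * (tex_w - 1))
    ty = int(v * (tex_h - 1))
    for r in regions:
        if r.x0 <= tx <= r.x1 and r.y0 <= ty <= r.y1:
            return r.href
    return None
```

The (u, v) pair would come from the 3-D engine's ray intersection with the model's surface; from that point on, link activation works exactly as it would for a flat two-dimensional page.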
In accordance with other aspects of the present invention, a markup document mapped to a three-dimensional model may be changed by the occurrence of an input event. The occurrence of the input event causes a new markup document to be rendered, stored in the related texture image file, and mapped to the related location on the three-dimensional model.
In accordance with further aspects of the present invention, the input event is placing a cursor on a user-interface element and activating the button of a mouse or other device that controls the position of the cursor. The location of the cursor when activation occurs is used to identify the markup document stored in the texture image file to be changed.
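The update flow in the two paragraphs above can be sketched as follows: the input event locates the affected texture image file, a new markup document is rendered into it, and the result is re-mapped onto the model. All class and method names here are illustrative assumptions; the patent does not specify an API:

```python
class TexturePage:
    """A texture image file holding one rendered markup document."""

    def __init__(self, doc, links):
        self.doc = doc      # markup source currently rendered into the texture
        self.links = links  # {(x0, y0, x1, y1): href} in texel coordinates

    def link_at(self, tx, ty):
        # Return the href whose rendered bounding box contains the texel.
        for (x0, y0, x1, y1), href in self.links.items():
            if x0 <= tx <= x1 and y0 <= ty <= y1:
                return href
        return None

def on_activate(page, tx, ty, fetch, render):
    """Input event handler: if the cursor texel hits a link, replace the
    markup document stored in the texture and re-render it.

    `fetch` retrieves a markup document by href; `render` lays it out and
    returns the new link regions. Both are supplied by the host program."""
    href = page.link_at(tx, ty)
    if href is not None:
        page.doc = fetch(href)         # new markup document for this texture
        page.links = render(page.doc)  # link regions of the re-rendered page
    return href
```

Replacing `page.doc` directly, without any cursor event, corresponds to the other input event described below, in which one markup document stored in a texture image file is simply swapped for another.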
In accordance with still other aspects of the present invention, the input event is the replacement of one markup document stored in a texture image file with another markup document.
In accordance with yet other aspects of the present invention, the markup documents are HTML (Hypertext Markup Language) documents.
As will be readily appreciated from the foregoing summary, the invention provides a new and improved method, apparatus and computer-readable medium for incorporating hyperlinking within textures of three-dimensional models.


REFERENCES:
patent: 4912669 (1990-03-01), Iwamoto et al.
patent: 5640193 (1997-06-01), Wellner
patent: 5838906 (1998-11-01), Doyle et al.
patent: 5918237 (1999-06-01), Montalbano
patent: 5983244 (1999-11-01), Nation
patent: 6018748 (2000-01-01), Smith
patent: 6029200 (2000-02-01), Beckerman et al.
patent: 6031536 (2000-02-01), Kamiwada et al.
patent: 6032150 (2000-02-01), Nguyen
patent: 6032157 (2000-02-01), Tamano et al.
patent: 6034689 (2000-03-01), White et al.
patent: 6035323 (2000-03-01), Narayen et al.
patent: 6070176 (2000-05-01), Downs et al.
Andrews, Keith, et al., “Hyper-G and Harmony: Towards the Next Generation of Networked Information Technology”, ACM 0-89791-755-3/95/0005, pp. 33-34, Dec. 1995.
Elvins, T. Todd, et al., “Web-based Volumetric Data Retrieval”, ACM 0-89791-818-5/95/12, pp. 7-12, Dec. 1995.
Flanagan, David, Java Examples in a Nutshell, O'Reilly, pp. 185-187, Sep. 1997.
Hartman, Jed, et al., The VRML 2.0 Handbook, Addison-Wesley Publishing Co., plates 1-2, 21-22, 25-29, 32-35, pp. 2-3, 7-9, 28, 62-65, 89-90, 119-136, 160-168, 187-211, 213-220, Oct. 1997.
Isaacs, Scott, Inside Dynamic HTML, Microsoft Press, pp. 57-63, 66-67, 69-74, 79, Dec. 1997.
Meyer, Tom, et al., “WAXweb: Toward Dynamic MOO-based VRML”, ACM 0-89791-818-5/95/12, pp. 105-108, Dec. 1995.
Oliver, Dick, et al., Netscape 3 Unleashed, second edition, Sams.net Publishing, pp. 314-315, 365-369, 386-388, 394-396, 518-520, 839, 842-844, Dec. 1996.
Oliver, Dick, et al., Sams' Teach Yourself HTML 4 in 24 Hours, second edition, Sams.net Publishing, pp. 181-190, Dec. 1997.
Parr, Terence J., et al., “A Language for Creating and Manipulating VRML”, ACM 0-89791-818-5/95/12, pp. 123-131, Dec. 1995.
