API communications for vertex and pixel shaders

Computer graphics processing and selective visual display system – Computer graphics display memory system – Register


Details

345/561; 345/538; 345/426; 345/522

Reexamination Certificate

active

06819325

ABSTRACT:

FIELD OF THE INVENTION
The present invention is directed to a three dimensional (3-D) graphics application programming interface (API) that provides new and improved methods and techniques for communications between application developers and procedural shaders, such as vertex and pixel shaders.
BACKGROUND OF THE INVENTION
Computer systems are commonly used for displaying graphical objects on a display screen. The purpose of three dimensional (3-D) computer graphics is to create a two-dimensional (2-D) image on a computer screen that realistically represents an object or objects in three dimensions. In the real world, objects occupy three dimensions. They have a real height, a real width and a real depth. A photograph is an example of a 2-D representation of a 3-D space. 3-D computer graphics are like a photograph in that they represent a 3-D world on the 2-D space of a computer screen.
Images created with 3-D computer graphics are used in a wide range of applications from video entertainment games to aircraft flight simulators, to portray in a realistic manner an individual's view of a scene at a given point in time. Well-known examples of 3-D computer graphics include special effects in Hollywood films such as Terminator II, Jurassic Park, Toy Story and the like.
One industry that has seen tremendous growth in the last few years is the computer game industry. The current generation of computer games is moving to 3-D graphics in ever increasing numbers, and at the same time the speed of play is being pushed ever faster. This combination has fueled a genuine need for the rapid rendering of 3-D graphics in relatively inexpensive systems.
Rendering and displaying 3-D graphics typically involves many calculations and computations. For example, to render a 3-D object, a set of coordinate points or vertices that define the object to be rendered must be formed. Vertices can be joined to form polygons that define the surface of the object to be rendered and displayed. Once the vertices that define an object are formed, the vertices must be transformed from an object or model frame of reference to a world frame of reference and finally to 2-D coordinates that can be displayed on a flat display device, such as a monitor. Along the way, vertices may be rotated, scaled, eliminated or clipped because they fall outside of a viewable area, lit by various lighting schemes and sources, colorized, and so forth. The processes involved in rendering and displaying a 3-D object can be computationally intensive and may involve a large number of vertices.
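As a rough sketch only (the 4x4 matrix layout and screen-mapping conventions below are generic illustrations, not details taken from the patent), the per-vertex work described above amounts to multiplying each vertex by a chain of transforms and then mapping the result to 2-D screen coordinates:

    #include <array>

    // A homogeneous 3-D vertex and a 4x4 transform, as commonly used in the
    // object -> world -> view -> projection pipeline described above.
    struct Vec4 { float x, y, z, w; };
    using Mat4 = std::array<std::array<float, 4>, 4>;

    // Multiply a vertex by a 4x4 matrix (column-vector convention; conventions vary).
    Vec4 Transform(const Mat4& m, const Vec4& v) {
        return { m[0][0] * v.x + m[0][1] * v.y + m[0][2] * v.z + m[0][3] * v.w,
                 m[1][0] * v.x + m[1][1] * v.y + m[1][2] * v.z + m[1][3] * v.w,
                 m[2][0] * v.x + m[2][1] * v.y + m[2][2] * v.z + m[2][3] * v.w,
                 m[3][0] * v.x + m[3][1] * v.y + m[3][2] * v.z + m[3][3] * v.w };
    }

    // Object-space vertex -> world -> view -> clip space -> 2-D screen coordinates.
    // The world, view, and projection matrices are supplied by the application.
    Vec4 ProjectVertex(const Vec4& objectVertex,
                       const Mat4& world, const Mat4& view, const Mat4& projection,
                       float screenWidth, float screenHeight) {
        Vec4 clip = Transform(projection, Transform(view, Transform(world, objectVertex)));
        float invW = 1.0f / clip.w;    // perspective divide
        float ndcX = clip.x * invW;    // normalized device coordinates in [-1, 1]
        float ndcY = clip.y * invW;
        return { (ndcX * 0.5f + 0.5f) * screenWidth,
                 (1.0f - (ndcY * 0.5f + 0.5f)) * screenHeight,  // flip y for screen space
                 clip.z * invW, 1.0f };
    }

In practice the world, view and projection matrices are built from the object's placement, the camera position and the desired field of view, and the same chain is applied to every vertex in the scene.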
To create a 3-D computer graphical representation, the first step is to represent the objects to be depicted as mathematical models within the computer. 3-D models are made up of geometric points within a coordinate system consisting of x, y and z axes, which correspond to width, height and depth, respectively. Objects are defined by a series of points, called vertices. The location of a point, or vertex, is defined by its x, y and z coordinates. When three or more of these points are connected, a polygon is formed. The simplest polygon is a triangle.
3-D shapes are created by connecting a number of 2-D polygons. Curved surfaces are represented by connecting many small polygons. The view of a 3-D shape composed only of polygon outlines is called a wire frame view. In sum, the computer creates 3-D objects by connecting a number of 2-D polygons. Before the 3-D object is ultimately rendered on a 2-D display screen, however, the data for sophisticated graphics objects undergoes many different mathematical transformations that involve considerable specialized computation unique to 3-D representation.
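A minimal sketch, assuming a conventional indexed-triangle representation (the type names below are illustrative and not drawn from the patent), of how such polygon-based models are commonly stored, and of how a wire frame view can be produced by drawing each triangle's edges:

    #include <cstdint>
    #include <vector>

    // One point in the model's coordinate system: x (width), y (height), z (depth).
    struct Vertex { float x, y, z; };

    // The simplest polygon, a triangle: three indices into the vertex list.
    struct Triangle { std::uint32_t v0, v1, v2; };

    // A polygonal model; curved surfaces are approximated with many small triangles.
    struct Mesh {
        std::vector<Vertex> vertices;
        std::vector<Triangle> triangles;
    };

    // A wire frame view draws only the polygon outlines: for each triangle,
    // its three edges. DrawLine stands in for whatever the display layer provides.
    template <typename DrawLine>
    void DrawWireFrame(const Mesh& mesh, DrawLine drawLine) {
        for (const Triangle& t : mesh.triangles) {
            drawLine(mesh.vertices[t.v0], mesh.vertices[t.v1]);
            drawLine(mesh.vertices[t.v1], mesh.vertices[t.v2]);
            drawLine(mesh.vertices[t.v2], mesh.vertices[t.v0]);
        }
    }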
As early as the 1970s, 3-D rendering systems were able to describe the “appearance” of objects according to parameters. These and later methods parameterize the perceived color of an object based on the position and orientation of its surface and on the light sources illuminating it, and the appearance of the object is calculated from those parameters. The parameters further include values such as the diffuse color, the specular reflection coefficient, the specular color, the reflectivity, and the transparency of the material of the object. Such parameters are collectively referred to as the shading parameters of the object.
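For illustration only, the shading parameters named above might be grouped into a single material description along the following lines (the structure and field names are invented for this sketch, not taken from the patent):

    // Hypothetical grouping of the shading parameters named above;
    // the field names are illustrative only.
    struct Color { float r, g, b; };

    struct ShadingParameters {
        Color diffuseColor;          // base color under diffuse lighting
        float specularCoefficient;   // strength of specular highlights
        Color specularColor;         // color of specular highlights
        float reflectivity;          // amount of mirror-like reflection
        float transparency;          // 0 = opaque, 1 = fully transparent
    };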
Early systems could only ascribe a single value to each shading parameter, so the parameters remained constant and uniform across the entire surface of the object. Later systems allowed non-uniform parameters (transparency, for instance) that can take different values over different parts of the object. Two prominent and distinct techniques have been used to describe the values taken by these non-uniform parameters over the various parts of the object's surface: procedural shading and texture mapping. Texture mapping is pixel-based and resolution-dependent.
Procedural shading describes the appearance of a material at any point of a 1-D, 2-D or 3-D space by defining a function (often called the procedural shader) from this space into shading parameter space. The object is “immersed” in the original 1-D, 2-D or 3-D space, and the values of the shading parameters at a given point on the surface of the object are given by the result of the procedural shading function at that point. For instance, procedural shaders that approximate the appearance of wood, marble or other natural materials have been developed and can be found in the literature.
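As a toy illustration (the banding function below is invented for this sketch; it is not a shader from the patent or from the literature it refers to), a procedural shader can be viewed as an ordinary function from a point of 3-D space to shading parameters:

    #include <cmath>

    // Minimal stand-ins for the shading parameter types (see the earlier sketch).
    struct Color { float r, g, b; };
    struct ShadingParameters { Color diffuseColor; float specularCoefficient; };

    // Toy procedural shader: maps a point of 3-D space to shading parameters by
    // alternating light and dark bands around the z axis, loosely evoking wood grain.
    ShadingParameters WoodLikeShader(float x, float y, float z) {
        float radius = std::sqrt(x * x + y * y);
        float band = 0.5f * (std::sin(20.0f * radius + 3.0f * z) + 1.0f);  // in [0, 1]
        ShadingParameters p;
        p.diffuseColor = { 0.55f + 0.25f * band, 0.35f + 0.15f * band, 0.15f };
        p.specularCoefficient = 0.1f;
        return p;
    }

Because such a function is evaluated wherever it is needed, the resulting appearance is independent of any texture resolution, in contrast to pixel-based texture mapping.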
The rendering of graphics data in a computer system is a collection of resource-intensive processes. The process of shading, i.e., applying complex techniques to specialized graphics data structures to determine values, such as color, associated with those structures, exemplifies such a computation-intensive and complex process. For each application developer to design these shading techniques for each program developed, and/or to design each program for potentially varying third-party graphics hardware, would be a Herculean task and would produce much inconsistency.
Consequently, the process of shading has generally been standardized to some degree. By passing source code designed to work with a shader into an application, a shader becomes an object that the application may create and utilize in order to facilitate the efficient drawing of complex video graphics. Vertex shaders and pixel shaders are examples of such shaders.
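As a purely hypothetical sketch of that pattern (the interface and function names below are invented for illustration; they are not the API defined by the patent or by any particular graphics runtime), an application might hand shader source code to the runtime, receive back an opaque shader object, and bind it before drawing:

    #include <cstdint>
    #include <string>

    // Hypothetical, simplified runtime interface, invented for illustration only;
    // a real 3-D API would expose its own creation and binding calls.
    using ShaderHandle = std::uint32_t;

    class GraphicsDevice {
    public:
        // The application hands over source code written for the shader model;
        // the runtime would validate/translate it and return an opaque handle.
        ShaderHandle CreateVertexShader(const std::string& source) { return Register(source); }
        ShaderHandle CreatePixelShader(const std::string& source)  { return Register(source); }

        // Bind the shader objects that subsequent draw calls will run
        // once per vertex and once per pixel, respectively.
        void SetVertexShader(ShaderHandle shader) { boundVertexShader_ = shader; }
        void SetPixelShader(ShaderHandle shader)  { boundPixelShader_ = shader; }

    private:
        ShaderHandle Register(const std::string& /*source*/) { return nextHandle_++; }
        ShaderHandle nextHandle_ = 1;
        ShaderHandle boundVertexShader_ = 0;
        ShaderHandle boundPixelShader_ = 0;
    };

    // Typical usage pattern: create the shader objects once, bind them before drawing.
    void SetupShaders(GraphicsDevice& device,
                      const std::string& vertexSource,
                      const std::string& pixelSource) {
        device.SetVertexShader(device.CreateVertexShader(vertexSource));
        device.SetPixelShader(device.CreatePixelShader(pixelSource));
    }

Whatever the concrete calls look like, the application's view is the same: it creates shader objects once and binds them before drawing, without manipulating the underlying hardware directly.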
Prior to their current implementation in specialized hardware chips, vertex and pixel shaders were sometimes implemented wholly or mostly as software code, and sometimes as a combination of more rigid pieces of hardware with software for controlling that hardware. These implementations frequently contained a CPU or emulated one using the system's CPU. For example, the hardware implementations directly integrated a CPU chip into their design to perform the processing required of shading tasks. While a CPU adds a great deal of flexibility to the shading process because of the range of functionality that a standard processing chip offers, the incorporation of a CPU adds overhead to the specialized shading process. Given the state of hardware at the time, however, there was little choice.
Today, though, advances in hardware technology have made it possible to move functionality previously implemented in software into specialized hardware. As a result, today's pixel and vertex shaders are implemented as specialized, programmable hardware chips. Exemplary hardware designs of vertex and pixel shader chips are shown in FIGS. 1A and 1B and are described later in more detail. These vertex and pixel shader chips are highly specialized and thus do not behave as the CPU-based implementations of the past did.
Thus, a need has arisen for a 3-D graphics API that exposes the specialized functionality of today's vertex and pixel shaders. In particular, since present vertex shaders are being implemen
