Architecture and application programming interfaces for...

Electrical computers and digital processing systems: multicomputer data transferring – Remote data accessing

Reexamination Certificate

Details

U.S. Classifications: C709S231000, C709S241000
Kind: Reexamination Certificate
Status: active
Patent number: 06631403

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention is directed to multimedia data storage, transmission and compression systems and methods. In particular, this invention is directed to systems and methods that implement the MPEG-J multimedia data storage, transmission and compression standards. This invention is also directed to control systems and methods that allow for graceful degradation and enhanced functionality and user interactivity of MPEG-4 systems.
2. Related Art
The need for interoperability, guaranteed quality and performance, and economies of scale in chip design, as well as the cost involved in content generation for a multiplicity of formats, has led to advances in standardization in the areas of multimedia coding, packetization and robust delivery. In particular, the International Organization for Standardization Moving Picture Experts Group (ISO MPEG) has created a number of standards, such as MPEG-1, MPEG-2, MPEG-4 and MPEG-J, to standardize bitstream syntax and decoding semantics for coded multimedia.
In MPEG-1 systems and MPEG-2 systems, the audio-video model was very simple, where a given elementary stream covered the entire scene. In particular, MPEG-1 systems and MPEG-2 systems were only concerned with representing temporal attributes. Thus, there was no need to represent spatial attributes in a scene in MPEG-1 systems and MPEG-2 systems.
The success of MPEG-1 and MPEG-2, the bandwidth limitations of the Internet and other distributed networks and of mobile channels, the flexibility of distributed network-based data access using browsers, and the increasing need for interactive personal communication have opened up new paradigms for multimedia usage and control. The MPEG-4 standard addresses coding of audio-visual information in the form of individual objects and a system for combining and synchronizing playback of these objects.
MPEG-4 systems introduced audio-video objects, requiring that the spatial attributes of the scene also be correctly represented. Including synthetic audio-video content in MPEG-4 systems is a departure from the model of MPEG-1 systems and MPEG-2 systems, where only natural audio-video content representation was addressed. MPEG-4 systems thus provide the required methods and structures for representing synthetic and natural audio-video information. In particular, MPEG-4 audio-video content has temporal and spatial attributes that need to be correctly represented at the point of content generation, i.e., during encoding, and that also need to be correctly presented at the player/decoder. Because the MPEG-4 player/decoder also allows for limited user interactivity, it should more properly be referred to as an MPEG-4 browser.
Correctly representing temporal attributes in MPEG-4 systems is essentially no different than in MPEG-1 systems and MPEG-2 systems. For these earlier standards, the temporal attributes were used to synchronize the audio portions of the data with the video portions of the data, i.e., audio-video synchronization such as lip-synchronization, and to provide system clock information to the decoder to help buffer management. Because significantly more diverse types of elementary streams can be included in MPEG-4 systems, representing temporal attributes is more complex. But, as mentioned earlier, the fundamental methods for representing temporal attributes in MPEG-4 systems are essentially the same as for MPEG-1 systems and MPEG-2 systems.
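As a concrete illustration of this timing model, the following minimal sketch in Java (in keeping with the MPEG-J subject matter; the class and method names are hypothetical, not part of any MPEG API) shows how a decoder might compare a 90 kHz presentation timestamp against its system clock to decide whether a unit is early, on time, or too late:

// Hypothetical sketch of timestamp-based synchronization; not an MPEG API.
public class AvSync {
    static final long CLOCK_HZ = 90_000;           // MPEG system clock: 90 kHz
    private final long startNanos = System.nanoTime();

    /** Wall-clock time elapsed since start, expressed in 90 kHz ticks. */
    long nowTicks() {
        long elapsedMs = (System.nanoTime() - startNanos) / 1_000_000L;
        return elapsedMs * CLOCK_HZ / 1_000L;      // milliseconds -> ticks
    }

    /** Decide whether a unit stamped with pts should be held, shown, or dropped. */
    String schedule(long pts) {
        long delta = pts - nowTicks();
        if (delta > 0)               return "hold for " + delta + " ticks";
        if (delta > -CLOCK_HZ / 10)  return "present now";   // within 100 ms of due
        return "drop (too late)";
    }

    public static void main(String[] args) {
        AvSync sync = new AvSync();
        System.out.println(sync.schedule(sync.nowTicks() + 9_000)); // due in ~100 ms
    }
}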
In the MPEG-1 systems and MPEG-2 systems standards, the specifications extend monolithically from the packetization layer all the way to the transport layer. For example, the MPEG-2 systems Transport Stream specification defined the packetization of elementary streams (i.e., the PES layer) as well as the Transport layer. With MPEG-4 systems, this restriction has been relaxed. The transport layer is not defined normatively, as the transport layer is very application specific. It is left to other standards setting bodies to define the transport layer for their respective application areas. One such body is the Internet Engineering Task Force (IETF), which will define standards for transporting MPEG-4 streams over the Internet.
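The layering described above can be sketched as follows: an elementary-stream access unit is wrapped in a simple PES-like packet, while actual transport framing is left to a lower, application-specific layer. The field layout here is purely illustrative and does not follow the normative MPEG-2 PES syntax:

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical packetization sketch; field layout is illustrative only.
public class PesLikePacketizer {
    static byte[] packetize(int streamId, long pts, byte[] accessUnit) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(0x000001);            // 4-byte sync marker (real PES uses a 3-byte prefix)
        out.writeByte(streamId);           // which elementary stream this belongs to
        out.writeShort(accessUnit.length); // payload length
        out.writeLong(pts);                // 90 kHz presentation timestamp
        out.write(accessUnit);             // the coded media itself
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] pkt = packetize(0xE0, 123_456L, new byte[] {1, 2, 3});
        System.out.println("packet bytes: " + pkt.length);
    }
}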
Representing spatial information in MPEG-4 systems is carried out using a parametric approach to scene description. This parametric approach uses the Virtual Reality Modeling Language (VRML). The Virtual Reality Modeling Language allows spatial and temporal relationships between objects to be specified, and allows description of a scene using a scene graph approach.
The scene description defines one or more dynamic properties of one or more audio and video objects. However, in MPEG-4 systems, the Virtual Reality Modeling Language has been extended to provide features that the language otherwise lacks.
MPEG-4 uses a binary representation, the BInary Format for Scene (BIFS), of the constructs central to VRML, and extends VRML in many ways to handle real-time audio/video data and facial/body animation. The key extensions to the Virtual Reality Modeling Language for MPEG-4 systems involve streaming, timing and integrating 2D and 3D objects. These extensions are all included in the BIFS specification.
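The scene-graph idea behind these extensions can be sketched in a few lines of Java. Nodes carry spatial attributes (here just a 2D translation) and group children, so a traversal composites each object at the correct position; the class and field names are illustrative, not actual BIFS node names:

import java.util.ArrayList;
import java.util.List;

// Hypothetical scene-graph sketch; names are illustrative, not BIFS nodes.
public class SceneGraphDemo {
    static class Node {
        final String name;
        final double x, y;                       // spatial attributes (translation)
        final List<Node> children = new ArrayList<>();
        Node(String name, double x, double y) { this.name = name; this.x = x; this.y = y; }
        void add(Node child) { children.add(child); }

        /** Depth-first traversal, accumulating parent transforms. */
        void render(double px, double py) {
            System.out.printf("%s at (%.1f, %.1f)%n", name, px + x, py + y);
            for (Node c : children) c.render(px + x, py + y);
        }
    }

    public static void main(String[] args) {
        Node root = new Node("scene", 0, 0);
        Node group = new Node("group", 100, 50);
        group.add(new Node("video-object", 10, 10));
        group.add(new Node("audio-object", 0, 0));
        root.add(group);
        root.render(0, 0);                       // composite the whole scene
    }
}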
FIG. 1 outlines one exemplary embodiment of an MPEG-4 systems player, which is also referred to as a “Presentation Engine” or an “MPEG-4 browser”. The main components on the main data path are the demultiplexer layer, the media decoders, and the compositor/renderer. Between these three sets of components are the decoder buffers and composition buffers, respectively. The MPEG-4 systems decoder model has been developed to provide guidelines for platform developers. The binary format for scene data is extracted from the demultiplexer layer and used to construct the scene graph.
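A minimal sketch of this data path, using bounded queues to stand in for the decoder and composition buffers of the decoder model (all names here are illustrative):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch of the FIG. 1 pipeline:
// demultiplexer -> decoder buffer -> decoder -> composition buffer -> compositor.
public class PlayerPipeline {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> decoderBuffer = new ArrayBlockingQueue<>(4);
        BlockingQueue<String> compositionBuffer = new ArrayBlockingQueue<>(4);

        Thread demux = new Thread(() -> {            // demultiplexer layer
            try {
                for (int i = 0; i < 3; i++) decoderBuffer.put("access-unit-" + i);
            } catch (InterruptedException ignored) { }
        });
        Thread decoder = new Thread(() -> {          // media decoder
            try {
                for (int i = 0; i < 3; i++)
                    compositionBuffer.put("decoded(" + decoderBuffer.take() + ")");
            } catch (InterruptedException ignored) { }
        });

        demux.start();
        decoder.start();
        for (int i = 0; i < 3; i++)                  // compositor/renderer
            System.out.println("composite " + compositionBuffer.take());
        demux.join();
        decoder.join();
    }
}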
Using application programming interfaces (APIs) has long been recognized in the software industry as a way to achieve standardized operations and functions over a number of different types of computer platforms. Typically, although the operations can be standardized by defining the API, the performance of those operations may still differ across platforms, as vendors with an interest in a specific platform may provide implementations optimized for that platform.
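For example, an API can fix an operation's contract while vendors supply platform-optimized implementations behind it; the interface and class names in this sketch are illustrative:

// Hypothetical sketch: one API contract, multiple vendor implementations.
public class ApiPortability {
    interface VideoDecoder {
        void decode(byte[] accessUnit);              // the standardized operation
    }
    static class ReferenceDecoder implements VideoDecoder {
        public void decode(byte[] au) { System.out.println("software decode, " + au.length + " bytes"); }
    }
    static class VendorDecoder implements VideoDecoder {
        public void decode(byte[] au) { System.out.println("hardware-assisted decode, " + au.length + " bytes"); }
    }

    public static void main(String[] args) {
        // The caller is written once against the API; the binding differs per platform.
        VideoDecoder d = System.getProperty("use.hw") != null
                ? new VendorDecoder() : new ReferenceDecoder();
        d.decode(new byte[1024]);
    }
}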
To enhance the features of VRML and to allow programmatic control, DimensionX has released a set of APIs known as Liquid Reality. Recently, Sun Microsystems has announced an early version of Java3D, an API specification that supports representing synthetic audiovisual objects as a scene graph. Sun Microsystems has also released the Java Media Framework Player API, a framework for multimedia playback.
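For context, a minimal playback sketch against the Java Media Framework Player API mentioned above, assuming a JMF installation and using a placeholder media URL:

import javax.media.Manager;
import javax.media.MediaLocator;
import javax.media.Player;

// Minimal JMF playback sketch; the file path is a placeholder.
public class JmfPlayback {
    public static void main(String[] args) throws Exception {
        MediaLocator src = new MediaLocator("file:///tmp/example.mpg"); // placeholder
        Player player = Manager.createPlayer(src);  // selects a handler for the content type
        player.start();                             // realizes, prefetches, then plays
    }
}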
SUMMARY OF THE INVENTION
As noted above, when coded multimedia is used for distributed-network and local-network applications on a multimedia data processing system, such as a personal computer, a number of situations may arise. First, the bandwidth for networked access of multimedia may be either limited or time-varying, requiring transmission of only the most significant information first, followed by transmission of additional information as more bandwidth becomes available.
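One way to realize this is layered (scalable) transmission: send the base layer first, then add enhancement layers as the bit budget allows. The layer names and bit rates in this sketch are illustrative:

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of layered transmission under a bandwidth budget.
public class LayeredSender {
    record Layer(String name, int kbps) { }

    /** Greedily pick layers, base first, within the available bit budget. */
    static List<Layer> selectLayers(List<Layer> ordered, int availableKbps) {
        List<Layer> chosen = new ArrayList<>();
        int used = 0;
        for (Layer l : ordered) {
            if (used + l.kbps() > availableKbps) break;   // layers build on each other
            chosen.add(l);
            used += l.kbps();
        }
        return chosen;
    }

    public static void main(String[] args) {
        List<Layer> layers = List.of(
                new Layer("base", 128),
                new Layer("enhancement-1", 256),
                new Layer("enhancement-2", 512));
        System.out.println(selectLayers(layers, 400));    // -> base + enhancement-1
    }
}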
Second, regardless of the bandwidth available, the client, i.e., the multimedia data processing system, decoding the transmitted information may be limited in processing and/or memory resources. Furthermore, these resources may be time-varying. Third, a multimedia user may require highly interactive nonlinear browsing and playback. This is not unusual, because significant amounts of textual content on distributed networks, such as the Internet, can already be browsed using hyperlinked features, and the same is expected to be true for presentations employing coded audio-visual objects. The parametric MPEG-4 system may only be able to deal with these situations in a very limited way. For example, when the parametric MPEG-4 system is incapable of decoding or presenting all of the coded audio-visual objects, the parametric MPEG-4 system may respond by dropping those objects or temporal occ
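A minimal sketch of this kind of priority-based admission, where lower-priority objects are dropped when the decode budget is exhausted (all names and costs are illustrative):

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch of graceful degradation by object priority.
public class GracefulDegradation {
    record AvObject(String name, int priority, int decodeCost) { }

    /** Keep the highest-priority objects that fit the available decode budget. */
    static List<AvObject> admit(List<AvObject> scene, int budget) {
        List<AvObject> sorted = new ArrayList<>(scene);
        sorted.sort(Comparator.comparingInt(AvObject::priority).reversed());
        List<AvObject> kept = new ArrayList<>();
        int used = 0;
        for (AvObject o : sorted) {
            if (used + o.decodeCost() <= budget) {   // admit while budget remains
                kept.add(o);
                used += o.decodeCost();
            }                                        // else: drop the object
        }
        return kept;
    }

    public static void main(String[] args) {
        List<AvObject> scene = List.of(
                new AvObject("anchor-video", 10, 60),
                new AvObject("background", 2, 30),
                new AvObject("logo-overlay", 5, 20));
        System.out.println(admit(scene, 80));        // keeps anchor-video, logo-overlay
    }
}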
