Title: Method and apparatus for buffering in multi-node data distribution architectures
Patent Number: 06748508
Type: Reexamination Certificate (active)
Filed: 2000-10-17
Issued: 2004-06-08
Examiner: Sparks, Donald (Department: 2186)
US Class: Electrical computers and digital processing systems: memory – Storage accessing and control – Memory configuring
Other Classes: C711S150000, C711S152000, C711S163000, C711S171000, C709S203000, C709S219000, C709S225000, C709S226000, C709S231000, C709S238000, C710S052000, C710S056000, C710S057000, C710S200000, C710S220000, C725S087000, C725S092000, C725S093000, C725S094000
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to the field of electronic communications, and in particular to a method and apparatus for buffering in multi-node data distribution architectures.
Sun, Sun Microsystems, the Sun logo, Solaris and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and other countries. All SPARC trademarks are used under license and are trademarks of SPARC International, Inc. in the United States and other countries. Products bearing SPARC trademarks are based upon an architecture developed by Sun Microsystems, Inc.
2. Background Art
In networked computing, data is often sent from one network location to another. Current schemes for distributing such data, however, are inadequate, particularly when the data is generated in real time. Before the drawbacks of these prior art schemes are discussed further, an application architecture in which this problem typically occurs is described below.
Multi-tier Application Architecture
In the multi-tier application architecture, a client communicates requests to a server for data, software and services, for example, and the server responds to the requests. The server's response may entail communication with a database management system for the storage and retrieval of data.
The multi-tier architecture includes at least a database tier that includes a database server, an application tier that includes an application server and application logic (i.e., software application programs, functions, etc.), and a client tier. The application server responds to application requests received from the client and forwards data requests to the database server; the database server responds to those data requests.
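As a rough illustration of this request flow, the following Java sketch shows an application-tier object forwarding a client request to a database-tier object; the interfaces and method names are hypothetical, not from the patent:

```java
// Illustrative sketch of the multi-tier request flow; the interfaces and
// method names are hypothetical, not part of the patent.
interface DatabaseServer {
    String query(String dataRequest);           // database tier: manages and returns data
}

class ApplicationServer {
    private final DatabaseServer database;

    ApplicationServer(DatabaseServer database) {
        this.database = database;
    }

    // The application tier applies its application logic to the client's
    // request, forwards the resulting data request to the database tier,
    // and returns the response to the client tier.
    String handleClientRequest(String clientRequest) {
        String dataRequest = "lookup:" + clientRequest;  // placeholder application logic
        return database.query(dataRequest);
    }
}
```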
FIG. 1 provides an overview of a multi-tier architecture. Client tier 100 typically consists of a computer system that provides a graphic user interface (GUI) generated by a client 110, such as a browser or other user interface application. Conventional browsers include Internet Explorer and Netscape Navigator, among others. Client 110 generates a display from, for example, a specification of GUI elements (e.g., a file containing input, form, and text elements defined using the Hypertext Markup Language (HTML)) and/or from an applet (i.e., a program such as a program written using the Java™ programming language, or other platform independent programming language, that runs when it is loaded by the browser).
Further application functionality is provided by application logic managed by application server 120 in application tier 130. The apportionment of application functionality between client tier 100 and application tier 130 is dependent upon whether a “thin client” or “thick client” topology is desired. In a thin client topology, the client tier (i.e., the end user's computer) is used primarily to display output and obtain input, while the computing takes place in other tiers. A thick client topology, on the other hand, uses a more conventional general purpose computer having processing, memory, and data storage abilities. Database tier 140 contains the data that is accessed by the application logic in application tier 130. Database server 150 manages the data, its structure and the operations that can be performed on the data and/or its structure.
Application server 120 can include applications such as a corporation's scheduling, accounting, personnel and payroll applications, for example. Application server 120 manages requests for the applications that are stored therein. Application server 120 can also manage the storage and dissemination of production versions of application logic. Database server 150 manages the database(s) that manage data for applications. Database server 150 responds to requests to access the scheduling, accounting, personnel and payroll applications data, for example.
Connection 160 is used to transmit data between client tier 100 and application tier 130, and may also be used to transfer the application logic to client tier 100. The client tier can communicate with the application tier via, for example, the Remote Method Invocation (RMI) application programming interface (API) available from Sun Microsystems™. The RMI API provides the ability to invoke methods, or software modules, that reside on another computer system. Parameters are packaged and unpackaged for transmittal to and from the client tier. Connection 170 between application server 120 and database server 150 represents the transmission of requests for data and the responses to such requests from applications that reside in application server 120.
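As an illustration of this style of communication, the following sketch uses the standard java.rmi registry API; the DataService interface, its method, and the host and binding names are assumptions, not part of the patent:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;

// Illustrative remote interface; the name and method are hypothetical.
// A client-tier program can invoke fetchData() on an application-tier
// object as if it were local; RMI packages (marshals) the parameters
// and the return value for transmission over the network.
interface DataService extends Remote {
    byte[] fetchData(String resourceName) throws RemoteException;
}

class DataClient {
    public static void main(String[] args) throws Exception {
        // Look up the remote object exported by the application tier
        // (host name and binding name are assumptions for this sketch).
        Registry registry = LocateRegistry.getRegistry("app-server.example.com");
        DataService service = (DataService) registry.lookup("DataService");
        byte[] data = service.fetchData("schedule/today");
        System.out.println("Received " + data.length + " bytes");
    }
}
```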
Elements of the client tier, application tier and database tier (e.g., client 110, application server 120 and database server 150) may execute within a single computer. However, in a typical system, elements of the client tier, application tier and database tier may execute within separate computers interconnected over a network such as a LAN (local area network) or WAN (wide area network).
Data Distribution
Some applications in computer networks require the ability to distribute data collected at one point in the network to other points in the network; video conferencing is one example. First, data is collected at one point of the network, and one or more other points in the network request that data. The data is then distributed to the requesting points. However, if the collection point does not have the computational ability to distribute the data throughout the network (e.g., in thin client topologies), this approach is insufficient for distributing data.
Buffering
When the amount of data being distributed is larger than the amount of storage space in a distribution unit, the data is streamed through a buffer. Data is also streamed through a buffer when distribution is handled in real time. One example is a video conferencing application distributing live video data. The video data is collected at one point, streamed to a distribution unit, buffered and distributed to multiple points in the network.
A prior art method of buffering and dispatching data is to store incoming data in a ring buffer. In a ring buffer, data is stored in an ordered set of N storage locations. When a new data item is stored, it is placed in the storage location that is next in order. When the end of the order is reached, the order wraps around to the first storage location. Thus, a data item remains in the buffer only until N more data items are stored; the Nth data item stored after it overwrites its storage location.
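As a rough illustration, a minimal fixed-size ring buffer might look like the following Java sketch; the class and method names are illustrative, not taken from the patent:

```java
// Illustrative fixed-size ring buffer; names are hypothetical, not from the patent.
// The write index wraps around, so the Nth item stored after a given item
// overwrites that item's storage location.
class RingBuffer<T> {
    private final Object[] slots;
    private int next = 0;   // index of the storage location that is next in order

    RingBuffer(int capacity) {
        slots = new Object[capacity];
    }

    // Store a new data item in the next storage location, overwriting
    // whatever was previously held there.
    synchronized void put(T item) {
        slots[next] = item;
        next = (next + 1) % slots.length;
    }

    // Read the item currently held at a given storage location (may be null
    // if that location has not yet been written).
    @SuppressWarnings("unchecked")
    synchronized T get(int index) {
        return (T) slots[index % slots.length];
    }
}
```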
FIG. 2 illustrates a prior art method of dispatching data buffered in a ring buffer. At step 200, the storage location of a data item is locked. This means the storage location can only be used by the client which locked it. At step 210, the storage location is copied to a different location. At step 220, the storage location is unlocked so other clients may use the data. At step 230, the copy of the data is processed by the client. At step 240, the processed copy of the data is dispatched to the desired location.
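As a rough illustration, the following Java sketch walks through this lock, copy, unlock, process, dispatch sequence using one ReentrantLock per storage location; the class, the slot layout, and the process/send placeholders are assumptions, not the patent's implementation:

```java
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch of the FIG. 2 style of dispatch: lock the slot, copy the
// data out, unlock the slot, then process and dispatch the private copy.
// Slot layout, lock granularity, and the placeholders are assumptions.
class CopyingDispatcher {
    private final byte[][] slots;            // ring-buffer storage locations
    private final ReentrantLock[] locks;     // one lock per storage location

    CopyingDispatcher(int capacity) {
        slots = new byte[capacity][];
        locks = new ReentrantLock[capacity];
        for (int i = 0; i < capacity; i++) {
            slots[i] = new byte[0];
            locks[i] = new ReentrantLock();
        }
    }

    void dispatchFrom(int slot) {
        byte[] copy;
        locks[slot].lock();                  // step 200: lock the storage location
        try {
            copy = slots[slot].clone();      // step 210: copy to a different location
        } finally {
            locks[slot].unlock();            // step 220: unlock for other clients
        }
        byte[] processed = process(copy);    // step 230: process the private copy
        send(processed);                     // step 240: dispatch to the desired location
    }

    private byte[] process(byte[] data) { return data; }          // placeholder processing
    private void send(byte[] data) { /* network send omitted */ } // placeholder dispatch
}
```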
FIG. 3 illustrates another prior art method of dispatching data buffered in a ring buffer. At step 300, the storage location of a data item is locked. At step 310, the data item is processed by the client. At step 320, the processed data is dispatched to the desired location. At step 330, any changes to the data item are undone. At step 340, the storage location is unlocked so other clients may use the data.
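A comparable sketch of the FIG. 3 sequence, in which the storage location stays locked while the data is processed in place and any changes are undone before unlocking; again the names and the clone-based undo are assumptions made only for this illustration:

```java
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch of the FIG. 3 style of dispatch: the slot stays locked
// for the entire process-and-dispatch sequence, and any in-place changes are
// undone before the lock is released. Names are hypothetical; a real
// implementation would reverse the specific transformation rather than keep
// a saved copy for the undo step.
class InPlaceDispatcher {
    private final byte[][] slots;
    private final ReentrantLock[] locks;

    InPlaceDispatcher(int capacity) {
        slots = new byte[capacity][];
        locks = new ReentrantLock[capacity];
        for (int i = 0; i < capacity; i++) {
            slots[i] = new byte[0];
            locks[i] = new ReentrantLock();
        }
    }

    void dispatchFrom(int slot) {
        locks[slot].lock();                        // step 300: lock the storage location
        try {
            byte[] original = slots[slot].clone(); // saved so the changes can be undone
            processInPlace(slots[slot]);           // step 310: process the data item in place
            send(slots[slot]);                     // step 320: dispatch the processed data
            slots[slot] = original;                // step 330: undo any changes to the data item
        } finally {
            locks[slot].unlock();                  // step 340: unlock for other clients
        }
    }

    private void processInPlace(byte[] data) { /* e.g., transform in place */ }
    private void send(byte[] data) { /* network send omitted */ }
}
```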
With the methods of FIGS. 2 and 3, no incoming data is buffered if all the storage locations (buffers) are locked. If data is generated in real-time, this means that some data is lost. For example, if all buffers are locked in a video conferencing application, new video images are not stored until a buffer is unlocked. Thus, no client is able to view the new video images. Additionally, the method of FIG. 2 is inefficient with respect
Inventors: Hanko, James G.; Khandelwal, Vivek
Assignee: Sun Microsystems Inc.
Attorney/Agent: O'Melveny & Myers LLP
Examiners: Sparks, Donald; Truong, Bao Q.