Backpressure arrangement in client-server environment

Electrical computers and digital processing systems: multicomputer data transferring – Computer-to-computer protocol implementing – Computer-to-computer data framing

Reexamination Certificate

Details

Classification: C370S230000
Type: Reexamination Certificate
Status: active
Patent number: 06654811

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a client-server environment and, more particularly, to a backpressure arrangement in a client-server environment that optimizes the transfer of data between the server and the clients.
2. Description of the Related Art
In a typical client-server environment, a number of clients are connected to the server via a shared data bus. The clients generate requests for data on a control bus and the server responds to these requests by sending data units to the corresponding clients on the shared data bus.
The server keeps a pool of data units (that is, data packets) for each client. This pool may consist of a number of queues storing packets from different flows destined for the same client. The size of any data unit can be arbitrary, but must always be greater than a certain minimum value.
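As one way to picture this related-art arrangement, the short Python sketch below keeps one pool per client, where each pool is a set of per-flow FIFO queues; all names (ClientPool, next_packet, the flow identifiers) are illustrative assumptions, and the flow-selection rule is deliberately simplistic rather than anything specified by the patent.

```python
from collections import deque

class ClientPool:
    """Pool of data units the server holds for one client: a set of FIFO
    queues, one per flow destined for that client (hypothetical sketch)."""

    def __init__(self):
        self.flows = {}  # flow_id -> deque of packets (bytes)

    def enqueue(self, flow_id, packet):
        self.flows.setdefault(flow_id, deque()).append(packet)

    def next_packet(self):
        # Hand out the head packet of the first non-empty flow; a real
        # scheduler would choose among the flows more carefully.
        for queue in self.flows.values():
            if queue:
                return queue.popleft()
        return None

# The server keeps one such pool per client and, on a request received over
# the control bus, places that client's next packet on the shared data bus.
pools = {client_id: ClientPool() for client_id in range(4)}
```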
SUMMARY OF THE INVENTION
The object of the present invention is to optimize the transfer of data between the server and the clients so as to keep all clients occupied at all times without causing buffers disposed between the clients and the server to overflow.
The above-noted object may be effected by providing a line interface card apparatus including: a packet queue memory for storing pools of data packets for different clients; a physical layer having first and second buffers, said physical layer being connected to said packet queue memory by a data bus and being connected to a plurality of links for continuously transmitting data, subject to availability, from the physical layer buffers; and a queue manager connected to said packet queue memory and to said physical layer by a control bus; wherein, upon said physical layer transmitting a request for data to be transmitted on a certain link to said queue manager on said control bus, said queue manager instructs a packet pool corresponding to that link in said packet queue memory to transmit a next data block to said physical layer via said data bus, each data packet in said packet queue memory being transmitted from said packet queue memory to said physical layer in one or more packet fragments upon successive requests for data from a link to which that data packet is destined; and the packet fragments being stored in one of said first and second buffers.
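The request/fragment flow just described can be sketched as follows; this is only one possible reading, written in Python with invented names (QueueManager, PhysicalLayer, request_data, poll) and an arbitrary maximum fragment length, since the patent text itself gives no code. A request on the control bus is answered with the next fragment of the head packet queued for that link, and the physical layer alternates between its two buffers so that one can be filled from the data bus while the other is drained onto the link.

```python
FRAGMENT_SIZE = 512  # bytes; an assumed maximum fragment length

class QueueManager:
    def __init__(self, packet_queue_memory):
        # packet_queue_memory: link id -> list of packets (bytes) queued for that link
        self.memory = packet_queue_memory
        self.offsets = {}  # link id -> byte offset into the packet currently being sent

    def request_data(self, link):
        """Handle a control-bus request: return the next fragment for `link`."""
        pool = self.memory.get(link)
        if not pool:
            return None
        offset = self.offsets.get(link, 0)
        packet = pool[0]
        fragment = packet[offset:offset + FRAGMENT_SIZE]
        offset += len(fragment)
        if offset >= len(packet):   # whole packet delivered; move on to the next one
            pool.pop(0)
            self.offsets[link] = 0
        else:
            self.offsets[link] = offset
        return fragment

class PhysicalLayer:
    """Physical layer with two buffers per link: one fills while the other drains."""

    def __init__(self, queue_manager, links):
        self.qm = queue_manager
        self.buffers = {link: [b"", b""] for link in links}
        self.filling = {link: 0 for link in links}  # index of the buffer being filled

    def poll(self, link):
        """Transmit from the previously filled buffer and refill the other one."""
        drain = 1 - self.filling[link]
        on_the_wire = self.buffers[link][drain]   # drained onto the link
        fragment = self.qm.request_data(link)     # request over the control bus
        self.buffers[link][self.filling[link]] = fragment or b""
        self.filling[link] = drain                # swap roles for the next poll
        return on_the_wire
```

Under this reading, each of the two buffers only ever has to hold a single fragment, which is consistent with the buffer-sizing statement that follows.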
Each of the first and second buffers is large enough to store the largest packet fragment, and the minimum packet fragment size is equal to the minimum data packet size.
A speed-up factor for the data bus may be equal to a ratio of a maximum fragment length to a minimum fragment length to keep all of said plurality of links busy at all times.
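As a purely illustrative example of that ratio (the patent does not give fragment lengths), a maximum fragment of 1500 bytes and a minimum fragment of 64 bytes would call for a data-bus speed-up of roughly 23.4:

```python
# Hypothetical lengths; chosen only to make the ratio concrete.
max_fragment_length = 1500  # bytes
min_fragment_length = 64    # bytes, equal to the minimum data packet size

speed_up = max_fragment_length / min_fragment_length
print(f"data bus speed-up factor: {speed_up:.1f}x")  # prints 23.4x
```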
Said plurality of links may be arranged into a plurality of classes, each of said classes including only links having similar transmission speeds, wherein each of said classes is assigned a priority according to the transmission speed of its respective links, the class whose links have the highest transmission speed being assigned the highest priority, down to the class whose links have the lowest transmission speed being assigned the lowest priority, and wherein the queue manager processes requests in strict priority order, starting with links from the class having the highest priority and proceeding down to links from the class having the lowest priority.
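The strict-priority rule can be pictured with the short sketch below; the class names (OC-12, OC-3, T1) and the idea of queueing requests per class are assumptions made for illustration, not details taken from the patent. A request from a slower class is served only when every faster class has no request pending.

```python
from collections import deque

# Link classes listed from highest to lowest transmission speed (and priority).
classes = ["OC-12", "OC-3", "T1"]
pending = {cls: deque() for cls in classes}  # class -> pending link requests

def next_request():
    """Serve requests in strict priority order, highest-speed class first."""
    for cls in classes:
        if pending[cls]:
            return pending[cls].popleft()
    return None

# Example: a T1 request and an OC-12 request are pending; the OC-12 one wins.
pending["T1"].append("T1 link #3")
pending["OC-12"].append("OC-12 link #1")
assert next_request() == "OC-12 link #1"
```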


REFERENCES:
patent: 5519701 (1996-05-01), Colmant et al.
patent: 5568470 (1996-10-01), Ben-Nun et al.
patent: 6067300 (2000-05-01), Baumert et al.
patent: 6067301 (2000-05-01), Aatresh
patent: 6094435 (2000-07-01), Hoffman et al.
patent: 6151644 (2000-11-01), Wu
patent: 6201789 (2001-03-01), Witkowski et al.
patent: 6246680 (2001-06-01), Muller et al.
patent: 6259698 (2001-07-01), Shin et al.
patent: 6363075 (2002-03-01), Huang et al.
patent: 763915 (1997-03-01), None
patent: 2328593 (1999-02-01), None
patent: 9935879 (1999-07-01), None
“Expandable ATOM Switch Architecture (XATOM) for ATM LANs”, by R. Fan, et al. Pub. date Jan. 5, 1994.
