Multiplex communications – Pathfinding or routing – Switching a message which includes an address header
Reexamination Certificate
1998-06-09
2001-03-27
Hsu, Alpus H. (Department: 2662)
C370S465000, C370S469000, C370S471000
Reexamination Certificate
active
06208651
ABSTRACT:
BACKGROUND OF THE INVENTION
The present invention relates to transmitting data over digital networks, and, in particular, to improving message latency and increasing throughput by reducing overhead caused by interfaces and headers in different protocol layers.
It is known that increasing the capacity of a non-congested network often has little or no effect on throughput. This is because the delays occur in the software on the hosts, not in the transmission lines themselves. Peer entities on two hosts (for example, two transport layers) communicate with each other by using protocols.
Distributed systems employ communication protocols for reliable file transfer, window clients and servers, RPC, atomic transactions, multimedia communication, and the like. Layering of protocols is a well-known way of dealing with the complexity of computer communication. The convenience of having a stack of protocols, however, is often overshadowed by the overhead that layering produces, which in turn increases communication delays and leads to performance inefficiencies. One source of overhead is interfacing: crossing a protocol layer costs CPU cycles. The other source is header overhead: each layer prepends its own header to every message, usually padded so that the header is aligned on a 4- or 8-byte boundary. The size of each layer's header, together with the trend toward very large host addresses (of which at least two are needed per message), makes it difficult to keep headers small. It is therefore of great importance to develop techniques that reduce delays by improving the performance of layered protocols.
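As a concrete illustration of this padding problem, consider the following sketch in C. The header layouts, field names, and sizes are hypothetical assumptions chosen for illustration, not the headers of any particular protocol stack; the point is only that compiler-inserted alignment padding lets even a two-layer stack dwarf a small payload.

/*
 * Hypothetical sketch of per-layer header overhead.  The layouts below are
 * illustrative assumptions, not the headers of any real protocol stack.
 */
#include <stdio.h>
#include <stdint.h>

/* A transport-layer header: the 8-byte addresses force 8-byte alignment,
 * so the compiler pads the small fields that precede them. */
struct transport_hdr {
    uint8_t  type;       /* 1 byte of useful information           */
    uint16_t checksum;   /* 2 bytes, preceded by 1 byte of padding */
    uint64_t src_addr;   /* large host addresses: at least two     */
    uint64_t dst_addr;   /*   are needed in every message          */
};

/* A session-layer header, padded out to a 4-byte boundary. */
struct session_hdr {
    uint8_t  flags;      /* 1 byte, followed by 3 bytes of padding */
    uint32_t seq_no;
};

int main(void) {
    size_t payload = 8;  /* a small, 8-byte application message */
    size_t headers = sizeof(struct transport_hdr) + sizeof(struct session_hdr);

    /* On a typical LP64 system this prints roughly 32 header bytes for
     * 8 payload bytes, i.e. 400% overhead from only two layers.       */
    printf("payload: %zu bytes, headers: %zu bytes (%.0f%% overhead)\n",
           payload, headers, 100.0 * (double)headers / (double)payload);
    return 0;
}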
Various techniques for improving the performance of communication systems that use layered communication protocols have been offered in the past. One successful technique is the reduction of communication latency; in particular, minimizing the processing time from the moment a sender hands a packet down to the protocol stack to the moment the lowest protocol layer sends the packet out onto the communication channel. For example, U.S. Pat. No. 4,914,653, "Inter-processor communication protocol," to Bishop et al. discloses a method according to which each host assigns a priority to every message it intends to transmit. Regular messages are assigned a lower priority, while acknowledgment-type messages are treated as "quick" messages with a higher priority. Under that protocol, the receiving host returns a high-priority message to the sending host either during or following the transmission of every regular packet. This allows the receiving host to return acknowledgments without having to piggyback them on other outgoing communication and without having to save and batch them for later transmission. Similarly, at the receiving host, top-priority acknowledgments are received and temporarily stored separately from regular messages, and they are processed independently of the arrival of regular messages. The invention disclosed in the Bishop patent optimizes protocol performance only by prioritizing the incoming and outgoing communication handled by the protocols.
U.S. Pat. No. 5,487,152, "Method and apparatus for frame header splitting in a media access control/host system interface unit," to Young describes an interface between the media access control (MAC) function of a local area network and a host attached to the network medium via the MAC. The interface is capable of determining points within a frame at which the frame can be split for further processing. The frame-splitting feature allows the header of a frame to be separated from the data part of the frame, placing the header and the data into separate buffers. Headers can then be processed from their buffer even when the data buffer space is full.
An Ethernet network hub adaptor modified to provide a known, bounded, low maximum access latency is disclosed in U.S. Pat. No. 5,568,469, "Method and apparatus for controlling latency and jitter in a local area network which uses a CSMA/CD protocol," to Sherer et al. The adaptor regulates the access of different stations to the channel by means of a special state variable stored in the MACs. Latency is reduced by ensuring that the hub and the hosts take turns transmitting packets. The method described in that patent does not address the possibility of optimizing the packets themselves or the manner in which packets travel through a protocol stack.
Similarly, U.S. Pat. Nos. 5,412,782 and 5,485,584 disclose reducing overall latency in an Ethernet network by providing a controller in the Ethernet adapter, the controller generating interrupts for receiving or transmitting different portions of a packet. Again, those patents rely on an external device, a controller, to reduce communication latency.
From this overview of the prior art it becomes clear that none of the existing patents takes into account the design features of already existing protocols to improve end-to-end message latency. Such an approach is very desirable, since modern network technology itself allows for very low latency communication. For example, the U-Net interface to an ATM network allows 75-microsecond round-trip communication as long as the message is 40 bytes or smaller. That system is described in a publication by Anindya Basu, Vineet Buch, Werner Vogels, and Thorsten von Eicken, "U-Net: A user-level network interface for parallel and distributed computing," in Proc. of the Fifteenth ACM Symp. on Operating Systems Principles, pages 40-53, Copper Mountain Resort, Colo., December 1995, which is incorporated herein by reference. For larger messages, the latency is at least twice as long. It is therefore important that protocols implemented over U-Net use small headers and do not introduce much processing overhead.
So far, complex communication protocols, and layered ones in particular, have introduced overhead that is orders of magnitude larger than the latency of the underlying networks. As networks get faster and faster (through the introduction, for example, of fast optical networks), the effect of this overhead on network latency has been worsening.
SUMMARY OF THE INVENTION
It is, therefore, the object of the present invention to reduce overhead by developing a Protocol Accelerator (PA), a collection of techniques whose implementation significantly reduces both the message header overhead imposed by layered protocols and the message processing overhead of complex communication protocols. In particular, when the Protocol Accelerator is applied to complex layered protocols used in conjunction with high-speed networks, the latency of the layered protocol stack becomes approximately the latency of the underlying network. The Protocol Accelerator can be used to create distributed computer systems that are faster and more robust than existing systems.
The method and system of the PA comprise the following inventive steps:
1. The fields in the message headers are classified into four categories.
2. The fields are collected together into four headers, one for each category. Each of the four headers is carefully packed, minimizing the padding necessary for alignment and reducing header overhead.
3. One of the categories comprises fields that are immutable for as long as a connection is active. Rather than including this immutable information in every message, it is sent only once; subsequent messages carry only a short identifier, called a "connection cookie," to represent it. The use of a connection cookie thus reduces both the header overhead and the connection lookup time upon receipt of a message by the receiving host (see the sketch following this list).
4. The communication protocol is transformed into a canonical form. The canonical form of a communication protocol is defined as a two-phase message processing: the pre-processing and the post-processing. The pre-processing phase concerns itself with a message he
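The following is a minimal sketch, in C, of how steps 1 through 3 above might be realized. The structure layouts, sizes, and function names are hypothetical assumptions for illustration and do not reproduce the patent's actual implementation: the immutable fields of a connection are registered once at the receiver, and every subsequent message carries only a one-byte cookie that indexes the receiver's connection table.

/*
 * Minimal sketch of the "connection cookie" idea from step 3 above.
 * All structure layouts, sizes, and function names are hypothetical
 * assumptions for illustration; they are not the patent's implementation.
 */
#include <stdint.h>

#define MAX_CONNECTIONS 256

/* Immutable per-connection fields: transmitted only once, when the
 * connection is established. */
struct conn_immutable {
    uint64_t src_addr;
    uint64_t dst_addr;
    uint16_t protocol_id;
};

/* Per-message header used after setup: a one-byte cookie stands in for
 * the immutable fields, shrinking the header and turning the receiver's
 * connection lookup into a simple array index. */
struct msg_hdr {
    uint8_t  cookie;     /* index into the receiver's connection table */
    uint8_t  flags;
    uint16_t length;
};

static struct conn_immutable conn_table[MAX_CONNECTIONS];

/* Receiver side: store the immutable fields once and hand back the cookie
 * that later messages will carry instead of the full fields. */
static uint8_t register_connection(const struct conn_immutable *ci, uint8_t slot) {
    conn_table[slot] = *ci;
    return slot;
}

/* Receiver side: constant-time recovery of the connection state from the
 * short cookie carried in an incoming message header. */
static const struct conn_immutable *lookup_connection(const struct msg_hdr *h) {
    return &conn_table[h->cookie];
}

int main(void) {
    struct conn_immutable ci = { 0x0a000001ULL, 0x0a000002ULL, 42 };
    uint8_t cookie = register_connection(&ci, 7);   /* sent once at setup  */

    struct msg_hdr h = { cookie, 0, 64 };            /* every later message */
    const struct conn_immutable *conn = lookup_connection(&h);

    return conn->protocol_id == 42 ? 0 : 1;          /* 0 on success */
}

Replacing two 8-byte addresses and a protocol identifier with a single byte both shrinks the per-message header and makes the receiver's connection lookup a constant-time array index rather than a search.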
Hayden Mark
Van Renesse Robbert
Cornell Research Foundation Inc.
Hodgson Russ Andrews Woods & Goodyear LLP
Hsu Alpus H.