Multiplex communications – Data flow congestion prevention or control
Reexamination Certificate
1999-04-05
2002-10-29
Kizou, Hassan (Department: 2662)
Multiplex communications
Data flow congestion prevention or control
C709S235000
Reexamination Certificate
active
06473401
ABSTRACT:
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
Not Applicable
REFERENCE TO A MICROFICHE APPENDIX
A microfiche appendix is attached. The total number of microfiche is one and the total number of frames is ninety-two.
NOTICE OF MATERIAL SUBJECT TO COPYRIGHT PROTECTION
All of the material in this patent document is subject to copyright protection under the copyright laws of the United States and of other countries. The owner of the copyright has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the United States Patent and Trademark Office file or records, but otherwise reserves all copyrights whatsoever.
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention pertains generally to network server technology, and more particularly to a pseudo-serving system and method for reducing congestion at the server or within the network itself. The pseudo-serving mechanisms of the invention provide a means of cooperatively sharing the burden of file serving within the body of Internet requesters. Contract-based incentives and security features operate to persuade resource consumers to help serve resources in return for speedier data access.
2. Description of the Background Art
The phenomenal growth of interconnected computer networks such as the Internet has intensified the demand for bandwidth. A major cause of delays in computer networks is congestion which, in the context of data transmission, is conceptually similar to congestion on a freeway. Each data packet to be passed over the network can be likened to a vehicle that is heading to a destination. When too many data packets (cars) are heading to destinations that lie along the same Internet segment (freeway), these packets (cars) interfere with one another. This causes the speed of the traffic to slow down, and it takes longer for the data packets (cars) to reach their desired destinations.
Bandwidth congestion in a computer network occurs largely at three different locations within the network, which can be categorized as follows:
(a) Server-side (on-ramp)—The bandwidth of the server and its link to the Internet is divided among the clients downloading files from the server. While users can cause a tremendous amount of data to flow, the server and/or its link to the Internet is a point of congestion;
(b) Intermediate links (sections of freeway)—Congestion arises in the intermediate nodes and links of the Internet because too many users are using the network at the same time and their requests share a common node or link. For example, congestion occurs when a router cannot route packets faster than they arrive; and
(c) Client-side (off-ramp)—The bandwidth provided by the user's internet service provider (ISP) is shared among a large number of other users (user community). The greater the sharing, the less bandwidth each user receives.
To better meet the demand for bandwidth and reduce Internet congestion, the research community is responding with innovative approaches. One approach, for example, is the development of faster network components, including modems, switches, and transmission lines. Despite these improvements, however, new multimedia applications continue to demand ever-greater bandwidth. The transmission of real-time traffic such as is required for Internet phone and videoconferencing applications, as well as the download of large Moving Picture Experts Group (MPEG) files, is tolerable only under the best of network conditions.
Another approach is the development of new protocols that use bandwidth more efficiently. Examples include Compressed Serial Line Internet Protocol (CSLIP) and Low-Bandwidth X, which use compression techniques to reduce redundancy. Other protocols such as Hyper-Text Transfer Protocol version 1.1 (HTTP 1.1) and Transaction Transmission Control Protocol (TTCP) improve on current standards by removing overhead. Still others call for the removal of entire protocol layers. Work is underway, for example, to implement Transmission Control Protocol/Internet Protocol (TCP/IP) directly over a Synchronous Optical Network (SONET).
A further approach is the development of caching mechanisms within the Internet. These schemes work by recognizing that files are often requested more than once. By storing popular files locally, future requests for these files can be satisfied quickly without the need to retrieve them from the server.
Caching schemes are characterized along a number of dimensions. Two of the most important are the location of the cache and the degree of cooperation. Data transfers from a cache close to the client tend to be faster and demand fewer resources from the network. Cooperative caches generally have fewer cache misses than those that operate by themselves. Although caching schemes work well and are responsible for a portion of the reduction of bandwidth usage today, they are not always able to satisfy requests. This happens when the request is a first request for the data, or when the requested data in the cache has become “stale”; in either case, the data must be retrieved from the server. Moreover, cooperative caches that yield higher hit rates suffer from many of the same problems that they were intended to address. These include bottlenecks at certain “hot spots” (sometimes retrieving a file from a heavily loaded cache is slower than retrieving it directly from the server) and increased latencies when the cache is located many nodes (“hops”) away from the client.
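By way of illustration only, and not as a description of the claimed invention, the following Python sketch shows how a simple client-side cache with a time-to-live (TTL) might decide between serving a request locally and returning to the origin server. The class name, TTL value, and fetch callback are hypothetical; the sketch merely makes concrete why first requests and stale entries still generate server traffic.

    import time

    class SimpleCache:
        """Minimal local cache with TTL-based staleness (illustrative only)."""

        def __init__(self, ttl_seconds=300):
            self.ttl = ttl_seconds      # entries older than this are treated as "stale"
            self.store = {}             # url -> (fetch_time, data)

        def get(self, url, fetch_from_server):
            entry = self.store.get(url)
            if entry is not None:
                fetched_at, data = entry
                if time.time() - fetched_at < self.ttl:
                    return data         # cache hit: served locally, no server traffic
            # first request or stale entry: must retrieve from the server
            data = fetch_from_server(url)
            self.store[url] = (time.time(), data)
            return data

    # Example use with a stand-in for a real network fetch:
    cache = SimpleCache(ttl_seconds=60)
    page = cache.get("http://example.com/file", lambda u: b"...bytes from server...")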
Closely related to caching is prefetching. Rather than keeping retrieved data locally on behalf of future requests, prefetching works by transferring data to the user prior to a user request. Mailing lists, network news, and so-called “push” technologies belong to this category; data is pushed from the server to the client in anticipation of future requests. Unfortunately, prefetching suffers from two important drawbacks. First, it is useless to the user if the file requested has not been prefetched. Second, it generates unnecessary traffic in the network whenever files are retrieved for which there was never a request. This further exacerbates congestion in the network without providing any benefits.
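For illustration only, the following hypothetical Python sketch contrasts prefetching with on-demand retrieval: predicted files are transferred before any request, so a correct prediction saves a server round trip, while an incorrect prediction consumes network bandwidth for data that is never used. The function names and data structures are assumptions made for the example.

    def prefetch(predicted_urls, fetch_from_server, local_store):
        # Transfer predicted files to the client before any request is made.
        # Network traffic is generated whether or not the files are ever used.
        for url in predicted_urls:
            local_store[url] = fetch_from_server(url)

    def request(url, local_store, fetch_from_server):
        if url in local_store:
            return local_store[url]      # prediction was correct: no new traffic
        return fetch_from_server(url)    # prediction missed: fetch on demand anyway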
Still another approach utilizes basic principles of economics. It recognizes that bandwidth is a scarce resource and seeks ways to allocate it optimally. Work in this area is often concerned with maximizing the welfare of the user community. This is usually done by granting priority for the delivery of packets to users who value it more at the expense of those who value it less. To encourage the truthful revelation of user values, these schemes often institute some form of pricing based on usage of bandwidth. Mackie-Mason and Varian's “smart market” gives a flavor of how economics can be applied to the allocation of bandwidth. In it, packets are routed based on bids placed by users; packets with higher bids are routed with higher priority than ones with lower bids. While schemes like the smart market maximize the welfare of the user community, they tend to be impractical from a number of standpoints. Prioritizing packets based on bids, for example, requires that all routers cooperate. This requires significant changes to the well-entrenched IP protocol and is therefore difficult to implement. Moreover, pricing schemes based on usage often incur significant accounting costs, and basic questions such as who should be billed in a distributed, connectionless environment such as the Internet are difficult to answer.
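The bid-based prioritization described above can be made concrete with a short, purely illustrative Python sketch: each packet carries a user bid, and a router forwards the highest bid first. The names and structures below are hypothetical and are not drawn from the smart market proposal itself; in practice such a scheme would also require every router along the path to cooperate.

    import heapq

    class BidRouter:
        """Forwards packets in order of the bids attached to them (illustrative only)."""

        def __init__(self):
            self.queue = []              # max-priority by bid (negated for heapq's min-heap)
            self.counter = 0             # tie-breaker keeps equal bids in arrival order

        def enqueue(self, packet, bid):
            heapq.heappush(self.queue, (-bid, self.counter, packet))
            self.counter += 1

        def forward_next(self):
            if not self.queue:
                return None
            _, _, packet = heapq.heappop(self.queue)
            return packet                # highest-bid packet leaves first

    # Example: the higher-bidding packet is forwarded before the lower-bidding one.
    router = BidRouter()
    router.enqueue("video frame", bid=0.05)
    router.enqueue("bulk download chunk", bid=0.01)
    assert router.forward_next() == "video frame"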
The development of faster network components and more efficient protocols acts to reduce network congestion directly, while caching and prefetching schemes may reduce congestion but are not always effective. For example, requests that do not “hit” the caches go directly to the server. With a cache hit rate of about 50%, roughly half of all requests still arrive at the server; this is true for regular and cooperative caches alike.
Therefore, a need exists for an approach to reducing congestion at the server and within the network that overcomes the limitations of the foregoing approaches.
Ghosal Dipak
Kong Keith
Gray Cary Ware & Freidenrich LLP
iScale, Inc.
Yin Lu