Cost-based optimization for content distribution using...

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


C711S124000, C710S011000, C709S228000, C709S230000, C709S241000


active

06542964

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates to schemes for caching content, and in particular, Internet content, at one or more locations.
BACKGROUND
Internet content, in its broadest sense, can be thought of as data, objects or information available via the Internet (perhaps through the World-Wide-Web (WWW) graphical user interface) using the hypertext transfer protocol (HTTP), the file transfer protocol (FTP) or other protocols such as the real-time streaming protocol (RTSP). A cache is a way to replicate requested Internet content on a system closer (either physically or logically) to the requesting site than to the source. The cache can then be used as a means to reduce the time needed to access the content, improve network reliability and reduce upstream bandwidth consumption.
Caching can be performed at any point along a delivery path between the client that requests the information and the server (or other source) that provides it. Different terms are used to refer to the cache, depending on where it is deployed in the delivery path.
FIG. 1
shows some of the common locations in which caches (sometimes referred to as cache servers) can be deployed:
A personal cache server or personal proxy server 5 may be associated with an individual user's personal computer 10. The function of a personal cache server 5 is to improve user performance by keeping local copies of frequently requested content on the user's personal computer 10. Most commercial web browsers available today include some caching capability, but this functionality is generally limited in terms of features and storage capacity. Some personal cache servers may be configured so as to attempt to anticipate what the user's future content requests might be. These anticipated requests can then be pre-fetched before they are actually requested by the user or a user application. By avoiding long delays before requested content is returned, the user's experience is enhanced.
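The pre-fetching behavior described for a personal cache server can be sketched in toy form. This is an illustrative sketch only, not the patent's method: the class, its names, and the fetch callback are all assumptions, and the "anticipation" here is simply warming the cache with the most frequently requested URLs.

```python
from collections import Counter

class PersonalCache:
    """Toy sketch of a personal cache with naive pre-fetching.

    fetch_fn stands in for an HTTP GET to the origin; all names
    here are illustrative, not taken from the patent.
    """

    def __init__(self, fetch_fn):
        self.fetch_fn = fetch_fn
        self.store = {}            # url -> locally cached content
        self.hits = Counter()      # url -> how often it was requested

    def get(self, url):
        self.hits[url] += 1
        if url not in self.store:          # cache miss: go to the source
            self.store[url] = self.fetch_fn(url)
        return self.store[url]             # cache hit: serve local copy

    def prefetch(self, top_n=3):
        """Anticipate future requests: fetch the most frequently
        requested URLs before they are asked for again."""
        for url, _ in self.hits.most_common(top_n):
            if url not in self.store:
                self.store[url] = self.fetch_fn(url)
```

A second request for the same URL is served from the local store without contacting the origin, which is the latency saving the passage describes.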
A personal proxy server extends the concept of a personal cache server by servicing more than one client. In most cases, personal proxy servers are used to connect two or more computers/devices to a network (e.g., the Internet) over a single connection. The proxy server hides the fact that there is more than one computer by using either a network address translation (NAT) scheme or local address translation (LAT) scheme to assign fictitious addresses to the computers connecting to the personal proxy server. When the proxy server receives a request, it translates the fictitious address into a real Internet Protocol (IP) address and forwards the request using the real IP address. When a response is received, the proxy server translates the address back to the original fictitious address and returns the reply to the client that initiated the request.
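The translation step performed by a personal proxy server can be illustrated with a minimal sketch. All names, the address values, and the request-id bookkeeping are assumptions made for illustration; a real NAT/LAT implementation operates on packets, not dictionaries.

```python
# Minimal sketch of the address-translation idea behind a personal
# proxy server: outbound requests are rewritten to use the single
# real IP address, and replies are mapped back to the fictitious
# internal address that originated the request.

class PersonalProxy:
    def __init__(self, real_ip):
        self.real_ip = real_ip      # the one real Internet-facing address
        self.pending = {}           # request id -> fictitious client address

    def forward_request(self, req_id, client_addr, url):
        # Remember which internal client asked, then forward the
        # request using the proxy's real IP address.
        self.pending[req_id] = client_addr
        return {"src": self.real_ip, "url": url}

    def return_response(self, req_id, body):
        # Translate back: deliver the reply to the fictitious
        # address assigned to the client that initiated the request.
        client_addr = self.pending.pop(req_id)
        return {"dst": client_addr, "body": body}
```

From the outside, every request appears to come from the proxy's single real address, which is how the presence of multiple internal computers is hidden.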
Another common cache server is the Point Of Presence (POP) cache server 12. POP cache servers 12 may be deployed by Internet Service Providers (ISPs) and are used both to improve user performance and to manage bandwidth costs. POP cache servers are typically configured in one of two ways: either as a proxy, where each user specifically requests use of the cache, or as a transparent cache, to which all requests are redirected.
Edge cache servers 14 are also common features in ISPs' networks. The primary role of an edge cache server is to minimize traffic across a service provider's backbone. As most service providers lease their backbone network circuits from other carriers, the use of a cache at this level can lead to significant cost savings. For example, a service provider may install an edge cache device in each of the provider's major regional network centers (often referred to as super POPs) so that data is transmitted across the (leased) backbone only a minimum number of times.
Cache servers 17 may also be installed at peering points 16. To understand why cache servers are used at this level, consider that the Internet is made up of thousands of separate networks. In order for these networks to exchange information efficiently, peering points 16 were created so that service providers could interconnect their respective networks. Unfortunately, peering points have become saturated, at least in part because the same piece of information is often moved across the peering point thousands of times. By placing cache servers 17 at the peering points (to establish what has become known as content peering), service providers are able to transfer particular content across the peering point only once and then serve all subsequent requests for that content from the cache 17. This helps to reduce the amount of traffic being transferred across the peering point 16, thus improving response time.
Cache servers may also be deployed to act as so-called HTTP accelerators 18 at various locations. Because cache servers are often much more lightweight and efficient than a full-featured server, they are often used to front-end the actual servers 19. This is most often done with web servers, and the resulting entity is referred to as an HTTP accelerator. When a user request is received, it is directed to one of the available accelerators, which, because it already has the information, is able to respond to the request without the need to communicate back to the origin server 19. This significantly reduces the workload on the origin server, which in turn improves user response time. Caches are also used in connection with firewall proxy servers 20. A firewall proxy server is often found at a company's connection to the Internet and performs many different functions. For example, the firewall proxy server 20 may block outside requests to access the company's internal network. The firewall proxy server 20 also gives the company the ability to control employee access to the Internet. If so equipped, the firewall proxy server 20 can store frequently requested information in a cache to improve user response time and reduce network costs. In addition, it can be integrated with universal resource locator (URL) databases that restrict access to sites that may contain material that is not consistent with company policies. Until recently, the primary focus of these devices has been on access control and security, and as such they have had limited caching capability.
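The combination of URL-based access control and caching in a firewall proxy can be sketched as follows. The blocklist entries, function names, and fetch callback are illustrative assumptions, not the patent's design or any real URL database product.

```python
# Illustrative sketch of a firewall proxy that combines a URL
# restriction list with a small response cache. The policy list
# and all names are made up for illustration.

BLOCKED_PREFIXES = ("http://games.example/", "http://gambling.example/")

cache = {}

def handle(url, fetch_fn):
    # Access control first: refuse URLs that violate company policy.
    if url.startswith(BLOCKED_PREFIXES):
        return "403 Forbidden by company policy"
    # Otherwise serve frequently requested content from the cache,
    # fetching from the Internet only on a miss.
    if url not in cache:
        cache[url] = fetch_fn(url)
    return cache[url]
```

Policy checks run before the cache lookup, so restricted content is never fetched or stored at all.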
Finally, cache servers may be associated with distributed content caching (DCC)/reverse proxy operations. One significant requirement for any enterprise doing business on the Internet is to be able to scale its service and manage user response time. Distributed content caching does just that. In this configuration, cache servers 22 may be deployed at major traffic sites for a provider's content. In this case, a provider may be an Internet service provider, a content provider or even a country provider (e.g., where a particular provider deploys access systems that allow users in overseas countries to access Web sites in the United States).
Unlike database replication, where data is duplicated based on content being created, updated, or deleted, cache replication is dynamic, which simply means it is based on a client request. The advantage of dynamic replication is that only the content that is requested gets replicated. The disadvantage is that changes to the original content are not automatically applied to the replicated content. To overcome this disadvantage, a cache needs to be able to check for possible discrepancies between its copy of the content and the original. There are many different methods for validating cache content coherency—what type of content is being replicated and other business requirements often dictate the best method for a particular situation. In general though, most cache coherency methods do not require that the original content be checked each time a client requests it. Instead these schemes provide a means for defining how stale (i.e., how old) a cached copy of content must be before it is re-checked against the original.
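The staleness-based coherency scheme described above can be sketched with a simple time-to-live check. This is a generic illustration of the idea, not the patent's method; the TTL value, names, and the injectable clock are assumptions made for the example.

```python
import time

# Sketch of a time-to-live style coherency check: a cached copy is
# served without contacting the origin until it is "stale", and only
# then is it re-checked against the original.

MAX_AGE = 60.0                  # seconds before a copy counts as stale

def get(url, cache, fetch_fn, now=None):
    now = time.time() if now is None else now
    entry = cache.get(url)
    if entry is not None:
        body, fetched_at = entry
        if now - fetched_at < MAX_AGE:     # fresh enough: no origin check
            return body
    body = fetch_fn(url)                   # stale or missing: revalidate
    cache[url] = (body, now)
    return body
```

The choice of MAX_AGE is exactly the trade-off the passage describes: a larger value means fewer checks against the original but a greater chance of serving content that has since changed.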
Which method of cache coherency i
