Electrical computers and digital processing systems: multicomput – Computer network managing – Computer network access regulating
Reexamination Certificate
1999-07-01
2003-04-08
El-Hady, Nabil (Department: 2154)
Electrical computers and digital processing systems: multicomput
Computer network managing
Computer network access regulating
C709S214000, C709S216000, C709S217000, C709S219000, C709S238000, C709S213000, C707S793000
Reexamination Certificate
active
06546422
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to access to content servers in a packet transfer network and, more particularly, to the caching of network contents.
2. Description of the Related Art
In a known method of caching network contents, in which a server and clients are connected through a network and a cache device is provided in the network, the cache device monitors accesses from the clients to the server and, for subsequent accesses similar to the monitored ones, returns the corresponding responses directly to the clients.
Each of Japanese Patent Application Laid-open Nos. Hei 9-251414 and Hei 10-31615, which relate to such a network caching method, discloses a construction in which a cache device caches responses for all accesses to a server. Further, in Japanese Patent Application Laid-open No. Hei 9-330265, the respective clients have their own cache devices and share the cached information.
Japanese Patent Application Laid-open No. Hei 10-49422 discloses a cache device that preferentially caches responses for accesses to servers having a high access frequency and that, when its finite cache capacity is about to overflow, deletes cached content to keep cache capacity available.
FIG. 8 shows a flowchart of the operation of the above-mentioned cache device. As shown in FIG. 8, a packet is received (step S31) and it is determined whether or not the received packet is an HTTP (Hypertext Transfer Protocol) packet (step S32). If the packet is an HTTP packet (Yes in step S32), it is then determined whether or not the packet is a request packet addressed to a server (step S33). If the packet is instead a response packet to such a request (No in step S33), the cache device checks whether or not its cache capacity is short (step S38). If the cache capacity is sufficient (No in step S38), the cache device caches the received response corresponding to the request having a certain access frequency (step S39). If the cache capacity is short (Yes in step S38), the cache device searches for cached content having a lower access frequency (step S41). If such content is found (Yes in step S41), the cache device deletes it (step S42) and caches the received response (step S39).
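By way of illustration only, the frequency-only policy of FIG. 8 might be sketched as follows; the class and method names below are hypothetical and are not taken from the patent.

```python
# Illustrative sketch of the prior-art flow of FIG. 8 (frequency-only caching).
# All names (FrequencyCache, handle_response, ...) are hypothetical, not from the patent.

class FrequencyCache:
    def __init__(self, capacity):
        self.capacity = capacity          # finite cache capacity (number of entries)
        self.entries = {}                 # request key -> (response, access frequency)

    def handle_response(self, key, response, frequency):
        """Steps S38-S42: cache the response, evicting a lower-frequency entry if full."""
        if len(self.entries) < self.capacity:            # step S38: capacity sufficient?
            self.entries[key] = (response, frequency)    # step S39: cache the response
            return True
        # step S41: look for a cached entry with a lower access frequency
        victim = min(self.entries, key=lambda k: self.entries[k][1])
        if self.entries[victim][1] < frequency:
            del self.entries[victim]                     # step S42: delete that entry
            self.entries[key] = (response, frequency)    # step S39: cache the response
            return True
        return False                                     # nothing evictable; do not cache
```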
Further, Japanese Patent Application Laid-open No. Hei 9-282289 discloses a caching method in which the placement of a cache device in a network is automatically determined on the basis of the frequency of access to a server and the length of the route to the server.
In the network systems disclosed in Japanese Patent Application Laid-open Nos. Hei 9-251414 and Hei 10-31615, the cache device performs caching for all accesses to the server. Therefore, when the accesses exceed the finite cache capacity, it is impossible to perform efficient caching that optimizes the coefficient of use of network resources. Further, in Japanese Patent Application Laid-open No. Hei 9-330265, in which the clients have their respective cache devices and share the cached information, it is impossible to perform caching that takes the coefficient of use of network resources into consideration.
In the cache device disclosed in Japanese Patent Application Laid-open No. Hei 10-49422, efficient use of network resources is taken into consideration, and caching for accesses to servers with a high access frequency is performed preferentially, as shown in FIG. 8. However, such a cache device cannot perform efficient caching in view of the coefficient of use of the network, since an access to a server over a long route consumes more network resources even when its frequency is low.
Further, in the network system disclosed in Japanese Patent Application Laid-open No. Hei 9-282289, the placement of cache devices in the network is determined on the basis of the frequency of access to the server and the length of the route to the server. However, this information is not considered when deciding what to cache at a given cache site, and it is therefore impossible to perform caching that takes the coefficient of use of the network into consideration.
With the prior art described above, it is thus difficult to realize efficient caching in view of the coefficient of use of the network.
SUMMARY OF THE INVENTION
An object of the present invention is to provide a method of caching network contents, and an apparatus for performing the method, which enable network contents to be cached with minimum use of network resources.
According to a first aspect of the present invention, a method of caching network contents in a packet transfer network including a packet relay that relays accesses of clients to servers supplying the contents is characterized in that the packet relay monitors the accesses from the clients to the servers, determines a cache priority for each server on the basis of the product of the relaying path length between the packet relay and that server and the frequency of access from the clients to that server, and preferentially caches responses to accesses to servers having a high cache priority.
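A minimal sketch of the first aspect, assuming hop counts as the relaying path length; all names below (PriorityCachingRelay, observe_access, and so on) are illustrative and not part of the patent.

```python
# Hypothetical sketch of the first aspect: priority = relaying path length x access frequency.
from collections import defaultdict

class PriorityCachingRelay:
    def __init__(self, capacity):
        self.capacity = capacity
        self.access_count = defaultdict(int)   # server -> observed access frequency
        self.path_length = {}                  # server -> relaying path length (hop count assumed)
        self.cache = {}                        # (server, request) -> cached response

    def observe_access(self, server, hops):
        """Monitor a client access and record the route length to the server."""
        self.access_count[server] += 1
        self.path_length[server] = hops

    def priority(self, server):
        """Cache priority: relaying path length x access frequency."""
        return self.path_length.get(server, 0) * self.access_count[server]

    def store_response(self, server, request, response):
        """Preferentially cache responses for servers with a high cache priority."""
        if len(self.cache) >= self.capacity:
            # Evict the entry belonging to the lowest-priority server, but only if
            # the incoming response belongs to a strictly higher-priority server.
            victim = min(self.cache, key=lambda k: self.priority(k[0]))
            if self.priority(victim[0]) >= self.priority(server):
                return
            del self.cache[victim]
        self.cache[(server, request)] = response
```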
According to a second aspect of the present invention, a method of caching network contents in a packet transfer network including packet relays that relay accesses of clients to servers supplying the contents is characterized in that, after a first packet relay caches a response to an access to a specific server, a second packet relay receiving the same access compares the length of the relaying path between the second packet relay and the server with the length of the relaying path between the second packet relay and the first packet relay, and transfers the access to the first packet relay when the route between the second packet relay and the first packet relay is the shorter of the two.
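The path comparison of the second aspect might look like the following sketch; the function name and the use of hop counts as route lengths are assumptions for illustration.

```python
# Hypothetical sketch of the second aspect: forward the access to the closer of the
# origin server and a relay already known to hold the cached response.

def choose_next_hop(dist_to_server, dist_to_caching_relay):
    """Return 'relay' when the caching relay is nearer than the server, else 'server'."""
    if dist_to_caching_relay < dist_to_server:
        return "relay"      # transfer the access to the first (caching) packet relay
    return "server"         # relay the access to the origin server as usual

# Example: the caching relay is 2 hops away, the origin server is 7 hops away.
assert choose_next_hop(dist_to_server=7, dist_to_caching_relay=2) == "relay"
```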
According to a third aspect of the present invention, a cache device for network contents includes first and second termination devices connected to a client-side network and a server-side network, respectively, a switch device connected between the first and second termination devices, and a cache server connected to the switch device. The cache device is characterized in that the switch device transfers a request packet from a client either to both the server-side network and the cache server or to the cache server alone, depending on whether the cache server holds the corresponding response, transfers a response packet from the server to the client-side network and to the cache server, and transfers a response packet from the cache server to the client-side network. The cache server determines a cache priority for a response from a server on the basis of the product of the relaying path length and the access frequency to that server, calculated from the received request packets, preferentially caches, on the basis of the response packet from the server, the response corresponding to an access to a server having a high cache priority, and, after the response has been cached, returns to the switch device a response packet carrying the cached response for a matching access packet.
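The switching rules of the third aspect can be summarized in a rough sketch such as the one below; the function names and string labels are purely illustrative.

```python
# Hypothetical sketch of the switch-forwarding rules in the third aspect.

def forward_request(cache_has_response):
    """Decide where the switch sends a client request packet."""
    if cache_has_response:
        return ["cache_server"]                         # cache hit: served from the cache only
    return ["server_side_network", "cache_server"]      # cache miss: go to the server, let the cache observe

def forward_response(source):
    """Decide where the switch sends a response packet."""
    if source == "server":
        return ["client_side_network", "cache_server"]  # deliver to the client and allow caching
    if source == "cache_server":
        return ["client_side_network"]                  # deliver the cached response to the client
    return []
```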
According to the present invention, the length of the relaying path up to the server is used in addition to the frequency of access to the server in determining the cache priority of contents. Therefore, an access to a server over a long route, which consumes large network resources, is cached on a per-access basis (from a client's access to a server through the response thereto) even when the frequency of access to that server is relatively low. Conversely, an access over a short route to the server, which consumes few network resources, is not cached even if its access frequency is relatively high. In this manner, caching within a finite cache capacity is realized with minimum consumption of network resources.
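For example, with purely illustrative figures, a server reached over a 12-hop route and accessed 3 times per unit time would receive a cache priority of 12 × 3 = 36, whereas a server 2 hops away accessed 10 times would receive 2 × 10 = 20; the distant, less frequently accessed server is therefore cached first, because each of its accesses traverses more of the network.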
Arutaki Akira
Isoyama Kazuhiko
Nishihara Motoo
El-Hady Nabil
NEC Corporation