Electrical computers and digital processing systems: multicomput – Distributed data processing – Client/server
Reexamination Certificate
2000-04-07
2004-08-03
Dharia, Rupal (Department: 2141)
Electrical computers and digital processing systems: multicomput
Distributed data processing
Client/server
C709S219000
Reexamination Certificate
active
06772193
ABSTRACT:
BACKGROUND OF THE INVENTION
The present invention relates to a caching technique on a network, and more particularly to a method and system for controlling the caching of data requested by a user between a user terminal and a server in which that data is stored.
The Internet is a computer network formed by the interconnection of networks throughout the world. Numerous servers are connected to the Internet. A user terminal capable of connecting to the Internet can access one of these servers to retrieve desired information.
An individual network forming part of the Internet does not necessarily have sufficient throughput to carry a large amount of information. If a large amount of information is transferred from a Web server to a user terminal through a network with insufficient throughput, the transfer takes a long time. A typical conventional method for improving throughput places a cache between servers and user terminals so that transferred data can be reused multiple times. Such a method is disclosed in JP-A-9-81491 and JP-A-10-198623.
The cache in the above-mentioned prior art is arranged in the network so that when a user terminal requests information from a server, the requested information always passes through the network cache. Information sent from the server to the user terminal is temporarily stored in the network cache. When a user terminal requests the same information again, it is therefore delivered not by the server that originally held it but by the network cache, which has a copy stored. This improves responsiveness when a user terminal requests information from a server.
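The prior-art behavior described above can be sketched as a simple store-through proxy: serve from the cache on a repeat request, otherwise fetch from the origin server and keep a copy. This is a minimal illustration, not the patented method; `fetch_from_server` is a hypothetical placeholder for the actual network transfer.

```python
class NetworkCache:
    """Minimal sketch of a prior-art network cache: every request passes
    through it, and transferred data is kept for later requests."""

    def __init__(self, fetch_from_server):
        self._store = {}                  # url -> cached bytes
        self._fetch = fetch_from_server   # callable(url) -> bytes (assumed)

    def handle_request(self, url):
        if url in self._store:            # cache hit: origin server not contacted
            return self._store[url]
        data = self._fetch(url)           # cache miss: proxy the transfer
        self._store[url] = data           # keep a copy for repeat requests
        return data
```

On a second request for the same URL, the origin server is never contacted, which is the responsiveness gain the prior art claims.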
SUMMARY OF THE INVENTION
The above-mentioned conventional network cache has a problem with streaming, in which a user terminal starts reproducing video data before the whole of the data on the server has been received: the cached data becomes incomplete if the display of video data is interrupted at the user terminal. When video reproduction stops at the user terminal, the transfer request also stops, and the network cache therefore ceases caching the transferred data. When the same data is requested again from a user terminal, the incompleteness of the cached data causes the server to deliver the remaining data after the network cache has delivered the cached portion. A change in network throughput at this hand-off may then make stable delivery of the video data impossible.
An object of the present invention is to provide a network cache in which, irrespective of interruption of the transfer request from a user terminal, the data required by the user terminal is cached without interruption up to the end of its access unit, so that reproduction of the video data is stable when the same data is requested again; a control method for controlling such a network cache; and a computer program product embodying such a control method.
In many cases, video data is remarkably large compared with character data or still picture data. The conventional network cache therefore has the problem that caching video data causes previously cached data to be discarded, or reduces the cache area available for data other than video, degrading the cache hit rate and thereby the responsiveness of the network.
Another object of the present invention is to provide a network cache in which, when it is known that caching the transferred data is unnecessary, the cache processing is omitted, preventing the cache hit rate from being degraded; a control method for controlling such a network cache; and a computer program product embodying such a control method.
A network cache according to one aspect of the present invention includes a user request processing unit for proxy delivery, or transfer, of requested data from a server to a user terminal, and a cache control unit for storing the requested data transferred from the server. The user request processing unit starts processing in response to a request from the user terminal and stops when the user terminal stops the request. The cache control unit is activated by the user request processing unit, and the two units operate in parallel. Thereby, even if the user terminal interrupts the request partway, the cache control unit continues storing the requested data up to the end of its access unit, for example the end of a file.
A network cache according to another aspect of the present invention includes means for holding, for each of plural servers, the highest speed or maximum network throughput at which data can flow between that server and the network cache, and means for analyzing the reproduction bit rate required for real-time reproduction of video data transferred or obtained from the server. The held network throughput is compared with the required reproduction bit rate, and when the ratio of the network throughput to the reproduction bit rate is larger than a fixed value, the transferred data is not cached. This prevents the cache hit rate from being degraded.
A network cache according to a further aspect of the present invention includes analyzing means, such as a program module, which, when the data requested from a server by a user terminal is multimedia configuration information indicating how different kinds of data (character data, still picture data, video data, and so forth) are combined, analyzes that configuration information to extract the picture data and thereby obtains the transfer rate required for the data to be transferred. It can then be judged from the obtained transfer rate whether the transferred data should be cached.
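The two-unit arrangement just described can be sketched with a background thread standing in for the cache control unit: the user request processing path stops when the client stops accepting data, while the cache thread keeps consuming chunks until end of file. This is only an illustrative sketch under assumed interfaces (`fetch_chunks` as an iterable of byte blocks from the server, `send_to_client` as a callable returning `False` when the user terminal interrupts), not the patented implementation.

```python
import threading

class CachingProxy:
    """Sketch: the user-request path and the cache-control thread run in
    parallel; interrupting the client does not interrupt the caching."""

    def __init__(self):
        self.cache = {}   # url -> complete cached bytes

    def serve(self, url, fetch_chunks, send_to_client):
        buf = []
        done = threading.Event()

        def cache_control():
            # Cache-control unit: drain every chunk to the end of the file,
            # regardless of whether the client is still listening.
            for chunk in fetch_chunks:
                buf.append(chunk)
            self.cache[url] = b"".join(buf)
            done.set()

        threading.Thread(target=cache_control).start()

        # User request processing unit: forward buffered chunks until the
        # client stops. (Busy-waiting here keeps the sketch short.)
        i = 0
        while not done.is_set() or i < len(buf):
            if i < len(buf):
                if not send_to_client(buf[i]):
                    break          # user terminal interrupted the request
                i += 1
        return done                # caller may wait for caching to finish
```

Even when `send_to_client` rejects a chunk partway through, the cache ends up holding the complete file, which is the point of the parallel cache control unit.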
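The comparison described above reduces to a ratio test. The sketch below uses a threshold of 2.0 purely as an illustrative assumption; the description says only "a fixed value".

```python
def should_cache(throughput_bps, reproduction_bitrate_bps, ratio_threshold=2.0):
    """Decide whether to cache transferred video data, per the scheme above.

    throughput_bps: held maximum throughput between the network cache and
    the server in question.
    reproduction_bitrate_bps: bit rate required for real-time playback.

    If the server can already deliver the stream comfortably faster than it
    is played back (ratio above the fixed value), caching brings little
    benefit, so the data is not stored and the cache area is preserved.
    """
    return (throughput_bps / reproduction_bitrate_bps) <= ratio_threshold
```

A 10 Mbit/s path to a 1 Mbit/s stream would skip caching, while a 1.5 Mbit/s path to the same stream would cache it.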
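The analyzing means can be illustrated as a small function over configuration information that has already been parsed into entries. The entry format `(media_type, bitrate_bps)` and the aggregation rule (summing the video bit rates) are assumptions made for the sketch; the patent does not fix either.

```python
def required_transfer_rate(config_entries):
    """Sketch of the analyzing means: given multimedia configuration
    information parsed into (media_type, bitrate_bps) entries, extract
    the video entries and return the transfer rate needed to reproduce
    them in real time."""
    return sum(rate for kind, rate in config_entries if kind == "video")
```

The returned rate can then be fed into the throughput comparison of the previous aspect to decide whether the transferred data should be cached.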
Other objects, features and advantages of the present invention will become apparent from the following description of the embodiments of the invention taken in conjunction with the accompanying drawings.
REFERENCES:
patent: 5584007 (1996-12-01), Ballard
patent: 5805804 (1998-09-01), Laursen et al.
patent: 5857101 (1999-01-01), Ballard et al.
patent: 5913041 (1999-06-01), Ramanathan et al.
patent: 6014693 (2000-01-01), Ito et al.
patent: 6016520 (2000-01-01), Facq et al.
patent: 6119151 (2000-09-01), Cantrell et al.
patent: 6167438 (2000-12-01), Yates et al.
patent: 6237060 (2001-05-01), Shilts et al.
patent: 6330606 (2001-12-01), Logue et al.
patent: 6336157 (2002-01-01), Carbonaro et al.
patent: 6405256 (2002-06-01), Lin et al.
patent: 6473902 (2002-10-01), Noritomi
patent: 981491 (1997-03-01), None
patent: 10198623 (1998-07-01), None
Igawa Masaru
Sakamoto Shuichi
Shibata Koichi
Antonelli Terry Stout & Kraus LLP
Dharia Rupal
Hitachi , Ltd.
Nguyen Quang