Caching system and method for a network storage system

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories


Details

Subclasses: C711S120000, C711S129000, C711S130000, C711S141000

Patent type: Reexamination Certificate

Status: active

Patent number: 06792507

ABSTRACT:

BACKGROUND OF THE INVENTION
This invention relates generally to a system and method for storing data on computers connected by a network, such as the Internet, and in particular to a caching system and method for reducing the data traffic and problems associated with network storage systems.
Typical enterprise applications running in today's Internet environment generally need to draw large amounts of data from local or remote corporate data centers through geographically dispersed networks (e.g., Local Area Networks, Wide Area Networks, etc.). In many cases, this data traffic must share network bandwidth with all other traffic on the network, making the delivery of data highly susceptible to network congestion. Caching is a technique of mapping frequently used data from a slow storage device, in which it is typically stored, to a higher speed storage device, yielding data access performance gains since the data can be retrieved more rapidly from the higher speed device. Extending this concept to remote data sources in a network environment, it is possible to map the remote data into a local cache near the requesting host device in order to reduce or eliminate the delay caused by network traffic congestion.
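To make the caching concept above concrete, the following is a minimal sketch of a read-through cache, assuming a simple dictionary-backed store in front of a slow fetch function; the names ReadThroughCache and fetch_fn are illustrative assumptions, not taken from the patent.

```python
# Minimal read-through cache sketch (illustrative; not the patent's design).
# fetch_fn stands in for any slow, possibly remote, data source.
class ReadThroughCache:
    def __init__(self, fetch_fn):
        self._store = {}        # key (e.g., a block address) -> cached data
        self._fetch = fetch_fn  # slow path to the target storage device

    def read(self, key):
        if key in self._store:   # cache hit: fast local access
            return self._store[key]
        data = self._fetch(key)  # cache miss: pay the full storage/network latency
        self._store[key] = data  # map the fetched data into the local cache
        return data
```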
The greatest challenge associated with using caches is ensuring data coherency, i.e., making sure that the data in the cache is always the same as the data stored in the target storage device. For static data, there is no such concern: the data in the target storage device never changes, so the data in any cache extracted from the original data will always match the original. In real-world situations, however, data usually gets updated frequently, making it necessary for the caches either to refresh themselves frequently to obtain the current data or to invalidate the data stored in the cache so that applications reading from the cache will not receive "stale" (i.e., out-of-date) data.
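The refresh and invalidate options just described can be sketched as simple helpers over a dictionary-backed cache like the one above; again, these names are hypothetical and only illustrate the two strategies.

```python
# Two coherency strategies for a dictionary-backed cache (illustrative only).
def invalidate(cache, key):
    # On an update at the target storage device, drop the stale copy so the
    # next read falls through to the current data.
    cache.pop(key, None)

def refresh(cache, key, fetch_fn):
    # Alternatively, proactively re-fetch the current data into the cache.
    cache[key] = fetch_fn(key)
```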
Currently in the industry, there exist several cache mechanisms and architectures for improving the performance of data retrieval from remote storage devices. These include storage subsystems (e.g., RAID controllers) in a storage network environment, web caches implemented within web browsers, and web cache appliances. Each of these cache mechanisms is designed to improve data retrieval performance for its specific application.
In storage networks (e.g., Fibre Channel networks), any number of hosts read and write data to storage subsystems (e.g., RAID controllers) across the network. Storage subsystems attached to these networks implement cache memory either inside the storage devices themselves (i.e., the disk drives) or on the subsystem controller cards. In both cases, the amount of cache storage installed is usually very limited. Moreover, such caches may not help overall performance much if the transmission latency between the storage subsystem and the requesting host is large, as is usually the case in large, geographically dispersed networked environments.
On the other hand, the web page caching mechanism implemented in the web browsers on individual client machines stores a copy of requested data locally. This kind of cache system has two problems. First, the cached data is a snapshot of the data at the time it was cached, and the cache afterwards has no way of knowing whether the data in the original storage device has been updated. Second, the same data gets cached on many client machines, making this an inefficient mechanism for general applications. This approach manages the web page cache memory inside the client machines as they access web pages from Internet web servers. However, web page access represents only a special class of application in which occasional stale data is not a significant issue for most users; a user can manually force the browser to ignore the data in the cache and go directly to the target web server for the updated data. Obviously, this approach cannot be applied to most other applications, such as real-time transaction-based applications, where data integrity and coherency are important.
Another web caching mechanism used in the industry is the web caching appliance. These devices are typically inserted in the network near the web clients. All web transactions pass through the web cache, and if the requested web page is not found there, the appliance forwards the request to the originally intended web server. When the web server responds, the web cache caches the reply for later queries to the same web objects by the same or different web clients. Using this mechanism, if the web cache is local and the web servers are remote, web accesses can be satisfied much more quickly. In addition, the web cache is made available to all web clients local to the cache appliance. The drawback of this approach is that it is specific to web access and does not extend to more generic storage access methods (file or block).
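As a rough illustration of the forward-on-miss behavior described above, the sketch below shows a single cache shared by all local web clients; handle_web_request and origin_fetch are hypothetical names, not any appliance's actual interface.

```python
# Forward-on-miss sketch of a shared web cache appliance (illustrative only).
shared_cache = {}  # one cache shared by every web client local to the appliance

def handle_web_request(url, origin_fetch):
    if url in shared_cache:    # hit: satisfied locally, no remote round trip
        return shared_cache[url]
    reply = origin_fetch(url)  # miss: forward to the intended web server
    shared_cache[url] = reply  # cache the reply for later queries
    return reply
```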
Other suggested cache mechanisms propose ways of increasing the cache hit rate by intelligently pre-fetching data into cache memory during periods of low processing and/or network activity. These schemes work better for applications such as web page access and other sequential data access patterns. Still other cache mechanisms rely on complex schemes for managing distributed cache systems on local networks, which are difficult to configure and manage. Their implementation may be efficient when both the requesting hosts and the target devices are located on the same local network. However, when the requesting hosts and the target devices are geographically separated, as in the Internet environment, such distributed cache schemes may not always produce the desired performance gains due to the data transmission latency across wide area networks.
All of the cache mechanisms described above have several drawbacks in light of the goal of providing a generic but highly efficient file and block storage data cache mechanism for geographically dispersed networks. It is desirable to provide a high performance, efficient cache system and method for network storage, and it is to this end that the present invention is directed.
SUMMARY OF THE INVENTION
The caching system and method built in accordance with this invention overcome the above problems and limitations of existing cache systems by implementing one cache in the network near the storage devices and another cache in the network near the requesting hosts. Thus, frequently accessed data from a group of storage devices is cached in the system deployed near the storage subsystems, while data frequently accessed by a group of hosts/clients is cached in the system deployed near the requesting host devices. This invention allows a network of arbitrary size to exist between the host-side cache system and the storage-side cache system.
The cache systems are implemented in a manner that allows read requests to be processed as efficiently as possible. Each read request initiated by a host is first checked against the cached data in the host-side cache system and is satisfied by that system if the requested data is present. If the host-side cache cannot satisfy the read request, the request is sent to the storage-side cache. If a cache hit occurs at the storage-side cache system, the read request is satisfied there. Only if both cache systems fail to generate a cache hit is the request forwarded to the target storage device. Thus, the slow access-time target storage device is accessed only when neither the host-side cache system nor the storage-side cache system can satisfy the read request: network traffic is reduced when a host-side cache hit occurs, and target storage latency is avoided when a storage-side cache hit occurs. In the case where both caches do not contain the data, when the data is transmitted from the
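The read path just described can be summarized in a short sketch; the interfaces below (host_cache.get, storage_cache.get, target_storage.read, with each cache returning None on a miss) are assumptions made for illustration, not the patent's implementation.

```python
# Sketch of the two-tier read path (hypothetical interfaces; each cache
# tier is assumed to return None on a miss).
def read_request(key, host_cache, storage_cache, target_storage):
    data = host_cache.get(key)       # 1. check the cache near the host
    if data is not None:
        return data                  #    hit: no wide-area traffic at all
    data = storage_cache.get(key)    # 2. check the cache near the storage
    if data is not None:
        return data                  #    hit: target-storage latency avoided
    return target_storage.read(key)  # 3. both miss: access the slow device
```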
