Method and system for coherently caching I/O devices across...

Electrical computers and digital processing systems: memory – Storage accessing and control – Specific memory composition

Reexamination Certificate


Details

C711S118000, C711S152000, C711S163000


active

06370615

ABSTRACT:

BACKGROUND OF THE INVENTION
The present invention is directed to a disk caching technique implemented in software, in particular disk caching software for use on the OpenVMS operating system. OpenVMS is the operating system used on VAX and Alpha AXP computers.
Computer users are always looking for ways to speed up operations on their computers. One source of drag on computer speed is the time it takes to perform an input/output operation to a hard disk drive or other mechanical disk device. Such devices are slowed by mechanical movement latencies and I/O bus traffic requirements. One conventional way to avoid this delay is to cache frequently accessed disk data in the computer's main memory. Access to this cached data in main memory is much quicker than accessing the hard disk drive each time: hard disk access speed is replaced by main memory access speed for the data resident in the cache.
There is a significant downside to the conventional form of caching. Caches are conventionally organised as a collection of fixed-size areas, known as buckets, in which the disk data is stored; the buckets together make up the fixed total amount of computer main memory allocated to the cache. Whatever the size of the original disk access, the data must be accommodated in these cache buckets. Thus, if the disk access is very small compared to the cache bucket size, most of the bucket's storage area is wasted, containing no valid disk data at all. If the disk is accessed by many such small accesses, the cache buckets are used up by these small data sizes and the cache cannot hold as much data as was originally expected. If the disk access is larger than the cache bucket size, either the data is not accommodated in the cache, or several cache buckets must be used to hold the disk data, which makes cache management very complicated.
With this conventional approach to disk caching, the computer user must compromise on a single cache bucket size for all users of the computer system. If the computer is used for several different applications, then either the bucket size is biased toward one type of application, putting all the other applications at a disadvantage, or the bucket size is averaged across all applications, making the cache less effective than desired. It is an object of the present invention to reduce this downside of using a disk cache.
SUMMARY OF THE INVENTION
In accordance with an embodiment of the invention, the total cache is organised into three separate caches, each with a different cache bucket size, for small, medium, and large disk access sizes. The computer user has control over the bucket sizes for each of the three cache areas.
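The three-tier organisation described above can be sketched as a routine that routes each disk access to the smallest bucket tier that can hold it. This is an illustrative sketch, not the patent's implementation; the tier names and byte sizes are invented for the example (the patent leaves them user-configurable).

```python
# Hypothetical bucket sizes for the three cache areas; in the described
# system these are set by the computer user, not fixed constants.
BUCKET_SIZES = {"small": 4096, "medium": 32768, "large": 131072}

def pick_tier(access_size):
    """Return the smallest tier whose bucket can hold the access,
    or None when the access exceeds all three bucket sizes (bypass)."""
    for tier, size in sorted(BUCKET_SIZES.items(), key=lambda kv: kv[1]):
        if access_size <= size:
            return tier
    return None  # oversize access: not copied into the cache
```

A 1 KB access would land in the small tier, while a 200 KB access returns None and bypasses the cache, matching the oversize behaviour described later in the summary.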
In accordance with an embodiment of the invention, the computer user has control over which disks on the computer system are included in the caching and which disks are excluded from it.
In accordance with an embodiment of the invention, the total cache contained in the computer main memory, made up of the three cache areas, does not have a single fixed size; it changes with how the computer system is used. The total cache is allowed to grow in response to high disk access demand, and to shrink when available main memory becomes at a premium to the computer's users. The main memory used by the cache thus fluctuates with disk data access and with the demands on main memory. The computer user has control over the upper and lower limits between which the total cache size may occupy the computer's main memory. The total cache will then consist mainly of small, medium, or large bucket areas, or a spread of the three cache area sizes, depending on how the cached disks are accessed on the system.
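The grow-and-shrink policy between user-set limits can be sketched as a simple target-size function. This is a minimal sketch under assumed semantics: the growth increment and the low-memory threshold are invented for illustration; the patent only specifies that the cache grows under disk demand, shrinks under memory pressure, and stays between the user's lower and upper limits.

```python
def target_cache_size(current, lower, upper, free_memory, low_memory_mark):
    """Return the next total cache size, clamped to [lower, upper].
    Shrinks when free main memory falls below a low-water mark;
    otherwise grows by a fraction of the free memory (assumed policy)."""
    if free_memory < low_memory_mark:
        # Main memory is at a premium: give back the shortfall, but
        # never shrink below the user's lower limit.
        return max(lower, current - (low_memory_mark - free_memory))
    # Room to grow: take a modest share of free memory, capped at the
    # user's upper limit.
    return min(upper, current + free_memory // 10)
```

With a lower limit of 50 and an upper limit of 200, a cache of size 100 shrinks toward 70 when free memory (10) is below the mark (40), and grows toward 110 when free memory is plentiful (100).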
In accordance with an embodiment of the invention, once the total cache size has grown to its upper limit, further new demands for cache data are handled by cache bucket replacement, which operates on a least recently used algorithm. Cache bucket replacement also occurs if the total cache is inhibited from growing owing to high demand on computer main memory by other applications and users of the computer system.
In accordance with an embodiment of the invention, when a disk which is being cached is subject to a new read data access by a computer user, the required disk data is sent to the user and also copied into an available cache bucket, selected by size fit. This cache bucket is either newly obtained from the computer main memory or reclaimed from an already resident cache bucket using a least recently used algorithm. If this disk data, now resident in the cache, is again requested by a read access from some computer user, the data is returned to the requesting user directly from the cache bucket and does not involve any hard disk access at all. The data is returned at the faster main memory access speed, demonstrating the speed advantage of a disk cache mechanism.
In accordance with an embodiment of the invention, when a disk which is being cached is subject to a new read data access by a computer user and this disk access is larger than all three cache bucket sizes, the disk data is not copied to the cache. This oversize read access is recorded, along with other cache statistics, allowing the computer user to interrogate the use of the cache. Using these statistics, the computer user can adjust the sizes of the three cache buckets to best suit the disk use on the computer system.
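The read path described in the two passages above — serve hits from the cache, fill the cache on a miss, and bypass but count reads larger than every bucket — can be sketched as follows. The function names, the statistics keys, and the largest-bucket constant are assumptions made for the example.

```python
stats = {"hits": 0, "misses": 0, "oversize": 0}
LARGEST_BUCKET = 131072  # assumed size of the largest bucket tier

def cached_read(cache, lba, size, disk_read):
    """Read `size` bytes at logical block `lba`, using `cache` (a dict)
    and falling back to `disk_read` for actual disk I/O."""
    key = (lba, size)
    if size > LARGEST_BUCKET:
        stats["oversize"] += 1        # recorded so the user can retune buckets
        return disk_read(lba, size)   # oversize data is not copied to the cache
    if key in cache:
        stats["hits"] += 1            # returned at memory speed, no disk access
        return cache[key]
    stats["misses"] += 1
    data = disk_read(lba, size)
    cache[key] = data                 # copy into an available cache bucket
    return data
```

Repeating a read returns the data without touching the disk, while an oversize read goes to disk every time and only bumps the oversize counter.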
In accordance with an embodiment of the invention, when a write access is performed to a disk which is being cached and the disk data area being written was previously read into the cache, i.e. an update operation on the disk data, the cache buckets currently holding the previously read disk data are invalidated on all computers on the network.
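The invalidation step can be sketched for a single node as removing every cached bucket whose block range overlaps the written range, then performing the write. The network-wide broadcast that the patent describes (invalidating on all computers) is out of scope for this sketch; the overlap test and helper names are assumptions.

```python
def cached_write(cache, lba, size, data, disk_write):
    """Invalidate cache buckets overlapping the written range, so a
    later read cannot return stale data, then write to disk.
    (Single-node sketch; the described system also invalidates the
    corresponding buckets on every other computer on the network.)"""
    for (c_lba, c_size) in list(cache):
        # Two ranges [lba, lba+size) and [c_lba, c_lba+c_size) overlap
        # when each one starts before the other ends.
        if c_lba < lba + size and lba < c_lba + c_size:
            del cache[(c_lba, c_size)]
    disk_write(lba, size, data)
```

Writing two blocks at block 5 invalidates a cached bucket covering blocks 0-9 but leaves a bucket at blocks 20-24 untouched.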
Other objects and advantages of the invention will become apparent during the following description of the presently preferred embodiments of the invention taken in conjunction with the drawing.


REFERENCES:
patent: 3820078 (1974-06-01), Curley et al.
patent: 4755930 (1988-07-01), Wilson, Jr. et al.
patent: 4775955 (1988-10-01), Liu
patent: 4849879 (1989-07-01), Chinnaswamy et al.
patent: 5025366 (1991-06-01), Baror
patent: 5060144 (1991-10-01), Sipple et al.
patent: 5062055 (1991-10-01), Chinnaswamy et al.
patent: 5067071 (1991-11-01), Schanin et al.
patent: 5091846 (1992-02-01), Sachs et al.
patent: 5136691 (1992-08-01), Baror
patent: 5185878 (1993-02-01), Baror et al.
patent: 5241641 (1993-08-01), Iwasa et al.
patent: 5265235 (1993-11-01), Sindhu et al.
patent: 5282272 (1994-01-01), Guy et al.
patent: 5301290 (1994-04-01), Tetzlaff et al.
patent: 5307506 (1994-04-01), Colwell et al.
patent: 5323403 (1994-06-01), Elliott
patent: 5335327 (1994-08-01), Hisano et al.
patent: 5347648 (1994-09-01), Stamm et al.
patent: 5353430 (1994-10-01), Lautzenheiser
patent: 5363490 (1994-11-01), Alferness et al.
patent: 5390318 (1995-02-01), Ramakrishnan et al.
patent: 5408653 (1995-04-01), Josten et al.
patent: 5426747 (1995-06-01), Weinreb et al.
patent: 5452447 (1995-09-01), Nelson et al.
patent: 5566315 (1996-10-01), Milillo et al.
patent: 5606681 (1997-02-01), Smith et al.
“I/O Express Technical Report”, Executive Software International, Glendale, CA; Feb. 1992-Jan. 1993.
“The VAX/VMS Distributed Lock Manager”, Snaman, Jr., William E. et al., Digital Technical Journal, No. 5, Sep. 1987.
“The Design and Implementation of a Distributed File System”, Goldstein, Digital Technical Journal, No. 5, Sep. 1987.
“File System Operation in a VAXcluster Environment”, Chapter 8, VMS File System Internals, McCoy, Digital Press, 1990.
“The Stanford Dash Multiprocessor”, Lenoski et al., Computer, IEEE Computer Society, Mar. 1992, pp. 63-79.
“Cache-Coherency Protocols Keep Data Consistent”, Gallant, J., Electronic Technology f
