System and method for high-speed substitute cache

Electrical computers and digital processing systems: memory – Storage accessing and control – Specific memory composition

Reexamination Certificate


Details

Status: active
Patent number: 06629201

ABSTRACT:

TECHNICAL FIELD
The present invention is directed to a disk caching technique. In particular, certain embodiments are directed to disk caching software for use with an operating system.
BACKGROUND OF THE INVENTION
Computer users are always looking for ways to speed up operations on their computers. One source of drag on computer speed is the time it takes to conduct an input/output operation to a hard disk drive or other mechanical disk device. Such devices are slowed by mechanical movement latencies and I/O bus traffic requirements. One conventional method for avoiding this delay is to cache frequently accessed disk data in the computer's main memory. Access to this cached data in main memory is much quicker than always accessing the hard disk drive for the data: hard disk access time is replaced by main memory access time for the data resident in the cache.
SUMMARY OF THE INVENTION
A method of caching data in a computer having an operating system with a file caching mechanism comprises, in one embodiment: intercepting an input/output request stream; disabling the file caching mechanism with respect to all requests in the request stream that are directed to at least one selected disk volume; and accessing a direct block cache to satisfy a request of the request stream. Further related embodiments include a method of caching data in a computer having a window-based operating system, and a method wherein the step of disabling the file caching mechanism comprises disabling the mechanism based on disk volume identifier entries in a look-up table, which may be adjusted in accordance with input from a user.
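By way of illustration only, the following minimal C sketch shows one way such a per-volume look-up table and interception decision could be structured. The names volume_table, set_volume, and use_direct_block_cache, and the use of drive letters as volume identifiers, are assumptions made for this sketch, not the patented implementation.

/* Illustrative sketch: a per-volume look-up table consulted at the
 * interception point to decide whether the operating system's file caching
 * is disabled for a request and the direct block cache is used instead. */
#include <stdbool.h>
#include <stdio.h>

#define MAX_VOLUMES 26

typedef struct {
    char volume_id;      /* e.g. 'C', 'D': a drive letter standing in for a volume identifier */
    bool cache_enabled;  /* true: bypass OS file caching, use the direct block cache */
} volume_entry;

typedef struct {
    volume_entry entries[MAX_VOLUMES];
    int count;
} volume_table;

/* User input (for example from a settings interface) adjusts the table. */
static void set_volume(volume_table *t, char id, bool enabled)
{
    for (int i = 0; i < t->count; i++) {
        if (t->entries[i].volume_id == id) {
            t->entries[i].cache_enabled = enabled;
            return;
        }
    }
    if (t->count < MAX_VOLUMES) {
        t->entries[t->count].volume_id = id;
        t->entries[t->count].cache_enabled = enabled;
        t->count++;
    }
}

/* Called for each intercepted request: true means the request is directed to
 * a selected volume, so OS file caching is disabled for it and the direct
 * block cache is accessed to satisfy it. */
static bool use_direct_block_cache(const volume_table *t, char request_volume)
{
    for (int i = 0; i < t->count; i++) {
        if (t->entries[i].volume_id == request_volume)
            return t->entries[i].cache_enabled;
    }
    return false;  /* volumes not in the table keep normal OS caching */
}

int main(void)
{
    volume_table t = { .count = 0 };
    set_volume(&t, 'D', true);   /* the user selects volume D for direct block caching */
    printf("request to D: %s\n", use_direct_block_cache(&t, 'D') ? "direct block cache" : "OS file cache");
    printf("request to C: %s\n", use_direct_block_cache(&t, 'C') ? "direct block cache" : "OS file cache");
    return 0;
}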
In a further embodiment, a method comprises: searching a cache list for blocks whose data, written as a result of a write request, is more recent than the data present in the corresponding block on a mechanical disk; when a quota number of such blocks is found, sorting the blocks into an optimal write order; and generating at least one request to write the blocks to the mechanical disk in that order. The step of searching for blocks may be instituted periodically, with a wake-up time period that may be adjusted in accordance with user input. The quota number may also be adjusted in accordance with user input, and the set of steps of searching, sorting, and writing may be activated and deactivated. In one embodiment, the step of sorting orders the blocks in accordance with a count of physical memory locations, one physical memory location corresponding to each block, the count beginning at a physical memory location on an outermost track of the disk, counting in a rotation direction around the outermost track, and continuing by moving in one track and counting in the same rotation direction upon reaching a location that has already been counted, until a final memory location on an innermost track is reached.
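The following C sketch illustrates one wake-up of such a lazy-write pass under stated assumptions: each cached block is assumed to carry a precomputed physical-location index reflecting the outermost-to-innermost count described above, and the type names and the write_block stub are hypothetical.

/* Illustrative lazy-write pass: scan cached blocks for data newer than the
 * on-disk copy; once a quota of such blocks is found, sort them by an assumed
 * precomputed physical-location index and issue the writes in that order. */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    unsigned block_id;
    unsigned long physical_index; /* position in the outer-to-inner location count */
    bool dirty;                   /* data newer than the copy on the mechanical disk */
} cache_block;

static int by_physical_index(const void *a, const void *b)
{
    const cache_block *x = *(const cache_block * const *)a;
    const cache_block *y = *(const cache_block * const *)b;
    return (x->physical_index > y->physical_index) - (x->physical_index < y->physical_index);
}

/* Stand-in for generating a write request to the mechanical disk. */
static void write_block(const cache_block *b)
{
    printf("write block %u at physical index %lu\n", b->block_id, b->physical_index);
}

/* One wake-up of the lazy writer; the quota is user adjustable. */
static void lazy_write_pass(cache_block *list, size_t n, size_t quota)
{
    cache_block *found[64];
    size_t count = 0;

    for (size_t i = 0; i < n && count < quota && count < 64; i++) {
        if (list[i].dirty)
            found[count++] = &list[i];
    }
    if (count < quota)
        return;  /* quota not reached; try again at the next wake-up */

    qsort(found, count, sizeof found[0], by_physical_index);
    for (size_t i = 0; i < count; i++) {
        write_block(found[i]);
        found[i]->dirty = false;
    }
}

int main(void)
{
    cache_block list[] = {
        { 7, 900, true }, { 3, 120, true }, { 9, 450, false }, { 1, 300, true },
    };
    lazy_write_pass(list, 4, 3);  /* a quota of 3 dirty blocks triggers the sorted write */
    return 0;
}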
Yet another embodiment comprises creating an associative map which, for each given block of a set of blocks, maps a block identifier for the given block to a pointer, the pointer taking a value chosen from the values of: (i) a pointer value signifying that there is no data in cache memory corresponding to the given block; and (ii) a pointer value that points to a cache memory location containing data from the given block. The method may further comprise, upon receipt of an input/output request involving a block of the set of blocks, determining the value of the pointer to which the block identifier for the block is mapped in the associative map; and may comprise, upon determining that the value of the pointer is the pointer value that points to a cache memory location containing data from the given block, accessing the cache memory location to satisfy the input/output request. The method may also comprise, upon determining that the value of the pointer is the pointer value that points to a cache memory location containing data from the given block, updating a least recently used counter field for the given block in the cache memory location to be equal to the value of a global least recently used counter.
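A minimal C sketch of such an associative map and its hit path follows. It uses a plain array indexed by block identifier as a stand-in for whatever associative structure is actually used, and the names assoc_map, cache_lookup, and global_lru_counter are assumptions for this sketch.

/* Illustrative associative map: a NULL pointer means the block has no data in
 * cache; otherwise the pointer leads to a cache memory location whose least
 * recently used (LRU) counter is refreshed from a global counter on each hit. */
#include <stdio.h>
#include <stdlib.h>

#define BLOCK_BYTES 4096
#define MAP_SIZE    1024   /* number of block identifiers covered by the map */

typedef struct {
    unsigned long lru_counter;      /* copied from the global counter on access */
    unsigned char data[BLOCK_BYTES];
} cache_entry;

static cache_entry *assoc_map[MAP_SIZE];   /* block id -> cache entry (or NULL) */
static unsigned long global_lru_counter;

/* Look up a block for an I/O request. Returns the cache entry on a hit and
 * NULL on a miss (the caller would then request a free block, as described
 * in the following embodiment). */
static cache_entry *cache_lookup(unsigned block_id)
{
    if (block_id >= MAP_SIZE)
        return NULL;
    cache_entry *e = assoc_map[block_id];
    if (e != NULL)
        e->lru_counter = ++global_lru_counter;  /* record recency of use */
    return e;
}

int main(void)
{
    /* Simulate one cached block and probe the map. */
    assoc_map[42] = calloc(1, sizeof(cache_entry));
    printf("block 42: %s\n", cache_lookup(42) ? "hit" : "miss");
    printf("block 43: %s\n", cache_lookup(43) ? "hit" : "miss");
    free(assoc_map[42]);
    return 0;
}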
In a further related embodiment, the method comprises, upon determining that the value of the pointer is the pointer value signifying that there is no data in cache memory corresponding to the given block, generating a request to receive a free block of memory to be used as a new cache memory block. The method may also comprise, upon receiving the request for a free block of memory, determining whether a virtual address from a memory table of virtual memory addresses is available for use by the direct block cache; and, if so, causing the memory block corresponding to the virtual address to be used as the new cache memory block. Additionally, the method may comprise, if a virtual address from the memory table is not available for use by the direct block cache,
(i) searching for block identifiers in the associative map which are mapped to pointers having a pointer value that points to a cache memory location containing data, and associating each such block identifier with a least recently used counter from the cache memory location to which each block identifier corresponds;
(ii) sorting the block identifiers according to the numerical order of the least recently used counters to which they correspond; and
(iii) for each of a number, at least one, of the lowest ordered block identifiers, causing the memory block corresponding to the pointer value to which that block identifier is mapped to be added to a list from which the new cache memory block may be chosen.
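The following C sketch illustrates steps (i) through (iii) under the same illustrative assumptions as the previous sketches: cached block identifiers are paired with their least recently used counters, the pairs are sorted, and the memory behind the lowest ordered identifiers is handed to a free list from which new cache blocks are drawn. The structure and function names are hypothetical.

/* Illustrative eviction path used when no free virtual address is available. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    unsigned block_id;
    unsigned long lru_counter;   /* taken from the cache memory location */
    void *cache_memory;          /* memory block behind the map's pointer value */
} lru_candidate;

static int by_lru(const void *a, const void *b)
{
    const lru_candidate *x = a, *y = b;
    return (x->lru_counter > y->lru_counter) - (x->lru_counter < y->lru_counter);
}

/* Move the memory of the `evict_count` least recently used blocks to the free
 * list; free_list/free_count model the list new cache memory blocks are
 * chosen from. The number evicted could be bounded by the user-input maximum
 * and minimum described in the next paragraph. */
static void evict_lru(lru_candidate *cands, size_t n, size_t evict_count,
                      void **free_list, size_t *free_count)
{
    qsort(cands, n, sizeof *cands, by_lru);
    for (size_t i = 0; i < n && i < evict_count; i++) {
        free_list[(*free_count)++] = cands[i].cache_memory;
        printf("evicting block %u (lru %lu)\n", cands[i].block_id, cands[i].lru_counter);
    }
}

int main(void)
{
    lru_candidate cands[] = {
        { 10, 50, NULL }, { 11, 5, NULL }, { 12, 99, NULL },
    };
    void *free_list[3];
    size_t free_count = 0;
    evict_lru(cands, 3, 2, free_list, &free_count);  /* evict the 2 oldest blocks */
    return 0;
}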
In further related embodiments, the method comprises adjusting the number of lowest ordered block identifiers in accordance with a user-input maximum and minimum, such that the number of block identifiers which are mapped to pointers having a pointer value that points to a cache memory location containing data does not fall below the minimum or exceed the maximum. Such a maximum and minimum may be adhered to for each disk volume of a set of disk volumes.
In further embodiments, a method of caching comprises adjusting, by number of sectors of data per block, a size per block of a set of blocks in accordance with user input; the size per block may, for example, range from 2 sectors per block to 64 sectors per block. Another embodiment comprises caching metafile data for a file system of the operating system.
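For scale, assuming the conventional 512-byte sector (a sector size the text does not state), the range of 2 to 64 sectors per block corresponds to cache block sizes of roughly 1 KB to 32 KB.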
In a still further embodiment, a method of caching data in a computer having an operating system, a kernel mode portion with limited memory, and a user mode portion comprises providing an expanded memory to a cache operating in the kernel mode portion. An embodiment of the method includes creating, in the user mode portion, at least one memory table comprising a set of virtual memory addresses, and accessing a memory table of the at least one memory table when allocating memory to a cache. More than one such memory table may be created, and the number created may be adjusted in accordance with user input. Context switching may be used between the memory tables when more than one is created. Memory tables containing virtual addresses corresponding to at least 2 GB of memory may be created. The method may also utilize a program of the operating system that is used for setting up user process virtual address maps to create the at least one memory table.
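The user-mode half of this arrangement can be pictured with the following minimal C sketch, which reserves several large runs of virtual address space as "memory tables". POSIX mmap is used purely for illustration (the patent text is directed at a window-based operating system), the table size and count are arbitrary assumptions, and handing the reserved regions to a kernel-mode cache, locking pages, and context switching between tables are not shown.

/* Illustrative user-mode reservation of memory tables: each table is one
 * contiguous run of virtual addresses from which the kernel-mode cache could
 * be granted memory beyond what its own address space allows. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#define TABLE_BYTES (1UL << 30)   /* 1 GB of virtual addresses per table */
#define TABLE_COUNT 2             /* adjustable, per the described method */

int main(void)
{
    void *tables[TABLE_COUNT];

    for (int i = 0; i < TABLE_COUNT; i++) {
        /* Reserve a contiguous run of virtual addresses; anonymous pages are
         * typically not committed until touched, so this mainly claims
         * address space. */
        tables[i] = mmap(NULL, TABLE_BYTES, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (tables[i] == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        printf("memory table %d reserved at %p\n", i, tables[i]);
    }

    for (int i = 0; i < TABLE_COUNT; i++)
        munmap(tables[i], TABLE_BYTES);
    return 0;
}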
A still further embodiment provides a user interface through which cache performance data can be displayed, cache parameters can be adjusted, and a cache itself or individual cache features can be enabled or disabled.
The above embodiments may be used in many different possible combinations, as will be apparent from the description below. Additionally, embodiments that are cache processes operating in computers have features analogous to those just summarized, as will also be apparent from the description below.


