Electrical computers and digital processing systems: memory – Storage accessing and control – Specific memory composition
Reexamination Certificate
2002-08-19
2004-11-02
Sparks, Donald (Department: 2187)
Electrical computers and digital processing systems: memory
Storage accessing and control
Specific memory composition
C711S129000, C711S133000, C711S134000, C711S136000, C711S154000, C711S159000, C711S160000, C711S170000, C711S171000, C711S172000, C711S173000, C711S203000, C711S205000, C711S206000, C711S207000, C711S208000, C711S209000
Reexamination Certificate
active
06813684
ABSTRACT:
BACKGROUND OF THE INVENTION
The present invention relates to a method for controlling accesses to storage devices such as disks. More particularly, the present invention relates to a method for controlling a disk cache memory.
Compared with the arithmetic operations of a CPU, accessing a disk takes much longer. A cache memory is therefore used to store data that has been accessed once, so that the next access to the same data can be served from the cache memory, shortening the access time. However, the capacity of the cache memory is smaller than that of the disk, so the cache memory must often discard older data to make room for new data.
One well-known general cache memory control method is the least recently used (LRU) algorithm. Under the LRU algorithm, when some data in the cache memory must be replaced with new data, the least recently accessed data is purged. A last-access record is attached to each piece of data stored in the cache memory, and whenever that data is accessed the record is updated, marking the data as the most recently used. This access information allows older data to be discarded from the cache memory in order. The official gazette of JP-A No.65927/1999 discloses a system provided with a “priority cache memory” that caches files assigned priority levels, in addition to the ordinary LRU-controlled cache memory. This priority cache memory employs a control method that determines which file to discard according to its priority level.
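As an illustration only (the patent text contains no code), the LRU policy just described can be sketched in a few lines of Python; an OrderedDict is used so that iteration order doubles as recency order:

    from collections import OrderedDict

    class LRUCache:
        """Minimal sketch of the LRU policy described above."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.entries = OrderedDict()  # oldest entry first, newest last

        def get(self, key):
            if key not in self.entries:
                return None                    # miss: caller fetches from disk
            self.entries.move_to_end(key)      # mark as most recently used
            return self.entries[key]

        def put(self, key, value):
            if key in self.entries:
                self.entries.move_to_end(key)
            self.entries[key] = value
            if len(self.entries) > self.capacity:
                self.entries.popitem(last=False)  # purge least recently used

On every access the entry is moved to the tail of the list, so the head always holds the least recently used data, which is exactly what is purged on overflow.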
The LRU algorithm, however, sometimes swaps frequently used data out of the cache memory. This is because the algorithm regards even data that is accessed only once as the most recently used, so a burst of one-time accesses can evict data that is in regular use.
In the conventional technique described above, which uses such a priority cache memory together with an ordinary LRU-controlled cache memory, a priority attribute is assigned beforehand to each file that is expected to be accessed frequently, so that the file is stored in the priority cache memory. This solves the above problem and increases the cache hit percentage, while the LRU-controlled cache memory remains available for files that are rarely accessed overall but very frequently within a short period. To achieve this separation, however, the conventional technique must distinguish the files to be stored in the priority cache memory from the others, and the users are required to determine and set the priority levels of those files themselves. In addition, the priority cache memory is physically separate from the LRU-controlled cache memory, so the capacity of each of those cache memories cannot be set freely. These are the problems of the conventional technique.
Under such circumstances, it is an object of the present invention to provide a cache memory system, and a method for controlling the cache system, in which the cache memory is divided into a plurality of areas and data is stored in each of those areas automatically according to its state. This prevents frequently accessed data from being swapped out by rarely accessed data, thereby improving the cache hit percentage and the I/O performance.
It is another object of the present invention to provide a cache system, and a method for controlling the cache system, in which the capacity of each of the divided cache memory areas can be set freely, making it easier to configure each area according to the state of the data to be stored in it.
SUMMARY OF THE INVENTION
The typical feature of the present invention disclosed in this specification is cache replacement performed independently for each area of the cache memory. The cache memory is divided into a plurality of areas; when data is stored, the number of the area it belongs to is recorded in the corresponding segment, and an upper limit size is set for each area. The preset areas are thus virtual: they exist only as the sizes set for them, not as physically separate memories.
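One way to read this arrangement, as a sketch under stated assumptions rather than the patent's actual implementation, is a set of independent LRU lists sharing one physical cache, each capped at its own upper limit. The Python sketch below builds on the LRUCache above; read_from_disk is a hypothetical stand-in for the real disk access path:

    def read_from_disk(block):
        # Hypothetical stand-in for the actual disk access path.
        return b"\x00" * 512

    class PartitionedCache:
        """Cache divided into virtual areas: each area has its own
        upper-limit size and replaces data independently, so evictions
        in one area never touch another area's segments."""

        def __init__(self, area_limits):
            # area_limits maps area number -> upper limit (in segments)
            self.areas = {num: LRUCache(limit)
                          for num, limit in area_limits.items()}

        def read(self, area_number, block):
            data = self.areas[area_number].get(block)
            if data is None:                   # miss in this area only
                data = read_from_disk(block)
                self.areas[area_number].put(block, data)
            return data

        def write(self, area_number, block, data):
            # The area number is recorded with the segment implicitly,
            # by which per-area list the segment is stored in.
            self.areas[area_number].put(block, data)

Because the areas are only bookkeeping over one memory, their limits can be chosen freely, which matches the stated object of letting each area's capacity be set at will.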
More concretely, identification information is added to each disk data I/O command issued by the CPU. The identification information denotes the type of the data to be accessed, and when the I/O command writes data to or reads data from the cache memory, it is used as the number of the target area. The identification information is thus recorded in the cache memory as the number of the area in which the data is written or from which it is read.
Still more concretely, the type of data to be accessed by an I/O command is at least user data or meta data; the meta data can be further divided into i-node meta data, directory meta data, and others. User data is specified by individual application programs, while meta data is used by the file system to manage files. Consequently, when the CPU on which the file system runs issues an I/O command, the system can determine automatically which type of data is to be accessed, and the identification information for the data type can be added to each I/O command without requiring anything from the user.
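The following sketch shows how such a tag might ride on an I/O command; the concrete type values and field names are assumptions for illustration, since the patent text does not fix an encoding:

    from dataclasses import dataclass
    from enum import IntEnum

    class DataType(IntEnum):
        """Illustrative identification values, doubling as area numbers."""
        USER_DATA = 0
        INODE_METADATA = 1
        DIRECTORY_METADATA = 2
        OTHER_METADATA = 3

    @dataclass
    class IOCommand:
        """Disk I/O command carrying the data-type tag. The file system
        fills in data_type itself; the user never supplies it."""
        block: int
        is_write: bool
        data_type: DataType  # used directly as the cache area number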
User data is generally accessed at random, whereas meta data is accessed more often than user data. With the cache memory control method described above, the data type attached to each I/O command designates the cache memory area to be accessed, the upper limit size of each area is enforced, and data in each area is replaced with new data independently of the other areas. Meta data is therefore never swapped out of the cache memory to make room for new user data, and the cache hit percentage increases.
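Putting the sketches together, the claimed effect can be illustrated as follows; the area limits are arbitrary example values, not figures from the patent:

    cache = PartitionedCache({
        DataType.USER_DATA: 64,           # limits in segments (examples)
        DataType.INODE_METADATA: 16,
        DataType.DIRECTORY_METADATA: 16,
        DataType.OTHER_METADATA: 8,
    })

    # The file system tags an i-node read, and the tag picks the area.
    cmd = IOCommand(block=1234, is_write=False,
                    data_type=DataType.INODE_METADATA)
    inode = cache.read(cmd.data_type, cmd.block)

    # A long run of random user-data reads can only churn the USER_DATA
    # area; the i-node cached above stays resident in its own area.
    for block in range(100000, 101000):
        cache.read(DataType.USER_DATA, block)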
Other features of the present invention will become more apparent in the description of the preferred embodiments.
REFERENCES:
patent: 5537635 (1996-07-01), Douglas
patent: 5835964 (1998-11-01), Draves et al.
patent: 6047354 (2000-04-01), Yoshioka et al.
patent: 11-65927 (1997-08-01), None
patent: 2002-7213 (2000-06-01), None
Fujiwara Shinji
Sakaguchi Akihiko
Juan Carlos A. Marquez, Esq.
Stanley P. Fisher, Esq.
Hitachi, Ltd.
Reed Smith LLP
Sparks, Donald