Error detection/correction and fault detection/recovery – Data processing system error or fault handling – Reliability and availability
Reexamination Certificate
1999-05-27
2001-05-15
Peikari, B. James (Department: 2186)
C714S006130, C714S006130, C714S042000, C714S764000, C711S156000
active
06233700
ABSTRACT:
FIELD OF THE INVENTION
This invention relates to a method for management of cache pages and a medium having a cache page management program stored therein. More particularly, it relates to a method for management of cache pages used in a disk storage device and a medium having a cache page management program stored therein.
DESCRIPTION OF THE RELATED ART
In a cache page management method used in a control device of the above type, one of the crucial requirements is to raise the hit ratio of the cache while keeping the cache memory capacity small.
For accomplishing this object, there is proposed a “Segment LRU method” such as is introduced in, for example, NIKKEI ELECTRONICS No. 617 (issued September 1994).
Referring to FIG. 7, a general structure of a disk control device is explained. FIG. 7 is a block diagram showing a routine disk control device having a cache memory. As shown therein, a disk control device 30 is arranged between a central processing unit 20 and disk storage devices 40 to 43, and is constituted by a cache memory 33, an MPU 32 and a ROM 31 having a control program 31a.
In this cache memory 33, there are provided a set of cache pages, composed of a suitable number of cache pages 33b of suitable sizes, and a LRU table 33a adapted for supervising the cache pages 33b. As a matter of course, the accessing speed in the cache memory 33 is significantly higher than the accessing speed in the disk storage devices 40 to 43.
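As a rough illustration of why this arrangement pays off, the following Python sketch (not part of the patent; the function name, timings and block size are assumptions made here for illustration) serves a read from the cache memory 33 when possible and only otherwise goes to the disk storage devices 40 to 43:

import time

CACHE_ACCESS_S = 0.000001   # illustrative figure: access to cache memory 33 is fast
DISK_ACCESS_S = 0.010       # illustrative figure: access to disk storage devices 40-43 is slow

cache_pages = {}            # block address -> cached data, standing in for cache pages 33b

def read_block(block_addr):
    """Serve a read from the cache when possible, otherwise from the disk."""
    if block_addr in cache_pages:
        time.sleep(CACHE_ACCESS_S)      # fast path: data already in a cache page
        return cache_pages[block_addr]
    time.sleep(DISK_ACCESS_S)           # slow path: fetch the block from the disk
    data = b"\x00" * 512                # placeholder for the data read from disk
    cache_pages[block_addr] = data      # keep a copy in a cache page for later hits
    return data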
Referring to FIG. 8, the segment LRU method is now explained. FIG. 8 shows the detailed structure of the LRU table shown in FIG. 7.
The segment LRU method is a method for supervising cache pages that exploits the fact that, if a business application is run on an information processing system and a particular block on a disk storage device is accessed more than twice within a finite period of time, the probability is high that the same block will be accessed again subsequently. It is a feature of this technique that the LRU table 33a is split into two regions, that is, a protection area 1 and an examination area 2, as shown in FIG. 8.
The LRU table 33a, shown in FIG. 7, is constituted by plural entries 3 in one-to-one correspondence with the plural cache pages 33b. Each entry 3 has the address etc. of the associated cache page 33b and a flag area 4 for storage of a flag, with a value of PROT or PROB, indicating whether the entry belongs to the protection area 1 or the examination area 2.
In this manner, the protection area 1 and the examination area 2 are each of a list structure in which the entries, from the most recently accessed entry (most recently used, termed MRU) down to the entry not accessed for the longest time (least recently used, termed LRU), are interconnected by pointers.
The total number of the entries is physically determined by the capacity of the cache memory 33. The sizes of the protection area 1 and the examination area 2 are pre-set fixed values and are not changed during the operation.
The entry 3 at the MRU position of the protection area 1 is indicated by a pointer 6a, referred to below as PrtMruP, while the entry 3 at the LRU position of the protection area 1 is indicated by a pointer 6b, referred to below as PrtLruP.
Similarly, the entry at the MRU position of the examination area 2 is indicated by a pointer 6c, referred to below as PrbMruP, while the entry 3 at the LRU position of the examination area 2 is indicated by a pointer 6d, referred to below as PrbLruP.
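The table can be pictured concretely with the following Python sketch. It is not taken from the patent: the Entry fields, the use of ordered dictionaries in place of the pointer-linked lists of FIG. 8, and the accessor names are assumptions chosen to mirror the description above (first item = LRU end, last item = MRU end of each area).

from collections import OrderedDict
from dataclasses import dataclass

PROT, PROB = "PROT", "PROB"           # values stored in the flag area 4

@dataclass
class Entry:                          # one entry 3 of the LRU table 33a
    block_addr: int                   # disk block backing the cache page
    page_data: bytes                  # stands in for the cache page 33b
    flag: str                         # PROT (area 1) or PROB (area 2)

# Each area is kept in access order: first key = LRU end, last key = MRU end,
# which plays the role of the pointer chain between entries shown in FIG. 8.
protection = OrderedDict()            # protection area 1 (fixed size)
examination = OrderedDict()           # examination area 2 (fixed size)

# Each accessor returns the block address at that position, or None if the area is empty.
def prt_mru(): return next(reversed(protection), None)   # like PrtMruP (pointer 6a)
def prt_lru(): return next(iter(protection), None)       # like PrtLruP (pointer 6b)
def prb_mru(): return next(reversed(examination), None)  # like PrbMruP (pointer 6c)
def prb_lru(): return next(iter(examination), None)      # like PrbLruP (pointer 6d)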
Meanwhile, in the segment LRU method, if a cache failure occurs, the entry 3 in the LRU position of the examination area 2 is driven from the cache memory 33 and a new entry 3 is added to the MRU position in the examination area 2. A value PROB is stored in the flag area 4 of the new entry 3 and data corresponding to the cache failure is stored and held in an associated cache page 33b.
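A minimal Python sketch of this cache-failure handling follows. It is not from the patent: the area size and the dictionary-based entry are assumptions, and the examination area is again an ordered dictionary whose first item is the LRU end and whose last item is the MRU end.

from collections import OrderedDict

EXAMINATION_SIZE = 4       # pre-set fixed size of examination area 2 (value assumed)

def handle_cache_failure(examination, block_addr, data):
    """On a cache failure, drive the LRU entry out of the examination area if the
    area is full, then add the new entry at its MRU position with flag PROB."""
    if len(examination) >= EXAMINATION_SIZE:
        evicted_addr, evicted = examination.popitem(last=False)   # entry at LRU position leaves the cache
        # the cache page that was held by the evicted entry becomes free for reuse
    examination[block_addr] = {"flag": "PROB", "data": data}      # new entry at MRU position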
In case of a cache hit, it is verified, based on the value of the flag area 4, whether the hit has occurred in the protection area 1 or in the examination area 2. If the flag is PROT, the hit has occurred in the protection area 1, so that the entry 3 corresponding to the hit is moved to the MRU position in the protection area 1.
If the flag is PROB, the hit has occurred in the examination area 2, so that the entry 3 corresponding to the hit is moved to the MRU position in the protection area 1 and, at the same time, the flag area 4 of the entry 3 is changed to PROT. The decrease caused in the examination area 2 by this operation is compensated for by moving the entry 3 at the LRU position in the protection area 1 to the MRU position of the examination area 2. The flag area 4 of the moved entry 3 is rewritten from PROT to PROB.
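Both hit cases can be sketched in Python as follows. As before, this is an illustration rather than the patent's own code: the protection-area size and the dictionary-based entries are assumptions, and each area is an ordered dictionary with its first item at the LRU end and its last item at the MRU end.

from collections import OrderedDict

PROTECTION_SIZE = 4        # pre-set fixed size of protection area 1 (value assumed)

def handle_cache_hit(protection, examination, block_addr):
    """Move the hit entry to the MRU position of the protection area. A hit in the
    examination area promotes the entry (PROB -> PROT) and, to keep the area sizes
    constant, demotes the LRU entry of the protection area to the examination area."""
    if block_addr in protection:                      # flag is PROT: hit in area 1
        protection.move_to_end(block_addr)            # re-link the entry at the MRU position
        return
    entry = examination.pop(block_addr)               # flag is PROB: hit in area 2
    entry["flag"] = "PROT"
    if len(protection) >= PROTECTION_SIZE:            # make room at the cost of area 1's LRU entry
        demoted_addr, demoted = protection.popitem(last=False)
        demoted["flag"] = "PROB"
        examination[demoted_addr] = demoted           # demoted entry becomes MRU of area 2
    protection[block_addr] = entry                    # hit entry becomes MRU of area 1

Calling handle_cache_failure for a block and later handle_cache_hit for that same block thus moves the block's entry from the examination area into the protection area.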
By the above operation, a difference in priority arises in the time period during which accessed data stays in the cache memory 33.
That is, data not re-used during the time it is in the examination area 2 is driven out from the cache memory 33 without affecting the data in the protection area 1, while re-used data is moved to the protection area 1 so as to stay for a prolonged time in the cache memory 33.
The segment LRU method has thus been employed up to now as an effective means for raising the utilization efficiency of the disk cache, with attention directed to the characteristics of disk accessing from the central processing unit.
SUMMARY OF THE DISCLOSURE
In the course of investigations toward the present invention, the following problems have been encountered.
With the spread of non-stop, continuous operation of computer systems, operating configurations in which online operation proceeds concurrently with batch operation are increasing. In particular, in batch operation, the disk space accessed by an individual job is narrower and accesses are concentrated in a shorter time. Therefore, the effect of the disk cache is expected to be significant.
However, in batch operation, accessing is halted as soon as the job comes to a close. If the accessing pattern to the disk storage device changes in this manner, a variety of problems arise in the segment LRU method due to, for example, switching between batch jobs.
That is, the cache cannot follow the rapidly changing set of accessed disk blocks, so that entries whose re-use probability has dropped drastically remain “seized” in the vicinity of the LRU position of the protection area, thus lowering the hit ratio of the cache.
It is an object of the present invention to overcome the aforementioned problem and to provide a method for supervising cache pages that gives the conventional segment LRU method the ability to reactivate such “seized” entries, as well as a medium having a cache page management program stored therein.
According to a first aspect of the present invention, there is provided a novel method for management of a cache page in a system including a central processing unit, a storage device connected to the central processing unit, a cache memory connected to the central processing unit and having an accessing speed higher than that of the storage device, a set of a plurality of cache pages provided in the cache memory, and an LRU table provided in the cache memory and constituted by a plurality of entries adapted for controlling the cache pages, wherein the LRU table is divided into a protection area and an examination area, and the following steps are conducted:
(a) in case of a cache failure, a cache failure entry is stored in the examination area and, on occurrence of overflow of the examination area with entries, an entry at an LRU position of the examination area is extracted and driven out of the cache memory, while the cache failure entry is added to an MRU position of the examination area,
(b) in case of a cache hit in the protection area, the cache hit entry is extracted and moved to an MRU position in