System and method for controlling cache memories, computer...

Electrical computers and digital processing systems: memory – Storage accessing and control – Specific memory composition

Reexamination Certificate


Details

Patent classes: C711S100000, C711S122000
Type: Reexamination Certificate
Status: active
Patent number: 06629200

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a method for controlling a plurality of cache memories, as well as to a computer system, a hard disk drive unit, and a hard disk control unit used to enable those cache memories to hold data efficiently.
2. Description of Related Art
A computer system is usually composed of devices such as a CPU, a main memory, and an HDD (Hard Disk Drive) unit, which have different data transfer rates. Generally, a DRAM is employed as the main memory. The data transfer rate of the DRAM is far slower than that of the CPU, so the DRAM often limits the effective operation speed of the CPU. The data transfer rates of external storage devices such as an HDD unit or a CD-ROM unit are far slower still, so the operation speed of the DRAM is in turn often lowered while such a device is being accessed. Such speed differences among devices that exchange data with each other therefore prevent those devices from delivering their intended performance, even when each is individually capable of fast transfers, because the transfer rate of the whole system is dominated by the slowest device among them.
In order to reduce such speed differences, a computer system is provided with a cache memory. The cache memory exploits the locality of the program and/or data by storing part of the data held in the low-ranking device with the slow data transfer rate. Consequently, the number of accesses to the low-ranking device is reduced, which improves the data transfer rate of the whole computer system.
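A minimal sketch of this idea follows (the class, names, and FIFO eviction below are arbitrary illustrative choices, not details taken from the patent): a read-through cache serves repeated requests locally and only falls back to the slow low-ranking device on a miss.

class ReadThroughCache:
    """Holds part of the data of a slow low-ranking device (illustrative sketch)."""
    def __init__(self, backing_read, capacity):
        self.backing_read = backing_read   # reads one block from the slow device
        self.capacity = capacity           # number of blocks the cache can hold
        self.blocks = {}                   # block address -> data

    def read(self, address):
        if address in self.blocks:                        # hit: slow device not touched
            return self.blocks[address]
        data = self.backing_read(address)                 # miss: one access to the slow device
        if len(self.blocks) >= self.capacity:
            self.blocks.pop(next(iter(self.blocks)))      # evict the oldest entry (FIFO, for brevity)
        self.blocks[address] = data
        return data

disk = {a: "block-%d" % a for a in range(8)}              # stand-in for the low-ranking device
cache = ReadThroughCache(lambda a: disk[a], capacity=4)
cache.read(0)
cache.read(0)                                             # second read is served from the cache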
A typical cache memory is provided between a CPU and a main memory. If there are a plurality of cache memories, they are called the primary cache, the secondary cache, and so on, in order from the CPU side. The closer a cache is to the CPU, the smaller its storage capacity. Generally, an SRAM is employed for each of those cache memories.
The main memory is used as a cache memory for an external storage. Consequently, the number of accesses to the external storage is reduced, so the operation speed on the main memory side can be improved. A cache memory is also sometimes provided in an external storage itself, on a so-called extended card, and so on. For example, each HDD unit is provided with a cache memory for holding part of the data stored in a magnetic disk. With such a cache memory, the number of accesses to the magnetic disk is reduced, so the data transfer rate of the HDD unit can be improved. A cache memory provided in the main memory, on an extended card, in an HDD unit, or the like in this way is referred to as a disk cache.
Generally, a data reading method referred to as the “look-ahead method” is employed for those disk caches. With this look-ahead method, after the data requested by a host system has been read from the requested area, the data in the area following the requested data is also read. This method can thus improve the cache hit rate when data is read sequentially from consecutive addresses in ascending order.
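A small sketch of that look-ahead behavior follows (the one-block prefetch depth and the names are assumptions for illustration, not details taken from the patent):

def read_with_look_ahead(cache, backing_read, address):
    # On a miss, read the requested block and also the block at the following
    # address, so a subsequent ascending sequential read hits the cache.
    if address not in cache:
        cache[address] = backing_read(address)
        cache[address + 1] = backing_read(address + 1)
    return cache[address]

disk = {a: "block-%d" % a for a in range(16)}
cache = {}
read_with_look_ahead(cache, lambda a: disk[a], 3)   # miss: blocks 3 and 4 are read
assert 4 in cache                                   # the next sequential read (block 4) now hits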
FIG. 12 shows how data is read from a conventional disk cache. In FIG. 12, a host system means a unit that requests reading/writing of data from/to an HDD unit. Each of the host system and the HDD unit is provided with a cache memory, and data is exchanged between the cache memories of those units. In this case, the cache memory of the host system is regarded as the high-ranking cache memory and the cache memory of the HDD unit is regarded as the low-ranking cache memory. It is premised here that the data D1 requested by an application program is held in neither the host system nor the HDD unit.
If an application program requests the host system to transfer data D1, a cache mis-hit occurs in the host system. The HDD unit is thus requested to transfer data D3, which includes both the data D1 and the data D2 at the address following that of the data D1. In the HDD unit, the data D3 requested by the host system is read from the magnetic disk and held in the cache memory to cope with this cache mis-hit, and the data D3 is transferred to the host system. Consequently, the cache memory of the host system holds the data D3. Furthermore, in the HDD unit, the data D4 at the address following that of the transferred data D3 is read from the magnetic disk and held in the cache memory.
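The sequence described for FIG. 12 can be paraphrased in a short sketch; treating D1, D2, and D4 as single blocks and D3 as simply D1 plus D2 is a simplification for illustration, not the patent's exact layout:

# Conventional two-level disk-cache flow (simplified paraphrase of FIG. 12).
host_cache = {}                                    # high-ranking cache (host system)
hdd_cache = {}                                     # low-ranking cache (HDD unit)
disk = {"D1": "...", "D2": "...", "D4": "..."}     # data on the magnetic disk

def host_read(name):
    if name in host_cache:                         # host-side hit: the HDD unit is never asked
        return host_cache[name]
    d3 = {"D1": disk["D1"], "D2": disk["D2"]}      # D3 = requested D1 + following D2
    hdd_cache.update(d3)                           # D3 held in the low-ranking cache
    host_cache.update(d3)                          # D3 transferred to the high-ranking cache
    hdd_cache["D4"] = disk["D4"]                   # look-ahead: data following D3
    return host_cache[name]

host_read("D1")
assert "D1" in host_cache and "D1" in hdd_cache    # the same data now sits in both caches
assert "D4" in hdd_cache                           # only the look-ahead data is unique to the HDD unit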
Such a look-ahead data reading method is disclosed in, for example, the official gazette of Published Unexamined Patent Application Ser. No. 11-110139. The gazette describes both a method and an apparatus for reading data in a look-ahead manner. According to that method and apparatus, if a read operation toward consecutive addresses in descending order is detected, the requested data is read from the area requested by the host system and data is then also read from the address preceding that of the requested data, so the cache hit rate can be improved for reads that proceed through consecutive addresses in descending order.
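A hedged sketch of that reverse look-ahead idea follows; the simple two-request detection heuristic below is an assumption for illustration, not something stated in the cited gazette:

def read_reverse_aware(cache, backing_read, history, address):
    # If the previous request was at the next-higher address, the stream is
    # moving through consecutive addresses in descending order, so prefetch
    # the block that precedes the current one.
    if address not in cache:
        cache[address] = backing_read(address)
        if history and history[-1] == address + 1:
            cache[address - 1] = backing_read(address - 1)
    history.append(address)
    return cache[address]

disk = {a: "block-%d" % a for a in range(16)}
cache, history = {}, []
read_reverse_aware(cache, lambda a: disk[a], history, 9)
read_reverse_aware(cache, lambda a: disk[a], history, 8)   # descending pattern detected
assert 7 in cache                                          # block 7 was prefetched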
However, according to the conventional method for holding data in a cache memory, the same data is held in both the host system and the HDD unit. If an application program requests data from the host system and the request hits the host system's cache, the requested data is supplied from the high-ranking cache memory and the HDD unit is never requested to read the data. Consequently, if the same data is also held in the cache memory of the HDD unit at that time, that copy is not used effectively.
Under such circumstances, it is an object of the present invention to provide a method for controlling a plurality of cache memories that solves the above conventional problems by avoiding, or at least reducing, the wasteful holding of common data in both the high-ranking and low-ranking cache memories, thereby holding data more efficiently in each of those cache memories, as well as to provide a computer system, a hard disk drive unit, and a hard disk control unit that all employ this method for controlling a plurality of cache memories.
SUMMARY OF THE INVENTION
The present invention provides a method for controlling a plurality of cache memories including a low-ranking cache memory and a high-ranking cache memory so that the high-ranking and low-ranking cache memories are operated in different swap modes.
The computer system of the present invention is provided with a low-ranking cache memory and a high-ranking cache memory connected to the low-ranking cache memory. When the computer system is started up, the high-ranking and low-ranking cache memories exchange select information for selecting a swap mode with each other, and each of them selects a different swap mode according to the exchanged select information.
The hard disk drive unit of the present invention is provided with a low-ranking cache memory for storing part of the data stored in a magnetic disk, and it is connected to a host system that has the functions of a high-ranking cache memory.
When the system is started up, the hard disk drive unit exchanges select information with the host system so that the high-ranking and low-ranking cache memories are operated in different swap modes, and the hard disk drive unit selects the swap mode of its low-ranking cache memory according to the exchanged select information.
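The excerpt above does not name the concrete swap modes or the format of the select information, so the following is only a sketch under assumed names: at start-up the host and the drive exchange a preference and end up operating their caches under different replacement (swap) modes.

SWAP_MODES = ("least-recently-used", "most-recently-used")    # assumed example modes

def negotiate(host_preference):
    # Exchange of select information at start-up (sketch): the host states its
    # preferred swap mode and the HDD unit picks a different one, so the
    # high-ranking and low-ranking caches never operate in the same mode.
    hdd_mode = next(m for m in SWAP_MODES if m != host_preference)
    return host_preference, hdd_mode

host_mode, hdd_mode = negotiate("least-recently-used")
assert host_mode != hdd_mode                                  # different swap modes selected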
According to a preferred embodiment of the present invention, a hard disk control unit is provided with a first connection terminal connected to an extended connection terminal of a peripheral device provided for the computer system, a second connection terminal connected to a hard disk drive unit having a low-ranking cache memory for storing part of the data stored in a magnetic disk, and a high-ranking
