Cache update method and cache update control system...

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


Details

C711S152000, C711S158000, C711S167000, C711S168000


active

06647463

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a cache update method and a cache update control system. More particularly, the present invention relates to a method of updating an address array in a cache of a non-blocking type.
2. Description of the Related Art
Typically, in current computer systems, a small, high-speed cache is mounted between the processor core and the main memory so that accesses to data stored in the main memory can be made faster.
FIG. 1 schematically shows a cache of the set-associative type mainly used today. The illustrated cache has the simplest 1-WAY configuration among the set-associative types.
In FIG. 1, a data array (DA) 11 holds a copy of a part of the main memory data in block units (128 bytes in the example of FIG. 1), and an address array (AA) 12 stores the address of each data block stored in the data array 11. Each of the data array 11 and the address array 12 consists of 256 entries in the example of FIG. 1. The cache also has a comparator 13 for judging a hit or a miss. When the cache is accessed, the main memory address indicated by a load instruction (LD), which serves as a read instruction to the main memory, is conveniently divided, from the most significant bits downward, into a tag address, an index address and a block address. In the example of FIG. 1, the block address is composed of 4 bits, the index address is composed of 8 bits, and the tag address is composed of the remaining bits of the main memory address.
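The field division described above can be sketched as follows. This is an illustrative model, not taken from the patent: it assumes the address is given in 8-byte units (a real byte address would carry 3 further offset bits), and the total address width is a hypothetical 32 bits.

```python
# Sketch of the FIG. 1 address division: tag | index | block,
# with the field widths taken from the example in FIG. 1.
BLOCK_BITS = 4   # selects one of 16 eight-byte words in a 128-byte block
INDEX_BITS = 8   # selects one of 256 entries

def split_address(addr):
    """Divide a main memory address into (tag, index, block) fields."""
    block = addr & ((1 << BLOCK_BITS) - 1)
    index = (addr >> BLOCK_BITS) & ((1 << INDEX_BITS) - 1)
    tag = addr >> (BLOCK_BITS + INDEX_BITS)
    return tag, index, block
```

Two loads whose addresses share the same index field therefore compete for the same cache entry, which is the situation at the heart of the problem described later.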
When the load instruction is executed, the index address is extracted from the load instruction, and one entry of the address array 12 is specified by the index address. Then, the comparator 13 checks whether or not the tag address stored in the specified entry coincides with the tag address in the load instruction. Typically, each entry of the address array 12 is also provided with an effectiveness indication bit (a validity bit, not shown) indicating whether or not the tag address stored in the entry is valid. The effectiveness indication bit is examined simultaneously with the tag comparison.
The state in which the effectiveness indication bit indicates valid and the two tag addresses coincide is referred to as a cache hit, or simply a hit; any other state is referred to as a cache miss, or simply a miss. In the case of a cache hit, the 8 bytes within the entry of the data array (DA) 11 specified by the index address and the block address of the load instruction are read out as cache data and sent to the processor core as reply data.
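The hit/miss judgement above can be sketched as a 256-entry address array of (valid, tag) pairs examined by the comparator. The class and method names here are illustrative stand-ins, not terms from the patent.

```python
# Minimal model of the address array 12 and comparator 13:
# a hit requires the validity bit AND a tag match, checked together.
NUM_ENTRIES = 256

class AddressArray:
    def __init__(self):
        self.valid = [False] * NUM_ENTRIES
        self.tag = [0] * NUM_ENTRIES

    def lookup(self, tag, index):
        # The comparator examines the effectiveness indication bit
        # simultaneously with the tag comparison.
        return self.valid[index] and self.tag[index] == tag
```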
On the other hand, in the case of a cache miss, a miss request is sent to the main memory in accordance with the main memory address indicated by the load instruction. A block of 128 bytes containing the 8-byte data corresponding to the address is read out from the main memory, and the 8-byte data corresponding to the address within that block is returned to the processor core. Also, the tag address of the load instruction is registered in the entry of the address array 12 specified by the index address of the missed load, and the block of 128 bytes read out from the main memory is stored in the corresponding entry of the data array 11.
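The miss-handling update can be sketched as follows. This is a self-contained, illustrative model: the dictionary-based address array and list-based data array are stand-ins for the hardware structures in FIG. 1.

```python
# On a miss, the load's tag is registered in the address array entry
# selected by the index, and the 128-byte block fetched from main
# memory fills the corresponding data array entry.
NUM_ENTRIES = 256

address_array = [{"valid": False, "tag": 0} for _ in range(NUM_ENTRIES)]
data_array = [bytes(128)] * NUM_ENTRIES

def fill_on_miss(tag, index, block_from_memory):
    assert len(block_from_memory) == 128  # one block = 128 bytes
    address_array[index]["tag"] = tag
    address_array[index]["valid"] = True
    data_array[index] = block_from_memory
```

Note that in the conventional scheme described next, these two updates do not happen at the same time: the address array is written when the miss request is issued, while the data array is written only when the block arrives.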
As described above, in the conventional technique, the cache update in the case of a miss proceeds in two steps: when the miss request is issued to the main memory, the tag address is registered in the address array 12, and when the block data arrives from the main memory in response to the issued miss request, the data array 11 is updated. A document disclosing such a cache update method is, for example, Japanese Examined Patent Application (JP-B-Heisei, 7-69862) (in particular, lines 3 to 31 of the left column of page 3). The same cache update method has been carried over unchanged into the non-blocking caches developed in recent years.
A cache of the non-blocking type can continue processing subsequent instructions even while a read request caused by a cache miss is outstanding to the main memory. In short, in a conventional cache that is not of the non-blocking type, the processor core must, in the case of a cache miss, stall the subsequent instructions until the data is ready. In a cache of the non-blocking type, however, reads of more than one block of data can be outstanding to the main memory, and execution performance improves correspondingly. The operation of the 1-WAY set-associative cache shown in FIG. 1 will be explained as an example. When a cache miss occurs for a load instruction LDa having a certain index address INDEX1, a miss request is sent to the main memory. Then, the next load instruction LDb is processed without any stall of the processor core. Hence, if the load instruction LDb has an index address other than INDEX1 and that index address hits the cache, the hit cache data is returned to the processor core as reply data for the load instruction LDb.
However, in the conventional cache of the non-blocking type, the update of the address array is executed when the miss request is sent. This brings about a problem: the advantage of the non-blocking type is not fully exploited. In the above example, at the time of the miss of the load instruction LDa, the entry of the address array 12 corresponding to INDEX1 is updated at a considerably early timing, namely when the miss request is sent. Hence, if the tag address prior to the update is referred to as TAG1, then even a subsequent load instruction LDb having the index address INDEX1 and the tag address TAG1 results in a miss, although its data is still present in the data array 11.
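The false miss described above can be demonstrated with a small self-contained simulation. All names and values here are illustrative; the single flag distinguishes the conventional policy (address array updated when the miss request is sent) from a deferred update.

```python
# 1-WAY cache, one entry of interest at INDEX1, initially holding TAG1.
# LDa (tag TAG2) misses; LDb (tag TAG1) arrives before LDa's data
# returns. Does LDb still hit?
def run(update_aa_at_request_time):
    INDEX1, TAG1, TAG2 = 5, 0x100, 0x200
    aa = {INDEX1: (True, TAG1)}        # entry holds the old tag TAG1

    # LDa misses: its tag TAG2 does not match the stored TAG1.
    valid, tag = aa[INDEX1]
    assert not (valid and tag == TAG2)
    if update_aa_at_request_time:
        aa[INDEX1] = (True, TAG2)      # conventional: update AA at once

    # Before LDa's data returns, LDb arrives with INDEX1 / TAG1.
    valid, tag = aa[INDEX1]
    return valid and tag == TAG1       # True iff LDb hits
```

Under the conventional early update the simulation reports a miss for LDb, even though LDb's block is still in the data array; deferring the address array update until the data arrives would preserve the hit.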
As related art, Japanese Laid Open Patent Application (JP-A-Showa, 63-234336) discloses an "Information Processor". This information processor includes a cache memory and is provided with a boundary register, an address register, a boundary comparator and a cache control circuit. The boundary register can be preset with a boundary address of the main memory. The address register holds an address used to access the main memory and the cache memory. The boundary comparator compares the content of the boundary register with the content of a part of the address register at the time of a request for access to the main memory. The cache control circuit controls, on the basis of the comparison result from the boundary comparator, whether or not reference to and update of the cache memory are inhibited.
Japanese Laid Open Patent Application (JP-A-Heisei, 7-219845) discloses a "Cache Memory Control Method". In this cache memory control method, a store hit level register records which of a first data array and a second data array was hit at the time of a store operation. A competition detection circuit detects, on the basis of the contents of the store hit level register and a hit detection circuit, the presence or absence of a cache access and a store operation targeting different data arrays. If a store operation is followed by a read operation directed at a different data array, the control circuit instructs the respective data arrays to carry out the store operation and the read operation at the same time, in accordance with the contents of the hit detection circuit, the store hit level register, the competition detection circuit and an operation register. Thus, simultaneous operation on different data arrays can be attained when a store is followed by a read to a different data array, or while data is being loaded.
Also, Japanese Laid Open Patent Application (JP-A-Heisei, 8-55061) discloses a "Cache Memory Controller". In this cache memory controller, when transfer start indication data "1" is set in a register, a detector judges whether or not the processor is accessing the main memory and the like. If the
