Data cache system

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


Details

Patent number: 06560674
Type: Reexamination Certificate (active)
US classifications: C711S141000, C711S147000

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates to a data processor having various modules, and more specifically to a cache system within that data processor for improving data transfer operations among the various modules.
BACKGROUND OF THE INVENTION
In many data processing chip sets, data is transferred between one or more processors and memory devices, input/output (I/O) subsystems, or other chip components known as functional units, via an appropriate bus structure. Typically, the bus structure includes a processor bus, a system bus and a memory bus. Thus, when a memory operation requires data to be moved between a memory location and a processor, the system bus ceases to operate until that data movement is completed. Similarly, when data is moved from an external device to a memory location, the processor bus ceases to operate until the data reaches its intended location.
Typically, the main memory in the data processor is made out of dynamic RAMs (DRAMs). The access speed of DRAMs may not be sufficient for many applications. A somewhat faster memory is available and is referred to as static RAM or SRAM. However, SRAM memory is more expensive than DRAM and may not be feasible as a main memory component.
In order to alleviate the delays caused by DRAMs, many systems employ a cache memory made of high-speed static RAM (SRAM) that is disposed between the central processing unit and the system's main DRAM memory.
FIG. 16 illustrates a data cache unit 508 in accordance with a prior art cache system. A device referred to as a cache controller or refill controller 518 attempts to maintain copies of information that the processing unit may request in a cache memory 516. The controller also maintains a tag memory directory 514 to track information currently in the cache memory. Whenever the processing unit initiates a memory read, the controller performs a very quick search of the directory by accessing tag memory 514 via arbiter 510, to determine if the requested information is already in the cache. If the information is currently stored in the cache memory, a cache hit has occurred. If, however, the information is not currently stored in the cache memory, a cache miss has occurred.
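The hit/miss determination above can be sketched as a direct-mapped tag lookup. This is only an illustration of the general technique, not the patent's circuitry: the `TagDirectory` class, the 16-line geometry, and the address split are all assumptions chosen for clarity.

```python
# Minimal sketch of a direct-mapped cache tag lookup (illustrative only).
# Geometry (16 lines of 16 bytes) is an assumption, not from the patent.

LINE_SIZE = 16    # bytes per cache line
NUM_LINES = 16    # lines in the cache

def split_address(addr):
    """Split a byte address into (tag, index, offset)."""
    offset = addr % LINE_SIZE
    index = (addr // LINE_SIZE) % NUM_LINES
    tag = addr // (LINE_SIZE * NUM_LINES)
    return tag, index, offset

class TagDirectory:
    """Tracks which memory line, if any, occupies each cache slot."""
    def __init__(self):
        self.tags = [None] * NUM_LINES   # None = slot empty

    def lookup(self, addr):
        """Return True on a cache hit, False on a miss."""
        tag, index, _ = split_address(addr)
        return self.tags[index] == tag

    def fill(self, addr):
        """Record that the line containing addr is now cached."""
        tag, index, _ = split_address(addr)
        self.tags[index] = tag

directory = TagDirectory()
print(directory.lookup(0x120))   # miss: nothing cached yet
directory.fill(0x120)
print(directory.lookup(0x12C))   # hit: same 16-byte line as 0x120
```

The directory search is a single indexed comparison, which is why the text can describe it as "very quick" relative to a DRAM access.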
When a hit occurs, the controller accesses cache memory 516 via an arbiter 512, to get the requested information. The controller then routes the requested information to central processing unit 102. The quick directory search and fast access time of the cache memory ensure that the central processing unit will not stall while waiting for the requested information.
If a miss occurs, however, the controller accesses DRAM 528 via memory control unit 524 to get the requested data, and one or more wait states will be inserted in the processing unit's bus cycle. Whenever the cache controller is forced to go to DRAM to get information, it always gets an object of a fixed size from memory. This is referred to as a line of information; the line size is defined by the cache controller design. When refill controller 518 gets the line from DRAM memory 528, it supplies the originally requested data item to the central processing unit and records the entire line in the cache data memory.
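The refill behavior just described, fetching a whole fixed-size line on a miss and recording it in the cache, can be sketched as follows. The dictionary-backed `Cache` class and the word-granularity memory are illustrative assumptions, not the patent's hardware.

```python
# Sketch of line refill on a cache miss (illustrative; the patent's
# refill controller 518 is hardware, not Python).

LINE_SIZE = 4  # words per line (an assumed, simplified line size)

class Cache:
    def __init__(self, memory):
        self.memory = memory          # backing store: addr -> word
        self.lines = {}               # line number -> list of words
        self.hits = 0
        self.misses = 0

    def load(self, addr):
        """Return the word at addr, refilling a whole line on a miss."""
        line_no, offset = divmod(addr, LINE_SIZE)
        if line_no in self.lines:
            self.hits += 1            # hit: serve from the cache
        else:
            self.misses += 1          # miss: fetch the entire line
            base = line_no * LINE_SIZE
            self.lines[line_no] = [self.memory.get(base + i, 0)
                                   for i in range(LINE_SIZE)]
        return self.lines[line_no][offset]

memory = {8: 'a', 9: 'b', 10: 'c', 11: 'd'}
cache = Cache(memory)
cache.load(8)    # miss: the line covering addresses 8..11 is fetched in full
cache.load(10)   # hit: already brought in by the first access
print(cache.hits, cache.misses)   # 1 1
```

Fetching the whole line exploits spatial locality: the second access hits even though only the first address was ever requested from memory.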
Furthermore, cache controllers are divided into two categories: write-through and write-back. Typically, refill controller 518 checks to determine whether central processing unit 102 has initiated a read or a write to DRAM 528. A write-through cache controller handles memory writes as explained hereinafter.
On a write hit, the write-through cache controller updates the line in both cache memory 516 and DRAM 528. This ensures that the contents of the cache always reflect the information in the memory. This cache strategy is referred to as coherency. On a write miss, the write-through cache controller updates the line in DRAM memory only.
On the other hand, for a write hit, the write-back cache controller updates the line in the cache, but not in DRAM 528. The cache controller then marks the line in cache tag memory 514 as dirty or modified. Thus, the contents of the cache memory and DRAM do not reflect each other: of the two lines, the cache line is now current and the memory line is stale. On a write miss, the write-back cache controller updates the line in memory only, with the contents of the corresponding cache line.
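The two write policies can be contrasted in a short sketch. The class names and the collapse of line granularity to single words are assumptions made for illustration; the hit behavior follows the description above (write-through updates cache and memory together; write-back updates only the cache and marks the line dirty).

```python
# Contrast of write-through vs. write-back caches on a write hit
# (illustrative sketch; lines are collapsed to single addressed words).

class WriteThroughCache:
    def __init__(self, memory):
        self.memory = memory   # backing store: addr -> value
        self.data = {}         # cached copies

    def write(self, addr, value):
        if addr in self.data:              # write hit
            self.data[addr] = value        # update the cache...
            self.memory[addr] = value      # ...and memory: always coherent
        else:                              # write miss
            self.memory[addr] = value      # update memory only

class WriteBackCache:
    def __init__(self, memory):
        self.memory = memory
        self.data = {}
        self.dirty = set()     # addresses whose cached copy is newer

    def write(self, addr, value):
        if addr in self.data:              # write hit
            self.data[addr] = value        # update only the cache
            self.dirty.add(addr)           # memory line is now stale
        else:                              # write miss
            self.memory[addr] = value      # update memory only

mem = {0x40: 1}
wt = WriteThroughCache(mem.copy())
wt.data[0x40] = 1                          # assume the line is already cached
wt.write(0x40, 2)
print(wt.memory[0x40])                     # 2: memory kept coherent

wb = WriteBackCache(mem.copy())
wb.data[0x40] = 1
wb.write(0x40, 2)
print(wb.memory[0x40], 0x40 in wb.dirty)   # 1 True: memory stale, line dirty
```

Write-back defers the memory update until the dirty line is evicted, trading coherency bookkeeping for fewer DRAM writes.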
Although there have been many attempts to increase cache hits, there is still a need for data transfer operations employing a cache system with an improved cache-hit ratio.
SUMMARY OF THE INVENTION
Thus, in order to improve cache hit ratios in a data cache system, an external access controller is provided that allows the data cache to operate as a bus slave in response to read and write requests by other bus masters in the system. As a result, based on knowledge of the data that may become necessary to the processor, other bus masters can provide data to the data cache before the processor issues a store or load operation for that data.
In accordance with one embodiment of the invention, in an information processing system having a plurality of modules including a processor, a main memory and a plurality of I/O devices, the data cache includes a cache data memory coupled to a central processing unit for providing data to the processing unit in response to load operations and for writing data from the central processing unit in response to store operations. A refill controller is coupled to the cache data memory for controlling the operation of the data cache in accordance with a specifiable policy. The external access controller is coupled to the cache data memory and to an external memory bus such that the contents of the cache data memory are accessible for read and write operations in response to read and write requests issued by other modules in the information processing system that function as bus masters.
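The claimed arrangement, in which the data cache answers as a bus slave so that other masters can deposit data before the processor asks for it, might be sketched as below. All names here (`DataCache`, `slave_write`) are hypothetical; the patent describes hardware, not this Python model.

```python
# Hypothetical sketch of the external access controller idea: the cache
# responds as a bus slave, so another bus master (e.g. a DMA engine or
# I/O module) can place data in the cache before the processor loads it.

class DataCache:
    def __init__(self):
        self.lines = {}       # addr -> data; tag structure omitted
        self.hits = 0
        self.misses = 0

    # Path used by the processor for load operations.
    def load(self, addr, dram):
        if addr in self.lines:
            self.hits += 1
        else:
            self.misses += 1                 # would stall on DRAM here
            self.lines[addr] = dram.get(addr)
        return self.lines[addr]

    # Path used by other bus masters via the external access controller:
    # the cache is written as a bus *slave*, ahead of any processor request.
    def slave_write(self, addr, data):
        self.lines[addr] = data

dram = {0x100: 'stale'}
cache = DataCache()

# An I/O module produces fresh data and pushes it straight into the cache.
cache.slave_write(0x100, 'fresh')

# The processor's subsequent load hits without ever touching DRAM.
value = cache.load(0x100, dram)
print(value, cache.hits, cache.misses)    # fresh 1 0
```

The effect is that a load which would otherwise miss and stall on DRAM instead hits, which is how the arrangement raises the cache-hit ratio.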


REFERENCES:
patent: 4794521 (1988-12-01), Ziegler et al.
patent: 4941088 (1990-07-01), Shaffer et al.
patent: 4951232 (1990-08-01), Hannah
patent: 5005117 (1991-04-01), Ikumi
patent: 5010515 (1991-04-01), Torborg, Jr.
patent: 5111425 (1992-05-01), Takeuchi et al.
patent: 5276836 (1994-01-01), Fukumaru et al.
patent: 5301351 (1994-04-01), Jippo
patent: 5303339 (1994-04-01), Ikuma
patent: 5392392 (1995-02-01), Fischer et al.
patent: 5412488 (1995-05-01), Ogata
patent: 5442802 (1995-08-01), Brent et al.
patent: 5461266 (1995-10-01), Koreeda et al.
patent: 5483642 (1996-01-01), Okazawa et al.
patent: 5493644 (1996-02-01), Thayer et al.
patent: 5506973 (1996-04-01), Okazawa et al.
patent: 5561820 (1996-10-01), Bland et al.
patent: 5646651 (1997-07-01), Spannaus et al.
patent: 5655131 (1997-08-01), Davies
patent: 5655151 (1997-08-01), Bowes et al.
patent: 5664218 (1997-09-01), Kim et al.
patent: 5668956 (1997-09-01), Okazawa et al.
patent: 5673380 (1997-09-01), Suzuki
patent: 5675808 (1997-10-01), Gulick et al.
patent: 5682513 (1997-10-01), Candeleria et al.
patent: 5893066 (1999-04-01), Hong
patent: 6002883 (1999-12-01), Goldrian
patent: 6052133 (2000-04-01), Kang
patent: 6076139 (2000-06-01), Welker et al.
patent: 6219759 (2001-04-01), Kumakiri
patent: 6230241 (2001-05-01), McKenney
patent: 0817069 (1998-01-01), None
patent: 00/22536 (1998-10-01), None
International Search Report dated Jan. 12, 2000.
“The M-Machine Multicomputer” by Fillo et al., Annual International Symposium on Microarchitecture (vol. 28).
“Tolerating Latency Through Software-Controlled Prefetching in Shared-Memory Multiprocessors” by Mowry et al., Journal of Parallel and Distributed Computing, vol. 12 (pp. 87-106).
“The Stanford DASH Multiprocessor” by Lenoski et al., IEEE Computer Society, vol. 25 (pp. 63-79).
“Talisman: Commodity Realtime 3D Graphics for the PC,” Computer Graphics Proceedings, Annual Conference Series, 1996.
“Exemplar System Architecture,” Technical Computing, a Hewlett-Packard web page, 1997.
Jim Handy, The Cache Memory Book, pp. 9, 12-14 and 128, Dec.
