Caching method using cache data stored in dynamic RAM...

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


Details

US classification: C711S003000, C711S150000
Status: active
Patent number: 06449690

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of Invention
The present invention relates generally to the field of computer system memory and pertains more particularly to a caching method using cache data stored in dynamic RAM embedded in a logic chip and cache tags stored in static RAM external to the logic chip.
2. Discussion of the Prior Art
Modern computer systems often comprise multiple forms and locations of memory. The memory subsystem is typically organized hierarchically, for example from cache memory of various levels at the top, to main memory, and finally to hard disk memory. A processor in search of data or instructions looks first in the cache memory, which is closest to the processor. If the information is not found there, the request is passed next to the main memory and finally to the hard disk. The relative sizes and performance of the memory units are conditioned primarily by economic considerations: generally, the higher a memory unit is in the hierarchy, the higher its performance and the higher its cost. For reference purposes, the memory subsystem will be divided into “caches” and “memory,” where the term memory covers every form of memory other than caches. Information that is frequently accessed is stored in caches, and information that is less frequently accessed is stored in memory. Caches allow higher system performance because information can typically be accessed from a cache faster than from memory. Relatively speaking, this is especially true when the memory is in the form of a hard disk.
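The lookup order described above can be sketched as a toy software model. This is purely illustrative: the names `read`, `cache`, `main_memory`, and `disk` are hypothetical, and a real hierarchy is implemented in hardware, not in Python.

```python
# Illustrative sketch of the hierarchical lookup described above.
# All names and structures here are hypothetical.

def read(address, cache, main_memory, disk):
    """Search each level of the hierarchy in order of increasing latency."""
    if address in cache:          # fastest level: cache hit
        return cache[address]
    if address in main_memory:    # slower: main memory
        value = main_memory[address]
        cache[address] = value    # promote frequently accessed data
        return value
    return disk[address]          # slowest: hard disk

cache = {}
main_memory = {0x10: "data"}
disk = {0x20: "cold data"}
assert read(0x10, cache, main_memory, disk) == "data"
assert 0x10 in cache  # the value was promoted into the cache
```

The promotion step on a main-memory hit mirrors the idea that frequently accessed information migrates toward the top of the hierarchy.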
A cache consists of a cache data portion and a cache tag portion. The cache data portion contains the information that is currently stored in the cache. The cache tag portion contains the addresses of the locations where the information is stored. Generally, the cache data will be larger than the cache tags. The cache data and the cache tags will not necessarily be stored together depending on the design. When a specific piece of information is requested, one or more of the cache tags are searched for the address of the requested information. Which cache tags are searched will depend on the cache design. If the address of the requested information is present in the cache tags, then the information will be available from that address in the cache data. If the address is not present, then the information may be available from memory.
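As a concrete illustration of the tag check described above, consider a hypothetical direct-mapped cache, in which each address selects exactly one tag entry to search. The cache size and helper names below are assumptions for the sketch, not details from the patent.

```python
# Hypothetical direct-mapped cache: the tag array is the cache tag
# portion, the data array is the cache data portion.

NUM_LINES = 8  # assumed cache size, for illustration only

tags = [None] * NUM_LINES   # cache tag portion (stored addresses)
data = [None] * NUM_LINES   # cache data portion (stored information)

def lookup(address):
    index = address % NUM_LINES   # which tag entry to search
    tag = address // NUM_LINES    # value the stored tag must match
    if tags[index] == tag:
        return data[index]        # hit: information is in the cache data
    return None                   # miss: the information may be in memory

def fill(address, value):
    index = address % NUM_LINES
    tags[index] = address // NUM_LINES
    data[index] = value

fill(42, "payload")
assert lookup(42) == "payload"          # tag matches: hit
assert lookup(42 + NUM_LINES) is None   # same entry, different tag: miss
```

Which tag entries are searched, and how many, is exactly the design choice the text alludes to; a set-associative design would compare several tags per index instead of one.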
In general, there are two cache applications that will be considered. First, there are caches integral to a processor and interfaced to a processor pipeline. Second, there are caches external to a processor and interfaced with a shared bus. Caches must be designed in such a way that their latency meets the timing requirements of the requesting components such as the processor pipeline or the shared bus. For example, consider the design of the shared bus. A cache or other agent on the bus that requires a specific piece of information will issue the address of the information on the bus. This is known as the address phase. Subsequently, all caches or other agents attached to the bus must indicate whether the information at the issued address is located there. This is known as the snoop phase. Typically, the bus design specifies that the cache must supply its snoop response within a fixed time interval after the address has been issued on the bus. If the cache is not designed to satisfy this timing requirement, it will lead to sub-optimal usage of the bus thus lowering system performance.
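The address and snoop phases described above can be modeled in a few lines. All cycle counts, agent names, and the deadline below are assumed values for illustration; the patent does not specify them.

```python
# Hypothetical model of the address phase and snoop phase on a shared bus.

SNOOP_DEADLINE_CYCLES = 4  # assumed fixed interval set by the bus design

def snoop_phase(issued_address, agents):
    """After an address is issued on the bus, every attached agent must
    report within the fixed deadline whether it holds that address."""
    responses = {}
    for name, (snoop_latency, cached_addresses) in agents.items():
        if snoop_latency > SNOOP_DEADLINE_CYCLES:
            # A cache too slow to answer would stall the bus protocol.
            raise RuntimeError(f"{name} cannot meet the snoop deadline")
        responses[name] = issued_address in cached_addresses
    return responses

agents = {
    "cpu_cache": (2, {0x100, 0x200}),  # fast tag lookup: meets deadline
    "io_cache": (3, {0x300}),
}
assert snoop_phase(0x100, agents) == {"cpu_cache": True, "io_cache": False}
```

The raised error corresponds to the failure mode the text describes: a cache whose tag lookup cannot finish within the fixed interval degrades use of the bus.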
Examples of prior art systems will now be discussed in greater detail. Turning first to FIGS. 1-3, block diagrams of a processor 10 having an integral cache 12 that is interfaced to a processor pipeline 14 are shown. The processor 10 further consists of a register file 16, an address buffer 18, and a data buffer 20. The various elements are connected together by uni-directional and bi-directional conductors as shown. When the cache 12 of FIG. 1 is integral to the processor 10, conventionally both the cache tags and the cache data are stored in fast static random access memory (SRAM) technology. In general, such an implementation is shown as cache 12 in FIG. 2. Sometimes, insufficient cache is provided integral to the processor, so a supplemental cache is provided external to the processor. Such an implementation is shown as caches 12a and 12b in FIG. 3. Among the drawbacks to implementations of caches exclusively in SRAM are that, relatively speaking, SRAM is expensive, is less dense, and uses more power than dynamic random access memory (DRAM) technology.
With reference to FIGS. 4-6, block diagrams of a cache 12 external to a processor 10 and interfaced with a shared bus 22 are shown. Also interfaced with the shared bus 22 is a memory 24. The cache 12 and the memory 24 are interfaced with the shared bus 22 through a bus interface 26 as shown. When the cache 12 of FIG. 4 is external to the processor 10, conventionally the cache tags are stored in an SRAM memory and the cache data is stored in a DRAM memory. In one implementation, both the SRAM memory 12a containing the cache tags and the DRAM memory 12b containing the cache data are external to the bus interface 26, as shown in FIG. 5. In another implementation, only the DRAM memory 12b containing the cache data is external to the bus interface 26, while the SRAM memory 12a containing the cache tags is integral to the bus interface, as shown in FIG. 6. Among the drawbacks to these implementations is that the latency of accessing the cache data is long, since the data is stored in slower DRAM external to the logic chip. This may force a delay in transferring data to the shared bus, thus degrading system performance. Further, when the cache tags are implemented in SRAM embedded on the logic chip, the size of the cache is limited by the higher cost, the lower density, and the greater power consumption of SRAM.
A definite need exists for a system able to meet the latency timing requirements of the requesting components of the system. In particular, a need exists for a system that is capable of accessing cache memory in a timely manner. Ideally, such a system would have a lower cost and a higher capacity than conventional systems. With a system of this type, system performance can be enhanced. A primary purpose of the present invention is to solve this need and provide further, related advantages.
SUMMARY OF THE INVENTION
A caching method is disclosed for using cache data stored in dynamic RAM embedded in a logic chip and cache tags stored in static RAM external to the logic chip. In general, there are at least two cache applications where this method can be employed. First, there are caches integral to a processor and interfaced to a processor pipeline. Second, there are caches external to a processor and interfaced with a shared bus.
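A toy latency model suggests why the disclosed split can help. All cycle counts below are assumed values chosen purely for illustration; the patent gives no such figures.

```python
# Toy latency model of the disclosed arrangement: cache tags in external
# SRAM, cache data in DRAM embedded in the logic chip. All cycle counts
# are assumptions for illustration, not figures from the patent.

TAG_SRAM_LATENCY = 2    # assumed: tag lookup in external SRAM
DATA_DRAM_LATENCY = 5   # assumed: data access in embedded DRAM
MEMORY_LATENCY = 60     # assumed: fallback access to main memory

def access_latency(hit):
    """The tag check always runs; on a hit the data comes from the
    embedded DRAM, otherwise from main memory."""
    if hit:
        return TAG_SRAM_LATENCY + DATA_DRAM_LATENCY
    return TAG_SRAM_LATENCY + MEMORY_LATENCY

assert access_latency(True) == 7    # hit path stays well under memory latency
assert access_latency(False) == 62
```

Under these assumed numbers, a fast external-SRAM tag check lets the snoop response stay within a tight deadline, while embedding the DRAM data array on the logic chip avoids the off-chip data latency the background section criticizes.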


REFERENCES:
patent: 5067078 (1991-11-01), Talgam et al.
patent: 5687131 (1997-11-01), Spaderna
patent: 5699317 (1997-12-01), Sartore et al.
patent: 5721862 (1998-02-01), Sartore et al.
patent: 6026478 (2000-02-01), Dowling
patent: 6151664 (2000-11-01), Borkenhagen et al.
