Cache memory store buffer

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


Details

Classification: C711S141000, C711S154000

Type: Reexamination Certificate

Status: active

Patent number: 06434665

ABSTRACT:

BACKGROUND OF THE INVENTION
This invention relates in general to memory caching systems and, more specifically, to an apparatus and methods for buffering commands for a cache memory.
Processors are clocked at ever-increasing frequencies to increase the performance of the systems in which they are embedded. Today, these frequencies are approaching one gigahertz. Although processor clock frequencies keep rising, some memory has not kept pace with this evolution.
There are two major categories of memory, namely, static random access memory (SRAM) and dynamic random access memory (DRAM). SRAM can operate at speeds approaching one gigahertz, but DRAM only operates at speeds approaching two hundred megahertz. With this in mind, designers could use SRAM so that memory operates at the same clock frequency as the processor; however, SRAM is much more costly than DRAM. This cost differential is attributable to the fact that an SRAM memory cell takes about eight transistors to implement, while a DRAM memory cell takes only one. Accordingly, most processing systems have far more DRAM than SRAM.
To achieve DRAM speeds approaching SRAM speeds, memory cache circuits are used. A memory cache uses a small SRAM that is mapped to a larger DRAM, which is typically outside the processor. Memory caches work on the principle that most read or write operations are fulfilled by the cache and do not require a time-intensive read from external memory. Even for moderately sized memory caches, hit rates are near ninety-nine percent.
Although most processors have an on-chip cache, there is a further need to improve cache architectures. One common problem in cache architectures arises when a write operation is immediately followed by a read operation. The write operation to a data memory in the cache is subdivided into two parts: checking a tag memory for a hit and writing to the data memory when there is a hit. The read operation from the data memory is also subdivided into two parts: checking the tag memory for a hit and reading the appropriate set from the data memory when there is a hit. To speed execution of the read operation, both parts are executed simultaneously, and once a hit is determined, the proper data is selected from the set that has already been read. In this way, the read operation can execute in one clock cycle, while the write operation takes two clock cycles to execute its two parts.
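By way of illustration only, the sketch below models this split in software form; the structure names and sizes (kWays, kSets, kLineWords) are assumptions and are not taken from the patent. A read checks the tag memory and reads every way of the indexed set in the same cycle, selecting the hit way once the compare resolves, while a write must finish its tag check before it may touch the data memory on the following cycle.

#include <array>
#include <cstdint>
#include <optional>

constexpr unsigned kWays = 2, kSets = 64, kLineWords = 4;

struct Line {
    bool valid = false;
    uint32_t tag = 0;
    std::array<uint32_t, kLineWords> words{};
};

struct Cache {
    std::array<std::array<Line, kWays>, kSets> sets{};

    // Read, one cycle: the tag compare and the data read of all ways proceed
    // together; the hit way is selected (muxed out) once the compare resolves.
    std::optional<uint32_t> read(uint32_t addr) const {
        unsigned set = (addr / kLineWords) % kSets;
        uint32_t tag = addr / (kLineWords * kSets);
        for (const Line& l : sets[set])
            if (l.valid && l.tag == tag)
                return l.words[addr % kLineWords];
        return std::nullopt;                         // miss
    }

    // Write, cycle 1: tag check only; the data memory is not touched yet.
    std::optional<unsigned> writeCheckTag(uint32_t addr) const {
        unsigned set = (addr / kLineWords) % kSets;
        uint32_t tag = addr / (kLineWords * kSets);
        for (unsigned w = 0; w < kWays; ++w)
            if (sets[set][w].valid && sets[set][w].tag == tag)
                return w;
        return std::nullopt;
    }

    // Write, cycle 2: the data memory port is actually used here.
    void writeData(uint32_t addr, unsigned way, uint32_t value) {
        unsigned set = (addr / kLineWords) % kSets;
        sets[set][way].words[addr % kLineWords] = value;
    }
};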
In conventional cache architectures, only a single access to the data memory is possible at a time. When a write operation is immediately followed by a read operation, the write to the data memory in the second clock cycle clashes with the read from the data memory of the subsequent read operation. In Table I, this clash occurs in cycle n + 1 and is characterized by both the write and read operations attempting to access the data memory at the same time, which is not possible. To avoid this problem, some conventional processors stall execution so that the write operation can complete before starting the read operation, as shown in Table II. Those skilled in the art appreciate that stalling the processor reduces performance of the system because the two pipelined operations require three cycles to complete.
TABLE I
Operation     Cycle n       Cycle n + 1
Write         Check Tag     Write Data
Read                        Check Tag & Read Data
TABLE II
Operation     Cycle n       Cycle n + 1   Cycle n + 2
Write         Check Tag     Write Data
Stall
Read                                      Check Tag & Read Data
Some have solved the back-to-back write-before-read problem by increasing the speed of the cache: if the cache runs at twice the clock frequency of the processor, the write operation can be completed within a single processor clock cycle. This technique is effective, but as processor clock frequencies approach one gigahertz, conventional techniques cannot run the cache at twice that frequency. Accordingly, new techniques are needed to solve the back-to-back write-before-read problem.
SUMMARY OF THE INVENTION
According to the invention, disclosed are an apparatus and methods that allow back-to-back write and read operations to be processed without stalling the processor. In one embodiment, a cache memory subsystem buffers write operations issued by a central processing unit (CPU) to the cache memory subsystem. Included in the cache memory subsystem are a tag memory, a data memory and a store buffer. The store buffer is coupled to both the data memory and the tag memory. Additionally, the store buffer stores a write operation.
In another embodiment, a process for storing information in a memory cache is disclosed. The process includes receiving a write operation and queuing the write operation while other operations are performed. At a later time, the write operation is executed. The write operation may be queued in a store buffer, for example.
In yet another embodiment, a process for performing back-to-back cache operations is disclosed. In one step, a write operation is received and queued. A read operation is received and executed in other steps. After queuing, the write operation is executed.
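The following is a minimal software sketch of the behavior described above, under the assumption of a single-entry store buffer with address forwarding; the names (BufferedCache, drain, and so on) are illustrative and not taken from the patent. The write's data phase is parked in the store buffer so the next read can use the data memory immediately; a read to the buffered address is served from the buffer, and the buffered write drains to the data memory on a later idle cycle.

#include <cstdint>
#include <cstdio>
#include <optional>
#include <unordered_map>

struct PendingStore { uint32_t addr; uint32_t value; };

struct BufferedCache {
    std::unordered_map<uint32_t, uint32_t> dataMem;   // stands in for the SRAM data array
    std::optional<PendingStore> storeBuffer;          // one queued write

    // Write: the tag check is assumed to have hit; the data phase is only queued.
    void write(uint32_t addr, uint32_t value) { storeBuffer = PendingStore{addr, value}; }

    // Read: executes immediately on the next cycle; forwards from the store
    // buffer when the addresses match, otherwise reads the data memory.
    uint32_t read(uint32_t addr) const {
        if (storeBuffer && storeBuffer->addr == addr) return storeBuffer->value;
        auto it = dataMem.find(addr);
        return it == dataMem.end() ? 0 : it->second;
    }

    // Called on any cycle in which a read does not need the data memory port.
    void drain() {
        if (storeBuffer) {
            dataMem[storeBuffer->addr] = storeBuffer->value;
            storeBuffer.reset();
        }
    }
};

int main() {
    BufferedCache c;
    c.write(0x40, 7);                                // cycle n:   check tag, queue the write
    std::printf("%u\n", (unsigned)c.read(0x80));     // cycle n+1: read proceeds, no stall
    std::printf("%u\n", (unsigned)c.read(0x40));     // forwarded from the store buffer
    c.drain();                                       // idle cycle: buffered write reaches data memory
    return 0;
}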

