Static information storage and retrieval – Addressing – Optical
Reexamination Certificate
2003-05-20
2004-08-31
Le, Vu A. (Department: 2824)
C365S230030
active
06785190
ABSTRACT:
BACKGROUND
1. Field
The present disclosure pertains to the field of cache memories. More particularly, the present disclosure pertains to a new method for improving command bandwidth and reducing latency for memory access for read and/or write operations.
2. Description of Related Art
Cache memories generally improve memory access speeds in computer and other electronic systems, thereby typically improving overall system performance. Increasing either or both of cache size and speed tends to improve system performance, making larger and faster caches generally desirable. However, cache memory is often expensive, and costs generally rise as cache speed and size increase. Therefore, cache memory use typically needs to be balanced against overall system cost.
Traditional cache memories utilize static random access memory (SRAM), a technology that uses multi-transistor memory cells. In a traditional SRAM cache configuration, a pair of word lines typically activates a subset of the memory cells in the array, driving the contents of those cells onto bit lines, where they are detected by sense amplifiers. A tag lookup is also performed using a subset of the address bits. If a tag match is found, a way is selected by a way multiplexer (mux) based on the information contained in the tag array.
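The lookup described above can be sketched in software. This is a minimal illustrative model, not the patent's implementation: the set index selects a row, stored tags from each way are compared against the address tag, and a matching way's data is returned (the role the way mux plays in hardware). All names and parameters here are assumptions.

```python
def cache_lookup(address, tag_array, data_array, num_sets, line_bytes):
    """Return the cached line on a tag match, or None on a miss.

    tag_array[set][way] holds the stored tag for each way;
    data_array[set][way] holds the corresponding cache line.
    """
    # Low-order bits (above the line offset) select the set.
    set_index = (address // line_bytes) % num_sets
    # Remaining high-order bits form the tag.
    tag = address // (line_bytes * num_sets)
    for way, stored_tag in enumerate(tag_array[set_index]):
        if stored_tag == tag:            # tag match: this way holds the line
            return data_array[set_index][way]
    return None                          # miss: no way matched
```

A two-set, two-way instance with 16-byte lines demonstrates both a hit (tag found in one way) and a miss (no stored tag matches).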
A DRAM cell is typically much smaller than an SRAM cell, allowing denser memory arrays at a generally lower cost per unit. Thus, using DRAM in a cache may advantageously reduce per-bit cache costs. One prior art DRAM cache performs a full hit/miss determination (tag lookup) before addressing the memory array. In this DRAM cache, addresses received from a central processing unit (CPU) are looked up in the tag cells. If a hit occurs, a full address is assembled and dispatched to an address queue, and subsequently the entire address is dispatched to the DRAM simultaneously with the assertion of a load address signal.
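The prior-art flow described above can be sketched as follows. This is a hypothetical model under stated assumptions, not the cited design: the hit/miss determination is made first, and only on a hit is a full DRAM address assembled and placed on an address queue for later dispatch. The names, the tag-store shape, and the address format are illustrative.

```python
from collections import deque

def dispatch_on_hit(cpu_address, tag_store, address_queue, num_sets, line_bytes):
    """Queue a full DRAM address only when the tag lookup hits."""
    set_index = (cpu_address // line_bytes) % num_sets
    tag = cpu_address // (line_bytes * num_sets)
    if tag_store.get(set_index) == tag:       # hit: assemble the full address
        full_address = (tag, set_index)
        address_queue.append(full_address)    # later dispatched to the DRAM
        return True
    return False                              # miss: no DRAM access is queued
```

Note the serialization this flow implies: the DRAM array is not touched until the tag lookup completes, which is the latency the disclosure aims to reduce.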
Typically, a processor accesses a plurality of bits in a consecutive manner. For example, a burst operation allows a processor to access a consecutive number of bits based on the burst length. The size (“width”) of a bus within a memory needs to increase to accommodate processors that require larger burst lengths, such as a 16-bit burst length. However, a wider memory bus increases the size of the memory, and thus its cost. Conversely, a memory that supports only an eight-bit burst length must perform two eight-bit accesses to satisfy a 16-bit burst. The additional time penalty (“latency”) of the second eight-bit access decreases the processor's performance because the processor must wait for that access to complete. The latency also depends on the internal circuit paths of the DRAM.
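The trade-off above reduces to simple arithmetic: a burst wider than the supported burst length is split into multiple accesses, each adding its own latency. The functions and figures below are illustrative assumptions, not values from the patent.

```python
def accesses_needed(burst_bits, supported_burst_bits):
    """Number of memory accesses required to satisfy a burst (ceiling division)."""
    return -(-burst_bits // supported_burst_bits)

def total_latency(burst_bits, supported_burst_bits, latency_per_access):
    """Total time spent if each access costs latency_per_access."""
    return accesses_needed(burst_bits, supported_burst_bits) * latency_per_access
```

With an assumed 10-unit latency per access, a 16-bit burst on an eight-bit memory takes two accesses and twice the latency of a single access, matching the penalty the background describes.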
REFERENCES:
patent: 5463759 (1995-10-01), Ghosh et al.
patent: 5732241 (1998-03-01), Chan
patent: 5956743 (1999-09-01), Bruce et al.
Bains Kuljit S.
Halbert John
Le Vu A.
Nesheiwat Michael J.
Method for opening pages of memory with a single command