Dual-ported, pipelined, two level cache system

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


Details

C711S122000, C711S128000, C711S129000, C711S205000, C711S207000

Status: active

Patent number: 06272597

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates generally to the field of electronic data processing devices. More particularly, the present invention relates to cache memories.
BACKGROUND OF THE INVENTION
Many computer systems today use cache memories to improve the speed of access to frequently used data and instructions. A small cache memory may be integrated on the microprocessor chip itself, greatly improving access speed by eliminating the need to go outside the microprocessor chip to fetch data or instructions from an external memory.
During a normal data load, the microprocessor first looks to an on-chip cache memory to see whether the desired data or instructions are resident there. If they are not, the microprocessor then looks to an off-chip memory. On-chip cache memory is smaller than main memory, so multiple main memory locations may be mapped into the cache; the main memory locations, or addresses, holding the most frequently used data and instructions are the ones mapped in. Cache memory entries must therefore contain not only data but also enough information (“tag address and status” bits) about the associated address to indicate which external, or main memory, addresses have been mapped into the cache. To improve the likelihood of finding a memory address in the cache (the cache “hit ratio”), it is desirable for cache memories to be set associative, i.e., a particular memory location may be stored in any of several “ways” of the cache.
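The tag-matching and set-associative lookup described above can be sketched in software. This is a hypothetical model, not the patent's hardware: the field widths (4-byte lines, four sets, two ways) and the helper names (`split`, `lookup`, `fill`) are illustrative assumptions chosen for clarity.

```python
# Sketch of a 2-way set-associative cache lookup. Each entry stores the
# tag bits of the address it caches, which is how the cache knows which
# main-memory addresses are currently resident.

LINE_BITS = 2   # 4-byte cache lines (offset field)
SET_BITS = 2    # 4 sets (index field)
WAYS = 2        # 2-way set associative

# cache[set_index] is a list of (valid, tag, data) entries, one per way
cache = [[(False, 0, None) for _ in range(WAYS)] for _ in range(1 << SET_BITS)]

def split(addr):
    """Split an address into (tag, set index, byte offset) fields."""
    offset = addr & ((1 << LINE_BITS) - 1)
    index = (addr >> LINE_BITS) & ((1 << SET_BITS) - 1)
    tag = addr >> (LINE_BITS + SET_BITS)
    return tag, index, offset

def lookup(addr):
    """Return cached data on a hit, or None on a miss."""
    tag, index, _ = split(addr)
    for valid, stored_tag, data in cache[index]:
        if valid and stored_tag == tag:
            return data          # hit: tag matches a valid way in this set
    return None                  # miss: go to the next memory level

def fill(addr, data, way):
    """Install a line for addr into the given way of its set."""
    tag, index, _ = split(addr)
    cache[index][way] = (True, tag, data)

fill(0x40, "A", 0)
fill(0x80, "B", 1)   # 0x40 and 0x80 map to the same set but different ways
assert lookup(0x40) == "A"       # hit in way 0
assert lookup(0x80) == "B"       # hit in way 1: both lines coexist
assert lookup(0xC0) is None      # same set, unmatched tag: miss
```

Note how addresses 0x40 and 0x80 conflict on the same set yet both remain cached, which is exactly the hit-ratio benefit of associativity that the paragraph above describes.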
Most previous cache designs, because of their low operating frequencies, could afford a relatively large cache, e.g., one which contains both integer data and larger floating point data. However, as microprocessor frequencies and instruction issue widths increase, the performance of the on-chip cache system becomes more and more important. In cache design, low latency and high capacity are conflicting requirements: a cache with low access latency usually has a small capacity, while a large cache usually has a long access latency.
For the reasons stated above, and for other reasons stated below which will become apparent to those skilled in the art upon reading and understanding the present specification, it is desirable to develop improved performance for on-chip cache memory.
SUMMARY OF THE INVENTION
A novel cache memory and method of operation are provided which increase microprocessor performance. In one embodiment, the cache memory has two levels. The first level cache has a first address port and a second address port. The second level cache similarly has a first address port and a second address port. A queuing structure is coupled between the first and second levels of cache. In another embodiment, a method for accessing a cache memory is provided. The method includes providing a first virtual address and a second virtual address to a first translation lookaside buffer and a second translation lookaside buffer in a first level of the cache memory. The method further includes providing the first virtual address and the second virtual address to a translation lookaside buffer in a second level of the cache memory. Providing the first and second virtual addresses to both levels of the cache memory occurs in a first processor clock cycle. A first cache hit/miss signal corresponding to the first virtual address is provided through a queuing structure to an arbitrator in the second level of the cache memory after a second processor clock cycle.
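The dual-ported, two-level flow summarized above can be sketched as a small software model. This is an illustrative sketch only, not the patented circuit: the names (`cycle`, `arbitrate`, `miss_queue`) and the dictionary-backed cache levels are assumptions made for clarity, and address translation through the TLBs is omitted.

```python
# Model of one aspect of the described flow: two addresses are presented
# per cycle on two ports, L1 is probed for both, and L1 miss signals pass
# through a queuing structure to an arbiter at the second level.

from collections import deque

L1 = {0x1000: "x"}                    # tiny stand-in for the first-level cache
L2 = {0x1000: "x", 0x2000: "y"}       # second level holds a superset of L1

miss_queue = deque()                  # queuing structure between L1 and L2

def cycle(addr_port0, addr_port1):
    """One clock cycle: probe L1 on both address ports; queue misses for L2."""
    results = {}
    for port, addr in enumerate((addr_port0, addr_port1)):
        if addr in L1:
            results[port] = ("L1", L1[addr])   # L1 hit, served immediately
        else:
            miss_queue.append((port, addr))    # L1 miss, deferred to L2
    return results

def arbitrate():
    """L2 arbiter: service queued L1 miss signals, oldest first."""
    resolved = {}
    while miss_queue:
        port, addr = miss_queue.popleft()
        resolved[port] = ("L2", L2.get(addr))
    return resolved

assert cycle(0x1000, 0x2000) == {0: ("L1", "x")}   # port 0 hits L1
assert arbitrate() == {1: ("L2", "y")}             # port 1's miss resolved in L2
```

The FIFO queue decouples the two levels, which is what lets L1 keep accepting a new pair of addresses each cycle while earlier misses are still being arbitrated at L2.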


REFERENCES:
patent: 5023776 (1991-06-01), Gregor
patent: 5442766 (1995-08-01), Chu et al.
patent: 5742790 (1998-04-01), Kawasaki
patent: 5930819 (1999-07-01), Hetherington et al.
patent: 6044478 (2000-03-01), Green
patent: 6065091 (2000-05-01), Green
patent: 6101579 (2000-08-01), Randolph et al.
Farrens et al., “A Partitioned Translation Lookaside Buffer Approach to Reducing Address Bandwidth”, May 1992, p. 435.
