Snoop stall reduction on a microprocessor external bus

Electrical computers and digital data processing systems: input/output – Intrasystem connection – Bus interface architecture

Reexamination Certificate


Details

U.S. classifications: C710S107000, C711S146000, C711S167000
Status: active
Patent number: 06604162

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates generally to the field of microprocessors, computers and computer systems. More particularly, the present invention relates to snoop stall reduction on a microprocessor external bus.
BACKGROUND OF THE INVENTION
Since the beginning of electronic computing, main memory access has been much slower than processor cycle times. Access time is the time between when a read is initially requested and when the desired data arrives. The gap between processor cycle time and memory access time continues to widen with advances in semiconductor technology. Efficient mechanisms to bridge this gap are central to achieving high performance in future computer systems.
The conventional approach to bridging the gap between memory access time and processor cycle time has been to introduce a high-speed memory buffer, commonly known as a cache, between the microprocessor and main memory. Caches are ubiquitous in virtually every class of general-purpose computer system. The data stored within one cache memory is often shared among the various processors or agents which form the computer system. The main purpose of a cache memory is to provide fast access time while reducing bus and memory traffic. A cache achieves this goal by taking advantage of the principles of spatial locality (data near a recently accessed location is likely to be accessed soon) and temporal locality (recently accessed data is likely to be accessed again).
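To make the locality argument concrete, the following is a minimal, hypothetical sketch in C of a direct-mapped cache model: a sequential walk through memory misses only once per cache line (spatial locality), and a second pass over the same data hits every time (temporal locality). The line size, line count, and access pattern are illustrative assumptions and are not parameters taken from the patent.

/*
 * Illustrative sketch only: a tiny direct-mapped cache model showing how
 * line-sized fetches exploit spatial locality and how repeated accesses
 * exploit temporal locality. Sizes and accounting are hypothetical.
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define LINE_BYTES 64u            /* bytes fetched per miss (one cache line) */
#define NUM_LINES  128u           /* direct-mapped: one tag per line slot    */

static uint32_t tags[NUM_LINES];
static int      valid[NUM_LINES];
static unsigned hits, misses;

/* Look up one byte address; on a miss the whole line is "fetched". */
static void access_byte(uint32_t addr)
{
    uint32_t line = addr / LINE_BYTES;        /* which line holds the byte */
    uint32_t slot = line % NUM_LINES;         /* direct-mapped slot index  */

    if (valid[slot] && tags[slot] == line) {
        hits++;                               /* temporal or spatial reuse */
    } else {
        misses++;                             /* fetch the full line       */
        valid[slot] = 1;
        tags[slot]  = line;
    }
}

int main(void)
{
    memset(valid, 0, sizeof valid);

    /* Spatial locality: a sequential walk misses once per line, then hits. */
    for (uint32_t a = 0; a < 4096; a++)
        access_byte(a);

    /* Temporal locality: re-reading the same region now hits every time. */
    for (uint32_t a = 0; a < 4096; a++)
        access_byte(a);

    printf("hits=%u misses=%u hit rate=%.1f%%\n",
           hits, misses, 100.0 * hits / (hits + misses));
    return 0;
}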
As semiconductor technology has continued to improve, the gap between memory access time and central processing unit (CPU) cycle time has widened to the extent that a need has arisen for a memory hierarchy which includes two or more intermediate cache levels. For example, a two-level cache memory hierarchy often provides an adequate bridge between memory access time and CPU cycle time such that memory latency is dramatically reduced. In these types of computer systems, the first-level (L1) cache, the highest level in the hierarchy, provides fast, local access to data because it is situated closest to the execution unit and has the smallest size. The second-level (L2) cache retains more data, and thereby reduces bus and memory traffic, because it is comparatively larger. The second-level (L2) cache therefore occupies significant die area and is consequently slower than the first-level (L1) cache.
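A rough way to see why a second cache level reduces latency is the average memory access time (AMAT) formula: AMAT = L1 hit time + L1 miss rate x (L2 hit time + L2 miss rate x memory latency). The short sketch below evaluates it with hypothetical latencies and miss rates; the numbers are assumptions for illustration only, not figures from the patent.

/*
 * Illustrative sketch only: how a two-level cache hierarchy reduces the
 * average memory access time (AMAT). All latencies and miss rates below
 * are hypothetical round numbers.
 */
#include <stdio.h>

int main(void)
{
    const double l1_hit_cycles = 2.0;    /* small, close to execution unit */
    const double l1_miss_rate  = 0.05;
    const double l2_hit_cycles = 12.0;   /* larger, therefore slower       */
    const double l2_miss_rate  = 0.20;   /* of the accesses that reach L2  */
    const double memory_cycles = 150.0;  /* main memory latency            */

    /* Without any cache, every access pays the full memory latency. */
    double no_cache = memory_cycles;

    /* With L1 only, every L1 miss goes straight to main memory. */
    double l1_only = l1_hit_cycles + l1_miss_rate * memory_cycles;

    /* With L1 + L2, most L1 misses are absorbed by the larger L2. */
    double l1_l2 = l1_hit_cycles +
                   l1_miss_rate * (l2_hit_cycles +
                                   l2_miss_rate * memory_cycles);

    printf("no cache : %.1f cycles\n", no_cache);
    printf("L1 only  : %.1f cycles\n", l1_only);
    printf("L1 + L2  : %.1f cycles\n", l1_l2);
    return 0;
}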
Main memory is typically the last level in the memory hierarchy. Main memory satisfies the demands of caches and vector units, and often serves as the interface to one or more peripheral devices. Main memory usually consists of core memory or a dedicated data storage device such as a hard disk drive unit.
One of the problems that arises in computer systems that include a plurality of caching agents and a shared data cache memory hierarchy is cache coherency. Cache coherency refers to the problem wherein, due to the use of multiple or multi-level cache memories, copies of the same data may be stored in more than one location in memory. For example, if a microprocessor is the only device in a computer system that operates on data stored in memory and the cache is situated between the CPU and memory, there is little risk of the CPU using stale data. However, if other agents in the system share storage locations in the memory hierarchy, copies of the data may become inconsistent, and other agents may read stale copies.
Cache coherency is especially problematic in computer systems that employ multiple processors as well as other caching agents. For instance, a program running on multiple processors may require that copies of the same data reside in several cache memories. Thus, the overall performance of the computer system depends upon the ability to share data in a coherent manner.
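Bus snooping, referenced in the patent's title, is the common way such bus-based systems maintain coherency: each caching agent observes (snoops) transactions on the shared external bus and updates or invalidates its own copy accordingly. The sketch below is a minimal, hypothetical model assuming a MESI-style protocol; it illustrates snoop handling in general and is not the stall-reduction mechanism claimed by the patent.

/*
 * Illustrative sketch only: how a caching agent keeps its copy coherent by
 * snooping bus transactions, assuming a MESI-style protocol. The state names
 * and the two snoop events are a common textbook formulation, not the
 * patented mechanism itself.
 */
#include <stdio.h>

typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } line_state_t;

typedef enum {
    SNOOP_READ,            /* another agent reads the line      */
    SNOOP_READ_FOR_OWNER   /* another agent intends to write it */
} snoop_event_t;

/* Returns the new state of the local copy after observing a bus event.
 * *writeback is set when this agent must supply its dirty data. */
static line_state_t snoop(line_state_t state, snoop_event_t ev, int *writeback)
{
    *writeback = 0;

    switch (ev) {
    case SNOOP_READ:
        if (state == MODIFIED)
            *writeback = 1;                 /* provide up-to-date data    */
        return (state == INVALID) ? INVALID : SHARED;

    case SNOOP_READ_FOR_OWNER:
        if (state == MODIFIED)
            *writeback = 1;                 /* provide data, then drop it */
        return INVALID;                     /* another agent will modify  */
    }
    return state;
}

int main(void)
{
    int wb;
    line_state_t s = MODIFIED;

    /* Another processor reads a line we hold dirty: we supply the data
     * and demote our copy to SHARED so neither agent reads stale data. */
    s = snoop(s, SNOOP_READ, &wb);
    printf("after snoop read : state=%d writeback=%d\n", s, wb);

    /* Another processor wants to write the line: we invalidate our copy. */
    s = snoop(s, SNOOP_READ_FOR_OWNER, &wb);
    printf("after snoop RFO  : state=%d writeback=%d\n", s, wb);
    return 0;
}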


REFERENCES:
patent: 5325503 (1994-06-01), Stevens et al.
patent: 5572703 (1996-11-01), MacWilliams et al.
patent: 5797026 (1998-08-01), Rhodehamel et al.
patent: 5802577 (1998-09-01), Bhat et al.
patent: 5991855 (1999-11-01), Jeddeloh et al.
patent: 6041380 (2000-03-01), LaBerge
patent: 6052762 (2000-04-01), Arimilli et al.
patent: 6065101 (2000-05-01), Gilda
patent: 6078981 (2000-06-01), Hill et al.
patent: 6112283 (2000-08-01), Neiger et al.
patent: 6115796 (2000-09-01), Hayek et al.
patent: 6202101 (2001-03-01), Chin et al.
patent: 6374329 (2002-04-01), McKinney et al.
patent: 6397297 (2002-05-01), Sperber et al.
patent: 6397304 (2002-05-01), George
patent: 6460119 (2002-10-01), Bachand et al.
Pentium Pro and Pentium II System Architecture, Second Edition, Mindshare, Inc., Tom Shanley, PC System Architecture Series, pp.: cover-xxxiv, 207-220 and 261-296.
Pentium Pro Processor System Architecture, Mindshare, Inc., Tom Shanley, PC System Architecture Series, pp.: cover-xxx, 187-200 and 241-276.
