Classification: Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
Type: Reexamination Certificate
Date filed: 2001-08-27
Date issued: 2004-01-20
Examiner: Kim, Hong (Department: 2186)
US Classes: C711S141000, C711S145000, C711S146000, C709S217000
Status: active
Patent number: 06681292
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention generally relates to a caching input/output (I/O) hub. More particularly, the present invention relates to a distributed read and write caching implementation within a caching I/O hub that optimizes scalability and performance in multi-processor computer systems.
2. Discussion of the Related Art
Multi-processor computer systems are designed to accommodate a number of central processing units (CPUs), coupled via a common system bus or switch to a memory and a number of external input/output devices. The purpose of providing multiple central processing units is to increase the performance of operations by sharing tasks between the processors. Such an arrangement allows the computer to simultaneously support a number of different applications while supporting I/O components that are, for example, communicating over a network and displaying images on attached display devices. Multi-processor computer systems are typically utilized for enterprise and network server systems.
To enhance performance, all of the devices coupled to the bus must communicate efficiently. Idle cycles on the system bus represent time periods in which no application is being serviced, and therefore reduced performance.
A number of situations arise in multi-processor computer system designs in which the bus, although not idle, is not being used efficiently by the processors coupled to it. Some of these situations arise due to the differing nature of the devices that are coupled to the bus. For example, central processing units typically include cache logic for temporary storage of data from the memory. A coherency protocol is implemented to ensure that each central processing unit only retrieves the most up-to-date version of the data from the cache. In other words, cache coherency is the synchronization of data in a plurality of caches such that reading a memory location via any cache will return the most recent data written to that location via any other cache. Therefore, central processing units are commonly referred to as “cacheable” devices.
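As a point of reference, coherency protocols such as the widely used MESI protocol track, per cache line, whether a copy is Modified, Exclusive, Shared, or Invalid. The C fragment below is only a minimal sketch of one such transition; the cache_line structure and snoop_read() function are illustrative assumptions, not an interface described in this patent:

/* Illustrative MESI-style snoop handling: when another agent reads an
 * address, a Modified line supplies its dirty data and is demoted to
 * Shared, so every subsequent reader observes the most recent write. */
#include <stdio.h>

typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } mesi_state;

typedef struct {
    unsigned long tag;    /* address tag of the cached line   */
    mesi_state    state;  /* coherency state tracked per line */
    int           data;   /* cached copy of the memory data   */
} cache_line;

/* Respond to a snooped read from another cache. */
int snoop_read(cache_line *line)
{
    if (line->state == MODIFIED || line->state == EXCLUSIVE) {
        line->state = SHARED;   /* another agent now holds a copy; a
                                   Modified line is also written back */
    }
    return line->data;          /* most recent value for this address */
}

int main(void)
{
    cache_line line = { 0x80001000UL, MODIFIED, 42 };
    printf("snooped data: %d, state after snoop: %d\n",
           snoop_read(&line), (int)line.state);
    return 0;
}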
However, input/output components are generally non-cacheable devices. That is, they typically do not implement the same cache coherency protocol that is used by the CPUs. Accordingly, measures must be taken to ensure that I/O components only retrieve valid data for their operations. Typically, I/O components retrieve data from memory, or a cacheable device, via a Direct Memory Access (DMA) operation. An input/output hub component may be provided as a connection point between various input/output bridge components, to which input/output components are attached, and ultimately to the central processing units.
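The fragment below sketches this arrangement: an I/O hub performs a coherent DMA read on behalf of a non-cacheable device by first snooping the processor caches so any modified data reaches memory, then copying the now-valid data into the device's buffer. The function names, the flat memory array, and the stubbed snoop are assumptions made for illustration, not the hub interface described here:

/* Hypothetical sketch of a DMA read issued through an I/O hub on behalf
 * of a non-cacheable I/O device. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MEM_WORDS 1024
static uint32_t system_memory[MEM_WORDS];

/* Placeholder: a real hub would snoop the CPU caches here so any modified
 * line covering the requested range is written back to memory before the
 * device reads it. */
static void snoop_processor_caches(uint32_t addr, uint32_t words)
{
    (void)addr;
    (void)words;
}

/* The hub performs the coherency action on behalf of the device, then
 * copies the valid data out of memory into the device's buffer. */
void io_hub_dma_read(uint32_t addr, uint32_t *dev_buf, uint32_t words)
{
    snoop_processor_caches(addr, words);
    memcpy(dev_buf, &system_memory[addr], words * sizeof(uint32_t));
}

int main(void)
{
    uint32_t buf[4];
    system_memory[16] = 0xdeadbeefu;
    io_hub_dma_read(16, buf, 4);
    printf("first word read by device: 0x%08x\n", (unsigned int)buf[0]);
    return 0;
}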
An input/output hub may be a caching I/O hub. That is, the I/O hub includes a caching resource to hold read and write elements. Although a single caching resource may be utilized for both, read and write elements are treated differently by the I/O components and the interfaces connected thereto, and accordingly have different requirements. Because one caching resource serves both kinds of elements, it cannot be optimized for either, and it is therefore not the most efficient implementation available.
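One way to picture the alternative is to give the hub two separately sized and separately managed structures, one for read (prefetched) data and one for posted write data awaiting ownership and flushing. The declarations below are only a rough sketch under that assumption; the type names, line counts, and line size are invented for illustration and are not taken from the patent:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define CACHE_LINE_BYTES  64
#define READ_CACHE_LINES  64   /* read data prefetched for inbound DMA reads */
#define WRITE_CACHE_LINES 16   /* posted write data awaiting ownership/flush */

typedef struct {
    uint64_t tag;
    uint8_t  data[CACHE_LINE_BYTES];
    bool     valid;
} read_line;

typedef struct {
    uint64_t tag;
    uint8_t  data[CACHE_LINE_BYTES];
    bool     owned;   /* hub has obtained ownership of the line */
    bool     dirty;   /* write data not yet flushed to memory   */
} write_line;

/* Distinct read and write caching resources inside the hub, each sized
 * for its own access pattern. */
typedef struct {
    read_line  read_cache[READ_CACHE_LINES];
    write_line write_cache[WRITE_CACHE_LINES];
} io_hub_caches;

int main(void)
{
    static io_hub_caches hub;
    printf("read cache: %zu bytes, write cache: %zu bytes\n",
           sizeof(hub.read_cache), sizeof(hub.write_cache));
    return 0;
}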
REFERENCES:
patent: 5613153 (1997-03-01), Arimilli et al.
patent: 5835945 (1998-11-01), King et al.
patent: 6128711 (2000-10-01), Duncan et al.
patent: 6192450 (2001-02-01), Bauman et al.
patent: 6230219 (2001-05-01), Fields, Jr. et al.
patent: 6321298 (2001-11-01), Hubis
patent: 6434639 (2002-08-01), Haghighi
patent: 6463510 (2002-10-01), Jones et al.
patent: 2001/0032299 (2001-10-01), Teramoto
patent: 2003/0041215 (2003-02-01), George et al.
patent: 2001/0014925 (2001-08-16), Kumata.
Inventors: Bell, Mike; Blankenship, Robert; Congdon, Bradford B.; Creta, Kenneth C.; George, Robert
Assignee: Intel Corporation
Examiner: Kim, Hong
Attorney, Agent, or Firm: Pillsbury & Winthrop LLP