Apparatus and method for tracking flushes of cache entries...

Electrical computers and digital processing systems: memory – Storage accessing and control – Specific memory composition

Reexamination Certificate


Details

C711S144000, C711S145000, C711S122000

Reexamination Certificate

active

06591332

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates generally to an improvement in cache memory management, and more specifically to tracking flushes of cache entries in a cache memory bridging a coherent memory domain (e.g., a data processing system) and non-coherent memory domain (e.g., input/output (I/O) devices).
2. Description of the Prior Art
Many data processing systems (e.g., computer systems, programmable electronic systems, telecommunication switching systems, control systems, and so forth) use one or more I/O devices for data transfers. Typically, an I/O bridge host (also called a primary hub) and multiple I/O bridge guests are connected in a hierarchy to facilitate data transfers between the data processing system and the I/O devices. The I/O bridge host is connected to the I/O bridge guests, which are connected to the I/O controllers, which are connected to the I/O devices.
FIG. 1 illustrates a typical prior art data processing system 100 that includes central processing unit (CPU) 102 connected by a common bus to main memory 104 and I/O bridge host 106. I/O bridge host 106 is connected to I/O bridge guests 108, 110, 112, and 114. I/O bridge guest 108 is connected to I/O devices 116 and 118. I/O bridge guest 110 is connected to I/O device 120. I/O bridge guest 112 is connected to I/O devices 122 and 124. I/O bridge guest 114 is connected to I/O device 126. The I/O controllers are not shown, but they are frequently incorporated in the I/O devices.
Many data processing systems use a cache memory (cache) in the I/O bridge host to improve the speed of communication with the I/O bridge guests (and ultimately the I/O devices). The cache management of the cache entries in the cache in the I/O bridge host is complex. One reason for this complexity is that the I/O bridge host and the cache are usually designed to be in the coherent memory domain (i.e., the location of the most up-to-date version of each cache entry is always identified), while the I/O devices are usually in the non-coherent memory domain.
One prior art solution for tracking the point in time to flush a cache entry (i.e., send data back to main memory or give up ownership of the cache entry) involves sending a cache entry address and a cache entry from the I/O bridge host to the I/O device. When the cache entry can be flushed, a message containing the cache entry address is sent back to the I/O bridge host from the I/O bridge guest. However, this prior art solution requires considerable bandwidth, and/or a large number of circuit packaging pins for communicating the cache entry address.
FIG. 2 illustrates such a prior art data processing system 200, which includes an I/O bridge host 106 that contains I/O cache control logic 202, I/O cache 204, and interface logic 206. I/O cache control logic 202 receives a cache entry address 220 and a flush command 230 from memory. I/O cache control logic 202 provides a cache entry address 210 to I/O cache 204 and interface logic 206. I/O cache 204 provides cache data 212 to interface logic 206. Interface logic 206 outputs cache entry address 210 and cache data 212 to interface logic 208 for the I/O bridge guest 108. I/O bridge guest 108 outputs cache data 212 to I/O device 116, and outputs cache entry address 210 and a flush signal 214 to I/O cache control logic 202. This requires considerable bandwidth and a large number of circuit packaging pins for communicating the cache entry address, unless the I/O bridge host and I/O bridge guest are on the same integrated circuit chip or module.
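The address-echo scheme described above can be sketched in a few lines of Python. This is an illustrative model only: the class and method names (`BridgeHost`, `BridgeGuest`, `send_to_guest`) are invented for this sketch, and the guest flushes immediately for brevity, whereas a real guest would signal the flush only after the I/O device consumes the data. The point it shows is that the full cache entry address must travel out to the guest and back again.

```python
class BridgeHost:
    """Models the I/O bridge host's cache in the first prior-art scheme."""

    def __init__(self):
        self.cache = {}  # cache entry address -> cache data

    def send_to_guest(self, guest, address, data):
        self.cache[address] = data
        # The address is transmitted alongside the data -- this outbound
        # address, and its echo in the flush message, cost bandwidth/pins.
        guest.receive(self, address, data)

    def flush(self, address):
        # Write back to main memory / give up ownership of the entry.
        self.cache.pop(address, None)


class BridgeGuest:
    """Models an I/O bridge guest that echoes the address back to flush."""

    def receive(self, host, address, data):
        # (Data would be delivered to the I/O device here.)
        # The flush message must carry the full cache entry address back.
        host.flush(address)
```
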
A second prior art solution for tracking the time to flush cache entries in a cache in an I/O bridge host is to store cache entry addresses in a flush-cache-entry first-in-first-out (FIFO) memory. When an I/O device reads the last byte of a cache entry, it sends a message indicating that the I/O bridge host can flush the cache entry. When an I/O device sends a message to the I/O bridge host to flush the cache entry, the cache flushes the cache entry to main memory in the data processing system. The FIFO must be large enough to accommodate the maximum number of cache entries being transferred between the data processing system and the I/O device at a given point of time.
FIG. 3 illustrates one configuration of a prior art data processing system 300 that includes an I/O bridge host 106 containing I/O cache 204, which is connected by a common bus to interface logic and flush cache-entry-address (CEA) FIFOs 306, 308, 310, and 312. Flush CEA FIFO 306 is connected to I/O bridge guest 108. Flush CEA FIFO 308 is connected to I/O bridge guest 110. Flush CEA FIFO 310 is connected to I/O bridge guest 112. Flush CEA FIFO 312 is connected to I/O bridge guest 114.
FIG. 4 illustrates a prior art data processing system 400 in more detail that includes an I/O bridge host 106 that contains I/O cache control logic 402, I/O cache 204, and interface logic 306. I/O cache control logic 402 receives a cache entry address 220 and a flush command 230 from memory. I/O cache control logic 402 provides a cache entry address 210 to I/O cache 204 and to flush CEA FIFO 406 (henceforth simply referred to as CEA FIFO 406) in interface logic 306. I/O cache 204 provides cache data 212 to interface logic 306. Interface logic 306 outputs cache data 212 to interface logic 404 for the I/O bridge guest 108. Interface logic 404 outputs flush signal 214 to I/O cache control logic 402 and CEA FIFO 406. I/O bridge guest 108 outputs cache data 212 to an I/O device (not shown).
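The second prior-art scheme can likewise be sketched as a small model. The names here (`BridgeHost`, `on_flush_signal`) are again invented for illustration. The key difference from the first scheme is that the flush signal from the guest carries no address: because FIFO ordering matches the order in which entries were sent, the host simply pops the oldest queued cache entry address (CEA) and flushes it.

```python
from collections import deque


class BridgeHost:
    """Models the host with a flush cache-entry-address (CEA) FIFO."""

    def __init__(self):
        self.cache = {}           # cache entry address -> cache data
        self.cea_fifo = deque()   # flush CEA FIFO, one per bridge guest

    def send_to_guest(self, cea, data):
        self.cache[cea] = data
        # Remember the address locally instead of echoing it on the wire.
        self.cea_fifo.append(cea)

    def on_flush_signal(self):
        # The guest's flush signal carries no address; FIFO order
        # identifies which outstanding entry to flush.
        cea = self.cea_fifo.popleft()
        self.cache.pop(cea, None)
        return cea
```

The FIFO must be sized for the maximum number of entries in flight at once, as the text notes.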
The problem with this second prior art solution is that it does not take into account the fact that the cache could be required to flush or choose to flush the cache entry for other reasons, such as during the operation of a cache snoop algorithm or a cache replacement algorithm. The cache may re-use the cache entry for some other data. When the original I/O device sends a flush message, the cache controller will flush the new data. Because the cache is part of a coherent memory domain (i.e., architecture), this behavior does not cause data corruption, but it can cause performance degradation, because the data may need to be re-fetched (e.g., from main memory) for the second request that is being serviced by the cache.
It would be desirable to prevent the flushing of a re-allocated cache entry by tracking the flush status of each cache entry in the CEA FIFO, while still maintaining the order of the cache entries for the I/O responses. This prevents the cache from discarding new data destined for a different I/O device than the one issuing the flush, and thereby avoids the performance degradation caused by additional flush and re-request transactions.
SUMMARY OF THE INVENTION
An object of the invention is to prevent the flushing of a re-allocated cache entry by keeping track of the flush status of the cache entry by using the CEA FIFO, and still maintain the order of the cache entries for the I/O responses.
A first aspect of the invention is directed to a method for tracking at least one cache entry in a cache serving data transfers between a coherent memory domain and a non-coherent memory domain in a data processing system, including steps of storing an address corresponding to at least one cache entry in a plurality of memory cells, using at least one memory cell as a valid flag to indicate when at least one cache entry is still in the cache, and changing the valid flag based on one or more signals transmitted from the non-coherent memory domain.
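The method of this first aspect can be sketched as follows. This is a hedged illustration, not the patented implementation: the class `TrackedCeaFifo` and its method names are invented here. Each FIFO cell pairs a CEA with a valid flag; if the cache flushes or re-allocates the entry early (e.g., snoop or replacement), the flag is cleared, so a later flush signal from the non-coherent domain pops the cell in order but does not evict the new data.

```python
from collections import deque


class TrackedCeaFifo:
    """CEA FIFO whose cells carry a valid flag alongside the address."""

    def __init__(self):
        self.cells = deque()  # each cell: [cea, valid_flag]

    def push(self, cea):
        self.cells.append([cea, True])

    def invalidate(self, cea):
        # Called when the cache flushes/re-uses the entry for another
        # reason (snoop algorithm, replacement algorithm).
        for cell in self.cells:
            if cell[0] == cea:
                cell[1] = False

    def on_flush_signal(self, cache):
        # Cells are popped in order, preserving I/O response ordering,
        # but the flush is performed only if the entry is still valid.
        cea, valid = self.cells.popleft()
        if valid:
            cache.pop(cea, None)
        return cea, valid
```
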
A second aspect of the invention is directed to a data processing system or an I/O bridge host, having a cache and at least one cache entry, serving data transfers between a coherent memory domain and a non-coherent memory domain, including a plurality of memory cells, wherein the plurality of memory cells are configured to store an address corresponding to the cache entry, and at least one memory cell is configured as a valid flag to indicate when the cache entry is still in the cache, the valid flag being changed based on one or more signals transmitted from the non-coherent memory domain.
