Title: System for combining requests associated with one or more...
Classification: Electrical computers and digital data processing systems: input/output – Input/output data processing – Input/output access regulation
Type: Reexamination Certificate
Filed: 1998-11-13
Issued: 2002-08-13
Examiner: Gaffin, Jeffrey (Department: 2182)
U.S. Classes: C710S035000, C711S003000, C711S100000, C711S118000, C711S167000
Status: active
Patent Number: 06434639
BACKGROUND
The invention relates to reducing bus loading due to snoop traffic.
Referring to FIG. 1, a typical computer system 10 may include a level one (L1) cache 14 to expedite transactions between a microprocessor 16 and a system memory 18, such as a dynamic random access memory (DRAM). To accomplish this, the cache 14 typically includes a static random access memory (SRAM) that is faster, but more expensive, than the system memory 18. Usually, the cache 14 retains a copy of the data in the memory locations most recently accessed by the microprocessor 16, making these locations “cached locations.” In this manner, when the microprocessor 16 executes a memory read operation, for example, to retrieve data from one of the cached locations, the cache 14, instead of the system memory 18, provides the data to the microprocessor 16. As a result, the read operation is accomplished in less time than if the data were retrieved from the slower system memory 18. Because of the cache 14, memory write operations by the microprocessor 16 may also be accomplished in less time, as the data may be stored in the cache 14 instead of in the system memory 18. The cache 14 typically is integrated with the microprocessor 16 and may be accessed by devices other than the microprocessor 16 via a local bus 22 that is coupled to the microprocessor 16.
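The read-hit mechanism described above is easy to picture with a small software model. The C sketch below is purely illustrative and not from the patent: it assumes a hypothetical direct-mapped cache (the sizes LINE_SIZE, CACHE_LINES, and MEM_SIZE are invented for the example) and shows how a tag match lets the cache, rather than the slower system memory, satisfy a repeated read.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <stdbool.h>

/* Hypothetical geometry: 64 lines of 32 bytes, direct-mapped. */
#define LINE_SIZE   32
#define CACHE_LINES 64
#define MEM_SIZE    (1 << 16)

struct cache_line {
    bool     valid;
    uint32_t tag;                    /* upper address bits of the cached block */
    uint8_t  data[LINE_SIZE];
};

static uint8_t memory[MEM_SIZE];             /* stands in for system memory 18 */
static struct cache_line cache[CACHE_LINES]; /* stands in for cache 14 */

/* Read one byte; on a miss, fill the whole line from "system memory". */
static uint8_t cache_read(uint32_t addr)
{
    uint32_t index = (addr / LINE_SIZE) % CACHE_LINES;
    uint32_t tag   = addr / (LINE_SIZE * CACHE_LINES);
    struct cache_line *line = &cache[index];

    if (!(line->valid && line->tag == tag)) {          /* read miss */
        uint32_t base = addr & ~(uint32_t)(LINE_SIZE - 1);
        memcpy(line->data, &memory[base], LINE_SIZE);  /* slow path */
        line->tag = tag;
        line->valid = true;
    }
    return line->data[addr % LINE_SIZE];               /* hit: served by SRAM */
}

int main(void)
{
    memory[0x1234] = 0xAB;
    printf("first read:  0x%02X (miss, line filled)\n", cache_read(0x1234));
    printf("second read: 0x%02X (hit, memory not touched)\n", cache_read(0x1234));
    return 0;
}
```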
Because devices other than the microprocessor 16 typically interact with the system memory 18, measures typically are in place to preserve coherency between data stored in the cache 14 and data stored in the system memory 18. For example, a bridge 23 may furnish a memory write operation to a cached location. In short, the cache 14 may implement a variety of different protocols to preserve data coherency.
For example, for a “write-through” protocol, the write operation furnished by the bridge 23 in the above example causes the cache 14 to invalidate the corresponding cache line in the cache 14. A cache line typically includes several bytes of data that are associated with contiguous memory locations. When the microprocessor 16 executes a memory write operation that targets a memory location associated with a cache line, a “write hit” occurs. In general, to determine when a cache hit (i.e., a write hit or a read hit, described below) occurs, a bus operation called “snooping” occurs on the local bus 22. In response to a write hit, the cache 14 updates the corresponding cache line and the corresponding locations in the system memory 18 pursuant to the write-through policy. However, when a device other than the microprocessor 16 writes to a memory location that is associated with the cache line, the cache 14 invalidates the cache line, as at least some of the data of the cache line has become “stale.” Read operations are handled in a slightly different manner. When the microprocessor 16 executes a memory read operation, the cache 14 determines whether a “read hit” occurs. A read hit occurs if the read operation targets a memory location that is associated with a cache line that has not been invalidated. When a read hit occurs, the cache 14 (and not the slower system memory 18) provides the requested data. Otherwise, the system memory 18 provides the requested data.
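As a rough illustration of the write-through behavior just described, the following C fragment extends the model above with two hypothetical routines: cpu_write, which updates a hit line and always propagates the store to memory, and snoop_external_write, which models the cache's response to a snooped write by another bus agent. The names and structure are assumptions for the sketch, not the patent's implementation.

```c
/* Write-through sketch, extending the model above; cpu_write and
 * snoop_external_write are invented names for illustration. */

/* Processor store: update the line on a write hit, and always write
 * through to system memory. */
static void cpu_write(uint32_t addr, uint8_t value)
{
    uint32_t index = (addr / LINE_SIZE) % CACHE_LINES;
    uint32_t tag   = addr / (LINE_SIZE * CACHE_LINES);
    struct cache_line *line = &cache[index];

    if (line->valid && line->tag == tag)      /* write hit (found by snooping) */
        line->data[addr % LINE_SIZE] = value;
    memory[addr] = value;                     /* write-through to DRAM */
}

/* Snooped write by another bus agent (e.g., the bridge): the cached copy
 * is now stale, so the whole line is invalidated. */
static void snoop_external_write(uint32_t addr, uint8_t value)
{
    uint32_t index = (addr / LINE_SIZE) % CACHE_LINES;
    uint32_t tag   = addr / (LINE_SIZE * CACHE_LINES);
    struct cache_line *line = &cache[index];

    if (line->valid && line->tag == tag)
        line->valid = false;                  /* invalidate the stale line */
    memory[addr] = value;                     /* memory stays current */
}
```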
The cache 14 may alternatively implement a “write-back” policy that improves the performance of memory write operations (versus the “write-through” policy described above) by eliminating the slower writes to the system memory 18 every time the cache 14 is updated. Instead, the system memory 18 is updated when coherency problems arise. In this manner, when the microprocessor 16 writes data to a memory location associated with a cache line, a write hit occurs, and in response to this occurrence, the cache 14 updates the corresponding cache line without updating the system memory 18. When a device other than the microprocessor 16 performs a memory write operation to a memory location that is associated with the cache line, the cache 14 does not invalidate the associated cache line, as some of the data in the cache line may have been modified by the microprocessor 16. Instead, the cache 14 halts, or “backs off,” the memory write operation and updates the memory locations in the system memory 18 that are associated with the cache line. After the system memory 18 is updated, the cache 14 permits the original write operation to proceed. For the write-back policy, a similar scenario occurs when a device other than the microprocessor 16 performs a read operation to read data from the system memory 18.
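The back-off sequence can be sketched the same way. The C fragment below again extends the earlier model, adding an assumed per-line dirty bit; it is a schematic of the generic write-back policy the background describes, not the patent's design: a snooped external write that hits a dirty line first causes the line to be flushed to memory, and only then does the external write proceed.

```c
/* Write-back sketch; the dirty[] array and function names are assumptions
 * for illustration. */
static bool dirty[CACHE_LINES];              /* per-line "modified" bits */

/* Processor store under write-back: update the line, defer the memory write. */
static void cpu_write_wb(uint32_t addr, uint8_t value)
{
    uint32_t index = (addr / LINE_SIZE) % CACHE_LINES;
    uint32_t tag   = addr / (LINE_SIZE * CACHE_LINES);
    struct cache_line *line = &cache[index];

    if (line->valid && line->tag == tag) {   /* write hit */
        line->data[addr % LINE_SIZE] = value;
        dirty[index] = true;                 /* memory NOT updated yet */
    } else {
        memory[addr] = value;                /* simple write-no-allocate miss */
    }
}

/* Snooped write by another agent: if it hits a dirty line, "back off" the
 * agent, flush the line to memory, then let the original write proceed. */
static void snoop_external_write_wb(uint32_t addr, uint8_t value)
{
    uint32_t index = (addr / LINE_SIZE) % CACHE_LINES;
    uint32_t tag   = addr / (LINE_SIZE * CACHE_LINES);
    struct cache_line *line = &cache[index];

    if (line->valid && line->tag == tag) {
        if (dirty[index]) {                  /* back off: flush first */
            uint32_t base = addr & ~(uint32_t)(LINE_SIZE - 1);
            memcpy(&memory[base], line->data, LINE_SIZE);
            dirty[index] = false;
        }
        line->valid = false;                 /* stale after the external write
                                                (details vary by protocol) */
    }
    memory[addr] = value;                    /* original write proceeds */
}
```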
The memory write operations that are furnished by the bridge 23 may be, for example, in response to a stream of data that is produced by a video camera 12. In this manner, the video camera 12 may continually provide signals indicative of frames of data to a serial bus 11 that is coupled to the bridge 23. In response, the bridge 23, in turn, may generate numerous write operations to store the data in predetermined contiguous regions of the system memory 18, where the data may be processed (decompressed, for example) by the microprocessor 16 before corresponding video images are formed on a display 20.
Unfortunately, the numerous write operations that are furnished by the bridge 23 may cause an extensive number of snooping operations on the local bus 22. The snooping operations, in turn, may consume a considerable amount of bandwidth of the local bus 22. As a result, the processing bandwidth of the microprocessor 16 may be effectively reduced.
Thus, there is a continuing need for a computer system to more efficiently handle a stream of data.
SUMMARY
In one embodiment, a method for use with a computer system includes receiving requests to store data in one or more memory locations that are collectively associated with a cache line. The requests are combined to furnish a memory operation.
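The combining idea in this summary corresponds to what is commonly called write combining. The self-contained C sketch below is a generic illustration under assumed details (the buffer layout, flush policy, and all names such as wc_buffer and wc_write are invented here), not the claimed implementation: byte-sized write requests that fall within one cache line are accumulated and emitted as a single line-sized memory operation, so only one snoop is needed instead of one per request.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

#define LINE_SIZE 32   /* assumed cache-line size (must be <= 32 for the mask) */

/* One write-combining buffer: accumulates byte writes that target the same
 * cache line, then emits a single line-sized memory operation. */
struct wc_buffer {
    bool     in_use;
    uint32_t base;              /* line-aligned address being collected */
    uint8_t  data[LINE_SIZE];
    uint32_t valid_mask;        /* which bytes have been written */
};

static void emit_line_write(uint32_t base, const uint8_t *data)
{
    /* A real system would drive the bus here; one snoop per line, not per byte. */
    printf("single combined write of %d bytes at 0x%X\n", LINE_SIZE, (unsigned)base);
    (void)data;
}

static void wc_flush(struct wc_buffer *b)
{
    if (b->in_use) {
        emit_line_write(b->base, b->data);
        b->in_use = false;
        b->valid_mask = 0;
    }
}

/* Accept one small write request; combine it if it hits the open line. */
static void wc_write(struct wc_buffer *b, uint32_t addr, uint8_t value)
{
    uint32_t base = addr & ~(uint32_t)(LINE_SIZE - 1);
    if (b->in_use && b->base != base)
        wc_flush(b);            /* new line: close out the old one */
    b->in_use = true;
    b->base = base;
    b->data[addr % LINE_SIZE] = value;
    b->valid_mask |= 1u << (addr % LINE_SIZE);
    if (b->valid_mask == 0xFFFFFFFFu)
        wc_flush(b);            /* line fully written: emit it */
}

int main(void)
{
    struct wc_buffer b = {0};
    for (uint32_t a = 0x1000; a < 0x1000 + 2 * LINE_SIZE; a++)
        wc_write(&b, a, (uint8_t)a);   /* 64 byte-sized requests... */
    wc_flush(&b);                      /* ...become 2 bus operations */
    return 0;
}
```

Run as shown, the 64 byte writes in main collapse into two combined bus operations, one per cache line.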
REFERENCES:
patent: 5434993 (1995-07-01), Liencres et al.
patent: 5446855 (1995-08-01), Dang et al.
patent: 5459842 (1995-10-01), Begun et al.
patent: 5535340 (1996-07-01), Bell et al.
patent: 6012118 (2000-01-01), Jaykumar et al.
patent: 6021467 (2000-02-01), Konigsburg et al.
patent: 410116228 (1998-05-01), None
“Handling the L2-Pipeline Least-Recently-Used”, IBM Technical Disclosure Bulletin, vol. 36, no. 12, Dec. 1993, pp. 621-624.
“Mechanism to Detect and Supress Sur-Stores in Store Stacks”, IBM Technical Disclosure Bulletin, vol. 36, no. 12, Dec. 1993, pp. 559-560.
“Parallel Synchronization with Hardware Collision Detection and Software Combining”, IBM Technical Disclosure Bulletin, vol. 32, no. 4A, Sep. 1989, pp. 259-261.
Examiners: Gaffin Jeffrey; Perveen Rehana
Attorney: Trop Pruner & Hu P.C.