Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
Reexamination Certificate
2000-08-22
2002-03-05
Nguyen, Hiep T. (Department: 2187)
C711S121000, C711S133000, C711S148000, C711S153000, C710S120000
active
06353876
ABSTRACT:
BACKGROUND OF THE INVENTION
This invention relates generally to computing systems with multiple shared memory resources and, more particularly, to methods for increasing the rate of data exchange between main memories and cache memories in multiple central processing unit (CPU) computing systems.
As is known in the art, complex computing systems, in particular systems which have multiple CPUs, may have some of what are known as “dirty” cache entries. A cache entry, or piece of data, is known as a “dirty” entry if it has been modified since the time it was fetched from the main memory into the cache. This means that the “dirty” cache entry has a different value from the original data fetched from the main memory, due to an action in the CPU, typically an arithmetic operation. Thus the information in the cache memory has been modified by some step in the computation process, and the original data entry in the main memory is no longer consistent with the newly calculated value for this particular piece of data in the cache, i.e., what is known as a “dirty” cache value. By contrast, a “clean” cache value is a memory value that has not been modified by the CPU, typically an instruction or a reference data value. The original data value in the main memory therefore needs to be updated to equal the modified or “dirty” cache entry. This is typically done by a “write-back” command, also known as “retiring the victim”. In a “write-back”, the modified or “dirty” cache entry is written back into the main memory location from which it was initially fetched, thus updating the data value. After the “dirty” cache entry has been written back into the main memory (i.e., the main memory location for that particular value has been updated to the new value), the memory is said to be “coherent” (i.e., there are no conflicting versions of the same data value in the computer system), and the computer system memory is said to be maintaining its “coherency”.
During the time period in which this “dirty” cache entry is waiting to be written back to the main memory, it is necessary to prevent the CPU from “writing over” the “dirty” cache entry with a different piece of information from a different location in the main memory. Such a different piece of data may be required to continue the progress of the program being run by the CPU. If a new piece of information is fetched from some portion of the main memory and placed into the particular cache memory location that currently holds the “dirty” information, the “dirty” entry is said to have been “run over” by an “impending fill”. Note that the “run over” “dirty” data can no longer be used to properly update the main memory. A CPU cache block that is displaced or “run over” by an “impending fill” is known as a “victim”. Another way of looking at this is to note that the “impending fill” is the new data that will be stored at the cache memory location, and the “victim” is the old data that was previously stored at that location, which needs to be rewritten into the main memory location from which it was originally fetched in order to keep the data in the memory up to date.
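The bookkeeping described above can be made concrete with a small model. The following is a minimal sketch under assumed names (CacheLine, WriteBackCache, and the direct-mapped placement are illustrative choices, not details from the patent): a “dirty” line is retired, i.e., written back to the main memory location it was fetched from, before the “impending fill” is allowed to run it over.

```python
# Minimal write-back cache model. A line's "dirty" bit records that the
# cached value differs from main memory and must be retired before a fill.

class CacheLine:
    def __init__(self):
        self.valid = False
        self.dirty = False
        self.tag = None    # main-memory address the line was fetched from
        self.data = None

class WriteBackCache:
    def __init__(self, num_lines, memory):
        self.lines = [CacheLine() for _ in range(num_lines)]
        self.memory = memory                  # backing store: address -> value

    def _index(self, address):
        return address % len(self.lines)      # direct-mapped placement

    def read(self, address):
        line = self.lines[self._index(address)]
        if not (line.valid and line.tag == address):
            self._fill(line, address)
        return line.data

    def write(self, address, value):
        line = self.lines[self._index(address)]
        if not (line.valid and line.tag == address):
            self._fill(line, address)
        line.data = value
        line.dirty = True                     # cache now differs from memory

    def _fill(self, line, address):
        # Retire the victim first: a dirty line is written back to its
        # original location so the impending fill cannot run over the
        # only up-to-date copy of that data.
        if line.valid and line.dirty:
            self.memory[line.tag] = line.data
        line.valid, line.dirty = True, False
        line.tag = address
        line.data = self.memory[address]

mem = {addr: 0 for addr in range(16)}
cache = WriteBackCache(4, mem)
cache.write(5, 42)       # the line holding address 5 becomes dirty
cache.read(9)            # 9 maps to the same line (9 % 4 == 5 % 4): a fill runs it over
assert mem[5] == 42      # the victim was retired first, so memory stays coherent
```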
As is known in the art, any potential “victim” may be exchanged with the “impending fill” data on the bus system, with the “victim” data then directed back to the portion of the main memory from which it was initially fetched, thus writing the updated data back into the original main memory location. This system of exchanging data works well in computing systems using bus lines to connect the CPU or CPUs to the main memory or memory modules.
A problem with exchanging data in computing systems that use bus lines is that the period of time required to wait for the main memory (generally composed of Dynamic Random Access Memories, i.e., DRAMs) to access the correct memory location and to ship the exchanged information on the bus, known as the “latency” period, reduces the operational speed of the system. For example, the sequence of events on a typical bus system might be:
1. The exchange command, containing the address of the “fill” data in the main memory and the address of the “victim” cache data which will be run over, is sent out over the bus line;
2. The “victim”, i.e., the “dirty” data, and its address show up on the bus, and the main memory writes this portion of the exchange transfer back into the “victim's” original memory address;
3. The main memory provides the new “fill” data.
The above sequence of events slows down the overall functional speed of the system; in other words, the “latency” period is increased. This situation of high exchange “latency” cannot be avoided because, as noted above, it is important to maintain “data coherency”, i.e., not to have multiple versions of the “same” data value in the computer.
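To make the transaction concrete, the three numbered steps above can be traced in code. This is a toy sketch, not the actual bus protocol; the Bus class, the transaction tuples, and the exchange function are hypothetical scaffolding.

```python
# Toy trace of the bus "exchange" transaction: one command carries both
# addresses, the victim is written back, then the fill data is returned.

class Bus:
    def __init__(self):
        self.log = []                  # record of transactions on the bus

    def send(self, transaction):
        self.log.append(transaction)

def exchange(bus, memory, fill_addr, victim_addr, victim_data):
    # 1. The exchange command carries the fill address and the address
    #    of the victim cache data that will be run over.
    bus.send(("exchange", fill_addr, victim_addr))
    # 2. The victim (dirty) data and its address show up on the bus and
    #    are written back into the victim's original memory location.
    bus.send(("victim-writeback", victim_addr, victim_data))
    memory[victim_addr] = victim_data
    # 3. The main memory provides the new fill data.
    fill_data = memory[fill_addr]
    bus.send(("fill", fill_addr, fill_data))
    return fill_data

memory = {0x10: 7, 0x20: 99}
bus = Bus()
assert exchange(bus, memory, fill_addr=0x20, victim_addr=0x10, victim_data=8) == 99
assert memory[0x10] == 8   # victim retired: memory remains coherent
```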
The above-mentioned situation with system coherency becomes even more serious, as compared to the bus type system discussed above, in what is known as a “crossbar switch” type system. A “crossbar switch” is a circuit which connects any of a series of CPUs or other data users (known as commanders) to any of a series of memory resources, in an arbitrary fashion or in a fashion dictated by the program. Any one of the data users can attach at any time to any one of the memory resources. This type of arrangement is faster than the bus system used in the prior art, because each CPU and each memory has what is known as a “hard link” with each of the other units. Data values are not simply dumped onto a bus in the hope that they arrive at the desired location without a collision with another data value from another one of the CPUs. Rather, the commander is connected directly to the specific memory resource containing the data value needed, and no other commander may have access to that memory resource during the time of interconnection. This results in what is known as a wider data transmission bandwidth. With a crossbar switch, the data transmission bandwidth may be the sum of all the individual paths. In other words, in a four-processor computer system, a crossbar switch may deliver four times the individual serial-port bandwidth of an equivalent bus, since all four CPUs may be connected to a different one of the memory resources at the same time.
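A crossbar's connection discipline can be sketched in a few lines. The names below (Crossbar, connect, disconnect) are illustrative, and the model assumes each memory resource may be held by one commander at a time while distinct commander-memory pairs connect in parallel.

```python
# Toy crossbar: each memory resource has at most one owning commander,
# and connections between distinct pairs can all be live at once.

class Crossbar:
    def __init__(self, num_memories):
        self.owner = [None] * num_memories    # commander holding each memory

    def connect(self, commander, memory):
        if self.owner[memory] is not None:
            return False                      # resource busy: caller must wait
        self.owner[memory] = commander        # dedicated "hard link"
        return True

    def disconnect(self, memory):
        self.owner[memory] = None

# Four CPUs each hold a different memory module simultaneously, so the
# aggregate bandwidth is the sum of the four individual links.
xbar = Crossbar(num_memories=4)
assert all(xbar.connect(cpu, mem) for cpu, mem in [(0, 1), (1, 0), (2, 3), (3, 2)])
assert xbar.connect(0, 2) is False            # module 2 already held by CPU 3
```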
A problem with a bus type computing system, as noted above, is that two or more memory data user elements (commanders) or memory resource elements may be trying to “write” data onto the same bus at the same time. This results in what are known as “contentions” or “collisions” between the multiple commanders and memory units, as each of these users and memories competes for access to what is, in essence, a single communication resource. The need to detect “collisions”, and to use an arbiter chip to resolve the “collisions” and “contentions”, contributes to the lack of speed in bus systems, particularly bus systems that have large numbers of commanders (or data users) and bystanders (or data resources) connected to them. Typically, arbiters resolve collisions by notifying each of the two contending commanders or bystanders that there was a “collision”, i.e., that the data did not get to its intended location, and ordering each of the contenders to step back and wait for a random period of time before attempting to access the bus again. Clearly, time is lost when the data does not get to its intended destination, and the random waiting period required to decrease the probability of another collision between the same two contenders also represents lost time.
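The collision-and-backoff behavior described above can be modeled in a few lines. This is a simplified sketch, not any particular arbiter chip's design; the function names and the uniform random wait are assumptions.

```python
import random

# Toy bus arbitration: at most one commander may drive the bus per cycle.
# On a collision, every contender is told to back off for a random delay.

def backoff_cycles(max_wait=8):
    # A random wait makes a repeat collision between the same two
    # contenders less likely; the bound of 8 cycles is arbitrary.
    return random.randint(1, max_wait)

def arbitrate(requests):
    """requests: ids of commanders driving the bus in the same cycle."""
    if len(requests) <= 1:
        return list(requests), {}             # no contention: grant the bus
    # Collision: no data reaches its destination, and each contender
    # must wait an independently chosen random period before retrying.
    return [], {c: backoff_cycles() for c in requests}

granted, backoffs = arbitrate(["cpu0", "cpu2"])
assert granted == [] and set(backoffs) == {"cpu0", "cpu2"}
granted, backoffs = arbitrate(["cpu1"])
assert granted == ["cpu1"] and backoffs == {}
```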
Another way of looking at this problem is to say that a bus type system is limited to some maximum serial bandwidth, whereas a crossbar switch has the ability to move data in a parallel fashion, so its aggregate bandwidth is the sum of the individual serial-port bandwidths of its multiple data paths.
For example, in a computer system having four CPUs, or commanders, four main memory modules, and a crossbar switch, there would exist only a one-in-four chance that the cache “victim” data's original memory address and the new “fill” data's memory address happen to be in the same main memory module.
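The one-in-four figure follows from assuming that the victim's and the fill's addresses map uniformly and independently across the four modules, so the victim's module matches the fill's with probability 1/4. A quick simulation (illustrative only) confirms the figure:

```python
import random

# Empirical check: with four modules and uniform, independent address-to-
# module mapping, victim and fill land in the same module ~25% of the time.

NUM_MODULES = 4
TRIALS = 100_000
same = sum(
    random.randrange(NUM_MODULES) == random.randrange(NUM_MODULES)
    for _ in range(TRIALS)
)
print(f"empirical P(same module) = {same / TRIALS:.3f}")  # about 0.250
```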
Inventors: Van Doren, Stephen; Goodwin, Paul M.
Assignee: Compaq Information Technologies Group L.P.
Attorney: Hamilton Brook Smith & Reynolds P.C.
Primary Examiner: Nguyen, Hiep T.