Multi-bus data processing system in which all data words in...

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


Details

C711S119000, C711S122000, C711S123000, C711S131000, C711S141000


active

06223260

ABSTRACT:

BACKGROUND OF THE INVENTION
This invention relates to the structure and operation of cache memories in a distributed data processing system.
In the prior art, a typical distributed data processing system consists of a single bus, a main memory module coupled to the bus, and multiple digital computers which are coupled to the bus through respective cache memories. One such system, for example, is the Pentium Pro system that was recently announced by Intel, in which from one to four digital computers are coupled to a host bus through respective cache memories. See page 1 of Electronic Engineering Times for Oct. 30, 1995.
Each cache memory in the above distributed data processing system operates faster than the main memory; thus, the cache memories provide a performance increase. But each cache memory has a smaller storage capacity than the main memory; thus, at any one instant, each cache memory stores only a subset of all of the data words which are stored in the main memory.
In order to keep track of which data words are in a particular cache memory, each data word is stored in the cache memory with an accompanying compare address and tag bits. The compare address identifies the address of the corresponding data word in the main memory, and the tag bits identify the state of the stored data word. In the above Pentium Pro system, there are four tag bits: E, S, M, and I.
Tag bit E is true when the corresponding data word is stored in just a single cache memory. Tag bit S is true when the corresponding data word is stored in more than one cache memory. Tag bit M is true when the corresponding data word has been modified by the respective computer to which the cache memory is coupled. And tag bit I is true when the stored data word is invalid and cannot be used.
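These four tag bits correspond to the well-known MESI cache-coherence states. A minimal sketch of how one cache entry and its tag state might be represented is given below; the type and field names (mesi_state_t, cache_line_t, cache_hit) are illustrative assumptions for exposition, not terminology from the patent.

```c
#include <stdbool.h>
#include <stdint.h>

/* The four tag states described above (the MESI protocol). */
typedef enum {
    TAG_INVALID,    /* I: the stored data word cannot be used            */
    TAG_SHARED,     /* S: the data word is stored in more than one cache */
    TAG_EXCLUSIVE,  /* E: the data word is stored in just a single cache */
    TAG_MODIFIED    /* M: the data word was modified by the local CPU    */
} mesi_state_t;

/* One cache entry: the data word, the compare address that identifies
 * where the word resides in main memory, and its tag state. */
typedef struct {
    uint64_t     data_word;
    uint32_t     compare_address;
    mesi_state_t state;
} cache_line_t;

/* A READ of an address hits in the cache only when the stored entry's
 * compare address matches and its tag state is not invalid. */
static bool cache_hit(const cache_line_t *line, uint32_t address)
{
    return line->compare_address == address && line->state != TAG_INVALID;
}
```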
An inherent limitation of the above Pentium Pro data processing system is that only a limited number of digital computers, with their respective cache memories, can be connected to the host bus. This limitation occurs because the physical length of the bus must be restricted in order to transfer signals on the bus at some predetermined speed. If the bus length is increased to accommodate more connections by additional digital computers and their respective cache memories, then the speed at which the bus operates must be decreased.
By comparison, in accordance with the present invention, a multi-level distributed data processing system is disclosed which has the following architecture: a single system bus with a main memory coupled thereto; multiple high level cache memories, each of which has a first port coupled to the system bus and a second port coupled to a respective processor bus; and each processor bus being coupled through respective low level cache memories to respective digital computers. With this multi-level distributed data processing system, each processor bus can be restricted in length and thus operate at a high speed; and at the same time, the maximum number of digital computers on each processor bus can equal the maximum number of computers in the entire Pentium Pro system.
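As an illustration only, the fan-out of that architecture can be pictured as nested structures; the names below, and the limit of four computers per processor bus (taken from the Pentium Pro figure cited above), are assumptions for exposition rather than claim language.

```c
#include <stddef.h>

#define CPUS_PER_PROCESSOR_BUS 4     /* assumed, matching the Pentium Pro limit cited above */

struct low_level_cache {             /* couples one digital computer to its processor bus   */
    int cpu_id;
};

struct high_level_cache {            /* port 1: system bus; port 2: its own processor bus   */
    struct low_level_cache processor_bus[CPUS_PER_PROCESSOR_BUS];
};

struct multi_level_system {          /* single system bus with main memory and N high level caches */
    struct high_level_cache *caches;
    size_t num_caches;               /* total computers = num_caches * CPUS_PER_PROCESSOR_BUS      */
};
```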
However, a problem which needs to be addressed in the above multi-level distributed data processing system is that each high level cache memory preferably should be able to respond quickly and simultaneously to two different READ commands, one of which occurs on a processor bus and the other of which occurs on the system bus. If the READ command on the processor bus is for a data word which is stored in the high level cache memory, then the high level cache memory preferably should present that data word on the processor bus quickly in order to enhance system performance. At the same time, if the READ command on the system bus is for a data word which is stored in both the main memory and the high level cache memory, then the high level cache memory also should respond quickly on the system bus with a control signal which indicates to the sender of the READ command that the data word is shared, as opposed to being exclusive. Likewise, if the READ command on the system bus is for a data word that is in the high level cache memory and which has been modified there by a digital computer on the processor bus, then the high level cache memory preferably should respond quickly on the system bus with a control signal which indicates to the sender of the READ command that the requested data word will be deferred. Then the high level cache memory can fetch the modified data word and send it on the system bus.
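The two system-bus responses just described (shared versus deferred) can be summarized with a small decision function. This sketch builds on the cache_line_t / cache_hit example shown earlier, and the snoop_response_t names are hypothetical.

```c
typedef enum { SNOOP_MISS, SNOOP_SHARED, SNOOP_DEFER } snoop_response_t;

/* Decide how the high level cache answers a READ snooped from the system bus. */
static snoop_response_t snoop_system_bus_read(const cache_line_t *line,
                                              uint32_t address)
{
    if (!cache_hit(line, address))
        return SNOOP_MISS;          /* not held here: main memory supplies the word          */
    if (line->state == TAG_MODIFIED)
        return SNOOP_DEFER;         /* modified on the processor bus: data will follow later */
    return SNOOP_SHARED;            /* held clean: tell the requester the word is shared     */
}
```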
In the prior art, U.S. Pat. No. 5,513,335 describes a two-port cache in which each port has its own set of compare addresses. Thus, this cache is able to make address comparisons quickly for two different READ commands which occur simultaneously on the two ports. However, during the execution of a READ command, the tag bits for the compare address at which the READ command occurs may have to be changed. And if a READ command on one port causes the tag bits to change on the other port while those tag bits are being used by that other port, a race condition which causes errors will occur. Such a race occurs in the two-port cache of U.S. Pat. No. 5,513,335.
Accordingly, a primary object of the invention is to provide a multi-level distributed data processing system in which the above problems are overcome.
BRIEF SUMMARY OF THE INVENTION
In accordance with the present invention, a two-port cache memory, for use in a multi-level distributed data processing system, is comprised of a first port for receiving read commands from a system bus and a second port for receiving read commands from a processor bus. Within this two-port cache, a first tag memory is coupled to the first port; a second tag memory is coupled to the second port; and a queue is coupled between the first and second tag memories. Also within this two-port cache, the first tag memory initially stores a compare address with tag bits in an initial state, and the second tag memory initially stores the same compare address with the same tag bits. While the tag bits for the stored compare address are in the initial state, the first tag memory detects that a read command is received on the first port with an address which equals the stored compare address. In response to that detection, the first tag memory: a) changes the tag bits for the compare address in the first tag memory from the initial state to a predetermined state, b) sends a first control signal on the system bus, and c) loads the compare address with a second control signal into the queue. Thereafter, the second tag memory responds to the queue by changing the tag bits for the compare address in the second tag memory from the initial state to the predetermined state. This change in the second tag memory occurs when the second tag memory is not busy executing another command from the processor bus. One example of the initial state of the tag bits is the exclusive state, and the predetermined state to which they are changed is the shared state. Another example of the initial state is the modified state, and the predetermined state to which it is changed is the invalid state.
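A behavioral sketch of that sequence is given below. It models the two tag memories and the connecting queue as plain C with a software FIFO; all names, the queue depth, and the exclusive-to-shared / modified-to-invalid choice embedded in the helper are illustrative assumptions about one reading of the summary, not the claimed hardware.

```c
#include <stdbool.h>
#include <stdint.h>

typedef enum { ST_INVALID, ST_SHARED, ST_EXCLUSIVE, ST_MODIFIED } tag_state_t;

typedef struct {              /* one entry in a tag memory */
    uint32_t    compare_address;
    tag_state_t state;
} tag_entry_t;

typedef struct {              /* queue entry carrying the pending tag update */
    uint32_t    compare_address;
    tag_state_t new_state;    /* plays the role of the "second control signal" */
} queue_entry_t;

#define QUEUE_DEPTH 8
typedef struct {
    queue_entry_t slot[QUEUE_DEPTH];
    int head, tail, count;
} tag_queue_t;

static void queue_push(tag_queue_t *q, queue_entry_t e)
{
    q->slot[q->tail] = e;     /* overflow handling omitted in this sketch */
    q->tail = (q->tail + 1) % QUEUE_DEPTH;
    q->count++;
}

/* First tag memory: a system-bus READ hits the stored compare address.
 * Change the tag bits locally, answer on the system bus at once, and
 * queue the same change for the second tag memory. */
static void system_port_read(tag_entry_t *tag1, tag_queue_t *q, uint32_t addr)
{
    if (tag1->compare_address != addr || tag1->state == ST_INVALID)
        return;                                   /* no hit: nothing to do          */

    tag_state_t next = (tag1->state == ST_MODIFIED) ? ST_INVALID : ST_SHARED;
    tag1->state = next;                           /* a) update the first tag memory */
    /* b) the shared or deferred control signal is driven on the system bus here */
    queue_push(q, (queue_entry_t){ addr, next }); /* c) hand the change to port 2   */
}

/* Second tag memory: drain the queue only when it is not busy serving a
 * command from the processor bus, so the update cannot race with one. */
static void processor_port_idle(tag_entry_t *tag2, tag_queue_t *q, bool busy)
{
    while (!busy && q->count > 0) {
        queue_entry_t e = q->slot[q->head];
        q->head = (q->head + 1) % QUEUE_DEPTH;
        q->count--;
        if (tag2->compare_address == e.compare_address)
            tag2->state = e.new_state;
    }
}
```

Deferring the second tag memory's update through the queue is what removes the race described for the prior-art two-port cache: the processor-bus port never has its tag bits changed underneath it while it is executing a command.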


REFERENCES:
patent: 4755930 (1988-07-01), Wilson, Jr. et al.
patent: 5113514 (1992-05-01), Albonesi et al.
patent: 5136700 (1992-08-01), Thacker
patent: 5241641 (1993-08-01), Iwasa et al.
patent: 5249282 (1993-09-01), Segers
patent: 5274790 (1993-12-01), Suzuki
patent: 5285323 (1994-02-01), Hetherington et al.
patent: 5297269 (1994-03-01), Donaldson et al.
patent: 5319768 (1994-06-01), Rastegar
patent: 5394555 (1995-02-01), Hunter et al.
patent: 5398325 (1995-03-01), Chang et al.
patent: 5432918 (1995-07-01), Stamm
patent: 5465344 (1995-11-01), Hirai et al.
patent: 5513335 (1996-04-01), McClure
patent: 5522057 (1996-05-01), Lichy
patent: 5539893 (1996-07-01), Thompson et al.
patent: 5581725 (1996-12-01), Nakayama
patent: 5581729 (1996-12-01), Nishtala et al.
patent: 5598550 (1997-01-01), Shen et al.
patent: 5623632 (1997-04-01), Liu et al.
patent: 5706464 (1998-01-01), Moore et al.
A. Wolfe, “Th
