Tagged access synchronous bus architecture

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories


Details

Classification codes: C711S158000, C711S163000, C711S151000
Type: Reexamination Certificate
Status: active
Patent number: 06247101

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Technical Field
The present invention relates generally to system bus architectures for data processing systems and in particular to system bus architectures for systems having multiple devices utilizing the system bus for data access. Still more particularly, the present invention relates to a synchronous bus architecture employing reusable tags for system bus transactions to obviate the need for wait states.
2. Description of the Related Art
Memories, particularly dynamic random access memories (DRAMs), operate at optimal performance when blocks of data are transferred, as illustrated in Table I below.
TABLE I
Peak Access Rates (MB/sec)

                                 Data Block Size (words)
DRAM Type                           2       4       8
EDO (60 ns) best case             133     133     133
EDO (60 ns) worst case             59      82     101
SDRAM (100 MHz) best case         400     400     400
SDRAM (100 MHz) worst case         66     114     177
SDRAM (125 MHz) best case         500     500     500
SDRAM (125 MHz) worst case         83     142     222
In the above figures, the EDO DRAM assumes a 15 ns control clock period. Best case assumes same-page access for every block, so RAS is held low with no precharge and a 30 ns CAS cycle time. Worst case assumes 60 ns RAS to first data, plus a 30 ns CAS cycle for each additional data word and a 45 ns precharge on every access. The SDRAM assumes a RAS to CAS of 3 clocks, a CAS latency of 3 clocks, and a precharge of 4 clocks. Best case assumes continuous access of data, as with the EDO DRAM. Worst case assumes RAS to CAS of 3 clocks, CAS to data of 3 clocks, 1 clock per data word, and 4 clocks for precharge on every access, for totals of 12, 14 and 18 clocks for the 2-, 4- and 8-word blocks, respectively. A sustained rate on SDRAM with an 8-word block size, with 50% of accesses going to the same bank with precharge (worst case), 50% bank interleaved (close to best case), and refresh included, would be 332 MB/sec.
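The figures in Table I follow directly from these timing assumptions. As a rough check, the short Python sketch below reproduces the worst-case rows; it assumes a 4-byte bus word (not stated explicitly above, but consistent with the figures), and the table values appear to be truncated to whole MB/sec.

# Sketch: reproduces the worst-case Table I arithmetic from the timing
# assumptions stated above.  A 4-byte bus word is assumed.

WORD_BYTES = 4

def edo_worst(block_words, ras_to_data=60, cas_cycle=30, precharge=45):
    """EDO worst case: full RAS access, a CAS cycle per additional word,
    and a precharge on every access (all times in ns)."""
    ns = ras_to_data + (block_words - 1) * cas_cycle + precharge
    return block_words * WORD_BYTES * 1000 / ns      # MB/sec

def sdram_worst(block_words, clock_mhz, ras_to_cas=3, cas_latency=3, precharge=4):
    """SDRAM worst case: RAS-to-CAS + CAS latency + 1 clock per word
    + precharge on every access."""
    clocks = ras_to_cas + cas_latency + block_words + precharge
    ns = clocks * 1000 / clock_mhz
    return block_words * WORD_BYTES * 1000 / ns      # MB/sec

for words in (2, 4, 8):
    print(words, "words:",
          int(edo_worst(words)),          # 59, 82, 101
          int(sdram_worst(words, 100)),   # 66, 114, 177
          int(sdram_worst(words, 125)))   # 83, 142, 222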
Similar to memory devices, networks and data storage systems provide optimal performance when handling blocks of data. Processors, on the other hand, typically perform single word accesses during execution of instructions and generally have very limited block data movement capability.
In most system architectures, memories function primarily as slaves (devices which service data transfer requests) on the system bus, while processors generally function primarily as masters (devices initiating data transfers). In such architectures, the device being read on the system bus (the slave) determines the amount of time required to complete a data access. If the time required for a particular data access is greater than the normal read cycle time, wait states are typically generated to the device initiating the data access (the master) by the device containing the desired data (the slave). These wait states are wasted bus cycles which could be utilized by another master device on the bus.
Even though the optimal performance characteristics of memories, which are typically bus slaves, and processors, which are usually bus masters, are in conflict with each other, the system bus architecture generally follows that of the processor, even in systems where high data throughput is paramount.
Table I illustrates the performance gains achieved by utilizing synchronous dynamic random access memories (SDRAMs) rather than conventional page-mode EDO DRAMs. The 60 ns EDO DRAMs and the 100 MHz SDRAMs have an equivalent RAS-to-data access time, while the 125 MHz SDRAMs represent a performance increase. A hidden performance increase is the ability to address banks separately in the SDRAM, effectively overlapping the RAS-to-data time of one bank with the block data transfer of another bank. In order to take advantage of this feature, however, the SDRAM controller needs to know the addresses for future reads and/or writes in advance, a feature which is not supported by standard microprocessor bus architectures. A possible exception is the ability of some processors to buffer writes, with write commands being queued to be written to memory. In order to take full advantage of the ability of SDRAMs to overlap accesses, an architecture is required which buffers both reads and writes, allowing many operations to be queued at once.
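A rough sense of the size of this hidden gain can be taken from an idealized model: if enough banks are available that the activation and precharge of the next block are completely hidden behind the current 8-word burst, the bus streams one word per clock after the first access. The sketch below uses the worst-case SDRAM timings assumed for Table I and is illustrative only.

# Sketch: back-to-back 8-word reads with and without bank overlap,
# using the worst-case SDRAM timing assumptions from Table I.

LATENCY   = 3 + 3   # RAS-to-CAS plus CAS latency, in clocks
BURST     = 8       # 1 clock per data word, 8-word block
PRECHARGE = 4       # clocks

def serial_clocks(n_blocks):
    """Same-bank accesses: every block pays full latency and precharge."""
    return n_blocks * (LATENCY + BURST + PRECHARGE)

def interleaved_clocks(n_blocks):
    """Idealized bank interleave: activation and precharge of the next
    block are hidden behind the current burst, so only the first latency
    and the final precharge are exposed."""
    return LATENCY + n_blocks * BURST + PRECHARGE

n = 16
print(serial_clocks(n))       # 288 clocks
print(interleaved_clocks(n))  # 138 clocks -- roughly twice the throughput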
In order to avoid wasting bus cycles on wait states, the slave must receive the access request and address from the master; then the master and slave must release the bus for use by other devices and complete the access later, after the slave has had time to retrieve the requested data. One approach to implementing this requirement is to permit a bus master to disconnect from the bus after transmitting an address, with the intent of later returning to read the requested data from the slave device. However, this approach suffers from the disadvantage that the bus master cannot determine when the slave device is ready. Thus, the bus master could reconnect too early, before the slave device is ready with the requested data, thereby wasting bus cycles and requiring wait states or a second disconnect/reconnect.
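The cost of guessing wrong can be pictured with a toy model (the latency range and fixed reconnect delay below are invented for illustration): the master reconnects after a fixed number of cycles, and any shortfall against the slave's actual latency is paid as wait states or a second disconnect.

# Toy model of a blind disconnect/reconnect: the master releases the bus
# after sending the address and reconnects after a fixed guess; if the
# slave is not yet ready, the difference is lost as wait states.
# The latency range and the guess are illustrative, not from the patent.

import random

def wasted_cycles(slave_latency, reconnect_guess):
    """Bus cycles lost when the master reconnects before the slave is ready."""
    return max(0, slave_latency - reconnect_guess)

random.seed(0)
total = sum(wasted_cycles(random.randint(4, 20), reconnect_guess=8)
            for _ in range(1000))
print(total, "bus cycles lost to wait states over 1000 reads")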
It would be desirable, therefore, to provide a bus architecture permitting multiple devices to act as bus masters without the need for wait states. It would further be advantageous for reads and writes to be fully buffered, and for address and data tenures of bus transactions to be decoupled.
SUMMARY OF THE INVENTION
Reusable tags are assigned to read and write requests on a tagged access synchronous bus. This allows multiple reads to be queued and overlapped on the tagged access synchronous bus to maximize data transfer rates. Writes are buffered to similarly allow multiple writes to be overlapped. All data transfers on the tagged access synchronous bus typically would default to a cache-block amount of data, with critical-word-first and early-termination capabilities provided to permit processor execution to proceed without waiting for an entire cache block to be loaded. The tagged access synchronous bus architecture thus allows the system to take full advantage of high-speed memory devices such as SDRAMs, RDRAMs, etc., while decoupling bus data transfers from processor execution for increased overall system performance.
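The mechanism can be pictured with a minimal sketch (class and method names below are illustrative, not taken from the patent): a small pool of reusable tags decouples the address tenure from the data tenure, so several reads can be outstanding at once and may complete out of order.

# Minimal sketch of reusable transaction tags on a split address/data bus.
# Names are illustrative only.

from collections import deque

class TaggedBus:
    def __init__(self, num_tags=8):
        self.free_tags = deque(range(num_tags))   # reusable tag pool
        self.pending = {}                         # tag -> requested address

    def issue_read(self, address):
        """Address tenure: claim a tag, post the address, release the bus."""
        if not self.free_tags:
            raise RuntimeError("all tags outstanding; request must retry")
        tag = self.free_tags.popleft()
        self.pending[tag] = address
        return tag

    def complete_read(self, tag, data):
        """Data tenure: the slave returns data labelled with its tag,
        and the tag is recycled for a later transaction."""
        address = self.pending.pop(tag)
        self.free_tags.append(tag)
        return address, data

bus = TaggedBus()
t0 = bus.issue_read(0x1000)                    # several reads queued and
t1 = bus.issue_read(0x2000)                    # overlapped on the bus
print(bus.complete_read(t1, b"\xbe\xef"))      # data may return out of order
print(bus.complete_read(t0, b"\xca\xfe"))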

