High speed/low speed interface with prediction cache

Electrical computers and digital data processing systems: input/ – Intrasystem connection – Bus access regulation

Reexamination Certificate



C710S120000


active

06301629

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to data processing systems. In particular, the present invention relates to an interface that is capable of communicating with high speed and low speed sub-systems in a data processing system.
2. Discussion of the Related Art
To improve the performance of computer systems and to take full advantage of the capabilities, including the speed, of the CPUs used in these systems, there is a need to increase the speed at which information is transferred from the main memory to the CPU. Microprocessors are becoming faster as microelectronic technology improves: every new generation of processors is about twice as fast as the previous generation, due to the shrinking features of integrated circuits. Unfortunately, memory speed has not increased concurrently with microprocessor speed. While Dynamic Random Access Memory (DRAM) technology rides the same technological curve as microprocessors, technological improvements yield denser DRAMs, but not substantially faster DRAMs. Thus, while microprocessor performance has improved by a factor of about one thousand in the last ten to fifteen years, DRAM speeds have improved by only 50%. Accordingly, there is currently about a twenty-fold gap between the speed of present day microprocessors and DRAM, and this speed discrepancy between processor and memory will likely increase in the future.
The factors affecting the speed of transferring information from the main memory, which typically includes DRAMs, are the speed discrepancy mentioned above and the limited bandwidth of currently available off-the-shelf DRAMs. The problem caused by the speed discrepancy is also known as the latency problem.
To reduce the latency problem, cache memory is used to cache the information. However, currently available cache memories have limited capacity, so only a small portion of the information stored in the main memory can be cached at any given time. Thus, if the information requested by the CPU is not in the cache, the main memory must be accessed to obtain the information.
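The cache-or-memory access pattern described above can be sketched as follows. The capacities and latencies here are illustrative assumptions (a roughly twenty-fold miss penalty, echoing the speed gap discussed above), not figures taken from the text:

```python
# Minimal sketch of a CPU read that checks a small cache before
# falling back to main memory. Sizes and latencies are illustrative.
CACHE_CAPACITY = 4          # the cache holds far fewer lines than main memory
CACHE_HIT_COST = 1          # cycles (assumed)
MEMORY_ACCESS_COST = 20     # cycles (assumed ~20x slower, per the speed gap)

cache = {}                  # address -> data
main_memory = {addr: f"data@{addr}" for addr in range(64)}

def read(addr):
    """Return (data, cost_in_cycles) for a read at addr."""
    if addr in cache:
        return cache[addr], CACHE_HIT_COST
    # Miss: fetch from main memory and cache the line,
    # evicting an arbitrary entry if the cache is full.
    data = main_memory[addr]
    if len(cache) >= CACHE_CAPACITY:
        cache.pop(next(iter(cache)))
    cache[addr] = data
    return data, MEMORY_ACCESS_COST

_, first_cost = read(7)     # miss: must go to main memory
_, second_cost = read(7)    # hit: served from the cache
```

The sketch shows why limited cache capacity matters: once more distinct addresses than CACHE_CAPACITY are touched, earlier lines are evicted and subsequent reads pay the full memory latency again.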
An alternative solution is to increase the rate of transfer of information between the CPU and the main memory; in other words, to increase the bandwidth of the system. However, the presently available high bandwidth systems have an inherent problem caused by the limit on the number of loads that can be connected to the high speed buses used in these systems. In addition, the presently available high speed buses are narrow.
FIG. 1 is a block diagram of a presently available high bandwidth data processing system, capable of high speed transfer of information between the CPU and the main memory. The system of FIG. 1 is generally designated by reference number 10. It includes a processing unit 12, cache memory II 20, memory controller 22, memory 28, and I/O controller 30. Processing unit 12 includes a CPU 14, a cache memory I 16, and a high speed interface 18. Memory controller 22 includes controller 24 and high speed interface 26. It should be mentioned that high speed interfaces 18 and 26 are typically identical. The processing unit 12 communicates with the memory controller 22 via high speed bus 32. In addition, memory controller 22 communicates with memory 28 and I/O controller 30 via high speed buses 34 and 36, respectively. Memory 28 includes specially designed high speed DRAMs (not shown).
High speed buses 32-36 are designed to transfer information at a very high speed. However, as mentioned above, the currently available high speed buses are very narrow. For example, the currently available buses have between 9 and 16 data lines, which means that at most 16 bits, or 2 bytes, of information can be transferred over these buses at any time. However, since this information is transferred at a very high speed, the resulting rate of transfer is very fast. For example, the currently available high speed buses are capable of transferring information at clock rates between 500 MHz and 1 GHz, which means that the transfer rate of a 16-line bus is between 1 Gbyte/sec and 2 Gbyte/sec. Since these buses operate at very high frequencies, special interfaces must be provided for them. RAMBUS Inc. of Mountain View, Calif., has designed a high speed interface that is capable of interfacing with high speed buses, and numerous manufacturers produce the RAMBUS high speed interface under license from RAMBUS, Inc. In system 10, high speed interfaces 18 and 26 are used to enable the system to take advantage of the high speed buses 32-36.
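The transfer-rate figures above follow directly from bus width times clock rate. A quick check, assuming one transfer per clock cycle (the function name is illustrative):

```python
def bus_bandwidth_bytes_per_sec(data_lines, clock_hz):
    """Peak bus bandwidth, assuming one transfer per clock cycle."""
    bytes_per_transfer = data_lines / 8
    return bytes_per_transfer * clock_hz

# A 16-line bus clocked between 500 MHz and 1 GHz:
low = bus_bandwidth_bytes_per_sec(16, 500e6)   # 1e9 bytes/sec = 1 Gbyte/sec
high = bus_bandwidth_bytes_per_sec(16, 1e9)    # 2e9 bytes/sec = 2 Gbyte/sec
```

This also makes the narrowness trade-off concrete: the high transfer rate comes entirely from the clock, since each cycle moves at most 2 bytes.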
The presently available high speed interfaces have limitations that ultimately limit the performance of system 10. For example, the presently available RAMBUS high speed interface can support a maximum of two loads, such as two high speed RAMBUS memories. This limits the amount of storage available in high speed data processing systems. Consequently, in systems that require the connection of more than two loads to each subsystem, more than one high speed interface must be used, which increases the cost of the systems.
Finally, to take advantage of the capabilities of the high speed buses, specially designed DRAMs must be used in memory 28. These DRAMs are expensive, and their use would increase the cost of system 10.
Thus, there is a need for a subsystem that is capable of interfacing with presently available “low speed, low cost” subsystems, such as main memories that incorporate presently available DRAMs, as well as with high speed subsystems, without causing a degradation in the performance of high bandwidth data processing systems.
SUMMARY AND OBJECTIVES OF THE INVENTION
It is an object of the present invention to provide a high speed/low speed interface subsystem that provides the capability of interfacing with high speed subsystems and with low speed, low cost subsystems in a high bandwidth data processing system, while maintaining a high information transfer rate.
It is another object of the present invention to provide a high speed/low speed interface subsystem that is capable of substantially reducing DRAM latency.
It is another object of the present invention to provide a high speed/low speed interface subsystem that is capable of connecting to more than two loads.
It is another object of the present invention to provide a high speed/low speed interface subsystem that is capable of interfacing with high speed and low speed subsystems and is capable of connecting to more than two loads.
It is another object of the present invention to provide a high speed/low speed interface subsystem that is capable of interfacing with high speed and low speed subsystems and is capable of substantially reducing DRAM latency.
It is another object of the present invention to provide a high speed/low speed interface subsystem that is capable of interfacing with high speed and low speed subsystems, is capable of substantially reducing DRAM latency, and is capable of connecting to more than two loads.
It is another object of the present invention to provide a monolithic or a discrete subsystem including a high speed interface, a low speed interface, and a cache prediction unit that provides the capability of interfacing with low speed subsystems and high speed subsystems via low speed buses and high speed buses, respectively, while maintaining a high information transfer rate.
Finally, it is an object of the present invention to provide a monolithic or discrete subsystem including a high speed interface, a low speed interface, a cache prediction unit, and memory controller unit that provides the capability of interfacing with low
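The objects above name a cache prediction unit but the excerpt does not detail its policy. One common form of prediction is sequential prefetch; the sketch below is purely illustrative of that general idea, and the class name, next-line policy, and data layout are all assumptions rather than details drawn from the patent:

```python
# Illustrative prediction cache using sequential prefetch: on a miss
# at address A, fetch A from the slow (low speed) side and also
# speculatively prefetch A+1, so a later sequential access hits
# without paying the low speed subsystem's latency.
class PredictionCache:
    def __init__(self, backing):
        self.backing = backing   # slow (low speed) subsystem contents
        self.lines = {}          # cached address -> data

    def read(self, addr):
        if addr in self.lines:
            return self.lines[addr], "hit"
        # Miss: fetch the requested line...
        self.lines[addr] = self.backing[addr]
        # ...and predict the next sequential access.
        if addr + 1 in self.backing:
            self.lines[addr + 1] = self.backing[addr + 1]
        return self.lines[addr], "miss"

memory = {a: a * 10 for a in range(16)}
pc = PredictionCache(memory)
_, first = pc.read(4)    # miss: fetches 4 and prefetches 5
_, second = pc.read(5)   # hit: the prediction paid off
```

Under a sequential access pattern such a unit hides the low speed subsystem's latency behind the high speed interface, which is the benefit the objects above claim for the combined subsystem.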
