Vector and scalar data cache for a vector multiprocessor

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


Details

Classifications: C712S003000, C712S004000, C712S011000, C712S207000
Type: Reexamination Certificate
Status: active
Patent number: 06665774

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates to cache memories for high-speed computers and more specifically to cache memories for vector and scalar data in a computer having vector/scalar processors.
BACKGROUND OF THE INVENTION
A high-speed computer needs fast access to data in memory. The largest and fastest of such computers are known as supercomputers. One method of speeding up a computer is by “pipelining,” wherein the computer's digital logic between an input and an output is divided into several serially connected successive stages. Data are fed into the computer's input stage before data previously input are completely processed through the computer's output stage. There are typically many intermediate stages between the input stage and the output stage. Each stage performs a portion of the overall function desired, adding to the functions performed by previous stages. Thus, multiple pieces of data are in various successive stages of processing at each successive stage of the pipeline between the input and output stages. Preferably, each successive system clock propagates the data one stage further in the pipeline.
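As an illustration only (the stage count, stage function, and input values below are hypothetical, not from the patent), a pipeline can be modeled in software as an array of stages that all advance on each system clock, so a new datum enters the input stage before earlier data have reached the output stage:

```c
#include <stdio.h>

#define STAGES   4   /* hypothetical four-stage pipeline */
#define N_INPUTS 5

/* Stand-in for the partial work performed by one pipeline stage. */
static int stage_op(int stage, int value) { return value + stage; }

int main(void) {
    int pipe[STAGES] = {0};
    int inputs[N_INPUTS] = {10, 20, 30, 40, 50};

    /* One iteration = one system clock: every in-flight datum
     * advances exactly one stage, and a new datum enters stage 0. */
    for (int clk = 0; clk < N_INPUTS + STAGES - 1; clk++) {
        for (int s = STAGES - 1; s > 0; s--)
            pipe[s] = stage_op(s, pipe[s - 1]);
        pipe[0] = (clk < N_INPUTS) ? inputs[clk] : 0;
        if (clk >= STAGES - 1)                       /* pipeline is full */
            printf("clock %d: output %d\n", clk, pipe[STAGES - 1]);
    }
    return 0;
}
```

After an initial fill latency of STAGES - 1 clocks, one result emerges per clock, which is why pipelining raises throughput even though each individual datum still takes several clocks end to end.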
As a result of pipelining, the system clock can operate at a faster rate than the system clocks of non-pipelined machines. In some of today's computers, the system clock cycles in as little as one nanosecond (“ns”) or less, allowing up to a billion operations per second or more through a single functional unit. Parallel functional units within each processor, and parallel processors within a single system, allow even greater throughput. Achieving such high throughput is only possible, however, if data are fed into each pipeline at close to the system clock rate.
As processor speeds have increased, the size of memory in a typical computer has also increased drastically. In addition, error-correction circuitry is now placed in the memory path to increase reliability. Memory-access speeds have improved over time, but the increased size of memory and the complexity of error-correction circuitry have meant that memory-access time has remained approximately constant. For example, a typical supercomputer system clock rate may have improved from roughly 8 ns to 4 ns to 2 ns to 1 ns over four generations, while memory-access times remained at approximately 60 to 100 ns over the same period. These times mean that with a 96-ns memory, the 8-ns processor accesses memory in 12 clocks, the 4-ns processor in 24 clocks, and the 2-ns processor in 48 clocks. As a result, a computer which randomly accessed data throughout memory would see almost no overall improvement in data-processing speed even if its system clock rate were increased dramatically.
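Spelling out the arithmetic behind those clock counts, using the 96-ns figure from the example:

\[
\text{access time in clocks} \;=\; \frac{t_{\mathrm{mem}}}{t_{\mathrm{clk}}}, \qquad
\frac{96\ \mathrm{ns}}{8\ \mathrm{ns}} = 12, \quad
\frac{96\ \mathrm{ns}}{4\ \mathrm{ns}} = 24, \quad
\frac{96\ \mathrm{ns}}{2\ \mathrm{ns}} = 48.
\]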
One solution has been to organize data into vectors, each including a plurality of data elements, where, during processing, similar operations are performed on each element of the vector. Computer designers schedule various portions of the memory to simultaneously fetch various elements of a vector, and these fetched elements are fed into one or more parallel pipelines on successive clock cycles. Within a processor, the vector is held in a vector register having a plurality of vector-register elements; each successive vector-register element holds a successive element of the vector. A “vector-load” operation transfers a vector from memory into a vector register. For example, a vector in memory may be held as a vector image wherein successive elements of the vector are held in successive locations in memory; a vector-load operation then moves the elements which comprise the vector into pipelines which couple memory to the vector registers. Overlapped with these vector-load operations, two other pipelines could be taking data from two other vector registers to feed a vector processor, with the resultant vector fed through a pipeline into a third vector register. Examples of such designs are described in U.S. Pat. No. 4,661,900, issued Apr. 28, 1987 to Chen et al., and U.S. Pat. No. 5,349,667, issued Sep. 20, 1994 to Cray et al., which are hereby incorporated by reference. For example, in a well-tuned system using 2-ns pipeline clocks, the throughput can approach 500 million operations per second for a single vector processor, even with relatively slow memory-access times.
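The following C sketch is purely illustrative (VLEN, vload, and vadd are invented names, and real hardware performs these loops as overlapped pipelines rather than sequential code); it models a vector register being filled from successive memory locations and an element-wise operation between two vector registers:

```c
#include <stdio.h>

#define VLEN 64                 /* hypothetical vector-register length */

typedef struct { double e[VLEN]; } vreg;   /* one vector register */

/* Vector load: successive memory locations -> successive register elements. */
static void vload(vreg *v, const double *mem) {
    for (int i = 0; i < VLEN; i++)
        v->e[i] = mem[i];       /* one element per pipeline beat */
}

/* Element-wise add: two source registers feed a result register. */
static void vadd(vreg *dst, const vreg *a, const vreg *b) {
    for (int i = 0; i < VLEN; i++)
        dst->e[i] = a->e[i] + b->e[i];
}

int main(void) {
    static double mem_a[VLEN], mem_b[VLEN];
    for (int i = 0; i < VLEN; i++) { mem_a[i] = i; mem_b[i] = 2.0 * i; }

    vreg v0, v1, v2;
    vload(&v0, mem_a);          /* in hardware, overlapped with ...      */
    vload(&v1, mem_b);          /* ... the second load and the add below */
    vadd(&v2, &v0, &v1);
    printf("v2[5] = %g\n", v2.e[5]);   /* prints 15 */
    return 0;
}
```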
On the other hand, a scalar processor operating in such a system on somewhat randomly located data must deal with a 48-clock to 70-clock pipelined-memory access time, and must often wait for the results from one operation before determining which data to request next.
In very-high-speed vector processors, such as the Cray Y-MP C90 manufactured by Cray Research Inc., the assignee of the present invention, a computer system contains a number of central processing units (“CPUs”), each of which may have more than one vector processor and more than one scalar processor. The computer system also contains a number of common memories which store the programs and data used by the CPUs. Vector data are often streamed or pipelined into a CPU from the memories, and so a long access time may be compensated for by receiving many elements on successive cycles as the result of a single request. In contrast, scalar data read by one of the CPUs from one of the common memories may take an inordinate amount of time to access.
A cache is a relatively small, fast storage area inserted between a relatively slow bulk memory and a CPU to improve the average access time for loads and/or stores. Caches are filled with data which, it is predicted, will be accessed more frequently than other data. Accesses from the cache are typically much faster than accesses from the common memories. A “cache hit” occurs when requested data are found in the data already in the cache. A “cache miss” occurs when requested data cannot be found in the data already in the cache and must therefore be accessed more slowly from the common memories. The “cache-hit ratio” is the number of requests which result in cache hits divided by the total number of cache hits and cache misses. A system or program with a high cache-hit ratio will usually have better performance than a machine without a cache. On the other hand, a poor cache-hit ratio may result in much worse performance, since much of the memory bandwidth is used up fetching data into the cache which will never be used.
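The performance effect of the hit ratio can be made concrete with the standard average-access-time calculation; the latencies and counts in this C sketch are assumptions chosen to echo the figures used earlier, not values from the patent:

```c
#include <stdio.h>

int main(void) {
    double t_cache = 2.0;    /* ns, assumed cache access time         */
    double t_mem   = 96.0;   /* ns, assumed common-memory access time */
    unsigned long hits = 900, misses = 100;   /* assumed reference mix */

    /* cache-hit ratio = hits / (hits + misses) */
    double ratio = (double)hits / (double)(hits + misses);

    /* average access time improves as the hit ratio rises */
    double t_avg = ratio * t_cache + (1.0 - ratio) * t_mem;

    printf("hit ratio = %.2f, average access = %.1f ns\n", ratio, t_avg);
    return 0;
}
```

With these assumed numbers the average access is 11.4 ns, far below the 96-ns memory time; drop the hit ratio to 0.5 and the average climbs to 49 ns, which is why a poor hit ratio can erase the cache's benefit.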
SUMMARY OF THE INVENTION
A method and apparatus are provided for a common scalar/vector data cache for a scalar/vector computer.
One aspect of the present invention provides a computer system. The computer system includes a common memory which includes a plurality of sections. The computer system also includes a scalar/vector processor coupled to the memory by a plurality of separate address busses and a plurality of separate read-data busses, wherein at least one of the sections of the memory is associated with each address bus and at least one of the sections of the memory is associated with each read-data bus. The processor includes a plurality of scalar registers and a plurality of vector registers, and operates on instructions which provide a reference address to a data word. The processor also includes a scalar/vector cache unit that includes a cache array, and a FIFO unit that tracks (a.) the address in the cache array at which a read-data value will be placed when it is returned from the memory, and (b.) a destination code that specifies the scalar register or vector register into which the read-data value is to be loaded when it is returned from the memory.
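The FIFO unit can be pictured as a queue of outstanding-read records, each pairing a cache-array address with a destination code. The C sketch below is a guess at a minimal shape for such a record, not the patent's actual implementation; all field names and the depth are assumptions:

```c
#include <stdint.h>

/* Destination code: which scalar or vector register (and element)
 * receives the read-data value when the memory returns it. */
typedef struct {
    uint8_t is_vector;      /* 0 = scalar register, 1 = vector register */
    uint8_t reg_num;        /* register number */
    uint8_t element;        /* element index within a vector register */
} dest_code;

/* One outstanding read tracked by the FIFO. */
typedef struct {
    uint32_t  cache_addr;   /* cache-array slot for the returning word */
    dest_code dest;         /* register to load on return */
} fifo_entry;

#define FIFO_DEPTH 64       /* assumed depth */

typedef struct {
    fifo_entry q[FIFO_DEPTH];
    unsigned   head, tail;  /* entries retire in request order */
} read_fifo;
```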
In some embodiments, fetched instructions are also passed through the cache. In some such embodiments, the system allows instruction fetching through the cache to be selectably disabled. In some embodiments, the system allows data fetching (i.e., both scalar fetching and vector fetching) through the cache to be selectably disabled. In some embodiments, the selective enabling/disabling of instruction fetches and data fetches through the cache is separately and independently specified, as sketched below.
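A minimal sketch of how such separate enables might look, assuming two independent control bits (nothing here is taken from the patent):

```c
#include <stdbool.h>

/* Hypothetical cache-control word: instruction fetching and data
 * fetching through the cache are enabled/disabled independently. */
typedef struct {
    bool cache_instructions;   /* pass instruction fetches through the cache */
    bool cache_data;           /* pass scalar and vector data fetches through */
} cache_ctl;

/* A reference bypasses the cache when its class is disabled. */
static bool use_cache(const cache_ctl *c, bool is_instruction) {
    return is_instruction ? c->cache_instructions : c->cache_data;
}
```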
In one embodiment, the cac
