Synchronization of branch cache searches and...

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


Details

Classification codes: C711S125000, C711S137000, C711S140000, C712S233000

Status: active

Patent number: 06175897

ABSTRACT:

FIELD OF THE INVENTION
This invention relates to pipelined computer central processors and their support logic structure. More particularly, this invention relates to a private cache, associated with each processor, that incorporates a specially configured branch cache to increase the average efficiency and speed with which the pipeline handles transfer instructions that may be subject to a transfer-go condition.
BACKGROUND OF THE INVENTION
As faster operation of computers has been sought, numerous hardware/firmware features have been employed to achieve that purpose. One widely incorporated feature directed to increasing the speed of operation is pipelining, in which the various stages of execution of a series of consecutive machine-level instructions are undertaken simultaneously. Thus, in a simple example, during a given time increment, the first stage of a fourth (in order of execution) instruction may be carried out while the second stage of a third instruction, the third stage of a second instruction, and the fourth stage of a first instruction are all performed simultaneously.
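As an informal illustration (not taken from the patent), the overlap can be modeled by observing that in cycle t a four-stage pipeline is running stage s of instruction t - s, so four consecutive instructions are in flight at once. The short C sketch below simply prints that schedule for a few steady-state cycles; the stage names are placeholders.

#include <stdio.h>

/* Hypothetical illustration of four-stage pipeline overlap: in cycle t,
 * stage s (0 = first stage) is working on instruction number t - s. */
int main(void)
{
    const char *stage_name[4] = { "stage1", "stage2", "stage3", "stage4" };

    for (int cycle = 3; cycle <= 6; cycle++) {        /* steady-state cycles */
        printf("cycle %d:", cycle);
        for (int s = 0; s < 4; s++) {
            printf("  I%d:%s", cycle - s, stage_name[s]);
        }
        printf("\n");
    }
    return 0;
}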
Pipelining dramatically increases the apparent speed of operation of a computer system. However, it is well known that the processing of a transfer (sometimes called a branch) instruction, when it is necessary to find a target (i.e., when the conditions calling for a transfer are met), temporarily slows down processing while the target instruction is found in the cache. Even when an instruction cache is provided, the target must be found and processed before it can be sent to the pipeline. The present invention is directed to significantly speeding up the average rate at which transfer operations are serviced.
SUMMARY OF THE INVENTION
The environment of the invention is within a data processing system having a pipelined processor and a cache which includes an instruction cache, instruction buffers for receiving instruction sub-blocks from the instruction cache and providing instructions to the pipelined processor, and a branch cache. The branch cache includes an instruction buffer adjunct for storing an information set for each of the sub-blocks currently resident in the instruction buffers. The information set includes a search address, a predicted transfer hit/miss, a projected location of a target in a sub-block, and a predicted target address, and may include additional information. A branch cache directory stores instruction buffer addresses corresponding to current entries in the instruction buffer adjunct, and a target address RAM stores target addresses developed from prior searches of the branch cache. A delay pipe is used to selectively step an information set read from the instruction buffer adjunct in synchronism with a transfer instruction traversing the pipeline. The delay pipe is a plurality of serially coupled registers including: a) a first register for receiving an information set from the instruction buffer adjunct concurrently with the issuance of a transfer instruction from the instruction buffers to the pipeline during a first pipeline phase; b) a second register for receiving the information set from the first register during a second pipeline phase which is later than the first pipeline phase; and c) a third register for receiving the information set from the second register during a third pipeline phase which is later than the second pipeline phase.
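The following C sketch models the information set and the three-register delay pipe described above, purely as an illustration: the struct fields, register names, and the step function are assumptions chosen for readability and are not taken from the patent.

#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical model of one branch cache information set (field names assumed). */
typedef struct {
    uint32_t search_address;    /* address used to search the branch cache        */
    bool     predicted_hit;     /* predicted transfer hit/miss                    */
    uint8_t  target_slot;       /* projected location of the target in sub-block  */
    uint32_t predicted_target;  /* predicted target address                       */
    bool     valid;
} info_set_t;

/* Hypothetical three-register delay pipe, stepped once per pipeline phase. */
typedef struct {
    info_set_t stage1;          /* loaded when the transfer instruction issues    */
    info_set_t stage2;          /* second pipeline phase                          */
    info_set_t stage3;          /* third pipeline phase: the comparison point     */
} delay_pipe_t;

/* Advance the delay pipe one phase, optionally loading a new information set
 * read from the instruction buffer adjunct. */
void delay_pipe_step(delay_pipe_t *dp, const info_set_t *from_adjunct)
{
    dp->stage3 = dp->stage2;
    dp->stage2 = dp->stage1;
    if (from_adjunct != NULL) {
        dp->stage1 = *from_adjunct;
    } else {
        dp->stage1.valid = false;
    }
}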
A comparison, during the third pipeline phase, determines whether the information set identifies, as currently resident in the instruction buffers, a target address that matches the target address in the transfer instruction traversing the pipeline. If the information set traversing the delay pipe identifies a target address in the instruction buffers that matches the target address in the transfer instruction traversing the pipeline, and there is an indication of TRA-GO from the pipeline, the instruction identified by the target address is sent to the pipeline from the instruction buffers rather than from the instruction cache, which is the faster operation. If there is no such finding, the instruction is sent to the pipeline from the instruction cache. Preferably, the sub-blocks stored in the instruction buffers are four instruction words in length.
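Continuing the same hypothetical structures, the third-phase decision can be sketched as a selection function: only when the delayed information set is valid, predicts a hit, and names a target matching the transfer instruction, and the pipeline signals TRA-GO, is the target supplied from the instruction buffers; otherwise it is fetched from the instruction cache. Names are again illustrative only.

#include <stdint.h>
#include <stdbool.h>

typedef enum { FROM_INSTRUCTION_BUFFERS, FROM_INSTRUCTION_CACHE } fetch_source_t;

/* Hypothetical third-phase decision. The first three arguments are the fields
 * held in the third delay-pipe register; tra_go is the pipeline's transfer-go
 * indication. */
fetch_source_t select_target_source(bool entry_valid,
                                    bool predicted_hit,
                                    uint32_t predicted_target,
                                    uint32_t transfer_target,
                                    bool tra_go)
{
    if (tra_go && entry_valid && predicted_hit &&
        predicted_target == transfer_target) {
        return FROM_INSTRUCTION_BUFFERS;  /* target already buffered: fast path */
    }
    return FROM_INSTRUCTION_CACHE;        /* fall back to the instruction cache */
}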


REFERENCES:
patent: 4707784 (1987-11-01), Ryan et al.
patent: 4777594 (1988-10-01), Jones et al.
patent: 5506976 (1996-04-01), Jaggar
patent: 5592634 (1997-01-01), Circello et al.
patent: 5664135 (1997-09-01), Schlansker et al.
patent: 5778245 (1998-07-01), Papworth et al.
patent: 5930492 (1999-07-01), Lynch
patent: 6065091 (2000-05-01), Green
