Reexamination Certificate
2000-02-15
2003-02-25
Pan, Daniel H. (Department: 2183)
Electrical computers and digital processing systems: processing
Processing architecture
Array processor
C712S014000, C712S015000, C712S018000, C712S228000, C709S241000, C713S401000
Reexamination Certificate
active
06526498
ABSTRACT:
FIELD OF THE INVENTION
This invention relates to array based computing devices. More particularly, this invention relates to a chip architecture that uses retiming registers in a network under the control of the configuration context of the computing devices.
BACKGROUND OF THE INVENTION
Advances in semiconductor technology have greatly increased the processing power of a single-chip general purpose computing device. The relatively slow increase in inter-chip communication bandwidth requires modern high performance devices to use as much of the potential on-chip processing power as possible. This results in large, dense integrated circuit devices and a large design space of processing architectures. This design space is generally viewed in terms of granularity: within the same silicon area, designers have the option of building a few very large processing units or many smaller ones. Traditional architectures are either very coarse grain, like microprocessors, or very fine grain, like field programmable gate arrays (FPGAs).
Microprocessors, as coarse grain architecture devices, incorporate a few large processing units that operate on wide data words, each unit being hardwired to perform a defined set of instructions on these data words. Generally, each unit is optimized for a different set of instructions, such as integer and floating point, and the units are generally hardwired to operate in parallel. The hardwired nature of these units allows for very rapid instruction execution. In fact, a great deal of area on modern microprocessor chips is dedicated to cache memories in order to support a very high rate of instruction issue. Thus, the devices efficiently handle very dynamic instruction streams.
Most of the silicon area of modern microprocessors is dedicated to storing data and instructions and to control circuitry. Therefore, most of the silicon area is dedicated to allowing computational tasks to heavily reuse the small active portion of the silicon, the arithmetic logic units (ALUs). Consequently, very little of the capacity inherent in a processor gets applied to the problem; most of the capacity goes into supporting a high diversity of operations.
Field programmable gate arrays, as very fine grain devices, incorporate a large number of very small processing elements. These elements are arranged in a configurable interconnected network. The configuration data used to define the functionality of the processing units and the network can be thought of as a very large semantically powerful instruction word allowing nearly any operation to be described and mapped to hardware.
Conventional FPGAs allow finer granularity control over processor operations, and dedicate a minimal area to instruction distribution. Consequently, they can deliver more computations per unit of silicon than processors across a wide range of operations. However, the lack of resources for instruction distribution in a network of prior art conventional FPGAs makes them efficient only when the functional diversity is low, that is, when the same operation is required repeatedly and that entire operation can be fit spatially onto the FPGAs in the system.
Furthermore, in prior art FPGA networks, retiming of data is often required in order to delay data. This delay is required because data that is produced by one processing element during one clock cycle may not be required by another processing element until several clock cycles after the clock cycle in which it was made available. One prior art technique for dealing with this problem is to configure some processing elements to function as memory devices to store this data. Another prior art technique configures processing elements as delay registers to be used in the FPGA network. The problem with both of these prior art techniques is that valuable silicon is wasted by using processing elements as memory and delay registers.
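To make the cost of this prior art approach concrete, the following behavioral sketch (Python, purely illustrative; the class name, register count, and sample values are assumptions and are not taken from the patent) models a fixed delay line: a value produced by one processing element in one clock cycle does not reach the consuming element until several cycles later, and the registers that implement the intervening shift must be carved out of general-purpose processing elements.

    from collections import deque

    class DelayLine:
        # Toy model of data retiming: hold each value for a fixed number of
        # clock cycles. In prior art FPGA networks this role is played by
        # processing elements configured as memories or delay registers,
        # consuming silicon that could otherwise perform computation.
        def __init__(self, cycles):
            # Pre-fill with None so the output lags the input by `cycles` clocks.
            self.regs = deque([None] * cycles, maxlen=cycles)

        def clock(self, value_in):
            value_out = self.regs[0]      # oldest value leaves the chain
            self.regs.append(value_in)    # new value enters; chain shifts by one
            return value_out

    # A value produced at cycle 0 is not seen by the consumer until cycle 3.
    delay = DelayLine(3)
    for cycle, produced in enumerate([10, 11, 12, 13, 14, 15]):
        print(cycle, produced, delay.clock(produced))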
Dynamically programmable gate arrays (DPGAs) dedicate a modest amount of on-chip area to storing additional instructions, allowing them to support higher operational diversity than traditional FPGAs. However, the silicon area necessary to support this diversity must be dedicated at fabrication time and consumes area whether or not the additional diversity is required. The amount of diversity supported, that is, the number of instructions supported, is also fixed at fabrication time. Furthermore, when regular datapath operations are required, all instruction stores must be programmed with the same data using a global signal broadcast to all DPGAs.
The limited control over the configuration of the individual FPGAs and DPGAs in prior art networks severely limits the functional diversity of those networks. For example, in one prior art FPGA network, all FPGAs must be configured at the same time to contain the same configurations. Consequently, rather than separating the resources for instruction storage and distribution from the resources for data storage and computation, and dedicating silicon to each of these resources at fabrication time, there is a need for an architecture that unifies these resources. Once unified, traditional instruction and control resources can be decomposed along with computing resources and can be deployed in an application specific manner. Chip capacity can be selectively deployed to dynamically support active computation or to control reuse of computational resources, depending on the needs of the application and the available hardware resources.
SUMMARY OF THE INVENTION
A method and an apparatus for retiming in a network of multiple context processing elements are provided. According to one aspect of the invention, a programmable delay element is configured to programmably delay signals between a number of multiple context processing elements of an array without requiring a multiple context processing element to implement the delay.
According to another aspect of the invention, the output of a first multiple context processing element is coupled to a first multiplexer and to the input of a number of serially connected delay registers. The output of each of the serially connected delay registers is coupled to the input of a second multiplexer. The output of the second multiplexer is coupled to the input of the first multiplexer, and the output of the first multiplexer is coupled to a second multiple context processing element. The first and second multiplexers are provided with at least one set of data representative of at least one configuration memory context of a multiple context processing element. The first and second multiplexers are controlled to select one of a number of delay durations in response to the received set of data. A delay is programmed in the network structure in response to a data type being transferred between particular multiple context processing elements.
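As a behavioral illustration of this structure, the sketch below (Python; the register count, select encoding, and identifier names are assumptions for illustration and are not taken from the claims) models the two multiplexers and the serially connected delay registers: the second multiplexer selects a tap from the register chain, the first multiplexer chooses between the undelayed output of the first processing element and that tap, and the selection is supplied by configuration memory context data.

    class ProgrammableDelayElement:
        # Behavioral sketch of the delay structure described above.
        # select == 0 routes the first processing element's output directly
        # through the first multiplexer (no delay); select == k routes the
        # output of the k-th serial delay register through the second
        # multiplexer, giving a k-cycle delay.
        def __init__(self, num_registers=3):
            self.regs = [None] * num_registers  # serially connected delay registers
            self.select = 0                     # set from a configuration memory context

        def configure(self, select):
            # In the architecture, this value is derived from the configuration
            # context of a multiple context processing element.
            self.select = select

        def clock(self, pe_output):
            # Second multiplexer: pick the tap after `select` registers.
            delayed = self.regs[self.select - 1] if self.select > 0 else None
            # First multiplexer: direct path or delayed path to the next element.
            to_next_pe = pe_output if self.select == 0 else delayed
            # Shift the serial register chain by one position per clock.
            self.regs = [pe_output] + self.regs[:-1]
            return to_next_pe

    # Example: a two-cycle delay between two processing elements.
    pde = ProgrammableDelayElement()
    pde.configure(2)
    for cycle, value in enumerate([1, 2, 3, 4, 5]):
        print(cycle, value, pde.clock(value))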
These and other features, aspects, and advantages of the present invention will be apparent from the accompanying drawings and from the detailed description and appended claims which follow.
REFERENCES:
patent: 4597041 (1986-06-01), Guyer et al.
patent: 4748585 (1988-05-01), Chiarulli et al.
patent: 4754412 (1988-06-01), Deering
patent: 4858113 (1989-08-01), Saccardi
patent: 4870302 (1989-09-01), Freeman
patent: 4967340 (1990-10-01), Dawes
patent: 5020059 (1991-05-01), Gorin et al.
patent: 5081575 (1992-01-01), Hiller et al.
patent: 5233539 (1993-08-01), Agrawal et al.
patent: 5301340 (1994-04-01), Cook
patent: 5317209 (1994-05-01), Garverick et al.
patent: 5317755 (1994-05-01), Hartley et al.
patent: 5336950 (1994-08-01), Popli et al.
patent: 5426378 (1995-06-01), Ong
patent: 5457408 (1995-10-01), Leung
patent: 5469003 (1995-11-01), Kean
patent: 5581199 (1996-12-01), Pierce et al.
patent: 5684980 (1997-11-01), Casselman
patent: 5694070 (1997-12-01), Black
patent: 5727021 (1998-03-01), Truebenbach
patent: 5742180 (1998-04-01), DeHon et al.
patent: 5754818 (1998-05-01), Mohamed
patent: 5765209 (1998-06-01), Yetter
patent: 5778439 (1998-07-01)
Eslick Ian
French Robert
Mirsky Ethan
Broadcom Corporation
Pan Daniel H.