Neural chip architecture and neural networks incorporated...

Data processing: artificial intelligence – Neural network – Structure


Details

Type: Reexamination Certificate
Status: active
Patent number: 06523018
U.S. classes: C706S015000, C706S026000, C706S034000, C711S005000

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates to artificial neural network systems and more particularly to a novel neural semiconductor chip architecture having a common memory for all or part of the neurons integrated in the chip. This architecture is well adapted to the improved neuron described in the co-pending application cited below which has been designed to operate either as a single neuron or as two independent neurons. Artificial neural networks (ANNs) built with such neural chips offer maximum flexibility.
CO-PENDING PATENT APPLICATION
Improved neuron structure and artificial neural networks incorporating the same, Ser. No. 09/470,458, filed on the same date herewith.
BACKGROUND OF THE INVENTION
Artificial neural networks (ANNs) are increasingly used in applications where no mathematical algorithm can describe the problem to be solved, and they are very successful at the classification or recognition of objects. ANNs give very good results because they learn by example and are able to generalize in order to respond to an input vector that was never presented before. So far, most ANNs have been implemented in software and only a few in hardware; however, the present trend is to implement ANNs in hardware, typically in semiconductor chips. In this case, hardware ANNs are generally based upon the Region Of Influence (ROI) algorithm. The ROI algorithm gives good results if the input vectors presented to the ANN can be separated into classes of objects that are well separated from each other. If an input vector is recognized by neurons belonging to two different classes (or categories), the ANN responds with an uncertainty. This uncertainty may be reduced to some extent by implementing the K Nearest Neighbor (KNN) algorithm.
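The ROI recognition step and its KNN fallback described above can be sketched as follows. This is an illustrative sketch only, not the circuit of the patent: the tuple layout, the `k` parameter, and the use of the Manhattan (L1) distance are assumptions, although the L1 norm is typical of ZISC-style hardware.

```python
# Hedged sketch (illustrative, not the patented circuit): Region Of
# Influence (ROI) classification with a K-Nearest-Neighbor (KNN) fallback.
# Each "neuron" stores a prototype vector, an influence radius, and a category.

def roi_classify(input_vec, neurons, k=3):
    """Return (category, certain) for input_vec.

    neurons: list of (prototype, radius, category) tuples.
    """
    # Manhattan (L1) distance, as commonly used in ZISC-style chips.
    def dist(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))

    # ROI step: collect the categories of every neuron that "fires",
    # i.e. whose region of influence contains the input vector.
    fired = {cat for proto, radius, cat in neurons
             if dist(input_vec, proto) <= radius}
    if len(fired) == 1:
        return fired.pop(), True          # unambiguous recognition
    # Zero or several categories fired: fall back to KNN voting to
    # reduce the uncertainty, as described above.
    ranked = sorted(neurons, key=lambda n: dist(input_vec, n[0]))
    votes = [cat for _, _, cat in ranked[:k]]
    return max(set(votes), key=votes.count), False
```

When the influence regions of two classes overlap, the ROI step alone returns several categories; the KNN vote then picks the majority category among the `k` closest prototypes.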
Modern neuron and artificial neural network architectures implemented in semiconductor chips are described in the following U.S. patents:
U.S. Pat. No. 5,621,863 “Neuron Circuit”
U.S. Pat. No. 5,701,397 “Circuit for Pre-charging a Free Neuron Circuit”
U.S. Pat. No. 5,710,869 “Daisy Chain Circuit for Serial Connection of Neuron Circuits”
U.S. Pat. No. 5,717,832 “Neural Semiconductor Chip and Neural Networks Incorporated Therein”
U.S. Pat. No. 5,740,326 “Circuit for Searching/Sorting Data in Neural Networks”
which are incorporated herein by reference. These patents are jointly owned by IBM Corp. and Guy Paillet. The chips are manufactured and commercialized by IBM France under the ZISC036 label. ZISC is a registered trademark of IBM Corp. The following description is made in light of the U.S. patents recited above; the same vocabulary and circuit names are kept whenever possible.
In U.S. Pat. No. 5,717,832 there is disclosed the architecture of a neural semiconductor chip (10) according to the ZISC technology. The ZISC chip includes a plurality of neuron circuits (11-1, . . . ) fed by different buses transporting data, such as the input vector data, set-up parameters, . . . , and control signals. Each neuron circuit (11) includes an individual R/W memory (250) and means for generating local result signals (F, . . . ), e.g. of the “fire” type, and a local output signal (NOUT), e.g. of the distance or category type. An OR circuit (12) performs an OR function for all corresponding local result and output signals to generate respective first global result (R.) and output (OUT.) signals, which are merged in an on-chip common communication bus (COM.-BUS) shared by all neuron circuits of the chip. An additional OR function can then be performed between all corresponding first global result and output signals to generate second global result and output signals, preferably by dotting on an off-chip common communication bus (COM..-BUS) in the driver block (19). This latter bus is shared by all the neural chips connected thereon to build an artificial neural network of the desired size. In the chip, a multiplexer (21) may select either the first or second global output signal to be re-injected into all neuron circuits of the neural network as a feedback signal via a feedback bus (OR-BUS), depending on whether the chip operates in a single-chip or multi-chip environment. The feedback signal results from a collective processing of all the local signals.
Unfortunately, the ZISC chip architecture is not optimized in terms of circuit density, because many functions are decentralized locally within each neuron and are thus duplicated every time a neuron is added to the chip. This is particularly true for the local RAM implemented in each neuron circuit. During the learning and recognition phases in the ZISC chip, the component addresses are sent to the local RAM memory of each neuron in sequence, so the same set of addresses is processed by the RAM internal address decoder in each neuron circuit. The duplication of a decoder function in each neuron circuit produces an obvious waste of silicon area, thereby significantly limiting the number of neuron circuits that can be integrated in the ZISC chip.
Moreover, in the ZISC chip, there is a discrepancy between the clock cycles of the input buses feeding the chip and those feeding the neuron circuits, so that the neuron processing capabilities are not fully exploited. For instance, only one distance is calculated during an external clock cycle, although it would have been possible to compute two distances, wasting time during this operation.
In the ZISC chip architecture, there are four input data buses to feed each neuron, but only a few data need to be applied at the same time to a given neuron circuit. The large number of idle buses at the chip level entails a large number of wires and drivers for electrical signal regeneration, which in turn consume unnecessary silicon area in the ZISC chip.
Finally, depending upon the application, the number of input vector components required is not necessarily the same. Some applications may need a high number of components while others do not. If a chip is built with a high component count for one specific application, then in an application requiring only a small number of components, a significant part of the memory space will not be used. In addition, the precision needed for the stored components (weights) may differ. For a given prototype, some components may need full precision (a maximum number of bits) while other components may need only low precision (a low number of bits). With the ZISC neuron architecture, if low precision is needed for only a few components, all unused bits are wasted.
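The variable-precision storage motivated above can be illustrated with a bit mask that discards the low-order bits of a stored weight. This is a sketch under assumptions: the 8-bit full width and the function names are illustrative, not taken from the patent.

```python
# Hedged sketch (illustrative assumption, not the patented masking circuit):
# keep only the most significant bits of a stored component so that
# low-precision components do not waste full-precision storage.

FULL_BITS = 8   # assumed full precision of a stored component (weight)

def masked_component(weight, precision_bits):
    """Keep only the `precision_bits` most significant of FULL_BITS bits."""
    mask = ((1 << precision_bits) - 1) << (FULL_BITS - precision_bits)
    return weight & mask
```

With such a mask, a component needing only 4 significant bits leaves its 4 low-order bit positions free instead of wasting them, which is the gain the masking function of the invention aims at.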
SUMMARY OF THE INVENTION
It is therefore a primary object of the present invention to provide a novel neural chip architecture that is adapted to use an on-chip common RAM memory to store prototype vector components (weights) for several neurons.
It is another object of the present invention to provide a novel neural chip architecture wherein the RAM memory is cut into slices, one for each neuron present in the chip.
It is another object of the present invention to provide a novel neural chip architecture wherein each RAM memory slice can be written independently of the others.
It is another object of the present invention to provide a novel neural chip architecture that is well adapted to an improved neuron capable of working either as a single neuron (single mode) or as two independent neurons referred to as the even and odd neurons (dual mode).
It is another object of the present invention to provide a novel neural chip architecture wherein in each RAM memory slice, the lower half addresses are assigned to the even neuron and the upper half addresses are assigned to the odd neuron.
It is another object of the present invention to provide a novel neural chip architecture provided with a masking function that allows variable precision in the storage of the prototype components, thereby increasing the number thereof.
It is another object of the present invention to provide an artificial neural network incorporated in such novel neural chip architecture for increased flexibility.
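The slice-addressing scheme stated in the objects above can be sketched as a simple address mapping: the common RAM is cut into one slice per physical neuron, and in dual mode the lower half of each slice's addresses belongs to the even neuron while the upper half belongs to the odd neuron. The slice depth and function names here are illustrative assumptions, not values from the patent.

```python
# Hedged sketch (illustrative assumption, not the patented decoder) of
# addressing a common RAM cut into per-neuron slices, with even/odd
# half-slice assignment in dual mode.

SLICE_DEPTH = 64   # addresses per RAM slice (assumed value)

def component_address(neuron_slice, component, dual_mode, odd=False):
    """Map (slice index, component index) to an address in the common RAM."""
    base = neuron_slice * SLICE_DEPTH
    if not dual_mode:
        return base + component            # single neuron owns the whole slice
    half = SLICE_DEPTH // 2
    offset = half if odd else 0            # odd neuron uses the upper half
    return base + offset + component
```

Because the decoding is done once for the common RAM rather than once per neuron, the per-neuron address decoders of the ZISC architecture, and the silicon area they waste, are no longer needed.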
