Circuits and method for shaping the influence field of...

Data processing: artificial intelligence – Neural network – Learning task

Reexamination Certificate

Details

C706S015000, C706S018000

active

06347309

ABSTRACT:

FIELD OF INVENTION
The present invention relates to neural network systems and more particularly to circuits and a method for shaping the influence fields of neurons. These circuits, placed in front of any conventional neural network based upon a mapping of the input space, significantly increase the number of permitted influence field shapes, thereby giving considerable flexibility to the design of neural networks. The present invention also encompasses the neural networks that result from that combination, which are well adapted for classification and identification purposes. In particular, the neural networks of the present invention find extensive applications in the field of image recognition/processing.
BACKGROUND OF THE INVENTION
Artificial neural networks mimic biological nervous systems to solve problems which are difficult to model. Neural networks are able to learn from examples and, in that sense, they are said to be “adaptive”. Depending on the construction of the neural network, the learning phase can be supervised or unsupervised. Another important property of neural networks is their ability to tolerate the imprecision and uncertainty which naturally exist in real-world problems, thereby achieving tractability and robustness. In essence, these properties result from their massively parallel arrangement in which input data are distributed. In a typical representation, the hardware implementation of neural networks generally consists of elementary processing units, called neurons, that are connected in parallel and are driven via weighted links, called synapses. There are a great number of application fields for neural networks. They are extensively used in process control, image recognition/processing, time series prediction, optimization, and the like.
Since the 1940s, many different types of neural networks have been developed and described in the literature. The key distinguishing factors are the way they learn, their generalization capability, their response time and their robustness, i.e. their tolerance to noise and faults. Among the different types of conventional neural networks, those based upon a mapping of the input space appear to be the most promising to date. This type of network is particularly interesting because of its simplicity of use and its short response time. Several hardware implementations of algorithms exploiting this approach are available in the industry.
Neural networks that are based upon a mapping of the input space require some kind of classification. According to the so-called “Region Of Influence” (ROI) technique, one or several categories are associated with each area of this input space to allow classification of new inputs. Another conventional classification technique is the so-called K-Nearest Neighbors (KNN) technique. These classification techniques, as well as others, are widely described in the literature.
According to the ROI technique, which is the classification technique most commonly used to date, each area is determined in an n-dimensional space by a center and an influence field, which is nothing other than a threshold, thereby defining a hypersphere. For instance, in a three-dimensional space, the influence field can be represented by a sphere. The term “influence field” is also referred to as the “decision surface”, the “region of influence” or the “identification area” in the technical literature; all these terms may be considered equivalent, at least in some respects. RCE (Restricted Coulomb Energy) neural networks belong to this class of neural networks based upon a mapping of the input space.
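The ROI activation rule described above can be sketched as follows. This is a minimal illustrative sketch, not the patent's circuitry: the function name and data layout are assumptions, and the Euclidean norm is used for clarity even though, as discussed further below, hardware implementations typically favor simpler norms.

```python
import math

def roi_classify(input_vec, prototypes):
    """Return the set of categories whose influence field encloses input_vec.

    `prototypes` is a list of (center, threshold, category) tuples. A neuron
    "fires" when the distance from the input to its center falls within its
    threshold, i.e. when the input lies inside the hypersphere that the
    threshold defines around the center.
    """
    categories = set()
    for center, threshold, category in prototypes:
        dist = math.sqrt(sum((a - p) ** 2 for a, p in zip(input_vec, center)))
        if dist <= threshold:
            categories.add(category)
    return categories
```

Note that zero, one, or several categories may be returned, since influence fields of different neurons can overlap.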
In the KNN technique, each area is determined in an n-dimensional space by a prototype which defines a Voronoï domain. All the points enclosed in this domain have this prototype as their closest neighbor.
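The KNN rule can be sketched in the same spirit; again the function name and data layout are illustrative assumptions. With k = 1 this implements the Voronoï-domain rule: the input takes the category of its closest prototype.

```python
def knn_classify(input_vec, prototypes, k=1):
    """Return the categories of the k prototypes nearest to input_vec.

    `prototypes` is a list of (vector, category) pairs. Squared Euclidean
    distance is used for ranking (the square root does not change the order).
    """
    ranked = sorted(
        prototypes,
        key=lambda pc: sum((a - p) ** 2 for a, p in zip(input_vec, pc[0])),
    )
    return [category for _, category in ranked[:k]]
```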
Let us consider, for the sake of simplicity, ROI neural networks. Basically, the ROI neural network is comprised of three interconnected layers: an input layer, an internal (or hidden) layer and an output layer. The internal layer consists of a plurality of nodes or neurons. Each node is a processing unit which computes the distance between the input data applied to the input layer and a weight stored therein. This distance is then compared with a threshold. In this particular implementation, the input layer simply consists of a plurality of input terminals. The role of the output layer is to provide the user with the category (or categories) corresponding to the class of the input data.
FIG. 1 schematically summarizes the above description of a conventional ROI neural network referenced 10. Now turning to FIG. 1, the input data is applied to the input layer 11 of ROI network 10. The input data is represented by a vector A whose components are labelled A1 to An. Each component is applied to an input terminal, then to each of the m neurons of the internal layer 12. Each neuron memorizes a prototype, labelled P1 to Pm, representing an n-component vector which corresponds to the above-mentioned weight. The components of prototype vector P1 are labelled P1,1, . . . , P1,n. The output layer 13 is designed to provide the categories. In FIG. 1, only three categories A, B and C have been represented. In the implementation of FIG. 1, the output layer 13 consists of three output terminals, that can be materialized by LEDs. The components of the prototype vector are dynamically established during the learning phase.
Therefore, the ROI algorithm consists in mapping out an n-dimensional space by prototypes to which a category and a threshold are assigned. The role of this threshold is to activate the associated neuron or not. On the other hand, the aim of the learning phase is to map out the input space so that parts of this space belong to one or several categories. This is performed by associating a category with each prototype and computing the influence field for each neuron in the ROI approach. The influence field is a way of defining a subspace, demarcated by the threshold, in which the neuron is active. When a new prototype is memorized, the thresholds of neighboring neurons can be adjusted to reduce any category conflict between the influence fields of these neurons.
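The threshold-adjustment idea at the end of the learning phase can be sketched as follows. This is only one plausible reduction rule, not the patent's actual learning circuit: integer components are assumed (as is common in hardware implementations, so a conflicting threshold can be shrunk to one less than the distance), and all names are illustrative. The Manhattan (L1) distance is used for simplicity.

```python
def learn_prototype(prototypes, new_vec, new_cat, initial_threshold):
    """Memorize a new prototype and shrink conflicting neighbors' fields.

    `prototypes` is a list of (center, threshold, category) tuples with
    integer components. When an existing neuron of a different category
    would enclose the new prototype, its threshold is reduced to just below
    the distance to the new prototype, removing the category conflict; the
    new neuron's threshold is clipped the same way.
    """
    threshold = initial_threshold
    for i, (center, thr, cat) in enumerate(prototypes):
        dist = sum(abs(a - p) for a, p in zip(new_vec, center))
        if cat != new_cat:
            if thr >= dist:  # neighbor's field covers the new prototype
                prototypes[i] = (center, dist - 1, cat)
            threshold = min(threshold, dist - 1)  # keep the new field clear too
    prototypes.append((new_vec, threshold, new_cat))
    return prototypes
```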
The classification of input data is the essential task of neural networks based upon a mapping of the input space. It first consists of computing distances between the input data and prototypes stored in the different neurons of the neural network. The distances are compared with the associated thresholds, and zero, one or several categories are assigned to the input data in the ROI technique. In the KNN approach mentioned above, the output layer is used to determine the k shortest distances.
To compute a distance between the input vector A (components A1, . . . , Ai, . . . , An) and the memorized prototype vector Pj (components Pj,1, . . . , Pj,i, . . . , Pj,n) in the n-dimensional space, several kinds of norms can be used. An easy way is to determine the Euclidean distance, also referred to as the L2 norm, i.e. Distj² = Σi (Ai − Pj,i)², that will produce a hyperspherical influence field. However, this approach is not currently used because it is difficult to implement efficiently with dedicated circuits (squaring circuits consume too much room). The most extensively used norms to date are the so-called “L1” (or “Manhattan”) norm and the “Lsup” norm. According to the L1 norm, the distance is given by Distj = Σi |Ai − Pj,i|, while according to the Lsup norm, the distance is given by Distj = maxi |Ai − Pj,i|. In a two-dimensional space, the iso-distances are represented by lozenges with the L1 norm and by squares with the Lsup norm. The following explanation will give more details.
The respective shapes of the influence field regions that are obtained when the distance between an input vector A (components A1 and A2) and a prototype vector P1 (components P1,1 and P1,2) is computed with norms L1 and Lsup in a two-dimensional space are shown in FIGS. 2A and 2B.
