Fuzzy neural networks

Data processing: artificial intelligence – Fuzzy logic hardware – Fuzzy neural network

Reexamination Certificate


Details

Classification: C706S031000, C706S020000, C706S016000, C382S156000, C382S157000, C382S159000

Type: Reexamination Certificate

Status: active

Patent number: 06192351

ABSTRACT:

BACKGROUND OF THE INVENTION
This invention relates to neural networks, particularly with regard to pattern recognition.
Multilayer artificial neural networks are commonly used for supervised training problems where input patterns are required to be placed into user defined classes. Such networks consist of sets of processing elements known as neurons or nodes that are arranged into two or more layers. One layer is always an input layer, comprising neurons whose outputs are defined by the input pattern presented, and another layer is always an output layer. Usually there is at least one “hidden” layer of neurons sandwiched between the input and output layers, and the network is a “feedforward” one where information flows in one direction only. Normally inputs to neurons in each layer originate exclusively from the outputs of neurons in the previous layer.
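The layered, feedforward flow described above can be sketched as follows (a minimal illustration; the function name, layer sizes and weight values are assumptions, not taken from the patent):

```python
import math

def forward(pattern, layers):
    """Propagate an input pattern through a feedforward network.
    `layers` is a list of weight matrices, one per non-input layer;
    each neuron's final weight is its bias. Information flows in one
    direction only: each layer's inputs are exactly the previous
    layer's outputs."""
    activations = list(pattern)
    for weights in layers:
        activations = [
            1.0 / (1.0 + math.exp(
                -(sum(x * w for x, w in zip(activations, neuron[:-1]))
                  + neuron[-1])))
            for neuron in weights
        ]
    return activations

# Illustrative network: 2 inputs, one hidden layer of 2 neurons, 1 output
net = [
    [[0.5, -0.5, 0.0], [1.0, 1.0, -1.0]],  # hidden layer
    [[1.0, 1.0, 0.0]],                     # output layer
]
out = forward([1.0, 0.0], net)
```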
The output of a given neuron in the network is a function of the inputs into the neuron. More specifically, a neuron has n inputs, labelled 0 to n−1, together with an assumed input, called the bias, which is always equal to 1.0. The neuron is characterised by n+1 weights which multiply the inputs, and an activation function that is applied to the sum of the weighted inputs in order to produce the output of the neuron. The sum of the weighted inputs including the bias is known as the net input; thus the output O of the neuron from a set of n inputs x_i (i = 0, …, n−1) can be derived from equation 1:

O = f(\mathrm{net}) = f\!\left( \sum_{i=0}^{n-1} x_i w_i + w_n \right) \qquad (1)

where net is the net input, f is the activation function and w_n is the bias weighting.
The operational characteristics of the neuron are primarily controlled by the weights. The activation function is typically a non-linear function, often some sort of threshold function, that, when applied to the net input of a neuron, determines the output of that neuron. Sigmoid functions are often employed.
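Equation 1 with a sigmoid activation can be sketched as follows (the function names are illustrative, not from the patent):

```python
import math

def sigmoid(net):
    """A commonly employed non-linear activation function."""
    return 1.0 / (1.0 + math.exp(-net))

def neuron_output(inputs, weights):
    """Output of a single neuron per equation 1. The final weight is
    the bias weighting w_n, which multiplies an assumed input of 1.0."""
    assert len(weights) == len(inputs) + 1
    net = sum(x * w for x, w in zip(inputs, weights[:-1])) + weights[-1]
    return sigmoid(net)
```

With zero net input the sigmoid returns 0.5; larger positive net inputs drive the output towards 1.0 and negative ones towards 0.0, giving the threshold-like behaviour described above.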
Typically the number of output neurons provided is equal to the number of classes of input patterns to be differentiated. Usually, during training in “supervised” mode, a set of defined input training patterns for each class is presented to the input layer of the neural network, and an output neuron is set to be “ON” for that class while the other outputs are forced to be “OFF”. The initial weights of the network are set randomly, and the mean squared error for a single presentation of input data is found by squaring the difference between the attained activation and the target activation for each neuron and averaging across all neurons. For each iteration, or epoch, an error is calculated by averaging the errors of the training presentations within that epoch. The mean squared error in the output activations is calculated and propagated back into the network, so that the error is reduced for each class by iteratively adjusting the weight multipliers for each neuron. Since the partial derivatives

\left( \frac{\partial\,\mathrm{Error}}{\partial w_{ij}} \right)_{w_{kl}}

are known, it is a relatively straightforward exercise to determine in which directions the weights should move in order to minimise the error. Such a procedure is known as error backpropagation. Differential competitive learning algorithms used in unsupervised learning neural networks are described in B. Kosko, “Unsupervised learning in noise”, IEEE Transactions on Neural Networks, Vol. 1 (1990), 44.
The ability to place an input pattern into user defined classes is a frequently exploited attribute of neural networks. One particular application is in processing signals from a multi-element array of gas sensors that display broad and overlapping sensitivity to different classes of chemicals, and in using the relative responses between sensor elements (the input pattern in this context) as a means of differentiating different classes of odour. In the development of a suitable neural network architecture for gas sensing applications, a number of problems have been encountered. One problem is the need to classify odours into global classes, e.g. floral, fishy, fruity, musky, etc, and then to subdivide each of these global classes into local classes, e.g., jasmine, rose, etc as a local class of the floral global class. Another problem relates to the accuracy of classification into classes. Once a network has been trained, the system can recognise incoming patterns and switch different outputs depending on how closely an incoming pattern resembles a pattern with which the network has been trained. However, a question arises regarding the response of the system if an incoming pattern shows at best only a faint resemblance to the pattern classes it has been trained to recognise. Generally, the system will fire the output node or neuron to which there is best match; however, such a response may not be an optimal one. It may be better in some cases for the system to register that an unknown pattern class has been presented to the network.
The present invention addresses the aforementioned problems which, it is noted, apply generally to pattern recognition, and not just to odour classification per se.
SUMMARY OF THE INVENTION
According to the invention there is provided a pattern identifying neural network comprising at least an input and an output layer, the output layer having a plurality of principal nodes, each principal node trained to recognise a different class of pattern, and at least one fuzzy node trained to recognise all classes of pattern recognised by the principal nodes, but with thresholds set at levels higher than the corresponding threshold levels in the principal nodes. The neural network may further comprise at least one hidden layer of nodes, and may employ a feedforward architecture. The number of nodes in the hidden layer or layers may be equal to the number of nodes in the input layer plus a biasing node. Other architectures, such as a Parzen network, or a radial basis function network, may also be employed.
The error backpropagation algorithm may be used to train the network.
The neural network may employ a fuzzy pattern classification system. This system may involve, in the event that the output from the fuzzy node is the largest nodal output in the output layer but does not exceed the output of at least one principal node by a predefined value, selecting the principal node whose output is closest to that of the fuzzy node as representing the most likely class of pattern. Further, a probability distribution representing the likelihood of an input pattern falling into any of the classes of pattern represented by the principal nodes may be calculated.
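A sketch of such a decision rule (the function name and the margin value are illustrative assumptions, not from the patent):

```python
def classify(principal_outputs, fuzzy_output, margin=0.2):
    """Fuzzy classification rule sketch. If the fuzzy node fires
    highest and exceeds every principal node by at least `margin`,
    report an unknown pattern class; otherwise return the index of
    the best-matching principal node."""
    best = max(principal_outputs)
    if fuzzy_output > best and fuzzy_output - best >= margin:
        return "unknown"
    # either a principal node fired highest, or the fuzzy node won
    # only narrowly: pick the principal node closest to the top
    return principal_outputs.index(best)
```

The margin prevents the network from always firing the best-matching principal node when an incoming pattern bears at best a faint resemblance to any trained class.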
The output layer may comprise two slabs, each slab comprising a plurality of principal nodes and at least one fuzzy node, the principal nodes of one slab being trained to recognise global classes of patterns and the principal nodes of the second slab trained to recognise sub-classes of patterns within each global class.
The input pattern input to the input layer of the network may comprise the outputs of a plurality of gas sensors or quantities related thereto. When the output layer comprises two slabs, the principal nodes of the second slab may be trained to recognise patterns representing different concentrations of at least one gas or volatile species. In this manner the neural network may output the concentration of a species in addition to the identity thereof.
The output of a temperature sensor may be input to the input layer.
The output of a humidity sensor may be input to the input layer.
In this manner, variations in patterns caused by temperature- and humidity-sensitive variations in gas sensor output may be recognised and accounted for by the neural network.
The input pattern may be reduced by a linear or non-linear mapping technique and the results therefrom, together with the unreduced pattern, input to the input layer. The mapping technique may be principal components analysis.
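As an illustrative sketch of this augmentation (stdlib-only, with assumed function names; not the patent's method), the first principal component of a set of sensor patterns can be found by power iteration on their covariance matrix, and the resulting score appended to the unreduced pattern:

```python
def first_pc(patterns, iters=200):
    """First principal component via power iteration (sketch)."""
    n = len(patterns[0])
    means = [sum(p[j] for p in patterns) / len(patterns) for j in range(n)]
    centred = [[p[j] - means[j] for j in range(n)] for p in patterns]
    # covariance matrix of the centred patterns
    cov = [[sum(r[i] * r[j] for r in centred) / len(centred)
            for j in range(n)] for i in range(n)]
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v, means

def augment(pattern, component, means):
    """Append the principal-component score to the unreduced pattern,
    so both the reduced and unreduced data reach the input layer."""
    score = sum((x - m) * c for x, m, c in zip(pattern, means, component))
    return list(pattern) + [score]
```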
The input pattern to the network may be preprocessed prior to pa
