Data processing: artificial intelligence – Machine learning – Genetic algorithm and genetic programming system
Reexamination Certificate
1999-01-22
2001-08-07
Powell, Mark R. (Department: 2122)
Data processing: artificial intelligence
Machine learning
Genetic algorithm and genetic programming system
C706S014000, C700S213000, C700S250000
Reexamination Certificate
active
06272479
ABSTRACT:
TECHNICAL FIELD
The present invention relates generally to the automatic evolution of computer programs useful for signal processing and control. It is an evolver program that constructs a classification program. Specifically, the present invention relates to an adaptive method of evolving algorithms to select and combine useful signal features from complex simultaneous signals. Still more specifically, the present invention relates to a method of evolving algorithms to map noisy or poorly understood natural signals to a useful output; for example, classifying myoelectric signals for the control of a prosthesis or classifying remotely sensed spectra for the identification of minerals.
BACKGROUND ART
The present invention's background art is generally found in the art of automatic pattern recognition. This art includes digital signal processing, neural networks, and rule-based expert systems. The present invention's background art also includes the creation and optimization of pattern recognition algorithms by means of evolutionary or genetic programming. The prior art specifically applicable to the best mode for carrying out the specific embodiments of the invention is further found in the art of prosthesis control and the use of spectral analysis for the identification of minerals.
Pattern Recognition
As applied to the present invention, pattern recognition is the use of at least one input channel carrying data with multiple features to produce an output that has some utility. Usually the present invention operates on multiple simultaneous complex data channels whose features may not be well known or understood. The prior art teaches the use of digital signal processing, neural networks, and rule-based artificial intelligence systems for pattern recognition.
Digital Signal Processing
Beginning in the late 1960s, the general availability of inexpensive analog-to-digital converters and digital computers gave rise to digital signal processing (DSP). DSP taught the use of filters, sampling, z-transforms, system functions, FIR/IIR and FFT filters, windows, bilinear and band transformations, interpolation and decimation, and polyphase filters. The present invention uses these well-known digital signal processing methods to make the digital signal easier to use.
A useful and comprehensive review of this prior art may be found in
The Digital Signal Processing Handbook
(Electrical Engineering Handbook Series), Vijay K. Madisetti and Douglas B. Williams (Editors), CRC Press, January 1998; ISBN: 0849385725
Digital signal processing can do simple pattern recognition, but the processing steps must be uniquely specified for each type of signal. This technology is useful mainly for echo suppression and noise reduction in digital signals.
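As an illustration of the kind of DSP step listed above, the following sketch implements a simple moving-average FIR filter; the function name and test signal are illustrative choices, not part of the patent.

```python
def fir_filter(signal, coeffs):
    """Apply an FIR filter by direct convolution:
    y[n] = sum_k coeffs[k] * x[n - k]."""
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for k, c in enumerate(coeffs):
            if n - k >= 0:  # treat samples before the signal starts as zero
                acc += c * signal[n - k]
        out.append(acc)
    return out

# A 4-tap moving average smooths an alternating (noisy) signal
# toward its mean value of 2.0 once the filter window is full.
smoothed = fir_filter([0, 4, 0, 4, 0, 4, 0, 4], [0.25, 0.25, 0.25, 0.25])
```

Such fixed filters must be hand-specified per signal type, which is precisely the limitation the evolutionary approach of the invention is meant to overcome.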
Neural Networks
In 1942 Norbert Wiener and his colleagues were formulating the ideas that were later christened ‘Cybernetics’, which he defined as ‘control and communication in the animal and the machine’. Central to this programme, as the description suggests, is the idea that biological mechanisms can be treated from an engineering and mathematical perspective. Of central importance here is the idea of feedback. Although the term ‘Cybernetics’ has become unfashionable in recent years with the rise of AI and cognitive science, it might be argued that, because of its interdisciplinary nature, the new wave of connectionism should properly be called a branch of Cybernetics: certainly many of the early neural-net scientists would have described their activities in this way.
In the same year that Wiener was formulating Cybernetics, McCulloch and Pitts published the first formal treatment of artificial neural nets. The main result of this historic (but opaque) paper is that any well-defined input-output relation can be implemented in a formal neural network.
One of the key attributes of nets is that they can learn from their experience in a training environment. In 1949, Donald Hebb indicated a mechanism whereby this may come about in real animal brains. Essentially, synaptic strengths change so as to reinforce any simultaneous correspondence of activity levels between the pre-synaptic and post-synaptic neurons. Translated into the language of artificial neural nets, the weight on an input should be augmented to reflect the correlation between the input and the unit's output. Learning schemes based on this ‘Hebb rule’ have always played a prominent role.
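The Hebb rule described above — a weight augmented in proportion to the correlation between its input and the unit's output — can be sketched as a one-line update; the function name and learning rate are illustrative assumptions.

```python
def hebb_update(weights, inputs, output, lr=0.1):
    """Hebb rule: each weight grows in proportion to the product
    (the instantaneous correlation) of its input and the unit's
    output, scaled by a learning rate."""
    return [w + lr * x * output for w, x in zip(weights, inputs)]

# Only the weight on the active input is reinforced.
w = hebb_update([0.0, 0.0], [1.0, 0.0], 1.0)
```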
The next milestone is probably the invention of the Perceptron by Rosenblatt in 1957; much of the work with these is described in his book ‘Principles of Neurodynamics’. One of the most significant results presented there was the proof that a simple training procedure, the perceptron training rule, would converge if a solution to the problem existed.
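A minimal sketch of the perceptron training rule mentioned above follows; the threshold unit, learning rate, and AND-gate training set are illustrative choices, and the convergence guarantee holds only when the classes are linearly separable.

```python
def perceptron_train(samples, labels, lr=1.0, epochs=100):
    """Perceptron training rule: on each misclassified sample,
    move the weights toward (target=1) or away from (target=0)
    the input vector. Rosenblatt's proof guarantees convergence
    when a separating solution exists."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for x, t in zip(samples, labels):
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            if y != t:
                errors += 1
                w = [wi + lr * (t - y) * xi for wi, xi in zip(w, x)]
                b += lr * (t - y)
        if errors == 0:  # converged: every sample classified correctly
            break
    return w, b

# Logical AND is linearly separable, so training converges.
w, b = perceptron_train([(0, 0), (0, 1), (1, 0), (1, 1)], [0, 0, 0, 1])
```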
In 1969 enthusiasm for neural nets was dampened somewhat by the publication of Minsky and Papert's book ‘Perceptrons’. Here, the authors show that there is an interesting class of problems (those that are not linearly separable) that single layer perceptron nets cannot solve, and they held out little hope for the training of multi-layer systems that might deal successfully with some of these. Minsky had clearly had a change of heart since 1951, when he and Dean Edmonds had built one of the first network-based learning machines. The fundamental obstacle to be overcome is the so-called ‘credit assignment problem’: in a multilayer system, how much does each unit (especially one not in the output layer) contribute to the error the net has made in processing the current training vector?
In spite of ‘Perceptrons’, much work continued in what was now an unfashionable area, living in the shadow of symbolic AI. Grossberg was laying the foundations for his Adaptive Resonance Theory (ART); Fukushima was developing the cognitron; Kohonen was investigating nets that used topological feature maps; and Aleksander was building hardware implementations of nets based on the n-tuple technique of Bledsoe and Browning.
In 1982 John Hopfield, then a physicist at Caltech, showed that a highly interconnected network of threshold logic units could be analyzed by considering it to be a physical dynamic system possessing an ‘energy’. The process of associative recall, where the net is started in some initial random state and goes on to some stable final state, parallels the action of the system falling into a state of minimal energy.
This novel approach to the treatment of nets with feedback loops in their connection paths (recurrent nets), has proved very fruitful and has led to the involvement of the physics community, as the mathematics of these systems is very similar to that used in the Ising-spin model of magnetic phenomena in materials. In fact, something very close to the ‘Hopfield model’ had been introduced previously by Little, but here the emphasis was on this analogy with spin systems, rather than the energy-based description. Little also made extensive use of a quantum mechanical formalism, and this may have contributed to the paper's lack of impact.
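The associative recall described above can be sketched in a few lines: a Hebbian outer-product rule stores a pattern, and unit-by-unit updates can only lower the network ‘energy’, so a corrupted state falls into the stored minimum. The function names and the use of deterministic sweeps (rather than random asynchronous updates) are illustrative simplifications.

```python
def hopfield_store(patterns):
    """Hebbian outer-product rule: W[i][j] = sum over stored
    patterns of s_i * s_j, with zero self-connections."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def hopfield_recall(W, state, sweeps=10):
    """Each unit update can only lower the energy
    E = -1/2 * sum_ij W[i][j] s_i s_j, so the state settles
    into a stable minimum (ideally a stored pattern)."""
    s = list(state)
    n = len(s)
    for _ in range(sweeps):
        for i in range(n):
            h = sum(W[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s

stored = [1, -1, 1, -1, 1, -1]
W = hopfield_store([stored])
noisy = [1, -1, 1, -1, -1, -1]   # one bit corrupted
recalled = hopfield_recall(W, noisy)
```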
A similar breakthrough occurred in connection with non-recurrent (feedforward) nets, when it was shown that the credit assignment problem had an exact solution. The resulting algorithm, ‘Back error propagation’ or simply backpropagation, also has claim to multiple authorship, as noted by Grossberg: it was discovered by Werbos, rediscovered by Parker, and discovered again and made popular by Rumelhart, Hinton and Williams.
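A minimal sketch of how backpropagation resolves the credit assignment problem follows: the output error is propagated back through the weights, giving each hidden unit its share of responsibility for the mistake. The two-layer sigmoid network, learning rate, and function names are illustrative assumptions, not the patent's method.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w_hidden, w_out):
    """Two-layer feedforward net: sigmoid hidden units, sigmoid output."""
    h = [sigmoid(sum(wi * xi for wi, xi in zip(row, x))) for row in w_hidden]
    y = sigmoid(sum(wo * hi for wo, hi in zip(w_out, h)))
    return h, y

def backprop_step(x, target, w_hidden, w_out, lr=0.5):
    """One backpropagation update on squared error E = (y - target)^2 / 2.
    delta_h assigns each hidden unit its 'credit' for the output error."""
    h, y = forward(x, w_hidden, w_out)
    delta_out = (y - target) * y * (1 - y)            # error signal at output
    delta_h = [delta_out * wo * hi * (1 - hi)         # each unit's share
               for wo, hi in zip(w_out, h)]
    new_w_out = [wo - lr * delta_out * hi for wo, hi in zip(w_out, h)]
    new_w_hidden = [[w - lr * dh * xi for w, xi in zip(row, x)]
                    for row, dh in zip(w_hidden, delta_h)]
    return new_w_hidden, new_w_out
```

A single gradient step computed this way reduces the network's error on the training example, which is the property the training procedure iterates to convergence.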
Aside from these technical advances in analysis, there is also a sense in which neural networks are just one example of a wider class of systems that the physical sciences have started to investigate, which includes cellular automata, fractals, and chaotic phenomena. Traditionally, physics has sought to explain by supposing that there is some simple underlying order.
Farry Kristin Ann
Fernandez Julio Jaime
Graham Jeffrey Scott
Dula Arthur M.
Powell Mark R.
Starks Wilber