Trainable neural network having short-term memory for altering i

Details

Classification: 395/20, 395/24; G06F 15/00
Type: Patent
Status: active
Number: 056279426

DESCRIPTION:

BRIEF SUMMARY
BACKGROUND OF THE INVENTION

1. Field of the Invention
This invention relates to (artificial) neural networks (in other words, to parallel processing apparatus comprising or emulating a plurality of simple, interconnected, neural processors, or to apparatus arranged to emulate parallel processing of this kind) and particularly, but not exclusively, to their use in pattern recognition problems such as speech recognition, text-to-speech conversion, natural language translation and video scene recognition.
2. Related Art
Referring to FIG. 1, one type of generalized neural net known in the art comprises a plurality of input nodes 1a, 1b, 1c to which an input data sequence is applied from an input means (not shown), and a plurality of output nodes 2a, 2b, 2c, each of which produces a respective net output signal indicating that the input data sequence satisfies a predetermined criterion (for example, that a particular word or sentence is recognized or that an image corresponding to a particular object is recognized). Each output node is connected to one or more nodes in the layer below (the input layer) by a corresponding connection including a weight 3a-3i which scales the output of the lower node by a weight factor to provide an input to the node in the layer above (the output layer). Each node output generally also includes a non-linear (compression) stage (not shown).
In many such nets, further intermediate inner or "hidden" layers are included, which receive inputs from a layer below and generate outputs for a layer above. The output of a node in general is a function of its weighted inputs; typically the function is the sum of these inputs, with the subsequent non-linear compression mentioned above. One example of such a net is the well known Multi-Layer-Perceptron (MLP).
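By way of illustration only (this sketch is not part of the patent text, and the function names and values are invented for the example), the node behaviour described above, a weighted sum of lower-layer outputs passed through a non-linear compression stage, can be written as follows:

```python
# Illustrative sketch only: a node's output is the non-linearly
# "compressed" sum of its weighted inputs from the layer below.
import math

def sigmoid(x):
    # a typical compression (squashing) non-linearity
    return 1.0 / (1.0 + math.exp(-x))

def layer_output(inputs, weights, biases):
    # weights[j][i] is the weight on the connection from lower node i to node j
    outputs = []
    for j in range(len(weights)):
        weighted_sum = sum(w * x for w, x in zip(weights[j], inputs)) + biases[j]
        outputs.append(sigmoid(weighted_sum))
    return outputs

# Example corresponding loosely to FIG. 1: three input nodes feeding
# three output nodes through nine weights (3a-3i).
out = layer_output([0.2, 0.7, 0.1],
                   [[0.5, -0.3, 0.8], [0.1, 0.4, -0.6], [-0.2, 0.9, 0.3]],
                   [0.0, 0.0, 0.0])
print(out)
```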
Such nets are trained in a training phase by inputting training data sequences which are known to satisfy predetermined criteria, and iteratively modifying the weight values connecting the layers until the net outputs approximate the desired indications of such criteria. Having been trained on a range of training data, it is then found that such trained networks can operate upon real-world data to perform various processing and recognition tasks.
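Again purely as an illustrative sketch outside the patent text, the training phase described above (iteratively adjusting the weights until the net outputs approximate the desired indications) might look like the following single-layer delta-rule loop; the function names and toy data are assumptions made for the example:

```python
# Illustrative sketch only: iterative weight adjustment on known training
# data until the outputs approximate the desired indications.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(samples, n_inputs, epochs=1000, lr=0.5):
    weights = [0.0] * n_inputs
    bias = 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
            err = target - y
            grad = err * y * (1.0 - y)  # slope of the sigmoid at the current output
            weights = [w + lr * grad * xi for w, xi in zip(weights, x)]
            bias += lr * grad
    return weights, bias

# Toy criterion: the output should be high only when the first input is active.
weights, bias = train([([1, 0], 1.0), ([0, 1], 0.0)], n_inputs=2)
print(weights, bias)
```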
Since the revival of interest in neural nets in recent years, attention has focussed on nets in which processing is unequivocally parallel and distributed (Rumelhart 1986), and which have recently proved to be admirably suited to tackling problems in signal processing, e.g. (Lynch & Rayner 1989), pattern recognition, e.g. (Hutchinson & Welsh 1989), (Woodland & Smythe 1990), and robotic control, e.g. (Saerens & Soquet 1989). Some attention has also been paid to problems which cannot be seen as signal processing, and in particular various methods of applying neural nets to natural language have been described, from (Rumelhart 1986) and (McClelland & Kawamoto 1986) through to recent papers and reports (Sharkey 1989), (Weber 1989) and (Jagota & Jajubowitz 1989). A difficulty in these cases is how to present inputs to the net. If unlimited data such as text is to be processed by a neural net of these kinds, either it must be input as some set of lower-level features (letters or microfeatures as described in, e.g., Rumelhart et al 1986), or, if whole words or larger features are to be used, the number of input nodes must be very great. In the latter case, too, some retreat from the pure concept of parallel distributed processing must be accepted, since each word can be seen as locally stored.
In other words, the choice is typically between using too few nodes (in which case the network may not train well if the features chosen are inappropriate) or too many (in which case the network tends to act as a simple look-up store).
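The trade-off described in the two preceding paragraphs can be made concrete with a small, hypothetical sketch (not drawn from the patent): coding text as lower-level letter features keeps the input layer small and shared across words, whereas coding whole words requires one input node per vocabulary entry and behaves like local storage:

```python
# Illustrative sketch only: two contrasting input codings for text.
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def letter_code(word):
    # lower-level features: 27 input values per character position
    vec = []
    for ch in word:
        slot = [0] * len(ALPHABET)
        slot[ALPHABET.index(ch)] = 1
        vec.extend(slot)
    return vec

def word_code(word, vocabulary):
    # one input node per known word: each word is, in effect, locally stored
    vec = [0] * len(vocabulary)
    vec[vocabulary.index(word)] = 1
    return vec

print(len(letter_code("cat")))                 # 81 input values for 3 letters
print(len(word_code("cat", ["cat", "dog"])))   # grows with the vocabulary
```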
Another problem is that a very large number of iterations can be required for convergence in training, which can consequently be slow and laborious.
In their paper entitled "Learning to understand sentences in a connectionist network", published in the proceedings of the IEEE International Conference on Neural Networks (San Diego, 1988), Nolfi et al.

REFERENCES:
patent: 4912647 (1990-03-01), Wood
patent: 4941122 (1990-07-01), Weideman
patent: 4974169 (1990-11-01), Engel
patent: 4979126 (1990-12-01), Pao et al.
patent: 5003490 (1991-03-01), Castelaz et al.
patent: 5046020 (1991-09-01), Filkin
patent: 5095443 (1992-03-01), Watanabe
patent: 5107454 (1992-04-01), Niki
patent: 5119469 (1992-06-01), Alkon et al.
patent: 5129039 (1992-07-01), Hiraiwa
patent: 5140530 (1992-08-01), Guba et al.
patent: 5327541 (1994-07-01), Reinecke et al.
patent: 5339818 (1994-08-01), Baker et al.
Tsung et al., "A Sequential Adder Using Recurrent Networks", Proceedings of the International Joint Conference on Neural Networks, 1989, pp. II-133-II-139.
Rumelhart, D.E. et al, (1986) "Schemata and Sequential Thought Processes in PDP Models", Parallel Distributed Processing 2, McClelland, J.L. and Rumelhart, D.E. (Eds.) Cambridge, Massachusetts: MIT Press.
Rumelhart et al, (1986) "On Learning the Past Tenses of English Verbs", Parallel Distributed Processing. 2, McClelland, J.L. and Rumelhart, D.E. (Eds.), Cambridge, Massachusetts: MIT Press.
McClelland et al. (1986), "Mechanisms of Sentence Processing: Assigning Roles to Constituents of Sentences", Parallel Distributed Processing, 2, McClelland, J.L. and Rumelhart, D.E. (Eds.), Cambridge, Massachusetts: MIT Press.
Alonso et al, "Machine Translation Technology: On the Way to Market Introduction", International Journal of Computer Applications In Technology, pp. 186-190.
Hutchinson et al, British Telecom Research Laboratories, UK, Comparison of Neural Networks and Conventional Techniques for Feature Location in Facial Images, pp. 201-205.
Saerens et al, A Neural Controller, Universite Libre de Bruxelles, Belgium, pp. 211-215.
Jagota et al., Dept. of Computer Science, State University of New York Buffalo, N.Y. 14260, Knowledge Representation in a Multi-Layered Hopfield Network, pp. I-435-I-442.
Lynch et al, Cambridge University, U.K., "The Properties and Implementation of the Non-Linear Vector Space Connectionist Model", pp. 186-190.
Rumelhart et al, "Feature Discovery by Competitive Learning", Chapter 5, pp. 151-193, Cognitive Science, 1985.
Rumelhart et al, "Learning Internal Representations by Error Propagation", Chapter 8, pp. 318-362.
Sharkey, Centre for Connection Science, Department of Computer Science, University of Exeter, Exeter EX44PX, UK, "A PDP Learning Approach to Natural Language Understanding", pp. 92-116.
Slocum, Microelectronics and Computer Technology Corporation (MCC), Austin, Texas, "A Survey of Machine Translation: Its History, Current Status, and Future Prospects", pp. 1-47.
Weber, Department of Computer Science, University of Rochester, Rochester N.Y. 14627, "A Connectionist Model of Conceptual Representation", pp. I-477-I-483.
Woodland et al, "An Experimental Comparison of Connectionist and Conventional Classification Systems on Natural Data", Speech Communication 9 (1990) 73-82.
Wyard et al, "A single Layer Higher Order Neural Net and Its Application to Context Free Grammar Recognition", Connection Science, vol. 2, No. 4, 1990, pp. 347-371.
IEEE International Conference on Neural Networks, San Diego, California, Jul. 24-27, 1988, S. Nolfi et al: "Learning to Understand Sentences in a Connectionist Network," pp. 215-219.
IEEE International Conference on Neural Networks, San Diego, California, 24-27 Jul. 1988, M.F. Tenorio et al: "Adaptive Networks as a Model for Human Speech Development," pp. 235-242.
Neural Networks from Models to Applications, Paris, 1988, Ekeberg et al: "Automatic Generation of Internal Representations in a Probabilistic Artificial Neural Network," pp. 178-186.
IJCNN International Joint conference on Neural Networks, Washington, 19-22 Jun. 1989, T. Matsuoka et al: "Syllable Recognition Using Integrated Neural Networks," pp. 251-258.
Neuro-Nimes' 89, Nimes, 13-16 Nov. 1989, P. Meiler: "Garbled Text String Recognition with a Spatio-Temporal Pattern Recognition (SPR) Neural Network," pp. 279-292.
Rumelhart et al, "Parallel Distributed Processing",
