Pyramid learning architecture neurocomputer

Patent


Details

Classification: 395/11, 395/800; G06F 15/00
Patent status: active
Patent number: 053254643

ABSTRACT:
The Pyramid Learning Architecture Neurocomputer (PLAN) is a scalable stacked-pyramid arrangement of processor arrays. PLAN has six processing levels: the pyramid base, Level 6, containing N^2 Synapse Processors (SYPs); Level 5, containing multiple folded Communicating Adder Tree structures (SCATs); Level 4, made up of N completely connected Neuron Execution Processors (NEPs); Level 3, made up of multiple Programmable Communicating ALU Tree (PCAT) structures, similar to the Level 5 SCATs but with programmable function capability in each tree node; Level 2, containing the Neuron Instruction Processor (NIP); and Level 1, comprising the Host and user interface. The simplest processors are in the base level, with each layer of processors increasing in computational power up to a general-purpose host computer acting as the user interface. PLAN is scalable both in direct neural network emulation and in virtual processing capability; consequently, the number of physical neurons N to be implemented is chosen according to performance and cost trade-offs. A neural network model is mapped onto the Level 3 PCATs, Level 4 NEPs, Level 5 SCATs, and Level 6 SYPs. The Neuron Instruction Processor, Level 2, controls the neural network model through the Level 3 programmable interconnection interface. In addition, the NIP level controls the high-speed, high-capacity PLAN I/O interface necessary for large-N massively parallel systems. This discussion describes the PLAN processors attached to a Host computer and the overall control of the pyramid, which together constitute the neurocomputer system.
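As a rough illustration of the level hierarchy described in the abstract, the sketch below enumerates the six PLAN levels for a given number of physical neurons N. The processor counts follow the abstract where stated (N^2 SYPs at Level 6, N NEPs at Level 4, one NIP at Level 2); the abstract does not give explicit counts for the Level 3 PCAT and Level 5 SCAT structures, so those are left unspecified here. The function name and representation are illustrative, not part of the patent.

```python
# Sketch of the six-level PLAN hierarchy from the abstract.
# Counts marked None are not stated in the abstract ("multiple" structures).

def plan_levels(n):
    """Return (level, description, processor count) for n physical neurons."""
    return [
        (1, "Host and user interface", 1),
        (2, "Neuron Instruction Processor (NIP)", 1),
        (3, "Programmable Communicating ALU Trees (PCATs)", None),
        (4, "Neuron Execution Processors (NEPs)", n),
        (5, "folded Communicating Adder Trees (SCATs)", None),
        (6, "Synapse Processors (SYPs)", n * n),
    ]

# The base level grows quadratically with N, reflecting one synapse
# processor per neuron-to-neuron connection.
for level, name, count in plan_levels(16):
    print(f"Level {level}: {name}" + (f" x {count}" if count else ""))
```

For N = 16 physical neurons this yields 16 NEPs at Level 4 and 256 SYPs at Level 6, showing why the simplest processors sit at the base: their number dominates the hardware cost as N scales.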

REFERENCES:
"Parallel Distributed Processing vol. 1: Foundation" Rumelhart et al, Cambridge, Mass: MIT Press 1986.
"Neurons with Graded Response Have Collective . . . " Hopfield Proceedings of Nat'l Acad. of Sci. 81, pp. 3088-3092, May 1984.
"Neural Networks and Physical Systems with . . . " Hopfield Proceedings of Nat'l Acad. of Sci. 79, pp. 2554-2558, 1982.
"NETtalk: A Parallel Network that Learns to Read Aloud" T. J. Seijnowski et al, JHU/EECS-8601, 1986.
"Scaling the Neural TSP Algorithm" R. Cuykendall et al Biological Cybernetics 60, pp. 365, 371, 1989.
"Explorations in Parallel Distributed Processing . . . " McClelland et al, Cambridge, Mass: MIT Press, 1988.
"Neural Computation of Decisions in Optimization Problems" Hopfield et al, Biological Cybernetics 52, pp. 141, 152, 1985.
"Optimization by Neural Networks" Ramanujam et al, IEEE Int'l Conf. on Neural Network vol. II, pp. 325-332, Jul. 1988.
DAIR Computer Systems, Connections: The Traveling Salesman User Manual Release 1.0 Palo Alto, CA, DAIR Computer 1988.
"Alternative Networks for Solving tge Traveling . . . " IEEE Int'l Conf. on Neural Network, vol. II, pp. 333-340, Jul. 1988.
Implementing Neural Nets with Programmable Logic; Vidal IEEE Trans. on Acoustics, . . . vol. 36 No. 7 pp. 1180-1190 Jul. 1988.
Design of Parallel Hardware Neural Network from Custom . . . Eberhardt et al: IJCNN; vol. 2 Jun. 18-22, 1989; pp. II-190.
