Hierarchical pattern recognition system with variable selection

Image analysis – Histogram processing – For setting a threshold

Patent


Details

U.S. Classes: 382/10; 395/23; 395/24. International Classes: G06K 9/62; G06K 9/00

Status: active

Patent number: 053116011

DESCRIPTION:

BRIEF SUMMARY
BACKGROUND OF THE INVENTION

Adaptive Resonance Theory (ART) architectures are neural networks that carry out stable self-organization of recognition codes for arbitrary sequences of input patterns. Adaptive Resonance Theory first emerged from an analysis of the instabilities inherent in feedforward adaptive coding structures (Grossberg, 1976a). More recent work has led to the development of two classes of ART neural network architectures, specified as systems of differential equations. The first class, ART 1, self-organizes recognition categories for arbitrary sequences of binary input patterns (Carpenter and Grossberg, 1987a and U.S. patent application Ser. No. 07/086,732, filed Jul. 23, 1987). A second class, ART 2, does the same for either binary or analog inputs (Carpenter and Grossberg, 1987b and U.S. Pat. No. 4,914,708).
Both ART 1 and ART 2 use a maximally compressed, or choice, pattern recognition code. Such a code is a limiting case of the partially compressed recognition codes that are typically used in explanations by ART of biological data (Grossberg, 1982a, 1987). Partially compressed recognition codes have been mathematically analysed in models for competitive learning, also called self-organizing feature maps, which are incorporated into ART models as part of their bottom-up dynamics (Grossberg 1976a, 1982a; Kohonen, 1984). Maximally compressed codes were used in ART 1 and ART 2 to enable a rigorous analysis to be made of how the bottom-up and top-down dynamics of ART systems can be joined together in a real-time self-organizing system capable of learning a stable pattern recognition code in response to an arbitrary sequence of input patterns. These results provide a computational foundation for designing ART systems capable of stably learning partially compressed recognition codes. The present invention contributes to such a design.
The main elements of a typical ART 1 module are illustrated in FIG. 1. F.sub.1 and F.sub.2 are fields of network nodes. An input is initially represented as a pattern of activity across the nodes of the feature representation field F.sub.1. The pattern of activity across the category representation field F.sub.2 encodes the selected category. Because patterns of activity in both fields may persist after input offset (termination of the input) yet may also be quickly inhibited, these patterns are called short term memory, or STM, representations. The two fields, linked by bottom-up adaptive filter 22 and top-down adaptive filter 24, constitute the Attentional Subsystem. Because the connection weights defining the adaptive filters may be modified by inputs and may persist for very long times after input offset, these connection weights are called long term memory, or LTM, variables.
Each node of F.sub.1 is coupled to each node of F.sub.2 through a weighted connection in the adaptive filter 22. Those weights change with learning. Thus, selection of a category node in F.sub.2 is determined by the nodes which are activated by an input pattern and by the weights from those nodes to F.sub.2. Each node of F.sub.2 is in turn connected to each node of F.sub.1 through weighted connections of the adaptive filter 24. Those weights are also learned. The learned weights define a template pattern for a selected category, and that pattern is received at the nodes of F.sub.1 through the adaptive filter 24. The intersection of the input pattern from input 20 and the template received through the adaptive filter 24 is activated as a matching pattern in F.sub.1. The norm of the matching pattern is compared to the norm of the input pattern at 26. If the comparison exceeds a threshold vigilance parameter .rho., the system is allowed to resonate, and the adaptive filters 22 and 24 adjust their weights in accordance with the matching pattern. On the other hand, if the comparison does not exceed the vigilance threshold, F.sub.2 is reset and a different category is selected. Prior to receiving the template pattern through the adaptive filter 24, a gain control signal, gain 1, activates all nodes of F.sub.1 which receive the input pattern.
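The search cycle described above can be illustrated with a minimal sketch of the standard fast-learning ART 1 simplification for binary inputs: categories compete via a choice function, the winner must pass the vigilance test |I AND w| / |I| >= rho, and on resonance the template learns the intersection of input and template. The function name `art1_fit` and the parameter `beta` (the choice-function tie-breaking constant) are illustrative assumptions, not names from the patent, and this algebraic shortcut stands in for the differential-equation dynamics of the actual ART 1 system.

```python
import numpy as np

def art1_fit(patterns, rho=0.7, beta=0.5):
    """Simplified fast-learning ART 1 clustering of binary patterns.

    Each category j holds a binary template w[j] (the learned top-down
    expectation, i.e. the LTM weights of filter 24).  For an input I,
    categories compete via the choice function
        T_j = |I AND w_j| / (beta + |w_j|),
    and the winner is accepted only if the match ratio
        |I AND w_j| / |I|
    meets the vigilance rho; otherwise that category is reset and the
    next-best one is tried, mirroring the search cycle in the text.
    """
    templates = []                 # one binary template per category
    labels = []                    # category chosen for each input
    for pattern in patterns:
        I = np.asarray(pattern, dtype=bool)
        norm_I = I.sum()           # |I|; inputs assumed non-empty
        # Choice function for every existing category (bottom-up filter 22).
        scores = [(I & w).sum() / (beta + w.sum()) for w in templates]
        chosen = None
        for j in np.argsort(scores)[::-1]:       # best match first
            match = (I & templates[j]).sum() / norm_I
            if match >= rho:                     # vigilance test passes
                templates[j] = I & templates[j]  # resonance: learn I AND w
                chosen = j
                break
            # else: reset this category and try the next one
        if chosen is None:                       # no category matched
            templates.append(I.copy())           # recruit a new category
            chosen = len(templates) - 1
        labels.append(chosen)
    return labels, templates
```

With a high vigilance, dissimilar binary patterns recruit separate categories, while a repeated pattern resonates with its previously learned template, illustrating the stable coding property discussed above.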

REFERENCES:
patent: 4914704 (1990-04-01), Carpenter et al.
Cohen et al., "Speech Perception and Production By A Self-Organizing Neural Network," In Evolution, Learning, Cognition and Advanced Architecture, Y. C. Lee (Ed.), Hong Kong: World Scientific Publishers, 1988.
Carpenter et al., "Search mechanisms for Adaptive Resonance Theory (ART) Architectures," IJCNN International Joint Conference on Neural Networks, Sheraton Washington Hotel, Jun. 19-22, 1989, pp. 201-205 (see whole document).
B. Kosko, "Competitive Adaptive Bidirectional Associative Memories," IEEE First International Conference on Neural Networks, San Diego, Calif., Jun. 21-24, 1987, pp. II-759-II-766.
Carpenter et al., "ART 3: Hierarchical Search Using Chemical Transmitters in Self-Organizing Pattern Recognition Architectures," Neural Networks, vol. 3, No. 2, 1990, pp. 129-152.
G. A. Carpenter & S. Grossberg, "ART 2: Self-Organization of Stable Category Recognition Codes for Analog Input Patterns" Applied Optics, Dec. 1, 1987, pp. 4919-4930.
G. A. Carpenter & S. Grossberg, "The ART of Adaptive Pattern Recognition by a Self-Organizing Neural Network," IEEE Computer, vol. 21, No. 3, pp. 77-88, Mar. 1988.
