Modular, hierarchically organized artificial intelligence...

Data processing: artificial intelligence – Machine learning

Details

Class: C706S013000
Type: Reexamination Certificate
Status: Active
Patent number: 06738753
ABSTRACT:

CROSS-REFERENCE TO RELATED APPLICATIONS
Not applicable.
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH AND DEVELOPMENT
Not applicable.
REFERENCE TO MICROFICHE APPENDIX
Not applicable.
BACKGROUND—FIELD OF INVENTION
This invention relates to artificial intelligence systems and more particularly to the organization and structure of a plurality of learning artificial intelligence entities.
BACKGROUND—DESCRIPTION OF PRIOR ART
For purposes of this document, we consider an artificially intelligent (AI) entity as having three defining properties. Two are conventional within the AI discipline; the third is sometimes used and sometimes omitted, depending on the emphasis of the AI effort.
First, an AI entity exhibits complex behavior that affects the world external to itself. It may send control information to electronic or mechanical devices; it may output information to human beings; it may directly alter some property of its environment. In general usage, 'complex' behavior means 'non-obvious' behavior; for example, a simple controller like the governor on a steam engine would not usually be considered artificially intelligent, since the source of its response to sensed engine speed is apparent on observation. Second, an AI entity responds to information about its environment. Its 'senses' may be electronic readings, digitally coded information, physical movement, or any other method of bringing in information from outside.
AI devices with only these two properties exhibit complex behavior in an unchanging way. Examples in widespread current use include (1) 'expert systems', where a set of facts and rules is input to an execution device which will then, in the absence of new inputs, give the same answers to the same questions; (2) stock charting systems, where the rules for choosing investments, once defined, make the same recommendations whenever the same patterns appear; and (3) 'multi-agent systems', AI applications in resource allocation where the 'agents' execute fixed algorithms and are given a language or protocol in which to communicate and negotiate with each other.
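As a purely illustrative sketch (not drawn from any particular product), the fixed-rule character of such non-learning systems can be shown in a few lines of Python; the chart patterns and recommendations below are hypothetical:

    # Illustrative sketch of a fixed-rule system as described above: the rule set
    # never changes, so the same facts always produce the same answer.
    # All pattern names and recommendations here are hypothetical.

    RULES = [
        (lambda facts: facts.get("pattern") == "head-and-shoulders", "sell"),
        (lambda facts: facts.get("pattern") == "ascending-triangle", "buy"),
    ]

    def recommend(facts):
        """Apply the fixed rules in order; behavior never varies with experience."""
        for condition, conclusion in RULES:
            if condition(facts):
                return conclusion
        return "hold"

    # The same chart pattern yields the same recommendation on every run.
    print(recommend({"pattern": "ascending-triangle"}))   # always 'buy'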
The third property in the present definition is that the AI entity changes its behavior as a result of experience. That is, the same situation will evoke a different response from the AI entity if the entity has ‘seen it’ before. We say that such an entity is a ‘learning AI entity’.
To summarize, an AI entity accepts sense data from its environment, produces complex behavior in response, and as the definition is used here learns from experience.
Current AI in the non-learning sense includes knowledge bases and multi-agent processing schemes. Knowledge bases are organized around collections of information with rules for making inferences and answering queries. Multi-agent schemes combine numerous entities operating on fixed algorithms. Often these aggregations include convenient methods for people to update the algorithms, inference rules, and other recipes that govern their behavior. In such systems, however, the 'learning' actually happens in the human keepers, not in the aggregation itself.
Current AI learning technology consists largely of refinements of two basic models developed in the 1960s, as described in the next section.
The Bases of Computer Artificial Intelligence
Single Entity and Scoring Polynomial (Newell, Samuel)
The 1958 paper by Newell, Shaw and Simon [i] and the 1959 paper by Samuel [ii] laid the groundwork for the single AI entity using the scoring polynomial approach. In Newell, et al., a chess-playing automaton is described. Samuel's version played checkers. In both cases the 'senses' consisted of various measures of game positions. In chess, measures like point values of pieces for each side, occupancy of key center squares, control of long files, etc., were used. A move generator created a list of possible chains of moves and countermoves, ending in a list of accessible future positions. Each position had its sense values, and the imputed value of each position was the sum of each sense value multiplied by a factor specific to that sense. Learning, a major factor in the Samuel paper, involved adjusting the factors applied to each sense by applying feedback from positions actually attained.

[i] Newell, A., J. C. Shaw, and H. A. Simon. 1958. Chess-Playing Programs and the Problem of Complexity. IBM J. Res. Develop. 2:320-25.
[ii] Samuel, A. L. 1959. Some Studies in Machine Learning Using the Game of Checkers. IBM J. Res. Develop. pp. 210-229.
The defining characteristics of this model, then, are (1) the single entity using a defined set of senses and a scoring polynomial, and (2) reinforcement by adjustment of the sense factors in the polynomial.
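A minimal sketch of the scoring-polynomial idea, assuming hypothetical sense names, factors, and a simple error-driven adjustment rule (the cited papers describe their own, more elaborate update procedures), might look like this in Python:

    # Illustrative sketch of a Newell/Samuel-style scoring polynomial.
    # A position is reduced to a vector of sense values; its imputed value is
    # the sum of each sense value multiplied by a factor specific to that sense.
    # Learning adjusts the factors using feedback from positions actually attained.

    def score(senses, factors):
        """Imputed value of a position under the current factors."""
        return sum(s * f for s, f in zip(senses, factors))

    def reinforce(senses, factors, attained_value, rate=0.01):
        """Hypothetical update: move each factor in proportion to the scoring error."""
        error = attained_value - score(senses, factors)
        return [f + rate * error * s for s, f in zip(senses, factors)]

    # Hypothetical chess-like senses: material balance, center occupancy, open files.
    factors = [1.0, 0.5, 0.25]
    position = [3.0, 2.0, 1.0]
    imputed = score(position, factors)                # rank candidate positions by this
    factors = reinforce(position, factors, attained_value=4.0)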
Neural Net (Rosenblatt)
The Rosenblatt [iii] model, named the Perceptron, attempted to mimic the action of neurons in animals. It was used in a simple character-recognition activity. A large number of identical cell-like entities, each exhibiting simple behavior, were connected, each to all others. Senses were applied to some cells, which propagated simple on-off pulses to other connected cells. Reinforcement was applied to other cells, which also sent on-off pulses to their connected neighbor cells. Cells receiving pulses would transmit pulses to their own connected neighbors if their total receipts exceeded a threshold value unique to that cell. Learning consisted of adjusting the individual cells' thresholds based on reinforcement pulses received.

[iii] Rosenblatt, F. 1958. The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain. Cornell Aeronautical Laboratory. Psychological Review 65(6):386-408.
The defining characteristics of the Rosenblatt model, then, are (1) a large number of simple threshold-type cells working with on-off pulses, (2) initial connection of cells to neighbors, and (3) learning by adjustment of thresholds.
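A minimal sketch of one such threshold-type cell, following the description above and using a hypothetical reinforcement step size, might look like this:

    # Illustrative sketch of a Rosenblatt-style threshold cell as described above.
    # The cell emits an on-off pulse when the pulses it receives exceed its own
    # threshold; reinforcement adjusts that threshold.

    class ThresholdCell:
        def __init__(self, threshold):
            self.threshold = threshold

        def fire(self, incoming_pulses):
            """Return 1 if total received pulses exceed this cell's threshold, else 0."""
            return 1 if sum(incoming_pulses) > self.threshold else 0

        def reinforce(self, reward, step=0.1):
            """Hypothetical rule: positive reinforcement makes the cell easier to fire."""
            self.threshold -= step if reward > 0 else -step

    cell = ThresholdCell(threshold=2.0)
    pulse = cell.fire([1, 1, 0, 1])   # pulses from connected neighbor cells -> 1
    cell.reinforce(reward=+1)         # reinforcement pulse lowers the threshold slightly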
Current art encompasses the Newell/Samuel models of single AI entities, which are able to sense environmental input, exhibit complex behavior, and learn through various scoring methods. The single-entity scoring polynomial is used in such areas as the scoring of loan applications, although in practice the learning process is 'frozen' to prevent unpredictable behavior in a business environment. There is also a great deal of current art based on the Rosenblatt neural net model. Neural net models based on the original Perceptron do learn in operation, for example in stock-picking applications. While they have grown in complexity by 'layering', that is, by connecting multiple 'simple' Rosenblatt assemblages, they are still based on the relay-like, threshold-activated, undifferentiated cell.
The single complex learning (Newell-type) entity has not previously been combined into complex assemblages featuring role differentiation and internally driven learning. Yet such an AI super-entity, constructed as an arrangement of modular learning AI entities, role differentiated, hierarchically organized, and motivated by policies set for subordinates by their superiors, would more accurately model such super-intelligent entities as communities, teams, societies, or corporations.
Accordingly, there is a need in the art for a form of AI entity that combines the cooperative aspects of the simple Rosenblatt model with the more sophisticated individual behavior of the Newell-Samuel model, adding to standard modular form the new elements of role differentiation and variation of behavior as a result of experience—both the direct experience of the entity and that of other entities.
Further, there is a need in the art for a mode of integration of AI entities of this type with other entities, including human beings, in a cooperative network using the same communication structures interchangeably.
Further, there is a need in the art for learning behavior in the super-entity created by linking numerous AI entities, and for the application of this super-entity to complex problems and to the simulation of actual multi-entity situations.
SUMMARY OF THE INVENTION
The invention is an artificial intelligence entity...
