Method and system for intelligent agent decision making for...

Data processing: structural design, modeling, simulation, and emulation – Simulating electronic device or electrical system – Event-driven

Reexamination Certificate


Details

C703S002000, C703S020000, C706S031000, C706S061000


active

06360193

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates to a novel method and system that uses intelligent agents for decision making, for example during tactical aerial defensive warfare or other real-time decision-making processes.
PRIOR ART
Presently, intelligent agent functions are available in different areas, mostly related to desktop/office system functionality (e.g., automatic spelling correctors, automatic email address selectors). However, there is no tactical intelligent agent for decision making in the area of air combat. Further, the intelligent agents present in other areas do not collaborate among themselves, either in a homogeneous environment (i.e., among intelligent agents of the same type) or a heterogeneous one (i.e., among intelligent agents of different types operating in a common problem space).
Further, the intelligent agents present in other areas do not collaborate with human users. Existing intelligent agents either act in an isolated fashion, or receive directions from the user and follow them. Such intelligent agents collect and process information from the “environment” and report back to the user. None of the existing intelligent agents accepts corrections to the “environment” (as the agent perceives it) from the user, either in delayed or in real-time fashion.
In addition, no intelligent agent today takes into consideration the mental and physical state of a human user, such as the user's degree of fatigue or stress.
SUMMARY OF THE INVENTION
In the described AWACS Trainer Software (ATS), which is one exemplary application of the present invention, there is a tactical intelligent agent for decision making in the area of air combat. Other situations may also be used with the present invention, as described below in more detail. The agent is tactical because it considers not only immediate certainties and near certainties (e.g., if a hostile fighter is not shot at, it will shoot at us) but also longer-term possibilities (e.g., if the bulk of our fighters are committed early, they may not be available should an enemy strike force appear in the future). The agent is intelligent because it exhibits autonomous behavior and engages in a human-like decision process. The agent assists in decision making in the area of air combat because the agent gives explicit advice to human AWACS Weapons Directors (WDs), whose job it is to coordinate air combat. The agent is also capable of making independent decisions in the area of air combat, replacing a human WD.
The described ATS employs groups of collaborating intelligent agents for decision making. The agents are collaborating because not every agent has all the information regarding the problem at hand, and because global decisions that affect all agents and humans are made on the basis of agents exchanging, debating, and discussing information, and then making overall decisions. Thus, for instance, agents assisting individual WDs exchange threat information and then coordinate their recommendations, such as which fighters to commit to which enemy assets, without resource collisions. That is, an agent A will not recommend that its WD (WD A) borrow a fighter pair P from another WD (WD B) while WD B's agent (agent B) recommends that WD B use the same fighter pair P to target another threat.
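The collision-free allocation described above can be sketched as a shared reservation registry that agents consult before issuing a recommendation. This is a minimal illustrative sketch, not the patent's implementation; all names (`ResourcePool`, the agent and fighter identifiers) are assumptions for illustration.

```python
# Hypothetical sketch: agents avoid resource collisions by reserving an
# asset before recommending it. Two agents can never both recommend the
# same fighter pair, because only one reservation can succeed.

class ResourcePool:
    """Shared registry of assets that collaborating agents consult."""

    def __init__(self, assets):
        self.owner = {asset: None for asset in assets}

    def try_reserve(self, asset, agent_id):
        # Succeeds only if no other agent currently holds the asset.
        if self.owner[asset] is None:
            self.owner[asset] = agent_id
            return True
        return False

    def release(self, asset):
        # Called when a recommendation is declined or the task completes.
        self.owner[asset] = None


pool = ResourcePool(["fighter_pair_P"])
assert pool.try_reserve("fighter_pair_P", "agent_A") is True
assert pool.try_reserve("fighter_pair_P", "agent_B") is False  # collision avoided
```

A real system would layer negotiation on top (agent B could ask agent A to release the pair), but the reservation step is what prevents the conflicting recommendations described above.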
The described ATS supports collaboration among (a heterogeneous set of) intelligent agents and a combination of (a heterogeneous set of) intelligent agents and humans. The set of agents is heterogeneous because it includes role-playing agents (e.g., an agent that plays a WD) and adviser agents (e.g., an agent that recommends a particular fighter allocation to a WD) (as well as other agents). The set of humans is heterogeneous because it includes WDs and Senior WDs (different roles, a.k.a. SDs). Agents and humans collaborate because agents and humans jointly perform air combat tasks.
As noted above, existing intelligent agents merely follow the user's directions, collect and process information from the “environment,” and report back to the user; they accept no corrections, delayed or real-time, to the environment as they perceive it.
The described ATS provides a feedback loop between an intelligent agent and a user. Agents and users (humans or other agents) exchange information while ATS is running. As changes occur (e.g., new planes appear), agents and users exchange this information, and the agents naturally adjust (as do the users). For instance, as a pair of fighters becomes available, an agent may recommend to the human WD how to assign this pair. The WD's reaction results in the agent learning what happened and possibly how to better advise the WD in the future. In particular, the agent may also change its perception of the environment: for instance, repeated rejection of a particular type of agent recommendation may result in the agent re-prioritizing the objects and actions it perceives.
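The re-prioritization step of this feedback loop can be illustrated with a simple counter: the agent tracks consecutive rejections per recommendation type and stops offering types that are repeatedly rejected. This is an assumed sketch (class name, threshold, and recommendation-type labels are all invented here, not from the patent).

```python
# Illustrative feedback loop: track WD reactions per recommendation type
# and suppress types the WD keeps rejecting.

class AdvisorAgent:
    def __init__(self, rejection_threshold=3):
        self.rejections = {}     # recommendation type -> consecutive rejections
        self.suppressed = set()  # types the agent has learned not to offer
        self.threshold = rejection_threshold

    def record_feedback(self, rec_type, accepted):
        if accepted:
            # An acceptance resets the streak for that type.
            self.rejections[rec_type] = 0
        else:
            self.rejections[rec_type] = self.rejections.get(rec_type, 0) + 1
            if self.rejections[rec_type] >= self.threshold:
                self.suppressed.add(rec_type)

    def should_recommend(self, rec_type):
        return rec_type not in self.suppressed


agent = AdvisorAgent(rejection_threshold=2)
agent.record_feedback("reassign_tanking_fighters", False)
agent.record_feedback("reassign_tanking_fighters", False)
assert agent.should_recommend("reassign_tanking_fighters") is False
```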
The described ATS provides intelligent agents representing multiple users (e.g., impersonating or assisting WDs, SDs, instructors). These agents collaborate, as already illustrated. However, the agents do not all perceive the environment the same way. For instance, an agent representing WD A may only be able to probe the status of the planes WD A controls. An agent representing another WD B may only be able to probe the status of the planes controlled by WD B. An agent representing an SD is able to probe the status of a plane controlled by any WD that reports to the SD. A strike WD may command a stealth bomber which does not show on AWACS radar, and thus even its position and movement are not visible to the other WDs.
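The role-scoped perception described above (a WD's agent probes only its own planes, an SD's agent probes any plane of a reporting WD) amounts to a visibility check per observer. The following is a hedged sketch under assumed names (`can_probe`, the role labels, and the plane records are illustrative, not from the patent).

```python
# Hypothetical sketch of role-scoped environment perception: whether an
# observing agent may query a given plane's status depends on its role.

def can_probe(observer, plane, reporting_wds):
    """Return True if `observer` (a (role, name) pair) may probe `plane`."""
    role, name = observer
    if role == "WD":
        # A WD's agent sees only the planes that WD controls.
        return plane["controller"] == name
    if role == "SD":
        # An SD's agent sees planes of any WD that reports to the SD.
        return plane["controller"] in reporting_wds[name]
    return False


planes = [{"id": "F1", "controller": "WD_A"},
          {"id": "F2", "controller": "WD_B"}]
reporting = {"SD_1": {"WD_A", "WD_B"}}
assert can_probe(("WD", "WD_A"), planes[0], reporting) is True
assert can_probe(("WD", "WD_A"), planes[1], reporting) is False
assert can_probe(("SD", "SD_1"), planes[1], reporting) is True
```

An asset invisible to AWACS radar, such as the stealth bomber mentioned above, would simply never appear in the plane lists of the other WDs' agents.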
The described ATS intelligent agents learn over time by accumulating knowledge about users' behavior, habits, and psychological profiles. An agent may observe that a WD it advises tends always to accept recommendations to target an advancing enemy with CAP'ed fighters (fighters engaged in a Combat Air Patrol assignment) but never with fighters on their way to tank (even though the agent may consider these fighters adequately fueled and otherwise ready for another dog-fight). The agent may then, over time, learn not to recommend that the WD assign fighters on their way to tank to other tasks.
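The profile learning described here can be sketched as acceptance-rate statistics keyed by the context of a recommendation (for example, the status of the fighters involved). This is an assumed illustration; the class name and context labels are invented, and the patent does not specify this representation.

```python
# Illustrative user-profile model: per-context accept/reject statistics
# from which the agent estimates how a WD will react.

from collections import defaultdict

class PreferenceModel:
    def __init__(self):
        # context -> [accepted count, total count]
        self.stats = defaultdict(lambda: [0, 0])

    def observe(self, context, accepted):
        acc, total = self.stats[context]
        self.stats[context] = [acc + int(accepted), total + 1]

    def acceptance_rate(self, context, default=1.0):
        # With no history, optimistically assume the WD would accept.
        acc, total = self.stats[context]
        return acc / total if total else default


model = PreferenceModel()
for _ in range(5):
    model.observe("target_with_CAP_fighters", True)
    model.observe("target_with_tanking_fighters", False)
# The agent learns never to offer recommendations this WD never accepts.
assert model.acceptance_rate("target_with_tanking_fighters") == 0.0
```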
The described ATS intelligent agent may observe that a WD tends to press mouse buttons more times than needed to accept a recommendation. This observation may lead the agent to believe that the WD is overly stressed and tired. The agent may then recommend to the SD's advising agent that the SD consider rotating this WD out. Perhaps as a compromise, the two agents and the two humans (the WD and the SD) may then decide that the best course of action is for the WD to continue for a while, but that no fighters be borrowed from this WD for other tasks, and that after the next air combat engagement the WD be rotated out anyway.
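The fatigue inference in this passage reduces to comparing the clicks actually used against the minimum required. The sketch below is purely illustrative: the function name, the one-click minimum, and the ratio threshold are invented values, not anything the patent specifies.

```python
# Hedged sketch of the stress/fatigue heuristic: flag a WD whose average
# clicks-per-acceptance is well above the minimum needed.

def stress_indicator(click_counts, clicks_needed=1, ratio_threshold=2.0):
    """Return True if recent click counts suggest the WD is fatigued."""
    if not click_counts:
        return False  # no evidence, no flag
    avg = sum(click_counts) / len(click_counts)
    return avg >= clicks_needed * ratio_threshold


assert stress_indicator([1, 1, 2]) is False  # near the 1-click minimum
assert stress_indicator([3, 4, 2]) is True   # well above it
```

A production system would presumably combine many such signals before recommending that a WD be rotated out; a single noisy indicator like this would only feed into the agents' joint deliberation described above.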
Since multiple intelligent agents and humans may be involved in the ATS decision-making process, it is not surprising that they may differ in opinion as to what constitutes the best course of action. The reasons for the differences include the following: non-uniform availability of information (e.g., a particular agent may be privy to detailed information only on the planes that belong to its WD), strategy preferences (e.g., a particular WD may be very risk-averse compared to others), and one group's considerations vs. another group's considerations (e.g., a WD (and its agent) may not wish to lose a pair of fighters; on the other hand, from the point of view of the entire WD team, it may be acceptable to send that same pair of fighters to divert enemy air defenses (at great risk to themselves) away from a strike package). Given the differences in opinion, the ATS agents exchange opinions and debate options, among themselves and with humans. Standard resolution protocols may be used to ensure that an overall decision is reached after a finite number of such exchanges. Examples include
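One way such a resolution protocol can guarantee termination is to bound the number of exchange rounds and apply a deterministic tie-break at the end. The patent only says "standard resolution protocols may be used," so the weighted-elimination scheme below is an assumption for illustration; all names and weights are invented.

```python
# Illustrative bounded resolution protocol: agents submit weighted
# preferences, the globally weakest option is eliminated each round,
# and a deterministic tie-break guarantees a decision after finitely
# many exchanges.

def resolve(preferences, max_rounds=3):
    """preferences: {agent: {option: weight}}. Return the chosen option."""
    options = sorted({opt for prefs in preferences.values() for opt in prefs})
    for _ in range(max_rounds):
        if len(options) == 1:
            break
        totals = {opt: sum(p.get(opt, 0) for p in preferences.values())
                  for opt in options}
        # Drop the option with the lowest total support (ties broken
        # lexicographically, so every round is deterministic).
        weakest = min(options, key=lambda o: (totals[o], o))
        options.remove(weakest)
    totals = {opt: sum(p.get(opt, 0) for p in preferences.values())
              for opt in options}
    return max(options, key=lambda o: (totals[o], o))


prefs = {"agent_A": {"commit_pair_P": 2, "hold": 1},
         "agent_B": {"hold": 3}}
assert resolve(prefs) == "hold"  # total support 4 vs. 2
```

The fixed round limit plus the deterministic final tie-break is what makes the exchange provably finite, which is the property the paragraph above requires of the protocol.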
