LUT-based system for simulating sensor-assisted perception of terrain

Data processing: structural design, modeling, simulation, and emulation – Electrical analog simulator – Of physical phenomenon


Details

US Classifications: C703S002000, C701S211000
Type: Reexamination Certificate
Status: active
Patent Number: 06735557


BACKGROUND
1. Field of Invention
The present disclosure relates generally to Sensor-assisted Perception of Terrain (SaPOT).
It relates more specifically to computer systems that simulate Sensor-assisted Perception of Terrain, and yet more specifically to simulation of real-time SaPOT, where such SaPOT may be provided, for example, to a pilot in an aircraft flying over a terrain of a given topology and a given makeup of materials, where the terrain is illuminated by various sources of radiation (e.g., moonlight, starlight) and is thermally preloaded (e.g., by the daytime sun).
2. Reference to Included Computer Program Listings
This application includes one or more listings of computer programs. The owner of the present application claims certain copyrights in said computer program listings. The owner has no objection, however, to the reproduction by others of such listings if such reproduction is for the sole purpose of studying them to understand the invention. The owner reserves all other copyrights in the program listings including the right to reproduce the computer program in machine-executable form.
3. Description of Related Art
Pilots of aircraft or other pilot-controlled vehicles sometimes guide their craft over a given terrain with the assistance of vision-augmenting equipment.
Examples of vision-augmenting equipment include Night Vision Goggles (NVG's) and Low Level Light Television Cameras (LLLTV's).
Such vision-augmenting equipment typically converts hard-to-see imagery in one or more of the visible and/or invisible spectral bands into imagery that is more clearly visible to the human eye. One example of an invisible spectral band is the Near Infrared Range (NIR), which begins at a wavelength (λ) of around 0.65 μm or slightly longer and continues to longer wavelengths. Another example of an invisible spectral band is the Far Infrared Range (FIR), which begins at around 1.00 μm or slightly longer and continues to longer wavelengths.
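For orientation, the band boundaries quoted above can be captured in a small lookup. This is an illustrative sketch using the approximate cutoffs from this text only; real sensor bands are not this crisp, and the names below are not from the patent:

```python
# Approximate band edges as described above (values taken from this text,
# not from a standards body; real sensors blur these boundaries).
VISIBLE_UM = (0.40, 0.65)   # violet end to red end
NIR_START_UM = 0.65         # Near Infrared begins around here
FIR_START_UM = 1.00         # Far Infrared begins around here (per this text)

def classify_wavelength(wavelength_um: float) -> str:
    """Name the spectral band a wavelength (in micrometers) falls into."""
    if wavelength_um < VISIBLE_UM[0]:
        return "ultraviolet"
    if wavelength_um < NIR_START_UM:
        return "visible"
    if wavelength_um < FIR_START_UM:
        return "near infrared (NIR)"
    return "far infrared (FIR)"

print(classify_wavelength(0.55))  # visible
print(classify_wavelength(0.85))  # near infrared (NIR)
```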
LLLTV's usually amplify photon energy in one or more of the visible spectral bands (whose λ is between around 0.4 μm {violet end} and 0.65 μm {red end}) and sometimes also downshift and amplify energy in the low NIR band so as to make imagery within those bands more visible to the human eye. The exact bands or sub-bands of operation may vary depending on specific missions, which can be of a law-enforcement nature, a military nature, or another nature. By way of example, a given kind of law-enforcement or agricultural mission might include seeking out certain types of vegetation whose chlorophyll reflects or absorbs with a specific spectral signature. In such a case, the LLLTV sensor response might be controlled to peak in correlation with spectral signature peaks of the targeted vegetation, so that searchers can spot the targeted vegetation more easily.
NVG's usually operate in the NIR band and produce visible imagery by downshifting wavelength and amplifying. The exact bands or sub-bands of operation may vary depending on specific missions. For example, if the mission includes seeking out certain types of objects that are highly reflective of moonlight, the NVG response might be controlled to peak in correlation with spectral signature peaks of the targeted objects.
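Both the LLLTV and NVG tuning described above amount to maximizing the overlap between a sensor's spectral response curve and a target's spectral signature. A minimal sketch of that overlap computation follows; the sample curves and names are made-up illustrations, not measured data or the patent's method:

```python
# Effective sensed signal is approximately the sum over sampled wavelengths
# of sensor_response(lambda) * target_signature(lambda), i.e. a discrete
# approximation of the response/signature overlap integral.
wavelengths_um = [0.50, 0.55, 0.60, 0.65, 0.70, 0.75]

chlorophyll_signature = [0.10, 0.30, 0.20, 0.15, 0.80, 0.90]  # strong NIR "red edge"
sensor_a_response     = [0.90, 0.80, 0.60, 0.40, 0.20, 0.10]  # peaks in the visible
sensor_b_response     = [0.10, 0.20, 0.30, 0.50, 0.90, 0.95]  # peaks in the NIR

def effective_signal(response, signature):
    """Overlap of a sensor response curve with a target spectral signature."""
    return sum(r * s for r, s in zip(response, signature))

# Sensor B, whose response peaks where the signature peaks, sees more signal.
print(effective_signal(sensor_a_response, chlorophyll_signature))  # ~0.76
print(effective_signal(sensor_b_response, chlorophyll_signature))  # ~1.78
```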
Forward Looking InfraRed (FLIR) sensors are another example of vision-augmenting equipment. FLIR's usually operate in a broadband mode that extends across the FIR and NIR bands, producing visible imagery by downshifting wavelength and amplifying. The exact bands or sub-bands of operation, and the filtering functions, may vary depending on specific missions. For example, if the mission includes seeking out certain types of very-hot objects that emit black body radiation, the FLIR response might be controlled to peak in correlation with black body emission peaks of the targeted, high-temperature objects.
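The black-body matching mentioned above can be made concrete with Wien's displacement law, a standard physics result (not a method claimed by the patent) that locates the emission peak of a black body at a given temperature:

```python
# Wien's displacement law: lambda_peak = b / T, with b ~ 2898 um*K.
WIEN_B_UM_K = 2898.0

def blackbody_peak_um(temperature_k: float) -> float:
    """Wavelength (micrometers) of peak black-body emission at temperature T."""
    return WIEN_B_UM_K / temperature_k

# A hot engine block near 600 K emits most strongly around 4.8 um, well into
# the infrared a broadband FLIR covers; a much hotter flare peaks near 1.9 um.
print(f"{blackbody_peak_um(600.0):.1f} um")   # ~4.8 um
print(f"{blackbody_peak_um(1500.0):.1f} um")  # ~1.9 um
```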
Generally speaking, vision-augmenting equipment uses one or more specially-designed sensors to detect radiations that are either invisible to the human eye or difficult for the human eye to see as distinct imagery. Electronic, optoelectronic, or other means are provided for converting and/or amplifying the imagery of the sensor-detected radiations into imagery that is clearly visible to the human eye. The sensor-detected radiations may be in the form of reflected energies (e.g., moonlight reflected from IR-reflecting materials), directly radiated energies (e.g., IR radiated from a hot automobile engine or from flares), or combinations of both.
Specific uses of vision-augmenting equipment (e.g., specific missions) may occur in combination with specific lighting environments (e.g., moonlight, cloud reflection, etc.) and specific mixes of terrain materials (e.g., IR-reflecting features, UV-absorbing features, etc.). Each unique kind of mission may call for a correspondingly unique mix of sensors, where each sensor may have a respective, application-specific set of frequency responses and/or sensitivities.
Because vision-augmenting equipment often display imagery which is different from that which a pilot may be accustomed to seeing naturally with his or her own eyes, it is often desirable to train pilots ahead of time so that the pilots can correctly interpret what they see with the vision-augmenting equipment when actual flights occur.
One method of training pilots is to let them practice flying over a given terrain, first while looking out-of-window from the cockpit in daylight conditions, and second by repeating the daytime run under night conditions while using the vision-augmenting equipment to view the same terrain.
A more cost-effective method subjects pilots to computer simulations of both Out-of-Window Perception Of Terrain (OwPOT) and Sensor-assisted Perception Of Terrain (SaPOT) under different conditions of lighting, given different terrain topologies, different terrain materials, different sensor types, and so forth.
A conventional OwPOT/SaPOT simulating system will typically have a very large farm of disk drives (or bank of tape drives) that stores approximately 1 TeraByte (10¹² bytes) of digitized terrain information or more.
This ‘terrain map’ information is usually formatted as a collection of three-dimensional polygons (generally triangles) within an x, y, and z Cartesian frame of reference that represents a real-world region having dimensions of approximately 1000 kilometers (1000 Km) by 1000 Km (1 Mm) or more. The distance between adjacent vertices of each polygon will typically correspond to a real-world distance of about 100 meters (100 m); this inter-vertex distance is about 10⁻⁴ times the side dimension of the overall terrain map.
The exemplary 1 Mm × 1 Mm area given above for the terrain map may represent a maximal mission range for a given type of aircraft. The real-world area that is represented by a given terrain map can be greater or smaller, varying with mission specifics. Similarly, the exemplary 100 m inter-vertex distance given above may represent an empirically selected resolution setting. The inter-vertex distance can be greater or smaller, varying with different practical map-creating techniques, different mission objectives, and the differing capabilities of OwPOT/SaPOT simulating systems to handle maps of higher or lower resolution.
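As a quick check on the exemplary figures above, the 10⁻⁴ spacing-to-map-side ratio, the vertex count, and the implied per-vertex storage budget all follow from simple arithmetic. A back-of-envelope sketch, assuming a uniform grid (which the text does not require):

```python
MAP_SIDE_M = 1_000_000    # 1 Mm exemplary map side
SPACING_M = 100           # ~100 m between adjacent vertices
STORE_BYTES = 10**12      # ~1 TB disk farm

print(f"{SPACING_M / MAP_SIDE_M:.0e}")        # 1e-04, the ratio cited above
verts = (MAP_SIDE_M // SPACING_M + 1) ** 2    # ~10^8 vertices on a uniform grid
print(f"~{verts:.1e} vertices")               # ~1.0e+08
print(f"~{STORE_BYTES / verts:,.0f} bytes/vertex budget")  # ~10,000 bytes
```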
In a conventional OwPOT/SaPOT simulating system, the following ‘per-vertex’ items of digitized information will usually be stored in the terrain map for each vertex of each polygon: (a) the x, y, z spatial coordinates of the vertex point; (b) a three-dimensional normal vector for defining the slope of a material surface at only the vertex point; (c) color values for defining a perceived, reflected or radiant color of the surface material at the vertex point; and (d) texture values for use in adding texture to the interior of the polygon during rasterization.
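A minimal sketch of the per-vertex record implied by items (a) through (d); the field names and flat layout are illustrative assumptions, not the patent's actual storage format:

```python
from dataclasses import dataclass

@dataclass
class TerrainVertex:
    # (a) spatial coordinates of the vertex point
    x: float
    y: float
    z: float
    # (b) surface-normal vector defining the surface slope at the vertex point
    nx: float
    ny: float
    nz: float
    # (c) perceived, reflected, or radiant color of the surface material
    r: float
    g: float
    b: float
    # (d) texture values used to fill the polygon interior during rasterization
    u: float
    v: float
```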
The disk farm …
