Real-time detailed scene convolver
Patent number: 06330373
Filed: 1999-03-05
Issued: 2001-12-11
Examiner: Boudreau, Leo (Department: 2723)
Classification: Image analysis – Image transformation or preprocessing – Convolution
U.S. classes: C708S420000, C708S315000, C702S020000, C382S103000
Type: Reexamination Certificate
Status: active
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention pertains generally to scene simulation systems and methods, and more particularly to a real time detailed scene convolver system. The system allows matching of complex imaged scenes to weapon system optics to generate a realistic detailed response that may be presented to weapon signal processing electronics for testing.
2. Background
Various weapon systems have been developed that employ advanced high-resolution electro-optical/infrared (EO/IR) raster-scan image-based seeker systems. These image-based weapon systems typically utilize advanced signal processing algorithms to increase the probability of target detection and target recognition, with the ultimate goal of accurate target tracking and precise aim-point selection for maximum probability of kill and minimum collateral damage. Validation of such signal processing algorithms has traditionally been carried out through free flight, captive carry and static field tests of the image-based weapon systems, followed by lab analysis, modification of the algorithms, and then subsequent field tests followed by further analysis and modifications. This process is generally costly and time-intensive. Obtaining the correct target and weather conditions can add further cost and delay to this test/modify/re-test cycle, a burden that does not arise in a simulated or “virtual” environment.
Recently, the development and testing of signal processing algorithms for EO/IR image-based weapon system electronics has been facilitated by the creation of digital video-based detailed scene injection (DSI). This technique allows realistic testing of image-based weapon system signal processing in the loop (SPIL) electronics in a laboratory environment with complex flight path, target, and weather condition scenarios. Detailed scene injection provides for the use of spatially, temporally and spectrally correct images rendered using a video-generating computer, such as the Silicon Graphics ONYX II Supercomputer. These images can then be delivered to the signal processing electronics of an image-based raster-scanned weapon system for dynamic “real-time” testing.
The use of digital video injection has been limited to imaging weapon systems that employ raster scanning. However, many weapon systems employ EO/IR detectors with scanning techniques other than raster scan, such as frequency modulated (FM) conically-scanned reticle, amplitude modulated (AM) center (or outer-nulled) spin-scanned reticle and rosette-scanned detectors. With systems employing these non-raster scanning detectors, digital video injection cannot provide a sufficiently high-resolution, pre-stored, complex image in “real-time” to present to the signal processing electronics for processing and system testing.
Accordingly, there is a need for a real time detailed scene convolver system and method that can be used with detectors that employ other than raster scanning, and that generates correct, complex, detailed output in real time. A preferred embodiment of the present invention satisfies these needs, as well as others, and generally overcomes the deficiencies found in the background art.
SUMMARY OF THE INVENTION
A preferred embodiment of the present invention provides a system and method that captures realistic detailed responses of a weapon system's optical detector for transfer to that weapon's signal processing electronics in real time. A preferred embodiment of the present invention interfaces with a graphics supercomputer that generates a “virtual” or synthetic scene. A preferred embodiment of the present invention utilizes a “map” of an impulse response signal that simulates the actual signal that would be provided by a weapon system's optical detector's response (consisting of a reticle and “raw” optics signal processing circuitry only, hereinafter detector) to a point source, i.e., a source smaller than the diffraction limit. This response map is overlaid, or superimposed, on the imaged scene provided by the graphics supercomputer, yielding a representation of where the imaged scene's pixels actually strike the detector. The imaged scene is composed of pixels, hereinafter identified as scene pixels. A value associated with the intensity of each scene pixel “striking,” or “hitting,” within a “response region” of the map is multiplied by a value assigned to the corresponding map pixel, a value associated with that map pixel's intensity; which map pixel is used in the multiplication is determined, at least in part, by the map pixel's location on the map. These individual products are summed to provide a single overall detector response value, or image, for a response region within the imaged scene. The process is made feasible in real time by pre-calculating detector responses and storing them in look-up tables or other suitable addressable memory. In general terms, a preferred embodiment of the present invention:
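The multiply-and-sum operation described above can be sketched as follows. This is an illustrative reading of the technique, not the patent's implementation: the function name, array layout, and the idea of indexing by the map's top-left corner are assumptions made for the example.

```python
import numpy as np

def detector_response(scene, response_map, row, col):
    """Weighted sum of the scene pixels falling under the response map.

    The response map (the detector's point-source impulse response) is
    superimposed on the scene at base address (row, col); each scene
    pixel "hit" by the map is multiplied by the map pixel at the same
    relative location, and the products are summed into one value.
    """
    h, w = response_map.shape
    window = scene[row:row + h, col:col + w]   # scene pixels within the response region
    return float(np.sum(window * response_map))  # single overall detector response

# Toy example: 4x4 scene of intensities, 2x2 uniform response region.
scene = np.arange(16.0).reshape(4, 4)
response_map = np.full((2, 2), 0.25)
print(detector_response(scene, response_map, 1, 1))  # → 7.5
```

In practice the patent pre-calculates such responses and stores them in look-up tables so the multiply-and-sum need not be redone at frame rate.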
generates a map as a configuration of pixels representing a detector's response signal that may be sub-divided into sections sized to accommodate a desired resolution, i.e., the addressable sections may be composed of small addressable sub-categories, often termed cells, or may be further sub-divided into addressable sub-cells to achieve enhanced resolution;
populates addressable memory, e.g., an X-Y look-up table, with numerical values representing predetermined addresses of the centers of each of the smallest addressable divisions (e.g., sub-cells) of maps of a detector's response to the imaged scenes that will be used in a planned simulation, as determined from a reference-based position;
receives a computer-generated complex imaged scene;
generates modified base addresses for selected pixels from the detector response maps;
selects base addresses from the complex imaged scene and the modified base addresses, where the base addresses may be modified as a result of an input from a gyro model;
generates a detector response output using the modified base addresses, the selected base addresses, and the complex imaged scene;
directs the detector response output to an external interface; and
moves the map of the detector's response signal over the complex imaged scene in real time, thereby “flying” the system in the lab to simulate an actual system in flight.
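The steps above can be sketched as a simulation loop: base addresses are precomputed into a table, optionally shifted by a gyro-model input, and the response map is "flown" across the scene. All names here are hypothetical, and the address table is simplified to map-corner positions rather than the patent's X-Y table of sub-cell centers.

```python
import numpy as np

def build_address_table(scene_shape, map_shape):
    """Precompute every valid (row, col) base address for the response map."""
    rows = scene_shape[0] - map_shape[0] + 1
    cols = scene_shape[1] - map_shape[1] + 1
    return [(r, c) for r in range(rows) for c in range(cols)]

def fly_map(scene, response_map, addresses, gyro_offset=(0, 0)):
    """One detector-response value per (possibly gyro-shifted) base address."""
    h, w = response_map.shape
    out = []
    for r, c in addresses:
        r += gyro_offset[0]             # modified base address from a gyro model
        c += gyro_offset[1]
        r = max(0, min(r, scene.shape[0] - h))   # clamp to the scene bounds
        c = max(0, min(c, scene.shape[1] - w))
        out.append(float(np.sum(scene[r:r + h, c:c + w] * response_map)))
    return out

scene = np.arange(16.0).reshape(4, 4)       # toy complex imaged scene
rmap = np.ones((2, 2))                      # toy detector response map
table = build_address_table(scene.shape, rmap.shape)   # 3 x 3 = 9 addresses
responses = fly_map(scene, rmap, table)
print(len(responses), responses[0])         # → 9 10.0
```

Precomputing the address table, and in the patent the detector responses themselves, is what moves the expensive work out of the real-time loop.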
More specifically, the output generated by a preferred embodiment of the present invention replaces the output from the detector of a weapon to allow simulated operation of that weapon in a synthesized scenario. A preferred embodiment of the present invention uses an actual threat signal processor-in-the-loop (T-SPIL) with detailed scene injection (DSI) in a realistic simulation of multiple types of scanning detectors incorporated in weapon systems. A preferred embodiment of the present invention incorporates surface-to-air, air-to-surface or air-to-air threat weapon signal processors (including tracker, counter-countermeasure and guidance circuitry). These processors are coupled with the generation and injection of detailed imaged scenes to create a realistic threat analysis capability that simulates actual weapons in free flight. The T-SPIL processes inputs from processors that are linked to digital models of a weapon system's detectors, as well as a model of the weapon system's gyros and, optionally, its airframe. The combination of inputs produces a highly realistic and credible simulation of the weapon system's responses during dynamic simulations of realistic scenarios. The DSI provides the system with composite computer-generated detailed imaged scenes (>16,000 pixels) that may include targets, countermeasures and backgrounds.
The advantages of using T-SPIL in combination with DSI simulation include:
generating the realistic free flight of a weapon while using actual weapon signal processors;
operating in real time;
providing fully detailed targets, countermeasures and backgrounds;
facilitating dynamic behavior of all objects in an
Channer John D.
Heydlauff Bruce M.
McKinney Dennis G.
Baugher Jr. Earl H.
Bokar Greg M.
Boudreau Leo
Kassa Yosef
Serventi Anthony N.