Classification: Image analysis – Pattern recognition – Classification
Patent Type: Reexamination Certificate (active)
Patent Number: 06801662
Filed: 2000-10-10
Date of Patent: 2004-10-05
Examiner: Mariam, Daniel (Department: 2621)
U.S. Classes: C382S103000, C382S104000, C382S284000, C701S045000
FIELD OF THE INVENTION
This invention relates to object detection systems and methods. More specifically, the present invention relates to object detection systems and methods for detection and classification of objects for use in control of vehicle systems, such as air bag deployment systems, and other systems.
BACKGROUND OF THE INVENTION
Virtually all modern passenger vehicles have air bag deployment systems. The earliest versions of air bag deployment systems provided only front seat driver-side air bag deployment, but later versions included front seat passenger-side deployment. The latest versions of deployment systems now include side air bag deployment. Future air bag deployment systems will likely include protection for passengers in rear seats. Current air bag deployment systems generally deploy whenever there is a significant vehicle impact, and will deploy even if the area to be protected is not occupied or is occupied by someone not likely to be protected by the deploying air bag.
While thousands of lives have been saved by air bags, a large number of people have been injured and a few have been killed by the deploying air bag. Many of these injuries and deaths have been caused by the vehicle occupant being too close to the air bag when it deploys. Children and small adults are particularly susceptible to injuries from air bags. Also, an infant in a rear-facing infant seat placed on the right front passenger seat is in serious danger of injury if the passenger-side air bag deploys. The United States Government has recognized this danger and has mandated that car companies provide their customers with the ability to disable the passenger-side air bag. Of course, when the air bag is disabled, full-size adults are provided with no air bag protection on the passenger side.
Therefore, there exists a need to detect the presence of a vehicle occupant within an area protected by an air bag. Additionally, if an occupant is present, the nature of the occupant must be determined so that air bag deployment can be controlled in a fashion so as to not injure the occupant.
Various mechanisms have been disclosed for occupant sensing. Breed et al. in U.S. Pat. No. 5,845,000, issued Dec. 1, 1998, describe a system to identify, locate, and monitor occupants in the passenger compartment of a motor vehicle. The system uses electromagnetic sensors to detect and image vehicle occupants. Breed et al. suggest that a trainable pattern recognition technology be used to process the image data to classify the occupants of a vehicle and make decisions as to the deployment of air bags. Breed et al. describe training the pattern recognition system with over one thousand experiments before the system is sufficiently trained to recognize various vehicle occupant states. The system also appears to rely solely upon recognition of static patterns. Such a system, even after training, may be subject to confusion between certain occupant types and positions because the richness of the occupant representation is limited. It may produce ambiguous results, for example, when the occupant moves his hand toward the instrument panel.
A sensor fusion approach for vehicle occupancy is disclosed by Corrado et al. in U.S. Pat. No. 6,026,340, issued Feb. 15, 2000. In Corrado, data from various sensors is combined in a microprocessor to produce a vehicle occupancy state output. Corrado discloses an embodiment where passive thermal signature data and active acoustic distance data are combined and processed to determine various vehicle occupancy states and to decide whether an air bag should be deployed. The system disclosed by Corrado does detect and process motion data as part of its sensor processing, thus providing additional data upon which air bag deployment decisions can be based. However, Corrado discloses multiple sensors to capture the entire passenger volume for the collection of vehicle occupancy data, increasing the complexity and decreasing the reliability of the system. Also, the resolution of the sensors at infrared and ultrasonic frequencies is limited, which increases the possibility that the system may incorrectly detect an occupancy state or require additional time to make an air bag deployment decision.
Accordingly, there exists a need in the art for a fast and reliable system for detecting and recognizing occupants in vehicles for use in conjunction with vehicle air bag deployment systems. There is also a need for a system that can meet the aforementioned requirements with a sensor system that is a cost-effective component of the vehicle.
SUMMARY OF THE INVENTION
It is an object of the present invention to provide a fast and reliable system for detecting and tracking objects within a specified area that can be adapted for detecting and recognizing occupants within a vehicle to determine whether an air bag deployment system should be triggered. It is a further object of the present invention to provide for the use of Commercial-Off-The-Shelf (COTS) components to lower the cost of deploying embodiments of the invention. It is still another object of the present invention to make the occupancy determination using multiple types of information extracted from the same set of sensors, thereby further reducing that cost.
The present invention provides a vision-based system for automatically detecting the position of objects (such as close to or away from the instrument panel) as well as recognizing the type of object (such as an adult, child, or empty seat). The method and system of the present invention provide this capability by recognizing the occupant's type and position by combining different types of information extracted from a video stream generated by an imaging sensor, such as a solid-state CCD or CMOS vision sensor. The vision sensors of the present invention may view a scene that is lit only with ambient light, or additional light may be provided to adequately light the viewed scene. The different types of information extracted from the video stream are used to provide separate confidences as to occupant status. The present invention provides a sensor fusion architecture which optimally combines the confidence determinations made by a set of classifiers operating separately on edge, motion, and range information. The final classification decision is more accurate than that achieved by the classifiers separately.
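The combination of separate classifier confidences described above might be sketched as a weighted average; the class labels, confidence values, and reliability weights here are illustrative assumptions, not taken from the patent, whose actual fusion architecture may differ:

```python
import numpy as np

# Hypothetical occupant classes for the example.
CLASSES = ["adult", "child", "infant_seat", "empty"]

def fuse_confidences(confidences, weights=None):
    """Combine class-confidence vectors from several classifiers
    (e.g. edge-, motion-, and range-based) into one decision.

    confidences: dict mapping feature name -> list of per-class confidences.
    weights: optional dict of per-classifier reliability weights.
    Returns (winning class label, fused confidence vector).
    """
    names = list(confidences)
    w = np.array([1.0 if weights is None else weights.get(n, 1.0)
                  for n in names])
    w = w / w.sum()                      # normalize reliability weights
    mat = np.array([confidences[n] for n in names])
    fused = w @ mat                      # weighted average of confidences
    fused = fused / fused.sum()          # renormalize to sum to 1
    return CLASSES[int(fused.argmax())], fused

# Example: edge and motion classifiers disagree; range breaks the tie.
conf = {
    "edge":   [0.5, 0.3, 0.1, 0.1],
    "motion": [0.3, 0.5, 0.1, 0.1],
    "range":  [0.6, 0.2, 0.1, 0.1],
}
label, fused = fuse_confidences(conf)
```

A weighted average is only one of many fusion rules; the patent claims an optimal combination, which could instead be learned from training data.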
An embodiment of the present invention provides a method of object detection comprising the steps of: capturing images of an area occupied by at least one object; extracting image features from the images; classifying the image features to produce object class confidence data; and performing data fusion on the object class confidence data to produce a detected object estimate. Classifying the image features may be accomplished through the use of classification algorithms, such as a C5 decision tree, a Nonlinear Discriminant Analysis network, a Fuzzy Aggregation Network, or a Hausdorff template matching process.
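The four claimed steps (capture, extract, classify, fuse) could be sketched as a minimal pipeline over successive frames; the feature extractors below are deliberately crude stand-ins, and the `classify` and `fuse` callables are hypothetical interfaces rather than the patent's classifiers:

```python
import numpy as np

def extract_edge_features(image):
    """Crude edge map via finite differences (a stand-in for the
    patent's edge feature extraction, which is not specified here)."""
    gx = np.abs(np.diff(image.astype(float), axis=1))
    gy = np.abs(np.diff(image.astype(float), axis=0))
    return gx[:-1, :] + gy[:, :-1]

def extract_motion_features(prev_image, image):
    """Frame difference as a simple motion cue."""
    return np.abs(image.astype(float) - prev_image.astype(float))

def detect_object(frames, classify, fuse):
    """Capture -> extract -> classify -> fuse, per the claimed steps.
    `classify(name, features)` returns class confidences for one
    feature type; `fuse(confidences)` produces the object estimate."""
    estimates = []
    for prev, cur in zip(frames, frames[1:]):
        features = {
            "edge": extract_edge_features(cur),
            "motion": extract_motion_features(prev, cur),
        }
        confidences = {k: classify(k, v) for k, v in features.items()}
        estimates.append(fuse(confidences))
    return estimates
```

In practice the `classify` step would be one of the algorithms named above (C5 decision tree, Nonlinear Discriminant Analysis network, etc.) trained per feature type.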
Another embodiment of the present invention provides a system for classifying objects that comprises: means for capturing images of an area occupied by at least one object; means for extracting features from the images to provide feature data; means for classifying object status based on the feature data to produce object class confidences; and means for processing the object class confidences to produce system output controls. Means for capturing images of an area may comprise CMOS or CCD cameras, or other devices known in the art that allow digital images of a viewed area to be captured. Means for extracting features may comprise algorithms that process the digital images to allow edge features, motion features, or other features of the viewed images to be generated. Means for classifying object status may be implemented through the use of classification algorithms, such as a C5 decision tree, a Nonlinear Discriminant Analysis network, a Fuzzy Aggregation Network, or a Hausdorff template matching process.
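As one illustration of the Hausdorff template matching mentioned above, the symmetric Hausdorff distance between a template's edge points and observed edge points can be computed as below; the point sets are invented for the example, and a real matcher would also search over template translations and scales:

```python
import numpy as np

def directed_hausdorff(A, B):
    """Directed Hausdorff distance from point set A to point set B:
    max over a in A of (min over b in B of ||a - b||)."""
    # Pairwise Euclidean distances, shape (|A|, |B|), via broadcasting.
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return d.min(axis=1).max()

def hausdorff_match(template_pts, image_pts):
    """Symmetric Hausdorff distance; small values mean the edge
    template matches the observed edge points well."""
    return max(directed_hausdorff(template_pts, image_pts),
               directed_hausdorff(image_pts, template_pts))

# Invented example: observed edges lie close to the template.
tmpl = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
obs = np.array([[0.1, 0.0], [1.0, 0.1], [0.0, 0.9]])
dist = hausdorff_match(tmpl, obs)
```

Thresholding `dist` (or mapping it to a confidence) would yield one of the per-classifier confidence inputs to the fusion stage.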
Inventors: Boscolo, Riccardo; Medasani, Swarup S.; Owechko, Yuri; Srinivasa, Narayan
Assignee: HRL Laboratories LLC
Attorney/Agent: Ladas & Parry
Examiner: Mariam, Daniel
Title: Sensor fusion architecture for vision-based occupant detection