System for automatically generating database of objects of...

Image analysis – Applications – Vehicle or traffic control

Reexamination Certificate


Details

US Classification: C382S156000
Type: Reexamination Certificate
Status: Active
Patent Number: 06363161

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates generally to the field of automated image analysis and identification. More specifically, the present invention relates to a system for automatically generating a database of images and positions of objects of interest identified from video images that depict roadside scenes, where the video images are recorded from a vehicle navigating a road and an on-board system stores location metrics for the video images.
BACKGROUND OF THE INVENTION
The recognition of objects of interest, such as road sign images, in a set of video images has been developed primarily for use in connection with automated vehicle navigation systems. By recognizing such images from the real time output of a forward facing video camera on board a vehicle, instructions and navigational assistance can be provided to a driver or an entirely automated vehicle navigation system can be developed. Unlike object recognition systems that are deployed in a controlled environment with known lighting and background conditions, road sign recognition systems must be able to perform under a wide range of environmental and lighting conditions. In addition, the images must be recognized quickly and in real time so that the information can be immediately available for use. Fortunately, most navigational road sign images are made of regular shapes and have colors that are known and conform to certain standards and combinations. All of these factors have resulted in the use of template matching and color pair matching as the most common image analysis techniques for quickly isolating road sign images from the real time video signal.
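For illustration only, and not as the patent's own method, the sketch below shows the kind of normalized template matching such conventional systems rely on; the template set, matching threshold, and function names are assumptions introduced here.

```python
# Illustrative sketch of conventional template matching for road-sign candidates.
# The template set, threshold, and function names are assumptions made for this
# example; they are not taken from the patent text.
import cv2
import numpy as np

def find_sign_candidates(frame_bgr, templates, threshold=0.8):
    """Return (x, y, w, h, score) boxes where any template matches the frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    candidates = []
    for template_bgr in templates:
        template = cv2.cvtColor(template_bgr, cv2.COLOR_BGR2GRAY)
        # Normalized cross-correlation of the template against every frame position.
        scores = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
        ys, xs = np.where(scores >= threshold)
        h, w = template.shape
        for x, y in zip(xs, ys):
            candidates.append((int(x), int(y), w, h, float(scores[y, x])))
    return candidates
```

Because every added template must be correlated against the whole frame, the cost of this approach grows with the size of the template library, which is the real-time limitation discussed next.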
While existing template matching techniques can identify a limited subset of road signs that are particularly helpful for driving instruction and navigational information, the need to perform these operations in real time limits the overall number and types of road signs that such a technique can identify. The more template patterns and color pairs that are added to a template matching technique, the longer it takes to process each video image. Consequently, many types of road signs that convey non-navigational information, such as parking information, are excluded from the template matching process in order to reduce the total number of combinations that must be evaluated.
U.S. Pat. No. 5,633,946 describes a method and apparatus for collecting and processing visual and spatial information from a moving platform. This patent describes a vehicle with multiple video cameras oriented in different directions that record road scenes as the vehicle is driven along a road. Positional information from a Global Positioning System (GPS) receiver and an inertial navigation system (INS) in the vehicle is simultaneously recorded with the video images. Frames of the video signal from multiple cameras are interleaved together and each frame is recorded on a video tape with a time code along with the current spatial position information provided by the GPS receiver and the INS. The video images are then analyzed with a centerline offset process to create street segments that represent a sequence of video images associated with a given segment of a street or road. The patent describes a number of applications for using the street segment information which can include the creation and update of address ranges, the integration of address attribute information, the creation and maintenance of street network topologies, the collection of vehicle routing information, the creation and maintenance of map boundary polygon topologies and attributes, and the accurate location of point features and their attributes. In each of these cases, however, the video images in a given street segment must be examined visually by an operator in order to extract relevant attribute information.
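A minimal sketch of the kind of per-frame record implied by such simultaneous recording is given below; the field names and types are assumptions made for illustration and are not taken from U.S. Pat. No. 5,633,946.

```python
# Hypothetical per-frame record pairing an interleaved video frame with the
# positional data captured at recording time. Field names are assumptions.
from dataclasses import dataclass

@dataclass
class FrameRecord:
    camera_id: int        # which of the multiple cameras produced the frame
    frame_number: int     # frame index on the recording medium
    time_code: str        # tape time code, e.g. "01:23:45:12"
    latitude: float       # degrees, from the GPS receiver
    longitude: float      # degrees, from the GPS receiver
    altitude_m: float     # meters, from the GPS receiver
    heading_deg: float    # degrees, from the inertial navigation system (INS)
```

Under this sketch, a street segment would simply be an ordered sequence of such records covering a given stretch of road.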
One problem that is encountered when analyzing images recorded by a moving vehicle to identify objects of interest is the massive amount of video data that must be reviewed. As described in U.S. Pat. No. 5,633,946, multiple video cameras are preferably used to capture roadside images as the moving vehicle travels along a road or street. In a preferred embodiment of that patent, video images from eight separate cameras are combined and stored on a single video tape at a combined constant rate of about 30 frames per second, or about 4 frames per second per camera. The result is a tremendous amount of video data generated by the moving vehicle that must be analyzed in order to identify potential objects of interest. This massive amount of video data overwhelms conventional techniques for identifying objects of interest, such as road sign images, that have been used to date in connection with vehicle navigation systems. The problem is only increased by the desire to further increase the pixel resolution of the video images in order to enhance their quality. It is further compounded by the fact that the rate of data capture of the video images effectively dictates the speed at which the moving vehicle can travel while recording the images. The faster the vehicle goes, the more desirable it is to increase the rate of data capture, thereby exacerbating the problem of needing to evaluate massive amounts of data.
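To make the scale concrete, here is a back-of-the-envelope calculation under assumed values; only the eight-camera count and the combined 30 frames-per-second rate come from the passage above, while the vehicle speed is an illustrative assumption.

```python
# Back-of-the-envelope arithmetic for the capture scenario described above.
# The vehicle speed is an assumed value; the camera count and combined frame
# rate come from the cited patent's preferred embodiment.
CAMERAS = 8
COMBINED_FPS = 30.0
per_camera_fps = COMBINED_FPS / CAMERAS          # ~3.75 frames per second per camera

speed_mph = 30.0                                 # assumed survey speed
feet_per_second = speed_mph * 5280 / 3600        # 44 ft/s at 30 mph
feet_between_frames = feet_per_second / per_camera_fps

print(f"{per_camera_fps:.2f} frames/s per camera; "
      f"one frame roughly every {feet_between_frames:.1f} ft of travel per camera")
```

Driving faster spreads any one camera's frames farther apart along the road, which is why a higher capture rate becomes desirable as speed increases.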
It would be desirable to provide a system for automatically generating a database of images and positions of objects of interest identified from video images that depict roadside scenes, where the video images are recorded from a vehicle navigating a road and an on-board system stores location metrics for the video images.
SUMMARY OF THE INVENTION
The exemplary embodiment described, enabled, and taught herein is directed to the task of building a database of road signs by type, location, orientation, and condition by processing vast amounts of video image frame data. The image frame data depict roadside scenes as recorded from a vehicle navigating a road. By utilizing differentiable characteristics, the portions of each image frame that depict a road sign are stored as highly compressed bitmapped files, each linked to a discrete data structure containing one or more of the following memory fields: sign type, relative or absolute location of each sign, a reference value for the recording camera, and a reference value for the original recorded frame number for the bitmap of each recognized sign. The location data are derived from at least two depictions of a single sign using techniques of triangulation, correlation, or estimation. Output signal sets resulting from application of the present method to a segment of image frames can thus include a compendium of data about each sign and bitmap records of each sign as recorded by a camera. Records are created for image portions that possess (and exhibit) detectable, unique, differentiable characteristics relative to the majority of other image portions of a digitized image frame. In the exemplary sign-finding embodiment herein, these differentiable characteristics are coined “sign-ness.” Based on said differentiable characteristics, or sign-ness, information regarding the type, classification, condition (via the linked bitmap image portion), and/or location of road signs (and the image portions depicting said road signs) is rapidly extracted from the image frames. Image frames that do not contain an appreciable level of sign-ness are immediately discarded.
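The following sketch illustrates two pieces of the summary above: a per-sign record holding the listed memory fields, and the triangulation option for deriving a sign location from two depictions. The class and function names, the local coordinate convention, and the bearing-from-pixel model are assumptions made for illustration; the patent equally allows correlation or estimation in place of triangulation.

```python
# Hedged sketch: a per-sign record with the memory fields listed above, plus a
# simple two-view triangulation of the sign's position. Names, the local
# east/north coordinate frame, and the pinhole bearing model are assumptions.
import math
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class SignRecord:
    sign_type: str                     # classification of the recognized sign
    location: Tuple[float, float]      # relative or absolute position of the sign
    camera_ref: int                    # reference value for the recording camera
    frame_ref: int                     # original recorded frame number for the bitmap
    bitmap_path: str                   # highly compressed bitmapped file of the sign

def bearing_to_sign(camera_heading_deg: float, pixel_x: float,
                    image_width: int, horiz_fov_deg: float) -> float:
    """Absolute bearing (deg, clockwise from north) to a sign centered at pixel_x."""
    offset = (pixel_x / image_width - 0.5) * horiz_fov_deg
    return camera_heading_deg + offset

def triangulate(p1, bearing1_deg, p2, bearing2_deg) -> Optional[Tuple[float, float]]:
    """Intersect two bearing rays cast from positions p1 and p2 (local metres, x east / y north)."""
    d1 = (math.sin(math.radians(bearing1_deg)), math.cos(math.radians(bearing1_deg)))
    d2 = (math.sin(math.radians(bearing2_deg)), math.cos(math.radians(bearing2_deg)))
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1 (Cramer's rule on a 2x2 system).
    denom = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    if abs(denom) < 1e-9:
        return None  # parallel bearings: two such views cannot fix the position
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (dx * (-d2[1]) - dy * (-d2[0])) / denom
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Example: two frames of the same sign taken 10 m apart along an eastbound road.
sign_xy = triangulate((0.0, 0.0), 45.0, (10.0, 0.0), 315.0)   # -> (5.0, 5.0)
```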
Differentiable characteristics of said objects include convexity/symmetry; lack of 3D volume; number of sides; angles formed at the corners of signs; luminescence or lumina values, which represent an illumination-tolerant response in the L*u*v* or LCH color spaces (typically following a transforming step from a first color space such as RGB); relationships among edges extracted from portions of image frames; shape; texture; and/or other differentiable characteristics of one or more objects of interest versus background objects. The differentiable
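As a rough illustration of how a few of these characteristics might be measured for a candidate image portion, the sketch below computes a side count, a convexity flag, and an illumination-tolerant chroma summary after a transform from RGB into L*u*v*. The edge thresholds, polygon tolerance, and the particular feature set are assumptions for illustration, not the patent's scoring method.

```python
# Hedged sketch of measuring a few "sign-ness" characteristics for an image
# portion: number of sides, convexity, and chroma in the L*u*v* color space.
# Canny thresholds and the polygon-approximation tolerance are assumed values.
import cv2
import numpy as np

def signness_features(region_bgr):
    luv = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2Luv)      # illumination-tolerant color space
    gray = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    outline = max(contours, key=cv2.contourArea)            # dominant outline in the region
    poly = cv2.approxPolyDP(outline, 0.02 * cv2.arcLength(outline, True), True)
    return {
        "num_sides": len(poly),                             # 3, 4, or 8 sides suggest common sign shapes
        "is_convex": bool(cv2.isContourConvex(poly)),       # road signs present convex outlines
        "mean_uv": luv[..., 1:].reshape(-1, 2).mean(axis=0).tolist(),  # lighting-tolerant chroma
    }
```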
