Lane detection system and apparatus
Patent number: 06819779
Type: Reexamination Certificate
Status: Active
Filed: 2000-11-22
Issued: 2004-11-16
Examiner: Johns, Andrew W. (Department: 2621)
Classification: Image analysis – Applications – Vehicle or traffic control
ABSTRACT:
FIELD OF THE INVENTION
The present invention relates to road navigation systems and more particularly to road lane detection systems.
BACKGROUND OF THE INVENTION
Automated road navigation systems provide various levels of assistance to automobile drivers to increase safety and reduce driving effort. Road navigation systems use various types of sensors to determine the type, location and relative velocity of obstacles in a vehicle's path or vicinity. The sensors gather relevant information about a vehicle's surrounding environment. The information from the sensors must be rapidly processed to determine whether the system should influence control of the vehicle or alert the driver to a dangerous condition.
Various systems have been developed which use active techniques such as radar, lasers or ultrasonic transceivers to gather information about a vehicle's surroundings for automated navigation systems. For example, various adaptive cruise control methods use Doppler radar, lasers or ultrasonic transceivers to determine the distance from a subject vehicle to the vehicle traveling ahead of it along a road. These systems are classified as “active” systems because they typically emit some form of energy and detect the reflected energy or other predefined energy emitters. Active systems are susceptible to interference from other similar active systems operating nearby. Emitted energy can be reflected and scattered by surrounding objects, thereby introducing errors into the navigation system. For example, the space between a vehicle and the ground can act as a waveguide for certain active signals, distorting any reflected signal and providing a further source of error. Active systems also consume substantial electrical power and are not well suited for adaptation to the existing infrastructure, i.e., the national highway system.
The most relevant information for an automated navigation system is typically the same information used by human drivers, that is, visual, audible and acceleration measurement data. This is because the road and highway infrastructure was designed and constructed to provide clear information and guidance to human drivers who can naturally detect such data.
Passive systems, by contrast, detect energy without first emitting a signal, i.e., they view reflected or transmitted light. Certain passive systems, such as optical systems, are better suited to sensing signals that were designed for a human observer because they more closely emulate human vision. Optical detection systems typically perform extensive data processing, roughly emulating human visual processing, in order to extract useful information from the incoming optical signal. Automated navigation systems using optical sensors must solve a variety of processing tasks to extract information from input data, interpret that information and trigger an event such as a vehicle control input or a warning signal to an operator or other downstream receivers.
Vehicle navigation systems comprise a variety of interdependent data acquisition and processing tasks that provide various levels of output. Road navigation systems may, for example, include adaptive cruise control, obstacle detection and avoidance, multi-vehicle tracking and lane detection. Adaptive cruise control may provide a warning signal or trigger operations that adjust a vehicle's speed to maintain separation between a leading and a following vehicle. Obstacle detection and avoidance may alert a driver or alter a vehicle's course to avoid obstacles or obstructions in the road. Multi-vehicle tracking may provide information regarding traffic in other lanes, which may be useful for any number of navigation or collision avoidance tasks.
Lane detection is loosely defined as determining where the left and right lanes are located relative to a vehicle. This is achieved by finding lane markers (portions of lanes or complete lanes) and then classifying them as left or right. Lane detection also may be used as a component of adaptive cruise control, multiple-vehicle tracking, obstacle avoidance and lane departure warning systems. Similarly, some of the functions necessary for obstacle detection may provide useful image information (i.e., features) which can be further processed for lane detection without having to re-process each image to detect such features.
Earlier solutions to lane detection from an image have been computationally slow and also susceptible to confusion from spurious roadway markings. Because of naturally occurring perspective, parallel lines on the road plane beneath the viewer and along the direction of travel appear to converge at the horizon. Some lane detection implementations attempt to convert the image into a space in which the lines are of equal width and equal distance from each other over relatively large distances, i.e., a “bird's-eye view.” This may simplify the recognition of what should then appear as parallel lines; however, the effort necessary to perform this transformation for every point in an image is largely wasted. In fact, some implementations map every point of the image to the bird's-eye view, regardless of whether it could possibly correspond to a lane marker. See, for example, M. Bertozzi and A. Broggi, “GOLD: A Parallel Real-Time Stereo Vision System for Generic Obstacle and Lane Detection,” IEEE Transactions on Image Processing, Vol. 7, No. 1, January 1998.
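The bird's-eye-view transformation criticized above amounts to an inverse perspective mapping. A minimal sketch follows, assuming a flat road and a pinhole camera tilted down at a known angle; the parameters (camera height h, tilt theta, focal length f) and the small-angle treatment of lateral offset are illustrative assumptions, not details from the patent:

```python
import numpy as np

def birds_eye_map(h, theta, f, rows, cols):
    """Map every pixel (r, c) to ground-plane (x, y) coordinates.

    h: camera height above the road (meters), theta: tilt angle below
    horizontal (radians), f: focal length in pixels.  The flat-road
    assumption is what makes this closed-form mapping possible.
    """
    out = np.zeros((rows, cols, 2))
    cy, cx = rows / 2.0, cols / 2.0
    for r in range(rows):
        for c in range(cols):
            # Ray angle of this pixel row below the horizontal.
            alpha = theta + np.arctan((r - cy) / f)
            if alpha <= 0:
                continue  # at or above the horizon: no ground intersection
            y = h / np.tan(alpha)   # distance ahead of the camera
            x = y * (c - cx) / f    # approximate lateral offset
            out[r, c] = (x, y)
    return out
```

Note that every pixel is visited, which is exactly the wasted effort the passage objects to: most pixels can never correspond to a lane marker.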
Other implementations require 3-D (stereo) processing, which not only doubles the cost of the cameras but also the processing needed to detect the basic features, in addition to the stereo correspondence computation that must be solved before detection of lane markers can even begin. Another problem in the prior art is the inability to cope with normally occurring elements in a roadway image, such as shadows on the road, skid marks, old and misleading paint marks, or pavement lines that may not be aligned with the travel lane.
SUMMARY OF THE INVENTION
The present invention includes a method and apparatus for detecting road lane markings in a machine vision system. A video camera mounted on a subject vehicle provides frames of image data to an image processor. Perspective geometry relates points in the 3-D world coordinate system to the 2-D image coordinate system, and the invention accounts for this geometry using a dynamic thresholding scheme.
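The perspective relation between world and image coordinates can be illustrated with the standard pinhole model; the intrinsic parameters below (f, cx, cy) are illustrative defaults, not values from the patent:

```python
def project(point_3d, f=500.0, cx=320.0, cy=240.0):
    """Project a point (X, Y, Z) in camera coordinates, with Z pointing
    forward along the optical axis, to image coordinates (col, row)."""
    X, Y, Z = point_3d
    if Z <= 0:
        raise ValueError("point is behind the camera")
    # A fixed world offset shrinks in the image as distance Z grows,
    # which is why lane lines and marker widths converge at the horizon.
    return (cx + f * X / Z, cy + f * Y / Z)
```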
An algorithm recursively computes lane marker width in pixels for each image row, given the lane marker width in pixels of the bottom-most (i.e., “nearest”) rows. The expected lane marker width can be pre-computed and stored in a table as a function of row number. Similarly, a table of lane widths (i.e., distance between adjacent lane lines) as a function of row number may be computed and stored. The 2-D image data is processed on a row-by-row basis to produce a set of dense points along the boundary of the lane marker.
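A per-row expected-width table of this kind can be sketched as follows. Under the flat-road pinhole assumption, the pixel width of a fixed-width marker shrinks linearly from the bottom row toward the horizon row; the parameter names and the linear model are assumptions for illustration, not the patent's exact recursion:

```python
def marker_width_table(rows, horizon_row, bottom_width):
    """Expected lane-marker width in pixels for each image row.

    rows: image height, horizon_row: row where the road plane meets the
    horizon, bottom_width: measured marker width (pixels) at the bottom
    row.  Width scales linearly with (row - horizon_row) on a flat road.
    """
    bottom_row = rows - 1
    table = [0.0] * rows
    for r in range(rows):
        if r <= horizon_row:
            continue  # at or above the horizon: marker not visible
        table[r] = bottom_width * (r - horizon_row) / (bottom_row - horizon_row)
    return table
```

Pre-computing this table once lets the row-by-row detection stage look up the expected width by row number instead of re-deriving the perspective scaling per frame.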
A feature edge-detection algorithm produces a list of connected edgelets (chains). Features that are associated with sufficiently long chains are processed further. The feature edge detection algorithm creates a list of edges for all features detected in the image, characterized by location, size, and direction. The list of edges is then reorganized in a row-by-row fashion and a list of edges for each image row is indexed by row number.
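The row-by-row reorganization of the edge list can be sketched as follows; the tuple layout (row, col, magnitude, gradient_angle) is an illustrative assumption, since the patent says only that edges carry location, size and direction:

```python
from collections import defaultdict

def index_edges_by_row(edges):
    """Reorganize a flat list of feature edges into per-row lists,
    keyed by image row, so a later stage can scan the image row by row.

    edges: iterable of (row, col, magnitude, gradient_angle) tuples.
    """
    by_row = defaultdict(list)
    for edge in edges:
        by_row[edge[0]].append(edge)
    # Sort each row's edges left-to-right so adjacent pairs are easy to form.
    for row in by_row:
        by_row[row].sort(key=lambda e: e[1])
    return by_row
```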
A lane marker detection component evaluates lane marker edges from the list of all feature edges. The lane marker detection algorithm loops through the data corresponding to each row of an image and compares each possible combination of edges in a given row with a set of three criteria to determine whether the edge pair is associated with a lane marker or some other feature. The three criteria are: the width between the edge pair of the marker, opposite orientation of the intensity gradient angles of the edges, and the absolute gradient angle orientation of the pair of edges. An image gradient angle represents the direction of the change in image intensity.
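The three criteria can be sketched as a predicate on one candidate edge pair. The tolerances and the edge tuple layout below are illustrative assumptions, not thresholds from the patent:

```python
import math

def is_marker_pair(left, right, expected_width, tol=0.3, angle_tol=0.5):
    """Test one candidate edge pair against the three criteria:
    (1) the pixel width between the edges matches the expected marker
        width for this row,
    (2) the two intensity-gradient angles point in opposite directions
        (dark-to-light on one side of the marker, light-to-dark on the
        other),
    (3) each gradient is roughly horizontal, as expected at the
        near-vertical boundary of a lane marker.
    Edges are (row, col, magnitude, gradient_angle) tuples."""
    _, col_l, _, ang_l = left
    _, col_r, _, ang_r = right
    # Criterion 1: width between the edge pair.
    width = col_r - col_l
    if abs(width - expected_width) > tol * expected_width:
        return False
    # Criterion 2: gradient angles roughly pi radians apart.
    diff = abs(math.atan2(math.sin(ang_l - ang_r), math.cos(ang_l - ang_r)))
    if abs(diff - math.pi) > angle_tol:
        return False
    # Criterion 3: absolute orientation near horizontal (0 or pi).
    for ang in (ang_l, ang_r):
        folded = abs(math.atan2(math.sin(ang), math.cos(ang)))
        if min(folded, math.pi - folded) > angle_tol:
            return False
    return True
```

Scanning each row's sorted edge list and keeping only pairs that pass all three tests rejects most shadows, skid marks and stray paint, since those features rarely produce opposed, correctly spaced, correctly oriented gradient pairs.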
Pairs of edges th
Assignee: Cognex Corporation
Inventors: Andrew W. Johns, Brian Michaelis, Shervin Nakhjavan