Vehicle navigation system with vision system preprocessor...

Data processing: vehicles, navigation, and relative location – Vehicle control, guidance, operation, or indication – Automatic route guidance vehicle

Details

Classifications: C701S050000, C348S116000, C348S138000, C382S162000, C382S163000

Type: Reexamination Certificate

Status: active

Patent number: 06678590

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to navigation of an unmanned ground vehicle (UGV), and more particularly to a navigational system used by a UGV to obtain terrain information to identify objects and terrain characteristics for autonomous vehicle navigation.
2. Description of the Related Art
Unmanned ground vehicles, or UGVs, are robots designed to navigate across terrain without direct human involvement. To prevent the vehicle from driving over terrain that would cause it to roll over, and from colliding with objects in its path, various navigational systems have been proposed to provide the vehicle with information that allows it to detect navigable routes. Previous proposals for vehicle navigational systems attempted to provide extensive vehicle mobility by incorporating complex on-road and off-road navigation tools, sensing techniques, and route-planning algorithms. Currently known navigational systems often include a scanning laser that detects the presence and range of objects in the vehicle's path as well as terrain characteristics. These systems include sensors that receive the laser light as it bounces off objects within the vehicle's field of view.
As also shown in FIG. 1, the forward-facing cameras 102, 104 each obtain image frames of a scene in front of the vehicle 100. For any given scene, the two frames (forming a forward-looking stereo pair) obtained from the two forward-facing cameras are obtained from points separated by a distance D, which is the distance between the two cameras. For an unmanned vehicle the size of an all-terrain vehicle, the distance D can range from ½ to 1 meter. In FIG. 2, two successive side-looking frames (forming a stereo pair looking either right or left) can be taken by the same side-facing camera 200 or 202 from successive vehicle positions separated by the vehicle's travel distance V Δt. For example, at V = 15 m/s and Δt = 1/30 sec, V Δt = ½ meter.
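To make the two stereo baselines concrete, the short sketch below computes both the fixed forward-looking baseline D and the motion-derived side-looking baseline V Δt, using the figures quoted above. The code is purely illustrative and is not part of the patent.

```python
# Illustrative arithmetic for the two stereo baselines described above.
# All values come from the passage; none of this code is from the patent.

forward_baseline_m = 0.75        # D: camera separation, within the cited 0.5-1 m range
vehicle_speed_mps = 15.0         # V: vehicle speed, meters per second
frame_interval_s = 1.0 / 30.0    # dt: time between successive frames

# Side-looking "motion stereo" baseline: distance traveled between frames.
motion_baseline_m = vehicle_speed_mps * frame_interval_s

print(f"forward-looking baseline D = {forward_baseline_m:.2f} m")   # 0.75 m
print(f"side-looking baseline V*dt = {motion_baseline_m:.2f} m")    # 0.50 m
```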
In both camera configurations, the distance between the vehicle 100 and any given object in the field of view, or “range”, can be computed from the parallax of each object, i.e., the difference in its direction in the two frames forming the stereo pair due to the separation of their viewpoints. If an object appears at angles differing by θ (in radians, assumed small) as seen from points separated by D, it is considered to be at range R = D/θ. Distant objects will appear not to move (in angle) between the two image frames in a given stereo pair, while close objects will appear to move abruptly across the foreground when the two frames are compared.
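Expressed as code, the small-angle relation R = D/θ is direct; the helper below is a hypothetical illustration of that formula, not code from the patent.

```python
import math

def range_from_parallax(baseline_m: float, parallax_rad: float) -> float:
    """Small-angle stereo range: R = D / theta.

    baseline_m   -- separation D between the two viewpoints, in meters
    parallax_rad -- angular difference theta of the object between the two
                    frames, in radians (assumed small)
    """
    if parallax_rad <= 0:
        return math.inf  # no measurable parallax: object effectively at infinity
    return baseline_m / parallax_rad

# A 0.75 m baseline and 0.01 rad of parallax put the object at 75 m;
# halving the parallax doubles the estimated range.
print(range_from_parallax(0.75, 0.01))   # 75.0
print(range_from_parallax(0.75, 0.005))  # 150.0
```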
From this range calculation, a navigation system (not shown) for the vehicle 100 generates a range map that maps all of the objects in the vehicle's forward and/or side field of view and indicates their range. Locating the presence of objects as well as their corresponding ranges is one key feature that makes unmanned land navigation possible, because it allows the vehicle 100 to detect and avoid large objects in its path.
In addition to object range detection, the vehicle 100 must also detect the terrain characteristics, such as longitudinal and lateral slope, and determine whether they fall within the limits of the vehicle's traction capabilities. This requires the vehicle to measure the local vertical vector, or gravity vector, with respect to the camera direction to determine the slope of the terrain over which the vehicle will travel.
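The patent does not spell out how the gravity vector is converted into slope values, but a common decomposition into longitudinal (pitch) and lateral (roll) angles can be sketched as follows; the axis convention and the helper name are assumptions made for illustration.

```python
import math

def terrain_slopes_deg(gx: float, gy: float, gz: float) -> tuple[float, float]:
    """Estimate longitudinal and lateral slope from a measured gravity vector.

    The vector (gx, gy, gz) is assumed to be expressed in the camera/vehicle
    frame: x forward, y to the right, z down. On level ground it is (0, 0, g);
    tilting the vehicle shifts components into x and y. This decomposition is
    illustrative, not the patent's stated method.
    """
    longitudinal = math.degrees(math.atan2(gx, gz))  # pitch of the terrain
    lateral = math.degrees(math.atan2(gy, gz))       # roll of the terrain
    return longitudinal, lateral

print(terrain_slopes_deg(0.0, 0.0, 9.81))   # (0.0, 0.0): level ground
print(terrain_slopes_deg(1.70, 0.0, 9.66))  # (~10.0, 0.0): ten-degree grade
```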
The problem with currently known navigational systems, however, is that detecting the parallax of each object and computing ranges from the image frame pairs is computationally complex and requires computers with high-speed processing capabilities. For example, each video camera may capture 300 K pixels (640 by 480 per frame) 30 times per second, where each pixel has 8 bits each of red, green, and blue color data. This results in a data flow of 220 Mb/s (27.6 MByte/s) per camera. A general-purpose computer cannot practically accommodate this data rate while matching corresponding objects in the two frames of each stereo pair. Expensive high-speed digital processing systems do exist, but they can cost tens of thousands of dollars and are not widely available. Laser scanning systems are an alternative to video cameras, but they are also expensive, are not widely available, and likewise require specialized high-speed data processing equipment. As a result, there is currently no known practical way to construct an affordable unmanned vehicle navigation system.
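The quoted per-camera data rate follows directly from the frame parameters, as the arithmetic below confirms.

```python
# Reproducing the per-camera data rate quoted above.
width, height = 640, 480        # pixels per frame
bytes_per_pixel = 3             # 8 bits each of red, green, and blue
frames_per_second = 30

pixels_per_frame = width * height                                  # 307,200 (~300 K)
bytes_per_second = pixels_per_frame * bytes_per_pixel * frames_per_second

print(f"{bytes_per_second / 1e6:.1f} MByte/s")   # 27.6 MByte/s
print(f"{bytes_per_second * 8 / 1e6:.0f} Mb/s")  # 221 Mb/s, i.e. roughly 220 Mb/s
```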
There is a need for a navigation system that can equip vehicles with autonomous, unmanned driving capabilities without requiring expensive, complex equipment for capturing or processing navigational data.
SUMMARY OF THE INVENTION
Accordingly, the present invention is directed to a navigation system having a pre-processor, such as an MPEG encoder, for pre-processing image data from at least one camera on an unmanned vehicle. In one embodiment, the pre-processor captures a pair of image frames and compresses the frames by converting the pixel data in the image frames from Red, Green, Blue (RGB) data into luminance and chrominance (YUV) data. A processor then uses the luminance and chrominance data to generate a range map indicating the presence and distance of objects in the vehicle's field of view and a slope map of the terrain characteristics, both of which are used to guide autonomous movement of the vehicle without human intervention. More particularly, the vehicle uses the range map and the slope map to avoid colliding with objects in its path and to avoid slopes that are too steep for the vehicle to negotiate.
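The patent does not specify the conversion matrix, but MPEG encoders conventionally derive luminance and chrominance using BT.601 coefficients; the sketch below illustrates that standard transform under this assumption.

```python
def rgb_to_yuv(r: int, g: int, b: int) -> tuple[float, float, float]:
    """Convert one 8-bit RGB pixel to YUV using BT.601 coefficients.

    The patent leaves the exact matrix unspecified; the standard BT.601
    analog form is used here purely as an illustration.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luminance
    u = 0.492 * (b - y)                    # blue-difference chrominance
    v = 0.877 * (r - y)                    # red-difference chrominance
    return y, u, v

# Mid-gray carries no chrominance; pure red produces a strong V component.
print(rgb_to_yuv(128, 128, 128))  # approximately (128.0, 0.0, 0.0)
print(rgb_to_yuv(255, 0, 0))      # approximately (76.2, -37.5, 156.8)
```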
In one embodiment of the inventive system, the range map is generated by segmenting each frame into macroblocks, determining a range for each macroblock from a corresponding motion vector, determining an average block color from the chrominance data, and merging adjacent blocks having the same color and range to identify objects. The slope map is generated from the relationship between the lowest and highest spatial frequencies for a given terrain texture, which is obtained from the luminance data, as well as the relationship between the lowest spatial frequency and a gravity vector.
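A deliberately simplified sketch of the block-merging idea follows. Treating the horizontal motion-vector magnitude as stereo parallax, the grid layout, the helper names, and the merge tolerance are all assumptions made for illustration; the patent's actual algorithm is not reproduced here.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Macroblock:
    row: int
    col: int
    motion_dx: float   # horizontal motion-vector component, in pixels
    avg_color: tuple   # quantized average chrominance (U, V) for the block

def block_range(mb: Macroblock, baseline_m: float, rad_per_pixel: float) -> float:
    # Treat the block's horizontal motion vector as stereo parallax: R = D / theta.
    theta = abs(mb.motion_dx) * rad_per_pixel
    return float("inf") if theta == 0 else baseline_m / theta

def merge_blocks(grid, baseline_m, rad_per_pixel, range_tol_m=1.0):
    """Flood-fill adjacent macroblocks that share a color and a similar range.

    grid maps (row, col) -> Macroblock; each returned list of blocks is one
    candidate object. Simplified for illustration only.
    """
    seen, objects = set(), []
    for start in grid.values():
        if (start.row, start.col) in seen:
            continue
        seen.add((start.row, start.col))
        stack, region = [start], []
        while stack:
            mb = stack.pop()
            region.append(mb)
            r_mb = block_range(mb, baseline_m, rad_per_pixel)
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nb = grid.get((mb.row + dr, mb.col + dc))
                if nb is None or (nb.row, nb.col) in seen:
                    continue
                r_nb = block_range(nb, baseline_m, rad_per_pixel)
                # Merge when colors match and ranges agree within tolerance.
                if nb.avg_color == mb.avg_color and (
                    r_nb == r_mb or abs(r_nb - r_mb) <= range_tol_m
                ):
                    seen.add((nb.row, nb.col))
                    stack.append(nb)
        objects.append(region)
    return objects

# Two same-colored blocks with near-equal ranges merge; the third stands alone.
grid = {
    (0, 0): Macroblock(0, 0, 40.0, (12, 30)),
    (0, 1): Macroblock(0, 1, 40.5, (12, 30)),
    (0, 2): Macroblock(0, 2, 5.0, (40, 5)),
}
print(len(merge_blocks(grid, baseline_m=0.75, rad_per_pixel=0.001)))  # 2
```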
The pre-processor in the inventive system reduces the data rate of the image data from the camera sufficiently that commercially available, off-the-shelf processors can generate range and slope maps for unmanned vehicle navigation in real time. Further, by dividing the frames into blocks and generating the range and slope maps from these blocks rather than from individual pixels, the amount of processing required to obtain the navigation data is greatly reduced, eliminating the need for expensive high-speed processors. Using image data, rather than complex lasers and laser sensors, further simplifies the manner in which navigational data is generated.


