Method and apparatus for identifying complex objects based...

Communications: directive radio wave systems and devices (e.g., return signal controls external device; radar mounted on and controls land vehicle)


U.S. Classes: 342/118; 342/126; 342/145; 342/146
Type: Reexamination Certificate
Status: Active
Patent number: 6,664,918


FIELD OF THE INVENTION
The invention relates to the determination of the shapes and locations of complex objects based on range measurements from multiple sensors. More particularly, the invention relates to determining by trilateration the shapes and locations of multiple complex objects detected by multiple, spaced range sensors.
BACKGROUND OF THE INVENTION
Trilateration is the art of determining the location of an object in space based on knowledge of the range (distance) of the object from multiple known locations. For simplicity, first assume an idealized point object, i.e., an object that is infinitely small. Knowledge of the range of the object from a known location (e.g., one particular sensor) defines a sphere on which the object must lie, namely the sphere centered at the sensor with a radius equal to the measured range value. Range values from two separate locations (sensors) define two distinct spheres on which the object must lie. Accordingly, the object must lie on the locus of points defined by the intersection of the two spheres, which is a circle. If the range from a third location (or sensor) to the object is known, then the object is known to lie on the locus of points defined by the intersection of all three spheres. For many practical scenarios, the intersection of these three spheres defines a single point, which locates the object.
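For concreteness, the three-sphere case can be worked in code. The following Python sketch is illustrative only and is not taken from the patent; the sensor positions and range values in the example are invented. It implements the textbook three-sphere intersection described above:

    import numpy as np

    def trilaterate(p1, p2, p3, r1, r2, r3):
        # Locate a point from three sensor positions and measured ranges.
        # Returns the two candidate points (mirror images about the plane
        # of the sensors); often only one lies in the sensors' field of view.
        p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
        ex = (p2 - p1) / np.linalg.norm(p2 - p1)      # unit vector p1 -> p2
        i = np.dot(ex, p3 - p1)
        ey = p3 - p1 - i * ex
        ey = ey / np.linalg.norm(ey)                  # unit vector, in-plane
        ez = np.cross(ex, ey)                         # normal to sensor plane
        d = np.linalg.norm(p2 - p1)
        j = np.dot(ey, p3 - p1)
        x = (r1**2 - r2**2 + d**2) / (2 * d)
        y = (r1**2 - r3**2 + i**2 + j**2 - 2 * i * x) / (2 * j)
        z_sq = r1**2 - x**2 - y**2
        if z_sq < 0:
            return None                               # spheres do not all meet
        z = np.sqrt(z_sq)
        base = p1 + x * ex + y * ey
        return base + z * ez, base - z * ez

    # Example: an object at (5, 5, +-3) seen from three sensors.
    print(trilaterate((0, 0, 0), (10, 0, 0), (0, 10, 0),
                      7.681, 7.681, 7.681))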
As another example, in a two dimensional environment (or at least an environment that can be assumed to be two dimensional), range readings from only two sensors to the same idealized point object define two circles that intersect at two points. For many practical scenarios, however, only one of these intersections will be located in the detection areas of the sensors, thus locating the point object.
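The two dimensional case is even simpler. Below is a minimal Python sketch of the two-circle intersection just described, again purely illustrative and with invented coordinates:

    import math

    def circle_intersections(c1, r1, c2, r2):
        # Intersection points of two range circles in the plane.
        (x1, y1), (x2, y2) = c1, c2
        d = math.hypot(x2 - x1, y2 - y1)
        if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
            return []                          # circles do not intersect
        a = (r1**2 - r2**2 + d**2) / (2 * d)   # distance to chord, along center line
        h = math.sqrt(max(r1**2 - a**2, 0.0))  # half-length of the chord
        mx = x1 + a * (x2 - x1) / d            # midpoint of the chord
        my = y1 + a * (y2 - y1) / d
        ox = h * (y2 - y1) / d                 # perpendicular offset
        oy = h * (x2 - x1) / d
        return [(mx + ox, my - oy), (mx - ox, my + oy)]

    # Two sensors 2 m apart, each reading ~1.414 m to the same point object:
    print(circle_intersections((0, 0), 1.414, (2, 0), 1.414))
    # -> two candidates, (1, -1) and (1, 1); only (1, 1) lies in front
    #    of sensors that face the +y direction.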
Of course, real world objects are not points, but are complex, having size and shape. Thus, real objects do not have a single well-defined location. Often, however, the measured range is the range to a particular point on the object. Which point on the object produces the reading at the sensor can depend on several factors, most particularly the shape of the object and the orientation of the object with respect to the observing sensor. The point on an object that yields the range determined by the sensor is often the point that presents a surface perpendicular to the beam propagation.
One example of a system that provides a range measurement but no bearing measurement is a broad azimuth radar reflection system. As is well known in the related arts, one can send out a radio frequency (RF) beam from a known location, receive reflections of that beam at the same known location, and detect the time delay between the time the beam was issued and the time its reflection returned to the sensor. Assuming that the detection point is approximately the same as the origination point of the beam, the delay period can be converted to a round-trip distance by multiplying it by the propagation speed of the beam. The round-trip distance can then be divided by two to obtain the range to the object.
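The delay-to-range conversion is a one-line computation. A small worked example, assuming the beam propagates at the speed of light:

    # range = propagation speed * delay / 2 (the beam travels out and back)
    C = 299_792_458.0                     # speed of light, m/s

    def range_from_delay(delay_s):
        return C * delay_s / 2.0

    print(range_from_delay(200e-9))       # a 200 ns delay -> ~30 m range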
Of course, if the radar beam has a defined azimuth, the radar detection system also provides at least some bearing information. Air traffic radar is a well known example of a radar that provides both range and bearing information. Such radars send out very narrow azimuth beams from a rotating transmitter antenna. Range can therefore be determined from the delay of the reflected beam, while at least some bearing information can be determined from the angular orientation of the antenna at the time the reflected beam is received.
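As an illustration, a single (range, bearing) pair from such a radar fixes a position estimate. The convention below (bearing measured clockwise from north, radar at the origin) is an assumption made for the example, not something specified here:

    import math

    def polar_to_position(range_m, bearing_deg):
        # Convert a range/bearing reading to (east, north) coordinates.
        b = math.radians(bearing_deg)
        return range_m * math.sin(b), range_m * math.cos(b)

    print(polar_to_position(30.0, 45.0))  # ~ (21.2 m east, 21.2 m north)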
In actuality, virtually all radar systems give some bearing information because the transmitters rarely generate totally spherical wave fronts with a full 360° azimuth. For instance, even a radar with an azimuth as wide as 180° eliminates half of the bearing spectrum (assuming one knows the direction in which the sensor is pointing).
In theory, when there is a single point object in the field of view, trilateration is mathematically simple. However, real objects are not point objects. For instance, three sensors detecting the same object may detect slightly different surfaces of the object, each of which is, essentially by definition, at a different location. Further, each sensor has some error range, and thus each sensor reading will be inaccurate by some amount. Accordingly, in a real world situation, the three circles defined by the range readings of three sensors observing a single object may not, in fact, intersect at a single point. Rather, there may be three closely spaced intersection points, one for each pair of circles, i.e., first and second, first and third, and second and third. Accordingly, various algorithms have been developed for estimating an exact location based on such readings.
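One common way to estimate a single location from such mutually inconsistent readings is a least-squares fit. The linearized version below is a standard technique shown for illustration, not necessarily the algorithm the patent contemplates, and the sensor layout in the example is invented:

    import numpy as np

    def trilaterate_lsq(centers, ranges):
        # Least-squares position from noisy range readings: subtract the
        # first sensor's range equation from the others to obtain a linear
        # system in the unknown position, then solve it.
        c = np.asarray(centers, dtype=float)
        r = np.asarray(ranges, dtype=float)
        A = 2.0 * (c[1:] - c[0])
        b = (r[0]**2 - r[1:]**2) + np.sum(c[1:]**2, axis=1) - np.sum(c[0]**2)
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        return x

    # Three sensors with slightly inconsistent readings of an object near (3, 4):
    sensors = [(0, 0), (6, 0), (0, 6)]
    readings = [5.02, 4.97, 3.64]               # true ranges: 5.0, 5.0, ~3.606
    print(trilaterate_lsq(sensors, readings))   # ~ [3.0, 4.0]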
In many real world uses of trilateration that assume a point object, if the objects are substantially farther away from the sensors than the sensors are from each other, the fact that each sensor might receive a reflection from a different point on the object is not problematic. However, when an object is close to the sensors, a point-object assumption can lead to significant errors in the determination of the location of an object, or even the recognition that an object exists. For instance, FIG. 1A illustrates range measurements 13, 15 by two sensors 12, 14 to an ideal point object 16 located very close to the sensors in an environment that can be assumed to be two dimensional, while FIG. 1B illustrates range measurements by the same two sensors 12, 14 to an ideal line object 18 (commonly called a plate object) located very close to the sensors. If the detection algorithm assumes a point object, it can easily misinterpret the telemetry. For instance, the range circles 20, 22 from the two sensors 12, 14 do not intersect, and thus an algorithm that assumes a point object would not detect the plate object 18.
On the other hand, if the algorithm that interpreted the range measurements assumed that the object was a plate object, it could accurately detect plate object 18, but it would misinterpret the telemetry from the point object 16 in FIG. 1A as a plate object represented by dashed line 24 in FIG. 1A.
If some information is known about the shape of an object in the field of view of a sensor array, it can be used to better determine the location or even the shape of the object. For example, if it is known that a sensor array is in an environment that can be assumed to be two dimensional and that consists entirely of plate objects, then the location and orientation of an object can be determined from only two sensor readings. Specifically, if the object can be assumed to be a plate, then the distance and orientation of the line are given by the line that is tangential to both range circles, as illustrated by line 18 in FIG. 1B. Assuming that the azimuth of the sensors is 180° or less, as assumed in all Figures herein (and that they are pointing generally in the same direction and that direction is approximately perpendicular to a line drawn between the two sensors), there is likely only one line that will meet that criterion. Further, while the width of the plate object would not be known exactly, it would be known to be at least as wide as the distance between the two points 26 and 28 where the line is tangent to the two circles, respectively. This trilateration technique can be extended into three dimensions by adding a third sensor range reading.
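The tangent-line construction can be sketched as follows. Of the two external tangents to the range circles, the physical one depends on which way the sensors face; this sketch assumes they sit on the x-axis facing the +y direction, mirroring the convention of the Figures, and the example coordinates are invented:

    import math

    def plate_from_ranges(c1, r1, c2, r2):
        # Find the line (plate object) tangent to both range circles and
        # return the tangent points; the plate is known to be at least as
        # wide as the segment between them.
        (x1, y1), (x2, y2) = c1, c2
        dx, dy = x2 - x1, y2 - y1
        L = math.hypot(dx, dy)
        cos_t = (r1 - r2) / L              # angle between normal and center line
        if abs(cos_t) > 1:
            return None                    # one circle lies inside the other
        sin_t = math.sqrt(1 - cos_t**2)
        ux, uy = dx / L, dy / L            # unit vector along the center line
        for s in (1, -1):                  # the two external tangents
            nx = cos_t * ux - s * sin_t * uy
            ny = cos_t * uy + s * sin_t * ux
            if ny > 0:                     # normal toward +y: plate in front
                return ((x1 + r1 * nx, y1 + r1 * ny),   # tangent point (cf. 26)
                        (x2 + r2 * nx, y2 + r2 * ny))   # tangent point (cf. 28)
        return None

    # Sensors at (0, 0) and (2, 0) each reading 1.0 m: the plate lies
    # along y = 1, tangent at (0, 1) and (2, 1).
    print(plate_from_ranges((0, 0), 1.0, (2, 0), 1.0))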
Regardless of what assumptions can be made about the environment, matters can become extremely complicated if there is potentially more than one object in the field of view of a sensor array. In many real world applications, there may be more than one object in the field of view, such that each sensor receives a plurality of reflected wave fronts and, therefore, a plurality of range readings. Consider, merely as an example, a highly simplified scenario in which four sensors …
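Although the passage breaks off here, the combinatorial difficulty it raises can be made concrete. The brute-force sketch below is illustrative only, with invented readings, and reuses trilaterate_lsq from the earlier sketch; it tries every combination of one reading per sensor and keeps the combinations whose ranges are mutually consistent:

    import itertools
    import numpy as np

    def associate_readings(sensors, readings_per_sensor, tol=0.3):
        # Try every combination of one range reading per sensor; accept a
        # combination as an object if the trilaterated position reproduces
        # all of its readings to within tol. With m readings at each of n
        # sensors there are m**n combinations to test, which is why
        # practical systems need something smarter than brute force.
        objects = []
        for combo in itertools.product(*readings_per_sensor):
            x = trilaterate_lsq(sensors, combo)
            residual = sum(abs(np.linalg.norm(x - np.asarray(c)) - r)
                           for c, r in zip(sensors, combo))
            if residual < tol:
                objects.append(tuple(x))
        return objects

    # Two point objects, at (3, 4) and (-2, 5); each sensor reports two
    # ranges and does not know which reading belongs to which object.
    sensors = [(0, 0), (6, 0), (0, 6)]
    readings = [(5.0, 5.385), (5.0, 9.434), (3.606, 2.236)]
    print(associate_readings(sensors, readings))   # recovers both objects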
