Data processing: measuring, calibrating, or testing – Calibration or correction system – Sensor or transducer
Reexamination Certificate
1998-05-15
2001-03-13
Assouad, Patrick (Department: 2857)
C702S196000
Reexamination Certificate
active
06202033
ABSTRACT:
TECHNICAL FIELD
This invention relates generally to practical applications of Kalman filtering, and more particularly to controlling dynamic systems that require fast and reliable adaptation to changing circumstances.
BACKGROUND ART
Before explaining the invention, it is helpful first to review the prior art: the conventional Kalman recursions as well as the Fast Kalman Filtering (FKF) method, both for calibrating a sensor system (PCT/FI90/00122, WO 90/13794) and for controlling a large dynamic system (PCT/FI93/00192, WO 93/22625).
The underlying Markov (finite memory) process is described by equations (1) to (3). The first equation tells how a measurement vector y_t depends on a state vector s_t at time point t (t=0,1,2, . . . ). This is the linearized Measurement (or observation) equation:

y_t = H_t s_t + e_t    (1)
Matrix H_t is the design (Jacobian) matrix that stems from the partial derivatives of actual physical dependencies. The second equation describes the time evolution of the overall system and is known as the linearized System (or state) equation:

s_t = A_t s_{t-1} + B_t u_{t-1} + a_t    (2)
Matrix A_t is the state transition (Jacobian) matrix and B_t is the control gain (Jacobian) matrix. Equation (2) tells how the present state s_t of the overall system develops from the previous state s_{t-1}, the control/external forcing u_{t-1}, and random error a_t effects. When the measurement errors e_t and system errors a_t are neither auto-correlated (i.e. they are white noise) nor cross-correlated, and are given by the following covariance matrices:

Q_t = Cov(a_t) = E(a_t a_t')  and  R_t = Cov(e_t) = E(e_t e_t')    (3)
then the famous Kalman forward recursion formulae (4) to (6) give the Best Linear Unbiased Estimate (BLUE) ŝ_t of the present state s_t as follows:

ŝ_t = A_t ŝ_{t-1} + B_t u_{t-1} + K_t {y_t − H_t (A_t ŝ_{t-1} + B_t u_{t-1})}    (4)
and the covariance matrix of its estimation errors as follows:
{circumflex over (P)}
t
=Cov(
ŝ
t
)=E{(
ŝ
t
−s
t
)(
ŝ
t
−s
t
)′}=A
t
{circumflex over (P)}
t−1
A′
t
+Q
t
−K
t
H
t
(A
t
{circumflex over (P)}
t−1
A′
t
+Q
t
) (5)
where the Kalman gain matrix K_t is defined by

K_t = (A_t P̂_{t-1} A_t' + Q_t) H_t' {H_t (A_t P̂_{t-1} A_t' + Q_t) H_t' + R_t}^{−1}    (6)
This recursive linear solution is (locally) optimal. The stability of the Kalman Filter (KF) also requires that the observability and controllability conditions be satisfied (Kalman, 1960). However, equation (6) too often requires an overly large matrix to be inverted. The number n of rows (and columns) of this matrix equals the number of elements in measurement vector y_t, and a large n is needed to satisfy the observability and controllability conditions. This is the problem solved by the discoveries reported here and in PCT/FI90/00122 and PCT/FI93/00192.
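As a concrete illustration, the recursions (4) to (6) can be sketched in a few lines of numpy. The function below is a generic textbook Kalman step, not the patented FKF method; all variable names follow the equations above, and the dimensions are illustrative.

```python
import numpy as np

def kalman_step(s_prev, P_prev, y, A, B, u, H, Q, R):
    """One forward Kalman recursion, equations (4)-(6).

    A: state transition matrix, B: control gain, H: design matrix,
    Q, R: system and measurement error covariances of equation (3).
    """
    # Predicted state A s_{t-1} + B u_{t-1} and its covariance A P A' + Q
    s_pred = A @ s_prev + B @ u
    P_pred = A @ P_prev @ A.T + Q
    # Kalman gain, equation (6): note the n x n matrix H P_pred H' + R
    # that must be inverted -- the bottleneck discussed in the text.
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    # Updated estimate, equation (4), and its error covariance, equation (5)
    s_hat = s_pred + K @ (y - H @ s_pred)
    P_hat = P_pred - K @ H @ P_pred
    return s_hat, P_hat
```

For a scalar system with P̂_{t-1} = R_t = 1 and Q_t = 0, the gain reduces to the familiar value K_t = 1/2, averaging the prediction and the measurement.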
The following modified form of the State equation has been introduced:

A_t ŝ_{t-1} + B_t u_{t-1} = I s_t + A_t (ŝ_{t-1} − s_{t-1}) − a_t    (7)
and combined with the Measurement equation (1) in order to obtain the so-called Augmented Model:

\[
\begin{bmatrix} y_t \\ A_t \hat{s}_{t-1} + B_t u_{t-1} \end{bmatrix}
=
\begin{bmatrix} H_t \\ I \end{bmatrix} s_t
+
\begin{bmatrix} e_t \\ A_t(\hat{s}_{t-1} - s_{t-1}) - a_t \end{bmatrix}
\quad\text{i.e.}\quad
z_t = Z_t s_t + \eta_t
\qquad (8)
\]
The state parameters can be estimated by using the well-known solution of a Regression Analysis problem as follows:
ŝ
t
=(Z′
t
V
t
−1
Z
t
)
−1
Z′
t
V
t
−1
z
t
(9)
The result is algebraically, though not numerically, equivalent to using the Kalman recursions (see e.g. Harvey, 1981: “Time Series Models”, Philip Allan Publishers Ltd, Oxford, UK, pp. 101-119). The dimension of the matrix to be inverted in equation (9) is now the number m of elements in state vector s_t. Harvey's approach is fundamental to all the different variations of the Fast Kalman Filtering (FKF) method.
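The algebraic equivalence of the regression solution (9) to the Kalman update can be checked numerically. The sketch below, with hypothetical toy dimensions, builds the Augmented Model (8) with V_t = diag(R_t, A_t P̂_{t-1} A_t' + Q_t) and compares the two estimates.

```python
import numpy as np

# Hypothetical toy dimensions: m = 2 state parameters, n = 3 measurements.
rng = np.random.default_rng(0)
m, n = 2, 3
H = rng.normal(size=(n, m))
s_pred = rng.normal(size=m)          # plays the role of A s_hat_{t-1} + B u_{t-1}
P_pred = np.eye(m) * 2.0             # plays the role of A P_hat_{t-1} A' + Q
R = np.eye(n) * 0.5
y = rng.normal(size=n)

# Augmented Model (8): z = Z s + eta, with block-diagonal Cov(eta) = V
Z = np.vstack([H, np.eye(m)])
z = np.concatenate([y, s_pred])
V = np.block([[R, np.zeros((n, m))],
              [np.zeros((m, n)), P_pred]])

# Regression solution (9): only the m x m normal matrix is inverted
Vinv = np.linalg.inv(V)
s_gls = np.linalg.inv(Z.T @ Vinv @ Z) @ Z.T @ Vinv @ z

# Classical update (4): inverts the n x n innovation covariance instead
K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
s_kf = s_pred + K @ (y - H @ s_pred)
# s_gls and s_kf agree to numerical precision
```

Note how the two routes invert matrices of different sizes (m × m versus n × n) yet produce the same estimate, which is the point of equation (9).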
An initialization or temporary training of any large Kalman Filter (KF), in order to make the observability condition satisfied, can be done by Lange's High-pass Filter (Lange, 1988). It exploits an analytical sparse-matrix inversion formula for solving regression models with the following so-called Canonical Block-angular matrix structure:
\[
\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_K \end{bmatrix}
=
\begin{bmatrix}
X_1 & & & & G_1 \\
& X_2 & & & G_2 \\
& & \ddots & & \vdots \\
& & & X_K & G_K
\end{bmatrix}
\begin{bmatrix} b_1 \\ \vdots \\ b_K \\ c \end{bmatrix}
+
\begin{bmatrix} e_1 \\ e_2 \\ \vdots \\ e_K \end{bmatrix}
\qquad (10)
\]
This is the matrix representation of the Measurement equation of e.g. an entire windfinding intercomparison experiment. The vectors b_1, b_2, . . . , b_K typically refer to consecutive position coordinates, e.g. of a weather balloon, but may also contain those calibration parameters that have a significant time or space variation. The vector c refers to the other calibration parameters that are constant over the sampling period.
For all large multiple-sensor systems the design matrices H_t are sparse. Thus, one can do in one way or another the same sort of partitioning:

\[
s_t = \begin{bmatrix} b_{t,1} \\ \vdots \\ b_{t,K} \\ c_t \end{bmatrix}
\qquad
y_t = \begin{bmatrix} y_{t,1} \\ y_{t,2} \\ \vdots \\ y_{t,K} \end{bmatrix}
\qquad
H_t = \begin{bmatrix}
X_{t,1} & & & & G_{t,1} \\
& X_{t,2} & & & G_{t,2} \\
& & \ddots & & \vdots \\
& & & X_{t,K} & G_{t,K}
\end{bmatrix}
\qquad (11)
\]
\[
A = \begin{bmatrix} A_1 & & & \\ & \ddots & & \\ & & A_K & \\ & & & A_c \end{bmatrix}
\quad\text{and}\quad
B = \begin{bmatrix} B_1 & & & \\ & \ddots & & \\ & & B_K & \\ & & & B_c \end{bmatrix}
\]

where
c_t typically represents the calibration parameters at time t;
b_{t,k} all the other state parameters in the time and/or space volume;
A the state transition matrix (block-diagonal) at time t;
B the matrix (block-diagonal) for state-independent effects u_t at time t.
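The block-angular structure of equation (11) can be illustrated by assembling a design matrix H_t from segment blocks X_{t,k} on the diagonal and shared calibration columns G_{t,k} on the right border. The sizes and random blocks below are illustrative assumptions, not taken from the patent.

```python
import numpy as np

# Hypothetical sizes: K = 3 segments, each with 4 measurements,
# 2 segment parameters b_{t,k}, and 2 shared calibration parameters c_t.
K, n_k, m_b, m_c = 3, 4, 2, 2
rng = np.random.default_rng(1)
X = [rng.normal(size=(n_k, m_b)) for _ in range(K)]   # X_{t,k} blocks
G = [rng.normal(size=(n_k, m_c)) for _ in range(K)]   # G_{t,k} blocks

# Canonical Block-angular design matrix of equation (11):
# each segment's X block touches only its own b_{t,k} columns,
# while every segment shares the calibration columns on the right.
H = np.zeros((K * n_k, K * m_b + m_c))
for k in range(K):
    H[k*n_k:(k+1)*n_k, k*m_b:(k+1)*m_b] = X[k]
    H[k*n_k:(k+1)*n_k, K*m_b:] = G[k]
```

The sparsity is what Lange's analytical inversion formula exploits: outside its own diagonal block and the border columns, every row of H is zero.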
If the partitioning is not obvious, one may try to do it automatically by using a specific algorithm that converts every sparse linear system into the Canonical Block-angular form (Weil and Kettler, 1971: “Rearranging Matrices to Block-angular Form for Decomposition (and other) Algorithms”, Management Science, Vol. 18, No. 1, September 1971, pages 98-107). The covariance matrix of the random errors e_t may, however, lose something of its original and simple diagonality.
Consequently, gigantic Regression Analysis problems were faced as follows:
Augmented Model for a space volume case: e.g. for the data of a balloon tracking experiment with K consecutive balloon positions:
\[
\begin{bmatrix}
y_{t,1} \\ A_1 \hat{b}_{t-1,1} + B_1 u_{b_{t-1,1}} \\
y_{t,2} \\ A_2 \hat{b}_{t-1,2} + B_2 u_{b_{t-1,2}} \\
\vdots \\
y_{t,K} \\ A_K \hat{b}_{t-1,K} + B_K u_{b_{t-1,K}} \\
A_c \hat{c}_{t-1} + B_c u_{c_{t-1}}
\end{bmatrix}
=
\begin{bmatrix}
X_{t,1} & & & & G_{t,1} \\
I & & & & \\
& X_{t,2} & & & G_{t,2} \\
& I & & & \\
& & \ddots & & \vdots \\
& & & X_{t,K} & G_{t,K} \\
& & & I & \\
& & & & I
\end{bmatrix}
\begin{bmatrix}
b_{t,1} \\ b_{t,2} \\ \vdots \\ b_{t,K} \\ c_t
\end{bmatrix}
+
\begin{bmatrix}
e_{t,1} \\ A_1(\hat{b}_{t-1,1} - b_{t-1,1}) - a_{b_{t,1}} \\
e_{t,2} \\ A_2(\hat{b}_{t-1,2} - b_{t-1,2}) - a_{b_{t,2}} \\
\vdots \\
e_{t,K} \\ A_K(\hat{b}_{t-1,K} - b_{t-1,K}) - a_{b_{t,K}} \\
A_c(\hat{c}_{t-1} - c_{t-1}) - a_{c_t}
\end{bmatrix}
\]
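The structure of this space-volume Augmented Model can be sketched by stacking, for each balloon position, a measurement row block and an identity (state-equation) row block, plus a final identity block for the calibration parameters. The sizes below are illustrative assumptions, reusing the toy dimensions of the earlier partitioning sketch.

```python
import numpy as np

# Hypothetical sizes: K segments, n_k measurements each, m_b position
# parameters per segment, m_c shared calibration parameters.
K, n_k, m_b, m_c = 3, 4, 2, 2
rng = np.random.default_rng(2)

rows = []   # block rows of the augmented design matrix
for k in range(K):
    X_k = rng.normal(size=(n_k, m_b))
    G_k = rng.normal(size=(n_k, m_c))
    meas = np.zeros((n_k, K*m_b + m_c))    # measurement rows [0..X_k..0 | G_k]
    meas[:, k*m_b:(k+1)*m_b] = X_k
    meas[:, K*m_b:] = G_k
    prior = np.zeros((m_b, K*m_b + m_c))   # state-equation rows [0..I..0 | 0]
    prior[:, k*m_b:(k+1)*m_b] = np.eye(m_b)
    rows += [meas, prior]
# final identity rows for the calibration parameters c_t
c_prior = np.zeros((m_c, K*m_b + m_c))
c_prior[:, K*m_b:] = np.eye(m_c)
rows.append(c_prior)
Z = np.vstack(rows)
```

Solving this stacked system by equation (9) then inverts only a matrix of the (small) total parameter dimension K·m_b + m_c, never one of the full measurement dimension.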
Augmented Model for a moving time volume (e.g. for “whitening” an observed “innovations” sequence of residuals e_t for a moving sample of length L):
\[
\begin{bmatrix}
y_t \\ A \hat{s}_{t-1} + B u_{t-1} \\
y_{t-1} \\ A \hat{s}_{t-2} + B u_{t-2} \\
\vdots \\
y_{t-L+1} \\ A \hat{s}_{t-L} + B u_{t-L} \\
A \hat{c}_{t-1} + B u_{c_{t-1}}
\end{bmatrix}
=
\begin{bmatrix}
H_t & & & F_t \\
I & & & \\
& H_{t-1} & & F_{t-1} \\
& I & & \\
& & \ddots & \vdots
\end{bmatrix}
\ldots
\]