Image analysis – Image transformation or preprocessing – Changing the image coordinates
Reexamination Certificate
1999-05-27
2002-08-13
Boudreau, Leo (Department: 2621)
C382S295000, C382S293000, C382S130000, C345S630000, C345S629000
Reexamination Certificate
active
06434279
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a method and apparatus for registering a plurality of digital images at subpixel accuracy (an accuracy finer than the size of a pixel).
2. Background Art
When performing image processing of a digital image, it is desirable to register a plurality of digital images at a finer accuracy than the size of a pixel (subpixel accuracy). Conventional techniques for registering a plurality of images at subpixel accuracy can be roughly divided into three kinds.

A first technique is a correlation function interpolation method, represented by the technique of V. N. Dvorchenko (V. N. Dvorchenko: “Bounds on (deterministic) correlation functions with applications to registration”, IEEE Trans. Pattern Anal. Mach. Intell., Vol. PAMI-5, No. 2, pp. 206-213, 1983), which involves performing curve fitting on the inter-image cross-correlation function and obtaining the coordinates of its maximum value point at subpixel accuracy.

A second technique is a gray level value interpolation method, represented by the technique of J. A. Parker et al. (J. A. Parker, R. V. Kenyon, and D. E. Troxel: “Comparison of interpolating methods for image resampling”, IEEE Trans. Med. Imaging, Vol. MI-2, No. 1, pp. 31-39, 1983), which involves, for the respective images to be compared, obtaining the gray level values between pixels of one image by interpolation, shifting that image in 1/N pixel units, performing template matching against the other image, which is not shifted, and then taking the best matching coordinate, to thereby obtain the misregistration at an accuracy of 1/N pixels.

A third technique is a difference method, represented by the technique of T. S. Huang (T. S. Huang: “Image Sequence Analysis”, p. 303, Springer-Verlag, 1981), which involves considering one image (image A) to be displaced from the other image (image B) by an incremental distance Dx in the X direction and an incremental distance Dy in the Y direction, subjecting the gray level value of image B to a Taylor expansion in Dx and Dy, obtaining first order simultaneous equations for Dx and Dy from the difference of the gray level values of image A and image B, and then solving these equations to obtain Dx and Dy.

Hereunder is a detailed description of the difference method. The gray level values of the images to be compared are represented by f(x, y) and g(x, y) respectively, where g(x, y) is f(x, y) displaced by (Dx, Dy) and can be represented by g(x, y) = f(x - Dx, y - Dy). If (Dx, Dy) is very small, then the difference I(x, y) between f(x, y) and g(x, y) can be represented by:
I(x, y) = (∂f(x, y)/∂x)·Dx + (∂f(x, y)/∂y)·Dy   (1)
Values of I(x, y) and the partial derivative values of f(x, y) are obtained at various coordinates, and simultaneous equations for (Dx, Dy) are set up at those coordinates from equation (1). By solving these simultaneous equations, (Dx, Dy), the amount of shift of the relative positions between the two images, is obtained.
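To make the difference method concrete, the following is a minimal sketch in Python with NumPy, assuming equation (1) as given above; the function name and the use of a single global least-squares solve over all pixels are illustrative choices of this sketch, not part of the patent text.

```python
import numpy as np

def estimate_shift_difference_method(f, g):
    """Estimate the subpixel shift (Dx, Dy) between two images using the
    difference (Taylor expansion) approach of equation (1).

    f, g: 2-D float arrays of equal shape, with g(x, y) ~ f(x - Dx, y - Dy).
    Returns a least-squares estimate of (Dx, Dy), valid for small shifts.
    """
    f = np.asarray(f, dtype=np.float64)
    g = np.asarray(g, dtype=np.float64)

    # Partial derivatives of f; axis 1 is treated as x (columns), axis 0 as y (rows).
    fx = np.gradient(f, axis=1)
    fy = np.gradient(f, axis=0)

    # Difference image I(x, y) = f(x, y) - g(x, y), the left-hand side of equation (1).
    diff = f - g

    # One linear equation fx*Dx + fy*Dy = I per pixel; solve the whole stack in the
    # least-squares sense instead of picking individual coordinate pairs by hand.
    A = np.column_stack((fx.ravel(), fy.ravel()))
    b = diff.ravel()
    (dx, dy), *_ = np.linalg.lstsq(A, b, rcond=None)
    return dx, dy
```

Solving all per-pixel equations jointly is one common way to handle the overdetermined system that the patent describes as "simultaneous equations ... at various coordinates"; other selections of coordinates are equally possible.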
Although the gray level value interpolation method and the difference method differ in formulation, they are theoretically equivalent to the correlation function interpolation method, which searches for the point where the cross-correlation function takes its maximum.
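For comparison, here is a hedged sketch of the correlation function interpolation idea in the same Python/NumPy style; the function name, the FFT-based circular correlation, and the parabolic peak refinement are illustrative assumptions of this sketch, since the cited literature allows various curve-fitting choices.

```python
import numpy as np

def estimate_shift_correlation_interpolation(f, g):
    """Locate the cross-correlation peak at integer-pixel accuracy via the FFT,
    then refine it to subpixel accuracy by fitting a parabola through the peak
    and its two neighbours along each axis.

    Returns (Dx, Dy) such that g(x, y) is approximately f(x - Dx, y - Dy),
    with x along axis 1 (columns) and y along axis 0 (rows).
    """
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    # Circular cross-correlation; its peak sits near the integer shift (Dy, Dx).
    corr = np.real(np.fft.ifft2(np.conj(F) * G))

    ny, nx = corr.shape
    py, px = np.unravel_index(np.argmax(corr), corr.shape)

    def parabola_vertex(cm, c0, cp):
        # Offset of the vertex of the parabola through (-1, cm), (0, c0), (+1, cp).
        denom = cm - 2.0 * c0 + cp
        return 0.0 if denom == 0.0 else 0.5 * (cm - cp) / denom

    dy = parabola_vertex(corr[(py - 1) % ny, px], corr[py, px], corr[(py + 1) % ny, px])
    dx = parabola_vertex(corr[py, (px - 1) % nx], corr[py, px], corr[py, (px + 1) % nx])

    # Unwrap the circular peak position into a signed integer shift.
    sy = py - ny if py > ny // 2 else py
    sx = px - nx if px > nx // 2 else px
    return sx + dx, sy + dy
```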
With the abovementioned conventional techniques, however, the implicit assumption is that the images to be registered are completely identical and only shifted relative to each other. Therefore, in the case where deformation or noise is added to an image, this assumption collapses, and there is a likelihood that the misregistration amount cannot be correctly obtained. Hereunder, examples are given to explain situations where the misregistration amount cannot be correctly obtained with the conventional techniques.
FIG. 10A shows an example of an input image where the gray level value change of the sloping portion of the edge is comparatively gentle, while FIG. 10B shows an example of an input image where the gray level value change of the sloping portion of the edge is comparatively abrupt. The result of registering and superposing these two input images is shown in FIG. 10C. In the case where, as shown in FIG. 10A and FIG. 10B, there is only one edge in the image and the edge profiles differ between the input images, attributable for example to the optical characteristics of the imaging device, then when the abovementioned conventional techniques are used, since, as shown in FIG. 10C, the images are superposed so that the area of the non-superposed parts becomes a minimum, the edge pair defined by the maximum value points of the first order differential cannot always be superposed. Consequently the misregistration amount cannot be correctly obtained.
FIG. 11A shows an example of an input image with a radius at a corner, that is, the curvature of the corner is comparatively small, while FIG. 11B shows an example of an input image with no radius at the corner, that is, the curvature of the corner is comparatively large. The result of registering and superposing these two input images so that the straight line edge portions coincide is shown in FIG. 11C. Moreover, the result of superposing them so that the area of the non-overlapping portions is a minimum is shown in FIG. 11D. In the case where the shapes of the corners of the patterns differ in this way, then with the conventional techniques, since the images are superposed so that the area of the non-superposed portions is a minimum, they are superposed with a shift as shown in FIG. 11D, rather than with the edge pair lying on top of each other as shown in FIG. 11C.
FIG. 12A shows an example of an input image with a texture on the pattern. FIG. 12B also shows an example of an input image with a texture on the pattern; of significance is that in the input image of FIG. 12B the texture is positioned slightly towards the edge compared to the input image of FIG. 12A. The result of superposing these two input images so that the edge portions coincide is shown in FIG. 12C, while the result of superposing them so that the textures on the patterns coincide is shown in FIG. 12D. In the case where, as with FIGS. 12A and 12B, there is a texture with undulations of subtle gray level values on the pattern, attributable to variations in sensitivity between the respective detection elements of the imaging device or to video jitter and the like due to start timing shifts of the respective scanning lines, then there is a likelihood of superposing with the edge pair slightly shifted as shown in FIG. 12D, rather than with the edge pair superposed as shown in FIG. 12C.
FIG. 13A shows an example of an input image with a texture due to slight periodic noise on the pattern. FIG. 13B also shows an example of an input image with a texture due to slight periodic noise on the pattern; however, compared to the input image of FIG. 13A, the phase of the periodic noise is shifted slightly towards the edge. The result of superposing these two input images so that the edge portions coincide is shown in FIG. 13C. The result of superposing them so that the textures due to the slight periodic noise on the patterns coincide is shown in FIG. 13D. In the case where, as shown in FIGS. 13A and 13B, there is periodic noise having the same frequency and amplitude in both of the compared images, but the phase of the noise is slightly shifted between the images, then the area of the non-coinciding portions when the images are superposed as shown in FIG. 13D, so that the edge pair do not coincide but the periodic noise pair coincide, is smaller than the area of the non-coinciding portions when the images are superposed as shown in FIG. 13C so that the edge pair coincide.