Image processing apparatus and method

Image analysis – Pattern recognition – Classification

Reexamination Certificate

Details

Type: Reexamination Certificate
Classification: C380S217000
Status: active
Patent number: 06404924

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to image processing apparatuses and methods, and in particular, to an image processing apparatus and method adapted for, e.g., converting a standard-definition image into a high-definition image.
2. Description of the Related Art
In the case where a standard-definition or low-definition image (hereinafter referred to as an “SD image”) is converted into a high-definition image (hereinafter referred to as an “HD image”), or an image is enlarged, a general “interpolation filter” or the like is used to interpolate (compensate) the levels of missing pixels.
However, interpolation with such a filter cannot restore components of the HD image (high-frequency components) that are not present in the SD image, so a truly high-definition image cannot be obtained.
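For context only (this is generic background, not the patent's own method), a conventional interpolation filter simply fills in missing pixel levels from neighboring SD pixels. A minimal Python sketch of doubling the width of one image row by linear interpolation, with illustrative values, is:

```python
import numpy as np

def upscale_row_2x(sd_row: np.ndarray) -> np.ndarray:
    """Double the horizontal resolution of one row by linear interpolation:
    each missing pixel is the average of its two SD neighbors.  No new
    high-frequency detail is created; the result is only a smoothed SD row."""
    hd = np.empty(2 * len(sd_row) - 1)
    hd[0::2] = sd_row                              # keep the original SD pixels
    hd[1::2] = (sd_row[:-1] + sd_row[1:]) / 2.0    # interpolate the missing ones
    return hd

print(upscale_row_2x(np.array([10.0, 20.0, 40.0])))  # -> [10. 15. 20. 30. 40.]
```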
Accordingly, the present inventor has already proposed an image converting apparatus for converting an SD image into an HD image including high-frequency components which are not included in the SD image.
The image converting apparatus performs adaptive processing in which pixels of the SD image are linearly combined with prediction coefficients to find the predicted values of pixels in the HD image, whereby high-frequency components not contained in the SD image can be restored.
It is assumed that the predicted value E[y] of the level y of a pixel in the HD image (hereinafter referred to as an "HD pixel") is found by a first-order linear combination model, that is, a linear combination of the levels x_1, x_2, ... of a plurality of SD pixels (hereinafter referred to as "learning data") with predetermined prediction coefficients w_1, w_2, .... In this case, the predicted value E[y] can be expressed by the following equation:

E[y] = w_1 x_1 + w_2 x_2 + \cdots \qquad (1)
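As an informal illustration (not taken from the patent itself), equation (1) is simply a weighted sum of SD-pixel levels. The short Python sketch below computes one predicted HD-pixel level from a set of SD taps and prediction coefficients; the tap values and coefficients are made up for the example.

```python
import numpy as np

# Minimal sketch of equation (1): the predicted HD pixel level is a linear
# combination of SD pixel levels (the "learning data") with prediction
# coefficients.
def predict_hd_pixel(sd_taps: np.ndarray, coefficients: np.ndarray) -> float:
    """E[y] = w1*x1 + w2*x2 + ... for one HD pixel."""
    return float(np.dot(coefficients, sd_taps))

sd_taps = np.array([120.0, 128.0, 131.0])
coefficients = np.array([0.25, 0.5, 0.25])
print(predict_hd_pixel(sd_taps, coefficients))  # -> 126.75
```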
Accordingly, for generalization, when matrix W composed of a set of prediction coefficients w, matrix X composed of a set of learning data, and matrix Y′ composed of a set of predicted values E[y] are defined as follows:
X = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1n} \\ x_{21} & x_{22} & \cdots & x_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{m1} & x_{m2} & \cdots & x_{mn} \end{pmatrix}, \quad
W = \begin{pmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{pmatrix}, \quad
Y' = \begin{pmatrix} E[y_1] \\ E[y_2] \\ \vdots \\ E[y_m] \end{pmatrix}

the following observation equation holds:

XW = Y' \qquad (2)
Next, least squares are applied to the observation equation in order to find predicted value E[y] close to pixel level y of the HD image. In this case, when matrix Y composed of a set of true pixel levels y of the HD pixels as teaching data, and matrix E composed of a set of residuals e of predicted values E[y] corresponding to HD pixel levels y are defined as follows:
E = \begin{pmatrix} e_1 \\ e_2 \\ \vdots \\ e_m \end{pmatrix}, \quad
Y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{pmatrix}

the following equation holds from equation (2):

XW = Y + E \qquad (3)
In this case, the prediction coefficients w_i for finding E[y] close to the HD-image pixel level y can be found by minimizing the following square error:

\sum_{i=1}^{m} e_i^2

Therefore, the prediction coefficient w_i for which the derivative of the above square error with respect to w_i is zero, in other words, the prediction coefficient w_i that satisfies the following equation, is the optimal value for finding the predicted value E[y] close to the HD-image pixel level y.
e_1 \frac{\partial e_1}{\partial w_i} + e_2 \frac{\partial e_2}{\partial w_i} + \cdots + e_m \frac{\partial e_m}{\partial w_i} = 0 \qquad (i = 1, 2, \ldots, n) \qquad (4)
First, differentiating the residuals of equation (3) with respect to the prediction coefficients yields the following equations:

\frac{\partial e_i}{\partial w_1} = x_{i1}, \quad \frac{\partial e_i}{\partial w_2} = x_{i2}, \quad \ldots, \quad \frac{\partial e_i}{\partial w_n} = x_{in} \qquad (i = 1, 2, \ldots, m) \qquad (5)
From equations (4) and (5), equations (6) are obtained.

\sum_{i=1}^{m} e_i x_{i1} = 0, \quad \sum_{i=1}^{m} e_i x_{i2} = 0, \quad \ldots, \quad \sum_{i=1}^{m} e_i x_{in} = 0 \qquad (6)
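For readers who prefer matrix notation, equations (6) state that the residuals are orthogonal to each column of the learning-data matrix X. Using E = XW − Y from equation (3), this can be restated compactly as follows; this restatement is an added aside, not part of the original patent text, and equations (7) below are exactly its component-wise form:

```latex
% Equations (6) collected into matrix form; substituting E = XW - Y
% yields the normal equations, written out component by component in (7).
X^{\top} E = 0
\;\Longrightarrow\;
X^{\top}(XW - Y) = 0
\;\Longrightarrow\;
(X^{\top}X)\,W = X^{\top}Y
```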
In addition, in view of the relationship among the learning data x, the prediction coefficients w, and the residuals e, the following normal equations can be obtained from equations (6):
\left\{
\begin{aligned}
\Bigl(\sum_{i=1}^{m} x_{i1} x_{i1}\Bigr) w_1 + \Bigl(\sum_{i=1}^{m} x_{i1} x_{i2}\Bigr) w_2 + \cdots + \Bigl(\sum_{i=1}^{m} x_{i1} x_{in}\Bigr) w_n &= \sum_{i=1}^{m} x_{i1} y_i \\
\Bigl(\sum_{i=1}^{m} x_{i2} x_{i1}\Bigr) w_1 + \Bigl(\sum_{i=1}^{m} x_{i2} x_{i2}\Bigr) w_2 + \cdots + \Bigl(\sum_{i=1}^{m} x_{i2} x_{in}\Bigr) w_n &= \sum_{i=1}^{m} x_{i2} y_i \\
&\;\;\vdots \\
\Bigl(\sum_{i=1}^{m} x_{in} x_{i1}\Bigr) w_1 + \Bigl(\sum_{i=1}^{m} x_{in} x_{i2}\Bigr) w_2 + \cdots + \Bigl(\sum_{i=1}^{m} x_{in} x_{in}\Bigr) w_n &= \sum_{i=1}^{m} x_{in} y_i
\end{aligned}
\right. \qquad (7)
The number of normal equations (7) that can be formed equals the number of prediction coefficients w to be found. Thus, by solving equations (7), the set of optimal prediction coefficients w can be found. (To solve equations (7), the matrix of coefficients multiplying the prediction coefficients w must be nonsingular.) Equations (7) can be solved by, for example, the sweeping-out method (Gauss-Jordan elimination).
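As a rough sketch of this learning step (one possible implementation, not the patent's own), the Python code below forms the coefficient matrix and right-hand side of the normal equations (7) from learning data X and teaching data y and solves for the prediction coefficients; numpy.linalg.solve is used in place of an explicit sweeping-out routine, and all numeric values are made up.

```python
import numpy as np

def learn_prediction_coefficients(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Solve the normal equations (7): (X^T X) w = X^T y.

    X : (m, n) matrix of SD-pixel learning data (m samples, n taps per sample)
    y : (m,)   vector of true HD-pixel levels (teaching data)
    Returns the n prediction coefficients w.
    """
    A = X.T @ X   # coefficient matrix of the normal equations
    b = X.T @ y   # right-hand side
    # A must be nonsingular, as noted above; np.linalg.solve performs the
    # elimination that the text calls "sweeping" (Gauss-Jordan elimination).
    return np.linalg.solve(A, b)

# Toy usage: 5 samples, 3 SD taps per sample; the teaching data is generated
# from known coefficients so the solver can be checked against them.
X = np.array([[100.,  50., 200.],
              [110.,  60., 180.],
              [120.,  80., 150.],
              [ 90.,  40., 210.],
              [130.,  70., 160.]])
y = X @ np.array([0.2, 0.5, 0.3])
w = learn_prediction_coefficients(X, y)
print(w)   # -> approximately [0.2, 0.5, 0.3]
```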
Adaptive processing consists of finding the optimal sets of prediction coefficients w in the above manner and then using them in equation (1) to find predicted values E[y] close to the HD-image pixel levels y. Adaptive processing differs from interpolation in that components of the HD image that are not contained in the SD image are reproduced. As far as equation (1) alone is concerned, adaptive processing looks identical to interpolation with a general "interpolation filter"; however, because the sets of prediction coefficients w, which correspond to the tap coefficients of the interpolation filter, are found by learning with the teaching data y, the components of the HD image can be reproduced. In other words, a high-definition image can easily be obtained. In this sense, it may be said that adaptive processing has the function of creating definition in an image.
FIG. 12 shows a block diagram of an image converting apparatus that uses the above-described adaptive processing to convert an SD image into an HD image.
An SD image is supplied to a classification unit 201 and an adaptive processing unit 204. The classification unit 201 includes a class-tap generating circuit 202 and a classification circuit 203, and classifies the HD pixels whose predicted values are to be found (hereinafter referred to as "reference pixels") into predetermined classes based on the properties of the SD-image pixels corresponding to those reference pixels.
In other words, the class-tap generating circuit 202 extracts, from the SD image supplied to the classification unit 201, the SD pixels used to classify a reference pixel (hereinafter referred to as "class-taps"), for example a plurality of SD pixels having predetermined positional relationships to the reference pixel, and supplies them to the classification circuit 203. The classification circuit 203 detects the pattern (pixel-level distribution) of the SD pixel levels constituting the class-taps received from the class-tap generating circuit 202, and supplies the value pre-assigned to that pattern, as the class of the reference pixel, to the adaptive processing unit 204.
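The passage does not specify here how the pattern of class-tap levels is mapped to a class value. A common approach in this family of techniques is 1-bit ADRC (adaptive dynamic range coding), and the sketch below uses it purely as an illustrative assumption; the tap layout, function names, and values are hypothetical.

```python
import numpy as np

def extract_class_taps(sd_image: np.ndarray, row: int, col: int) -> np.ndarray:
    """Hypothetical class-tap extraction: a 2x2 block of SD pixels around
    the SD position corresponding to the reference HD pixel."""
    return sd_image[row:row + 2, col:col + 2].reshape(-1)

def classify_taps(taps: np.ndarray) -> int:
    """Map the tap pattern to a class code by 1-bit ADRC (an assumption,
    not taken from this passage): each tap is quantized to 0 or 1 depending
    on whether it lies below or above the mid-level of the taps' dynamic
    range, and the bits are packed into an integer class code."""
    lo, hi = taps.min(), taps.max()
    mid = (lo + hi) / 2.0
    bits = (taps >= mid).astype(int)
    code = 0
    for b in bits:
        code = (code << 1) | int(b)
    return code

# Toy usage on a small SD image.
sd = np.array([[ 10,  20,  30],
               [ 40, 200, 210],
               [ 50, 220, 230]], dtype=float)
taps = extract_class_taps(sd, 1, 1)      # [200, 210, 220, 230]
print(classify_taps(taps))               # bits [0, 0, 1, 1] -> class 3
```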
Specifically, by way of example, it is assumed that the HD image is composed of pixels (HD pixels) indicated by the symbols × in FIG. 13, and the SD image is composed of pixels (SD pixels) indicated by the symbols ◯ in FIG. 13. In FIG. 13, the SD image has half the number of pixels of the HD image in each of the horizontal and vertical directions. In F
