Data processing apparatus, data processing method, learning...

Image analysis – Learning systems – Trainable classifiers or pattern recognizers

Reexamination Certificate


Details

C382S254000, C382S156000, C706S020000, C706S021000


active

06678405


BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to data processing apparatuses and methods, learning apparatuses and methods, and media, and in particular, to a data processing apparatus and method for increasing processing efficiency when image data, etc., are processed, and to a medium provided with the same.
2. Description of the Related Art
The assignee of the present invention has proposed classification adaptive processing as processing for increasing image quality and improving other image features.
The classification adaptive processing consists of classifying processing and adaptive processing. In the classifying processing, data are classified according to their properties, and the adaptive processing is then performed on the data in each class. The adaptive processing is the following technique.
In the adaptive processing, by linearly linking, for example, pixels (hereinafter referred to as “input pixels”) constituting an input image (an image to be processed by the classification adaptive processing) and predetermined prediction coefficients, predicted values of pixels of an original image (e.g., an image including no noise, an image free from blurring, etc.) are found, whereby an image in which noise included in the input image is eliminated, an image in which blurring generated in the input image is reduced, etc., can be obtained.
Accordingly, it is, for example, assumed that the original image is teacher data, and that an image obtained by superimposing noise on, or blurring, the original image is student data. The finding of predicted values E[y] of pixel levels y of pixels constituting the original image by using a linear first-order linking model, defined by linear linking of a set of a plurality of student data (pixel levels) x_1, x_2, ..., and predetermined prediction coefficients w_1, w_2, ..., is considered. In this case, the predicted values E[y] can be represented by the following expression:

E[y] = w_1·x_1 + w_2·x_2 + ...   (1)
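As a concrete illustration of expression (1), the prediction is simply a weighted sum of the input pixels. The following sketch computes one predicted pixel level; the tap values and coefficients are made up for illustration, not taken from the patent:

```python
def predict(x, w):
    """Return E[y] = w_1*x_1 + w_2*x_2 + ... for one set of input pixels."""
    return sum(wi * xi for wi, xi in zip(w, x))

taps = [120.0, 128.0, 132.0]   # hypothetical input-pixel levels
coeffs = [0.25, 0.50, 0.25]    # hypothetical prediction coefficients
print(predict(taps, coeffs))   # 127.0
```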
For generalizing expression (1), a matrix X composed of the sets of student data, a matrix W composed of the set of prediction coefficients w, and a matrix Y' composed of the set of predicted values E[y] are defined as follows:

X = | x_11  x_12  ...  x_1n |
    | x_21  x_22  ...  x_2n |
    |  ...                  |
    | x_m1  x_m2  ...  x_mn |

W = | w_1 |        Y' = | E[y_1] |
    | w_2 |             | E[y_2] |
    | ... |             |  ...   |
    | w_n |             | E[y_m] |

With these definitions, the following observation equation holds:

XW = Y'   (2)
Here, a component x_ij of the matrix X represents the j-th student data in the i-th set of student data (the set of student data used in prediction of the i-th teacher data y_i), and a component w_j of the matrix W represents the prediction coefficient by which the j-th student data in each set is multiplied. A component y_j represents the j-th teacher data, and E[y_j] accordingly represents a predicted value of the j-th teacher data.
The finding of each predicted value E[y] close to each pixel level y of the original pixels by applying the least-squares method to the observation equation (2) is considered. In this case, a matrix Y composed of the set of actual pixel levels y of the original pixels, which are used as teacher data, and a matrix E composed of the set of residuals e of the predicted values E[y] from the pixel levels y are defined as follows:

E = | e_1 |        Y = | y_1 |
    | e_2 |            | y_2 |
    | ... |            | ... |
    | e_m |            | y_m |

From expression (2), the following residual equation then holds:

XW = Y + E   (3)
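The residual equation (3) states that each residual is the gap between the linear prediction and the corresponding teacher datum, e_i = (w_1·x_i1 + ... + w_n·x_in) − y_i. A minimal sketch with made-up values:

```python
def residuals(X, w, y):
    """Residuals e_i = (w_1*x_i1 + ... + w_n*x_in) - y_i per equation (3)."""
    return [sum(wj * xij for wj, xij in zip(w, row)) - yi
            for row, yi in zip(X, y)]

X = [[1.0, 2.0], [3.0, 4.0]]   # two sets of student data (m=2, n=2)
w = [0.5, 0.5]                 # trial prediction coefficients
y = [1.0, 4.0]                 # teacher data
print(residuals(X, w, y))      # [0.5, -0.5]
```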
In this case, the prediction coefficients w for finding the predicted values E[y] close to the pixel levels y of the original pixels can be found by minimizing the following squared error:

Σ_{i=1}^{m} e_i^2
Therefore, when the result of differentiating the above squared error with respect to the prediction coefficient w_i is zero, the prediction coefficient w_i that satisfies the following expression is an optimal value for finding the predicted values E[y] close to the pixel levels y of the original pixels:

e_1·(∂e_1/∂w_i) + e_2·(∂e_2/∂w_i) + ... + e_m·(∂e_m/∂w_i) = 0   (i = 1, 2, ..., n)   (4)
Accordingly, by differentiating expression (3) with respect to the prediction coefficients w, the following expressions hold:

∂e_i/∂w_1 = x_i1,  ∂e_i/∂w_2 = x_i2,  ...,  ∂e_i/∂w_n = x_in   (i = 1, 2, ..., m)   (5)
From expressions (4) and (5), the following expressions are obtained:

Σ_{i=1}^{m} e_i·x_i1 = 0,  Σ_{i=1}^{m} e_i·x_i2 = 0,  ...,  Σ_{i=1}^{m} e_i·x_in = 0   (6)
When the relationships among the student data x, the prediction coefficients w, the teacher data y, and the residuals e are taken into consideration, the following normal equations can be obtained from expression (6):

(Σ_{i=1}^{m} x_i1·x_i1)·w_1 + (Σ_{i=1}^{m} x_i1·x_i2)·w_2 + ... + (Σ_{i=1}^{m} x_i1·x_in)·w_n = Σ_{i=1}^{m} x_i1·y_i
(Σ_{i=1}^{m} x_i2·x_i1)·w_1 + (Σ_{i=1}^{m} x_i2·x_i2)·w_2 + ... + (Σ_{i=1}^{m} x_i2·x_in)·w_n = Σ_{i=1}^{m} x_i2·y_i
  ...
(Σ_{i=1}^{m} x_in·x_i1)·w_1 + (Σ_{i=1}^{m} x_in·x_i2)·w_2 + ... + (Σ_{i=1}^{m} x_in·x_in)·w_n = Σ_{i=1}^{m} x_in·y_i   (7)
By preparing a certain number of student data x and a certain number of teacher data y, as many normal equations (7) as there are prediction coefficients w can be formed. Accordingly, by solving equations (7) (although this requires the matrix of coefficients multiplying the prediction coefficients w to be regular, i.e., nonsingular), the optimal prediction coefficients w can be found. For solving equations (7), Gauss-Jordan elimination, or the like, may be used.
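The learning procedure described above — form the normal equations (7) from the student and teacher data, then solve them, e.g., by Gauss-Jordan elimination — can be sketched as follows. The data values are invented for illustration (three sets of two-tap student data generated from w = (2, 1) without noise):

```python
def normal_equations(X, y):
    """Build A·w = b per equations (7): A[j][k] = sum_i x_ij*x_ik,
    b[j] = sum_i x_ij*y_i."""
    n = len(X[0])
    A = [[sum(row[j] * row[k] for row in X) for k in range(n)] for j in range(n)]
    b = [sum(X[i][j] * y[i] for i in range(len(X))) for j in range(n)]
    return A, b

def gauss_jordan(A, b):
    """Solve A·w = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [A[i][:] + [b[i]] for i in range(n)]          # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]               # pivot row swap
        p = M[col][col]
        M[col] = [v / p for v in M[col]]              # scale pivot row to 1
        for r in range(n):
            if r != col:                              # eliminate column elsewhere
                f = M[r][col]
                M[r] = [vr - f * vc for vr, vc in zip(M[r], M[col])]
    return [row[-1] for row in M]

X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # m = 3 sets of n = 2 student data
y = [2.0, 1.0, 3.0]                        # teacher data from w = (2, 1)
A, b = normal_equations(X, y)
print(gauss_jordan(A, b))                  # recovers approximately [2.0, 1.0]
```

With noise-free data the recovered coefficients match the generating ones exactly; with noisy teacher data the same procedure yields the least-squares optimum.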
The adaptive processing is the above-described processing in which the predicted values E[y] close to the pixel levels y of the original pixels are found based on expression (1), using the prediction coefficients w found beforehand.
The adaptive processing differs from, for example, simple interpolation in that components which are not included in an input image but which are included in the original image are reproduced. In other words, as long as attention is paid only to expression (1), the adaptive processing looks the same as interpolation using a so-called interpolating filter. However, in the adaptive processing, the prediction coefficients w, which correspond to the tap coefficients of the interpolating filter, are obtained by a type of learning using the teacher data y for each class. Thus, components included in the original image can be reproduced. That is, an image having a high signal-to-noise ratio can be easily obtained. From this feature, it may be said that the adaptive processing has an image creating (resolution creating) effect.
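Since the prediction coefficients are learned per class, prediction at run time involves first classifying the input taps and then looking up that class's coefficient table. The classifying scheme below (a 1-bit-per-tap threshold pattern) and the coefficient tables are illustrative assumptions for a sketch, not the patent's specific method:

```python
def classify(taps):
    """Map a tap vector to a class code: one bit per tap, set when the
    tap is above the mean of the taps (an assumed toy scheme)."""
    mean = sum(taps) / len(taps)
    return sum(1 << i for i, t in enumerate(taps) if t > mean)

# Hypothetical per-class coefficient tables; in practice each table would
# be found beforehand by the learning described above.
coeff_table = {0: [0.34, 0.33, 0.33], 5: [0.25, 0.50, 0.25]}

taps = [140.0, 100.0, 150.0]        # taps 0 and 2 exceed the mean -> class 5
w = coeff_table[classify(taps)]
print(sum(wi * xi for wi, xi in zip(w, taps)))   # 122.5
```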
