Data processing apparatus, data processing method, and medium

Data processing: generic control systems or specific application – Specific application – apparatus or process


Details

Classification: C700S043000
Type: Reexamination Certificate
Status: active
Patent number: 06571142

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to data processing apparatuses, data processing methods, and media, and more particularly, to a data processing apparatus, a data processing method, and a medium which allow an image having a low signal-to-noise (S/N) ratio to be converted to an image having a high S/N ratio.
2. Description of the Related Art
The assignee of the present invention has already proposed class-classification adaptive processing as processing for improving the quality of an image, such as its S/N ratio and resolution, and for otherwise improving an image.
The class-classification adaptive processing is formed of class-classification processing and adaptive processing. The class-classification processing classifies data into classes according to its characteristics, and the adaptive processing is then applied to each class.
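The text describes the two stages only in prose; the following minimal Python sketch shows how they fit together, assuming a made-up 1-bit threshold pattern as the classifying characteristic and per-class prediction coefficients that have already been learned (the function names and all values are hypothetical, for illustration only).

```python
import numpy as np

def class_code(taps: np.ndarray) -> int:
    """Class-classification processing (illustrative): encode each tap as 1 if it
    is at or above the mean of the taps, else 0, and read the bits as an integer."""
    bits = (taps >= taps.mean()).astype(int)
    return int("".join(str(b) for b in bits), 2)

def adaptive_predict(taps: np.ndarray, coeffs_per_class: dict) -> float:
    """Adaptive processing: linearly couple the taps with the prediction
    coefficients learned for the taps' class."""
    w = coeffs_per_class[class_code(taps)]
    return float(w @ taps)

# Illustrative use: 4 low-S/N taps and one set of made-up class coefficients.
taps = np.array([100.0, 120.0, 98.0, 110.0])
coeffs = {class_code(taps): np.array([0.25, 0.25, 0.25, 0.25])}
print(adaptive_predict(taps, coeffs))
```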
In the adaptive processing, pixels (hereinafter called low-S/N pixels, if necessary) constituting a low-S/N image (an image to be processed by the class-classification adaptive processing) are linearly coupled with predetermined prediction coefficients to obtain the prediction values of the pixels of the original image, that is, the high-S/N image from which the low-S/N image was derived. With the adaptive processing, an image is obtained by removing noise from the low-S/N image, or an image is obtained by reducing blur in the low-S/N image.
More specifically, for example, it is assumed that the original image (such as an image having no noise or an image having no blur) is used as master data, a low-S/N image obtained by superposing noise on the original image or by adding blur thereto is used as apprentice data, and the prediction value E[y] of the pixel value “y” of a pixel (hereinafter called the original pixel, if necessary) constituting the original image is to be obtained by a linear coupling model specified by a linear coupling of a set of the pixel values $x_1, x_2, \ldots$ of several low-S/N pixels (pixels constituting the low-S/N image) and predetermined prediction coefficients $w_1, w_2, \ldots$ In this case, the prediction value E[y] is expressed by the following expression.
$$E[y] = w_1 x_1 + w_2 x_2 + \cdots \tag{1}$$
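As a concrete reading of expression (1), the prediction value is simply the inner product of the low-S/N tap values and the prediction coefficients. The snippet below uses made-up numbers purely for illustration.

```python
import numpy as np

# Hypothetical low-S/N pixel values (prediction taps) and learned coefficients.
x = np.array([102.0, 98.0, 105.0, 101.0])   # x_1 ... x_n
w = np.array([0.30, 0.20, 0.25, 0.25])      # w_1 ... w_n

# Expression (1): E[y] = w_1*x_1 + w_2*x_2 + ...
predicted_y = float(w @ x)
print(predicted_y)
```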
To generalize the expression (1), a matrix “W” formed of the set of the prediction coefficients “w”, a matrix “X” formed of the set of the apprentice data, and a matrix “Y′” formed of the prediction values E[y] are defined in the following way.
$$X = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1n} \\ x_{21} & x_{22} & \cdots & x_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{m1} & x_{m2} & \cdots & x_{mn} \end{bmatrix}, \qquad W = \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{bmatrix}, \qquad Y' = \begin{bmatrix} E[y_1] \\ E[y_2] \\ \vdots \\ E[y_m] \end{bmatrix}$$
Then the following observation equation is derived:

$$XW = Y' \tag{2}$$
A component $x_{ij}$ of the matrix X indicates the j-th apprentice data in the i-th apprentice-data set (the apprentice-data set used for predicting the i-th master data $y_i$), and a component $w_j$ of the matrix W indicates the prediction coefficient to be multiplied by the j-th apprentice data in the apprentice-data set. The i-th master data is indicated by $y_i$, and therefore, $E[y_i]$ indicates the prediction value of the i-th master data.
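Under these definitions, the observation equation (2) is an ordinary matrix product. The snippet below is only a shape check with synthetic values; the sizes m and n and all numbers are made up for illustration.

```python
import numpy as np

m, n = 6, 4                      # number of apprentice-data sets, taps per set
rng = np.random.default_rng(0)

X = rng.normal(size=(m, n))      # row i holds the i-th apprentice-data set x_i1 ... x_in
w = rng.normal(size=n)           # prediction coefficients w_1 ... w_n

Y_prime = X @ w                  # observation equation (2): entry i is E[y_i]
assert Y_prime.shape == (m,)
```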
It is also assumed that the least squares method is applied to this observation equation to obtain a prediction value E[y] close to the pixel value “y” of the original pixel. In this case, when a matrix “Y” formed of the set of the true pixel values “y” (true values) of the original pixels serving as master data and a matrix “E” formed of the set of the remainders “e” of the prediction values E[y] against the pixel values “y” of the original pixels are defined in the following way,
$$E = \begin{bmatrix} e_1 \\ e_2 \\ \vdots \\ e_m \end{bmatrix}, \qquad Y = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{bmatrix},$$
the following remainder equation is derived from the equation (2):

$$XW = Y + E \tag{3}$$
In this case, the prediction coefficient $w_i$ used for obtaining the prediction value E[y] close to the pixel value “y” of the original pixel is obtained by setting the square error

$$\sum_{i=1}^{m} e_i^2$$

to the minimum.
Therefore, the prediction coefficient $w_i$ obtained when the above square error differentiated by the prediction coefficient $w_i$ is zero, in other words, the prediction coefficient $w_i$ satisfying the following expression, is the most appropriate value for obtaining a prediction value E[y] close to the pixel value “y” of the original pixel.
$$e_1 \frac{\partial e_1}{\partial w_i} + e_2 \frac{\partial e_2}{\partial w_i} + \cdots + e_m \frac{\partial e_m}{\partial w_i} = 0 \qquad (i = 1, 2, \ldots, n) \tag{4}$$
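Expression (4) follows directly from differentiating the square error term by term with the chain rule and dropping the constant factor 2:

$$\frac{\partial}{\partial w_i} \sum_{k=1}^{m} e_k^2 = \sum_{k=1}^{m} 2\, e_k\, \frac{\partial e_k}{\partial w_i} = 0 \qquad (i = 1, 2, \ldots, n)$$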
The expression (3) is differentiated by the prediction coefficient $w_i$ to obtain the following expressions.

$$\frac{\partial e_i}{\partial w_1} = x_{i1}, \quad \frac{\partial e_i}{\partial w_2} = x_{i2}, \quad \ldots, \quad \frac{\partial e_i}{\partial w_n} = x_{in} \qquad (i = 1, 2, \ldots, m) \tag{5}$$
From the expressions (4) and (5), the expression (6) is derived.

$$\sum_{i=1}^{m} e_i x_{i1} = 0, \quad \sum_{i=1}^{m} e_i x_{i2} = 0, \quad \ldots, \quad \sum_{i=1}^{m} e_i x_{in} = 0 \tag{6}$$
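In matrix form, expression (6) states that $X^{\mathsf T} E = 0$; substituting $E = XW - Y$ from the remainder equation (3) gives

$$X^{\mathsf T} X\, W = X^{\mathsf T} Y,$$

which is a compact way of writing the normal equations derived next.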
With the relationship among the apprentice data “x”, the prediction coefficients “w”, the master data “y”, and the remainders “e” in the remainder equation (3) being taken into account, the following normal equations are obtained from the expression (6).
$$\left\{
\begin{aligned}
\left(\sum_{i=1}^{m} x_{i1} x_{i1}\right) w_1 + \left(\sum_{i=1}^{m} x_{i1} x_{i2}\right) w_2 + \cdots + \left(\sum_{i=1}^{m} x_{i1} x_{in}\right) w_n &= \sum_{i=1}^{m} x_{i1} y_i \\
\left(\sum_{i=1}^{m} x_{i2} x_{i1}\right) w_1 + \left(\sum_{i=1}^{m} x_{i2} x_{i2}\right) w_2 + \cdots + \left(\sum_{i=1}^{m} x_{i2} x_{in}\right) w_n &= \sum_{i=1}^{m} x_{i2} y_i \\
&\;\;\vdots \\
\left(\sum_{i=1}^{m} x_{in} x_{i1}\right) w_1 + \left(\sum_{i=1}^{m} x_{in} x_{i2}\right) w_2 + \cdots + \left(\sum_{i=1}^{m} x_{in} x_{in}\right) w_n &= \sum_{i=1}^{m} x_{in} y_i
\end{aligned}
\right. \tag{7}$$
The same number of normal equations (7) as that of prediction coefficients “w” to be obtained can be generated when a sufficient number of apprentice data “x” and master data “y” are prepared. Therefore, the most appropriate prediction coefficients “w” are obtained by solving the equations (7); to solve the equations (7), it is necessary that the matrix formed of the coefficients applied to the prediction coefficients “w” be regular (non-singular). A sweeping-out method (Gauss-Jordan elimination) can be used to solve the equations (7).
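As a concrete illustration of this learning step, the sketch below stacks the apprentice-data sets row-wise in a matrix X and the master data in a vector y, forms the normal equations (7) as $X^{\mathsf T}X\,w = X^{\mathsf T}y$, and solves them. The helper name and the synthetic data are made up for illustration, and `np.linalg.solve` stands in for a hand-written sweeping-out (Gauss-Jordan) routine; any exact linear solver gives the same coefficients when the matrix is regular.

```python
import numpy as np

def learn_prediction_coefficients(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Form and solve the normal equations (7): (X^T X) w = X^T y.

    X holds one apprentice-data set per row (shape m x n); y holds the
    corresponding master data (shape m).
    """
    A = X.T @ X                   # matrix of coefficients applied to the coefficients w
    b = X.T @ y                   # right-hand side of the normal equations
    if np.linalg.matrix_rank(A) < A.shape[0]:
        raise ValueError("coefficient matrix is not regular; equations (7) cannot be solved")
    return np.linalg.solve(A, b)  # the most appropriate prediction coefficients w

# Illustrative use with synthetic data (values are made up, not from the patent).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                   # 100 apprentice-data sets of 5 taps each
true_w = np.array([0.4, 0.3, 0.1, 0.1, 0.1])    # "ground truth" coefficients for the demo
y = X @ true_w + 0.01 * rng.normal(size=100)    # master data with a little noise added
w = learn_prediction_coefficients(X, y)         # recovers approximately true_w
```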
As described above, the most appropriate prediction coefficients “w” are obtained first, and then, a prediction value E[y] close to the pixel value “y” of the original pixel is obtained from the expression (1) by the use of the prediction coefficients “w.” This is the adaptive processing.
The adaptive processing differs, for example, from simple interpolation processing in that a component not included in a low-S/N image but included in the original image is reproduced. In other words, as far as the expression (1) is concerned, the adaptive processing looks the same as interpolation processing using a so-called interpolation filter. However, since the prediction coefficients “w,” which correspond to the tap coefficients of the interpolation filter, are obtained by learning with the use of master data “y,” a component included in the original image can be reproduced. This means that a high-S/N image can be easily obtained. From this, it can be said that the adaptive processing has an image creation (resolution-improving) function. Therefore, the processing can be used not only for a case in which prediction values of the original image are obtained . . .
