Image processing apparatus

Facsimile and static presentation processing – Static presentation processing – Attribute control

Details

C358S520000, C382S167000

Reexamination Certificate

active

06501563

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to an image processing apparatus capable of suppressing changes in the hue of color image signals transferred as image data from external devices, and also capable of increasing the sharpness of outlines or boundaries in the image data.
2. Description of the Related Art
CONVENTIONAL EXAMPLE 1
FIG. 1 is a block diagram showing the configuration of a conventional image processing apparatus disclosed in the patent document whose laid-open publication number is JP-A-58/198969, “Method of sharpness for image”. In FIG. 1, the reference character S0 designates a sharp signal and U0 denotes an un-sharp signal. The reference number 141 designates a subtracter. The subtracter 141 receives both the sharp signal S0 and the un-sharp signal U0 and calculates the difference (S0−U0) between the two signals. A multiplier 142 receives the difference from the subtracter 141 and multiplies the difference (S0−U0) by a constant value k. A multiplier 143 receives the result of this multiplication from the multiplier 142.
Next, a divider 144 receives an image signal I0 and an image signal Ii. In order to obtain a sharpness highlighting signal, the divider 144 divides the image signal Ii by the image signal I0 and outputs the ratio Ii/I0. The multiplier 143 receives the ratio Ii/I0 from the divider 144 and multiplies the result k(S0−U0) by the ratio Ii/I0. An adder 145 receives both the image signal Ii and the result k(S0−U0)(Ii/I0), adds them, and outputs the result Ii′ of the addition. That is, the image signal Ii′ obtained by the sharpness processing can be expressed by the following equation (1):
Ii′ = Ii + (Ii/I0) × k × (S0−U0)  (1).
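As an informal illustration only, not part of the original disclosure, the following Python sketch applies equation (1) to one color component Ii, scaling the unsharp-mask difference (S0−U0) by the ratio Ii/I0. The function name sharpen_component, the use of NumPy, and the small eps guard against division by zero are assumptions made for the sketch.

import numpy as np

def sharpen_component(I_i, I_0, S_0, U_0, k=1.0):
    # Equation (1): Ii' = Ii + (Ii/I0) * k * (S0 - U0)
    eps = 1e-6                      # guard against division by zero (assumption)
    ratio = I_i / (I_0 + eps)       # divider 144
    edge = k * (S_0 - U_0)          # subtracter 141 and multiplier 142
    return I_i + ratio * edge       # multiplier 143 and adder 145

# Illustrative signals: S_0 is taken as the reference image I_0 itself,
# U_0 as a blurred copy of it, and I_i as one color component.
I_0 = np.array([10.0, 10.0, 50.0, 50.0])
U_0 = np.array([10.0, 20.0, 40.0, 50.0])
I_i = 0.5 * I_0
print(sharpen_component(I_i, I_0, S_0=I_0, U_0=U_0, k=1.0))

Scaling the correction by Ii/I0 makes the added edge signal proportional to the level of the component being processed.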
CONVENTIONAL EXAMPLE 2
FIG. 2 is a block diagram showing the configuration of another conventional image processing apparatus disclosed in the patent document whose laid-open publication number is JP-A-61/273073, “Edge highlighting processing apparatus for color gradation image information”. In FIG. 2, the reference number 151 designates an RGB/brightness conversion unit, 152 denotes an RGB/YMC conversion unit, 153 indicates a multiplexer, and 154 designates an outline highlighting unit.
Both the RGB/brightness conversion unit 151 and the RGB/YMC conversion unit 152 receive digital signals R, G, and B. The RGB/brightness conversion unit 151 outputs a brightness I, which can be expressed by the following equation (2):
I = 0.30 × R + 0.59 × G + 0.11 × B  (2).
On the other hand, the RGB/YMC conversion unit 152 performs only a complementary operation, because R and C, G and M, and B and Y are in a complementary color relationship, respectively.
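The following Python sketch illustrates equation (2) and the complementary RGB/YMC conversion described above; it is not part of the original disclosure, and the 8-bit maximum of 255 used for the complement is an assumption, since the text only states that the color pairs are complementary.

def rgb_to_brightness(r, g, b):
    # Equation (2): I = 0.30*R + 0.59*G + 0.11*B
    return 0.30 * r + 0.59 * g + 0.11 * b

def rgb_to_ymc(r, g, b, max_value=255):
    # Complementary operation of the RGB/YMC conversion unit 152:
    # C complements R, M complements G, Y complements B.
    # The 8-bit maximum of 255 is an illustrative assumption.
    return max_value - b, max_value - g, max_value - r   # (Y, M, C)

print(rgb_to_brightness(200, 100, 50))   # brightness I
print(rgb_to_ymc(200, 100, 50))          # (Y, M, C)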
The multiplexer 153 receives the output of the RGB/YMC conversion unit 152. The multiplexer 153 selects and outputs only one of the three kinds of information Y, M, and C supplied from the RGB/YMC conversion unit 152 through its input terminals, according to the state of its selection terminals. The outline highlighting unit 154 receives the selected signal, namely the YMC signal, from the multiplexer 153 through an input terminal, and further receives the brightness I from the RGB/brightness conversion unit 151. The brightness signal I is delayed pixel by pixel according to a clock signal and then converted to intermediate data D1 expressed by the following equation (3):
D1 = 2In − (In+1 + In−1)  (3)
where In is the brightness I of the n-th pixel.
The intermediate data represent the result of an edge extraction operation. For example, the intermediate data D1 become 0 when the brightness I does not change, and take a negative or a positive value when the brightness I changes.
Next, the intermediate data D1 are converted into a complementary coefficient D2 with reference to a table defining the relationship between edge extraction results and complementary coefficients. In this table, the coefficient is set to 1 when the edge extraction result is 0, lies between 0 and 1 when the edge extraction result is a negative value, and is not less than 1 when the edge extraction result is a positive value. The final result, obtained when the complementary coefficient D2 is multiplied by the selected YMC signal, can be expressed by the following equation (4):
O = {Y × D2, M × D2, C × D2}  (4).
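The outline highlighting unit 154 can be sketched in Python as below; this is only an illustration, and the linear mapping from the edge extraction result D1 to the complementary coefficient D2 (the gain parameter and the clamp at zero) is an assumed stand-in for the table described above, chosen so that D2 = 1 when D1 = 0, D2 < 1 for negative D1, and D2 > 1 for positive D1.

def edge_highlight(brightness, selected, gain=0.01):
    # brightness : brightness values I along a scan line (equation (2))
    # selected   : the one YMC signal chosen by the multiplexer 153
    # gain       : slope of the assumed D1 -> D2 mapping
    out = []
    n_pixels = len(brightness)
    for n in range(n_pixels):
        prev_i = brightness[max(n - 1, 0)]             # In-1 (border replicated, assumption)
        next_i = brightness[min(n + 1, n_pixels - 1)]  # In+1
        d1 = 2 * brightness[n] - (next_i + prev_i)     # equation (3)
        d2 = max(0.0, 1.0 + gain * d1)                 # assumed table lookup
        out.append(selected[n] * d2)                   # equation (4)
    return out

I_line = [100, 100, 100, 160, 220, 220]   # brightness values
Y_line = [40, 40, 40, 90, 140, 140]       # selected YMC signal
print(edge_highlight(I_line, Y_line))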
CONVENTIONAL EXAMPLE 3
FIG. 3 is a diagram showing the operation flow of an image processing method executed by another conventional image processing apparatus disclosed in the patent document whose laid-open publication number is JP-A-3/175876, “Edge processing method for color images”. In the conventional image processing method shown in FIG. 3, the color information obtained by scanning color documents with an image device is resolved into a red component, a green component, and a blue component. Each of the red, green, and blue components is scanned and sampled pixel by pixel. Finally, the sampled image information is used as the input image data (R1, G1, B1).
The input image data R1, G1, and B1 are converted to three stimulus values X1, Y1, and Z1 for a target pixel and for the adjacent pixels surrounding the target pixel in a specific pixel area (Step ST161).
Then, CIE color coordinates x1 and y1 and a visual reflection factor Y1 are obtained based on the three stimulus values X1, Y1, and Z1 (Step ST162). Sharpness processing of the visual reflection factor Y1 is then performed by using the well-known Laplacian filter to obtain an edge-processed visual reflection factor Y2 (Step ST163). Then, the three stimulus values X2, Y2, and Z2 are calculated by using the CIE color coordinates x1 and y1 and the visual reflection factor Y2 that has been obtained by the edge processing (Step ST164).
Finally, the three stimulus values X2, Y2, and Z2 are converted to the image information R2, G2, and B2 (Step ST165), and the image information R2, G2, and B2 are then outputted as output image data to external image devices (not shown).
A concrete example of the above image processing will now be explained.
In Step ST161, the input image information R1, G1, and B1 are converted based on the following equation (5):
[X1]   [0.6067  0.1736  0.2001]   [R1]
[Y1] = [0.2988  0.5868  0.1144] × [G1]   (5)
[Z1]   [0.0000  0.0661  1.1150]   [B1]
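Equation (5) is a fixed 3×3 matrix applied to each pixel; a minimal NumPy sketch follows, in which the names M_RGB_TO_XYZ and rgb_to_xyz are illustrative assumptions.

import numpy as np

# Matrix of equation (5): (X1, Y1, Z1) = M * (R1, G1, B1)
M_RGB_TO_XYZ = np.array([
    [0.6067, 0.1736, 0.2001],
    [0.2988, 0.5868, 0.1144],
    [0.0,    0.0661, 1.1150],
])

def rgb_to_xyz(r1, g1, b1):
    # Step ST161: convert the input image information to the three stimulus values.
    return M_RGB_TO_XYZ @ np.array([r1, g1, b1], dtype=float)

print(rgb_to_xyz(0.8, 0.4, 0.2))   # X1, Y1, Z1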
At Step ST162, the CIE color coordinates x1 and y1 and the visual reflection factor Y1 are calculated based on the following equations (6) and (7), respectively:
x1 = X1/(X1 + Y1 + Z1),  y1 = Y1/(X1 + Y1 + Z1)  (6)
Visibility = Y1  (7)
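Step ST162 is a direct application of equations (6) and (7); in the Python sketch below, the guard for X1 + Y1 + Z1 = 0 is an added assumption for a completely black pixel.

def xyz_to_xyy(X1, Y1, Z1):
    # Step ST162: CIE color coordinates x1, y1 (equation (6)) and
    # visual reflection factor Y1 (equation (7)).
    total = X1 + Y1 + Z1
    if total == 0.0:               # assumed guard for a black pixel
        return 0.0, 0.0, 0.0
    return X1 / total, Y1 / total, Y1

print(xyz_to_xyy(0.5948, 0.4966, 0.2494))   # x1, y1, Y1 for illustrative stimulus values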
In Step ST163, the sharpness processing is performed by using the Laplacian filter. In this sharpness processing, when the visibility of a target pixel is Y1, and when the visibilities of the adjacent pixels located in front of, behind, to the right of, and to the left of the target pixel are Yb, Yc, Yd, and Ye, respectively, the sharpness-processed visibility Y2 can be expressed by the following equation (8), in which “Parm” denotes the degree of sharpness:
Y2 = Y1 − Parm × (Yb + Yc + Yd + Ye − (4 × Y1))  (8).
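Equation (8) is a 4-neighbour Laplacian correction of the visibility channel. The sketch below applies it to a 2-D array of Y1 values; replicating the border pixels is an assumption, since the text above does not state how image borders are handled.

import numpy as np

def sharpen_visibility(Y1, parm=0.5):
    # Step ST163, equation (8): Y2 = Y1 - Parm * (Yb + Yc + Yd + Ye - 4*Y1)
    padded = np.pad(Y1, 1, mode="edge")   # replicate border pixels (assumption)
    yb = padded[:-2, 1:-1]                # pixel in front of (above) the target
    yc = padded[2:, 1:-1]                 # pixel behind (below) the target
    yd = padded[1:-1, 2:]                 # pixel to the right of the target
    ye = padded[1:-1, :-2]                # pixel to the left of the target
    return Y1 - parm * (yb + yc + yd + ye - 4.0 * Y1)

Y1 = np.array([[0.2, 0.2, 0.8, 0.8],
               [0.2, 0.2, 0.8, 0.8]])
print(sharpen_visibility(Y1))   # values dip below 0.2 and rise above 0.8 at the edge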
In Step ST164, the three stimulus values X2, Y2, and Z2 are calculated by using the CIE color coordinates x1 and y1 and the visual reflection factor Y2 that has been obtained by the edge processing. These three stimulus values X2, Y2, and Z2 are obtained by performing the inverse of the conversion shown in equation (6).
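Inverting equation (6) gives X2 = (x1/y1) × Y2 and Z2 = ((1 − x1 − y1)/y1) × Y2, since x1 and y1 fix the proportions of the three stimulus values. A Python sketch, with an assumed guard for y1 = 0, follows.

def xyy_to_xyz(x1, y1, Y2):
    # Step ST164: invert equation (6). From x1 = X/(X+Y+Z) and y1 = Y/(X+Y+Z)
    # it follows that X2 = (x1/y1)*Y2 and Z2 = ((1 - x1 - y1)/y1)*Y2.
    if y1 == 0.0:                  # assumed guard for a degenerate chromaticity
        return 0.0, 0.0, 0.0
    X2 = (x1 / y1) * Y2
    Z2 = ((1.0 - x1 - y1) / y1) * Y2
    return X2, Y2, Z2

print(xyy_to_xyz(0.4436, 0.3704, 0.52))   # X2, Y2, Z2 for illustrative values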
In Step ST165, the three stimulus values X2, Y2, and Z2 are converted to the output image information R2, G2, and B2.
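Assuming Step ST165 applies the inverse of the matrix in equation (5) to obtain R2, G2, and B2 (the specific conversion is not spelled out above), a Python sketch would be:

import numpy as np

M_RGB_TO_XYZ = np.array([
    [0.6067, 0.1736, 0.2001],
    [0.2988, 0.5868, 0.1144],
    [0.0,    0.0661, 1.1150],
])
M_XYZ_TO_RGB = np.linalg.inv(M_RGB_TO_XYZ)   # assumed inverse of the equation (5) matrix

def xyz_to_rgb(X2, Y2, Z2):
    # Step ST165 (assumed): convert the edge-processed stimulus values back to
    # the output image information R2, G2, B2.
    return M_XYZ_TO_RGB @ np.array([X2, Y2, Z2], dtype=float)

print(xyz_to_rgb(0.5948, 0.4966, 0.2494))   # approximately recovers the original RGB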
