Stylized rendering using a multi-flash camera
Image analysis – Image enhancement or restoration – Edge or contour enhancement
Type: Reexamination Certificate
Filed: 2004-05-17
Issued: 2010-06-15
Examiner: Hung, Yubin (Department: 2624)
US classes: C382S199000, C382S254000
Status: active
Patent number: 07738725
ABSTRACT:
A method generates a stylized image of a scene including an object. A set of n input images is acquired of the scene with a camera. Each of the n input images is illuminated by one of a set of n light sources mounted on the body of the camera at different positions from the center of projection of the camera lens. Ambient lighting can be used to illuminate one image. Features in the set of n input images are detected. The features include depth edges, intensity edges, and texture edges, which are used to determine qualitative depth relationships between the depth edges, the intensity edges, and the texture edges. The set of n input images is then combined into an output image that enhances the detected features according to the qualitative relationships.
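A minimal Python sketch of the depth-edge detection idea summarized above: each flash-lit image is divided by a max composite of all the flash images, and a sharp drop in that ratio when stepping away from the corresponding flash position marks a cast-shadow boundary, i.e. a depth edge. The function name, the four-flash layout, the one-pixel step, and the threshold are illustrative assumptions, not details taken from the patent; the sketch covers only the edge-detection step, not the final combination into a stylized output.

import numpy as np

def detect_depth_edges(images, steps, thresh=0.3):
    # images: dict flash_position -> grayscale image as float array in [0, 1]
    # steps:  dict flash_position -> (dy, dx) one-pixel step pointing away from
    #         that flash, the direction in which its cast shadows fall
    # The max composite approximates a shadow-free image of the scene.
    i_max = np.maximum.reduce(list(images.values())) + 1e-6
    edges = np.zeros(i_max.shape, dtype=bool)
    for pos, img in images.items():
        ratio = img / i_max              # ~1 where lit, much smaller in cast shadows
        dy, dx = steps[pos]
        # Ratio value one step further away from the flash.
        ahead = np.roll(ratio, shift=(-dy, -dx), axis=(0, 1))
        # A sharp drop when stepping away from the flash marks the transition
        # from the occluding object into its cast shadow: a depth edge.
        edges |= (ratio - ahead) > thresh
    return edges

# Example wiring for flashes placed left, right, above, and below the lens:
# steps = {"left": (0, 1), "right": (0, -1), "top": (1, 0), "bottom": (-1, 0)}
# depth_edge_mask = detect_depth_edges(flash_images, steps)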
REFERENCES:
patent: 5325449 (1994-06-01), Burt et al.
patent: 5697001 (1997-12-01), Ring et al.
patent: 5949901 (1999-09-01), Nichani et al.
patent: 6088470 (2000-07-01), Camus et al.
patent: 6490048 (2002-12-01), Rudd et al.
patent: 6859565 (2005-02-01), Baron
patent: 6903359 (2005-06-01), Miller et al.
patent: 6919906 (2005-07-01), Hoppe et al.
patent: 7071989 (2006-07-01), Nakata
patent: 7574067 (2009-08-01), Tu et al.
patent: 2002/0050988 (2002-05-01), Petrov et al.
patent: 2002/0118209 (2002-08-01), Hylen
patent: P07-93561 (1995-04-01), None
patent: P08-294113 (1996-11-01), None
patent: P09-186935 (1997-07-01), None
patent: P11-265440 (1999-11-01), None
patent: P2001-175863 (2001-06-01), None
patent: P2003-296736 (2003-10-01), None
DeCarlo, et al., "Stylization and Abstraction of Photographs," Proceedings of the 29th Conference on Computer Graphics and Interactive Techniques, vol. 21, issue 3, pp. 769-776, Jul. 2002.
Ganesan, et al., "Edge detection in untextured and textured images—A common computational framework," IEEE Transactions on Systems, Man and Cybernetics—Part B: Cybernetics, vol. 27, no. 5, pp. 823-834, Oct. 1997.
Ntalianis, et al., "Tube-Embodied Gradient Vector Flow Fields for Unsupervised Video Object Plane (VOP) Segmentation," Proceedings of the International Conference on Image Processing, vol. 2, pp. 637-640, Oct. 7-10, 2001.
Raskar, et al., "Blending Multiple Views," Proceedings of the 10th Pacific Conference on Computer Graphics and Applications, pp. 145-153, Oct. 9-11, 2002.
Ma, et al., "Integration of Skin, Edge and Texture to Extract Natural Gesture," SPIE vol. 4875 (2nd International Conference on Image and Graphics), pp. 716-722, 2002.
Perez-Jacome, et al., "Target detection via combination of feature-based target-measure images," SPIE vol. 3720, pp. 345-356, Apr. 1999.
Saito, et al., “Comprehensible Rendering of 3-D Shapes,” Proceedings of SIGGRAPH'90, 1990.
Markosian, et al., “Real-Time Non-photorealistic Rendering,” Proceedings of SIGGRAPH'97, pp. 415-420, 1997.
Hertzmann, “Painterly Rendering with Curved Brush Strokes of Multiple Sizes,” ACM SIGGRAPH, pp. 453-460, 1998.
Lin, et al., “Building detection and description from a single intensity image,” Computer Vision and Image Understanding: CVIU 72, 2, pp. 101-121, 1998.
Chuang, et al., “Shadow matting and compositing,” ACM Trans. Graph. 22, 3, pp. 494-500, 2003.
Toyama, et al., “Wallflower: Principles and Practice of Background Maintenance,” ICCV, pp. 255-261, 1999.
Weiss, “Deriving intrinsic images from image sequences,” Proceedings of ICCV, vol. 2, pp. 68-75, 2001.
Geiger, et al., “Occlusions and binocular stereo,” European Conference on Computer Vision, pp. 425-433, 1992.
Intille, et al., “Disparity-space images and large occlusion stereo,” ECCV (2), pp. 179-186, 1994.
Birchfield, et al., “Depth discontinuities by pixel-to-pixel stereo,” International Journal of Computer Vision 35, 3, pp. 269-293, 1999.
Scharstein, et al., “A taxonomy and evaluation of dense two-frame stereo correspondence algorithms,” International Journal of Computer Vision, vol. 47(1), pp. 7-42, 2002.
Sato, et al., “Stability issues in recovering illumination distribution from brightness in shadows,” IEEE Conf. on CVPR, pp. 400-407, 2001.
Huggins, et al., “Finding Folds: On the Appearance and Identification of Occlusion,” IEEE Conf. on Computer Vision and Pattern Recognition, IEEE Computer Society, vol. 2, pp. 718-725, 2001.
Langer, et al., "Space occupancy using multiple shadow images," International Conference on Intelligent Robots and Systems, pp. 390-396, 1995.
Daum, et al., “On 3-D surface reconstruction using shape from shadows,” CVPR, pp. 461-468, 1998.
Kriegman, et al., “What shadows reveal about object structure,” Journal of the Optical Society of America, pp. 1804-1813, 2001.
Savarese, et al., “Shadow Carving,” Proc. of the Int. Conf. on Computer Vision, 2001.
Inventors: Feris, Rogerio; Raskar, Ramesh
Examiner: Hung, Yubin
Assignee: Mitsubishi Electric Research Laboratories Inc.
Attorneys/Agents: Brinkman, Dirk; Vinokur, Gene