Colors only process to reduce package yield loss

Semiconductor device manufacturing: process – Making device or circuit responsive to nonelectrical signal – Responsive to electromagnetic radiation



Details

US classification: C438S144000
Type: Reexamination Certificate
Status: active
Patent number: 06482669

ABSTRACT:

BACKGROUND OF THE INVENTION
(1) Field of the Invention
The present invention relates to light collection efficiency and package yield improvements for the optical structure and microelectronic fabrication process of semiconductor color imaging devices.
(2) Description of Prior Art
Synthetic reconstruction of color images in solid-state analog or digital video cameras is conventionally performed through a combination of an array of optical microlens and spectral filter structures and integrated-circuit automatic gain control amplifier operations following a prescribed sequence of calibrations in an algorithm.
Typically, solid-state color cameras comprise charge-coupled device (CCD), charge-injection device (CID), or complementary metal-oxide semiconductor (CMOS) structures with planar arrays of microlenses and primary color filters mutually aligned to an area array of photodiodes patterned onto a semiconductor substrate. The principal challenge in the design of solid-state color camera devices is the trade-off between adding complexity and steps to the microelectronic fabrication process, wherein color filters are integrally formed in the semiconductor cross-sectional structure, versus adding complexity and integrated electronic circuitry for conversion of the optical analog signals into digital form and signal processing with color-specific automatic gain-control amplifiers requiring gain-ratio balance. The trade-off between microelectronic fabrication process complexity and electronic complexity is determined by a plurality of factors, including product manufacturing cost and optoelectronic performance.
Color-photosensitive integrated circuits require carefully configured color filters to be deposited on the upper layers of a semiconductor device in order to accurately translate a visual image into its color components. Conventional configurations may generate a color pixel by employing four adjacent pixels on an image sensor. Each of the four pixels is covered by a color filter, one red, one blue, and two green, thereby exposing each monochromatic pixel to only one of the three basic colors. Simple algorithms are subsequently applied to merge these monochromatic inputs to form one full color pixel. The color filter deposition process and its relationship to the microlens array formation process determine the production cycle time, test time, yield, and ultimate manufacturing cost. It is an object of the present invention to teach color-filter processes which optimize these stated factors without the microlens array(s) and the associated complex process steps.
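As an illustration only, the following sketch shows one plausible form of the simple merging described above; the RGGB cell arrangement, the averaging of the two green samples, and the function names are assumptions for clarity, not the specific algorithm taught by the patent.
```python
# Hypothetical sketch: merge 2x2 cells of monochromatic pixels (R, G, G, B)
# into full-color pixels. The R G / G B layout is an assumed arrangement.

def merge_cell(r, g1, g2, b):
    """Combine four filtered intensities into one (R, G, B) value,
    averaging the two green samples."""
    return (r, (g1 + g2) / 2.0, b)

def merge_image(raw, height, width):
    """Walk a raw sensor frame (2D list) in 2x2 steps and return a
    half-resolution color image."""
    color = []
    for y in range(0, height, 2):
        row = []
        for x in range(0, width, 2):
            r, g1 = raw[y][x], raw[y][x + 1]
            g2, b = raw[y + 1][x], raw[y + 1][x + 1]
            row.append(merge_cell(r, g1, g2, b))
        color.append(row)
    return color
```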
While color image formation may be accomplished by recording appropriately filtered images using three separate arrays, such systems tend to be large and costly. Cameras in which a full color image is generated by a single detector array offer significant improvements in size and cost but have inferior spatial resolution. Single-chip color arrays typically use color filters that are aligned with individual columns of photodetector elements to generate a color video signal. In a typical stripe configuration, green filters are used on every other column, with the intermediate columns alternately selected for red or blue recording. To generate a color video signal using an array of this type, intensity information from the green columns is interpolated to produce green data at the red and blue locations. This information is then used to calculate a red-minus-green signal from the red-filtered columns and a blue-minus-green signal from the blue ones.
Complete red-minus-green and blue-minus-green images are subsequently interpolated from this data, yielding three complete images. Commercial camcorders use a process similar to this to generate a color image but typically utilize more complicated mosaic-filter designs. The use of alternate columns to yield color information decreases the spatial resolution in the final image.
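The stripe-filter reconstruction described above can be sketched as follows; the assumed column ordering (G, R, G, B, ...) and the function names are illustrative assumptions, not details taken from the patent.
```python
# Hypothetical sketch of stripe-filter color reconstruction: interpolate
# green at red/blue columns, then form red-minus-green and blue-minus-green
# difference samples along one scan line.

def interpolate_green(row, col):
    """Estimate green at a red or blue column by averaging the neighboring
    green columns (edge columns fall back to the single available neighbor)."""
    left = row[col - 1] if col - 1 >= 0 else row[col + 1]
    right = row[col + 1] if col + 1 < len(row) else row[col - 1]
    return (left + right) / 2.0

def color_difference_signals(row, pattern):
    """Given one scan line and its per-column filter pattern ('G', 'R', 'B'),
    return lists of (column, R-G) and (column, B-G) samples."""
    r_minus_g, b_minus_g = [], []
    for col, f in enumerate(pattern):
        if f == 'R':
            r_minus_g.append((col, row[col] - interpolate_green(row, col)))
        elif f == 'B':
            b_minus_g.append((col, row[col] - interpolate_green(row, col)))
    return r_minus_g, b_minus_g
```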
The elementary unit-cell of the imager is defined as a pixel, characterized as an addressable area element with intensity and chroma attributes related to the spectral signal contrast derived from the photon collection efficiency. Prior art conventionally introduces a microlens on top of each pixel to focus light rays onto the photosensitive zone of the pixel.
The optical performance of semiconductor imaging arrays depends on pixel size and the geometrical optical design of the camera lens, microlenses, color filter combinations, spacers, and photodiode active area size and shape. The function of the microlens is to efficiently collect incident light falling within the acceptance cone and refract this light in an image formation process onto a focal plane at a depth defined by the planar array of photodiode elements. Significant depth of focus may be required to achieve high resolution images and superior spectral signal contrast since the typical configuration positions the microlens array at the top light collecting surface and the photosensors at the semiconductor substrate surface.
When a microlens element forms an image of an object passed by a video camera lens, the amount of radiant energy (light) collected is directly proportional to the area of the clear aperture, or entrance pupil, of the microlens. At the image falling on the photodiode active area, the illumination (energy per unit area) is inversely proportional to the image area over which the object light is spread. The aperture area is proportional to the square of the pupil diameter, and the image area is proportional to the square of the image distance, or focal length. The ratio of the focal length to the clear aperture of the microlens is known in optics as the relative aperture or f-number. The illumination in the image arriving at the plane of the photodetectors is inversely proportional to the square of the ratio of the focal length to the clear aperture. An alternative description uses the definition that the numerical aperture (NA) of the lens is the reciprocal of twice the f-number. The concept of depth of focus is that there exists an acceptable range of blur (due to defocusing) that will not adversely affect the performance of the optical system. The depth of focus depends on the wavelength of light and falls off inversely with the square of the numerical aperture. Truncation of illuminance patterns falling outside the microlens aperture results in diffractive spreading and clipping, or vignetting, producing undesirable nonuniformities and a dark ring around the image.
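In standard optical notation, the relations stated in this paragraph can be summarized as follows (f is the focal length, D the clear-aperture diameter, N the f-number, λ the wavelength); these proportionalities simply restate the qualitative claims above and are not formulas quoted from the patent.
```latex
% f-number and numerical aperture, as defined in the paragraph above
\[ N = \frac{f}{D}, \qquad \mathrm{NA} = \frac{1}{2N} \]
% image-plane illumination and depth of focus (proportionalities only)
\[ E \propto \frac{1}{N^{2}} = \frac{D^{2}}{f^{2}}, \qquad \delta \propto \frac{\lambda}{\mathrm{NA}^{2}} \]
```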
The limiting numerical aperture or f-stop of the imaging camera's optical system is determined by the smallest aperture element in the convolution train. Typically, the microlens will be the limiting aperture in video camera systems. Prior art is characterized by methods and structures to maximize the microlens aperture by increasing the radius of curvature, employing lens materials with increased refractive index, or using compound lens arrangements to extend the focal plane deeper to match the multilayer span required to image light onto the buried photodiodes at the base surface of the semiconductor substrate. Light falling between photodiode elements or on insensitive outer zones of the photodiodes, known as dead zones, may cause image smear or noise. With industry trends toward increased miniaturization, smaller photodiodes are associated with decreasing manufacturing cost and similarly mitigate against the extra steps of forming layers for prior art compound lens arrangements to gain increased focal-length imaging. Since the microlens is aligned and matched in physical size to shrinking pixel sizes, larger microlens sizes are not a practical direction. Higher refractive index materials for the microlens would increase the reflection loss at the air-microlens interface.
