Throughput enhanced video communication

Pulse or digital communications – Bandwidth reduction or expansion – Television or motion video signal

Reexamination Certificate

Details

Status: active

Patent number: 06597736

ABSTRACT:

BACKGROUND OF THE INVENTION
This invention relates generally to communications within a computer network and more particularly to video image communications and display.
Video imaging refers to the rendering of text and graphics images on a display. Each video image is a sequence of frames; typically thirty frames are displayed on a screen every second. Images are transmitted over various high bit rate communications media, such as coaxial cable and Asymmetric Digital Subscriber Line (“ADSL”), as well as over lower bit rate communications media, such as Plain Old Telephone Service (“POTS”), wireless phone service, and power line communication networks. Video images may be displayed in black and white, gray scale, or color. A 24-bit color video image at 640×480 pixel resolution occupies almost one megabyte per frame, or over a gigabyte per minute of display; lower bit rate communication media therefore cannot provide real-time display of video images without some improvement.
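As a rough check on the figures above (a back-of-the-envelope sketch, not taken from the patent text), the uncompressed size works out as follows:

```python
# Uncompressed size of 24-bit color video at 640x480 and 30 frames per second.
WIDTH, HEIGHT = 640, 480
BYTES_PER_PIXEL = 3                    # 24-bit color = 3 bytes per pixel
FRAMES_PER_SECOND = 30

bytes_per_frame = WIDTH * HEIGHT * BYTES_PER_PIXEL
bytes_per_minute = bytes_per_frame * FRAMES_PER_SECOND * 60

print(f"per frame:  {bytes_per_frame:,} bytes (~{bytes_per_frame / 2**20:.2f} MiB)")
print(f"per minute: {bytes_per_minute:,} bytes (~{bytes_per_minute / 2**30:.2f} GiB)")
```

This gives 921,600 bytes per frame and roughly 1.5 GiB per minute, consistent with “almost one megabyte per frame” and “over a gigabyte per minute.”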
One improvement in the throughput of video communications has been the use of video compression to reduce the size of files and packets containing video images represented in digital form, thereby increasing the resolution of video images that can be displayed over a given link. Video compression can be applied intraframe (using only information contained in a single frame) or interframe (using information in other frames of the video image). Because humans cannot perceive very small changes in color or movement, compression techniques need not preserve every bit of information. These lossy compression techniques can achieve large reductions in video image size without affecting the perceived quality of the image. Compression techniques alone, however, have not produced the transmission quality required for video applications (e.g., video telephony) on lower bit rate networks.
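The intraframe/interframe distinction can be illustrated with a toy example (illustrative only, not any standard's actual algorithm): interframe coding stores per-pixel differences from the previous frame, which are mostly zero when little changes between frames and therefore compress well.

```python
frame1 = [10, 10, 10, 200, 200, 10, 10, 10]   # one row of pixel values
frame2 = [10, 10, 10, 10, 200, 200, 10, 10]   # the bright region moved right

# Intraframe: each frame is coded on its own (8 values each here).
intra_cost = len(frame1) + len(frame2)

# Interframe: code frame2 as a per-pixel difference from frame1; most
# entries are zero, and a real coder spends almost no bits on them.
residual = [b - a for a, b in zip(frame1, frame2)]
nonzero = [d for d in residual if d != 0]

print(residual)        # mostly zeros
print(len(nonzero))    # only the changed pixels carry information
```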
MPEG (Moving Picture Experts Group) is an ISO/IEC working group developing international standards for the compression, decompression, and representation of moving pictures and audio. MPEG-4 is the part of the standard, currently under development, designed for videophones and multimedia applications. MPEG-4 provides for video services at low bit rates of up to 64 kilobits per second. MPEG-4 uses media objects to represent audiovisual content, and media objects can be combined to form compound media objects. MPEG-4 multiplexes and synchronizes the media objects before transmission to provide higher quality of service. MPEG-4 organizes the media objects hierarchically, where the lowest level has primitive media objects such as still images, video objects, and audio objects. MPEG-4 has a number of primitive media objects that can be used to represent two- or three-dimensional media objects, and it also defines a coded representation of objects for text, graphics, synthetic sound, and talking synthetic heads. The visual part of the MPEG-4 standard describes methods for the compression of images and video; it also provides algorithms for random access to all types of visual objects, as well as algorithms for spatial, temporal, and quality scalability and for content-based scalability of textures, images, and video. Algorithms for error robustness and resilience in error-prone environments are also part of the standard. For synthetic objects, MPEG-4 has parametric descriptions of the human face and body and parametric descriptions for animation streams of the face and body. MPEG-4 also describes static and dynamic mesh coding with texture mapping, and texture coding for view-dependent applications.
MPEG-4 supports coding of video objects with spatial and temporal scalability. Scalability allows decoding part of a stream and constructing images with reduced decoder complexity (reduced quality), reduced spatial resolution, reduced temporal resolution, or with equal temporal and spatial resolution but reduced quality. Scalability is desirable when video is sent over heterogeneous networks or when the receiver cannot display the video at full resolution (e.g., because of limited power). Robustness in error-prone environments is an important issue for mobile communications. MPEG-4 has tools to address robustness, including resynchronization of the bit stream and the decoder when an error has been detected. Data recovery tools can be used to recover lost data, and error concealment tools are used to conceal it. MPEG-4 is a general-purpose scheme designed to maximize video content over communication lines.
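Temporal scalability in particular can be sketched as decoding only a subset of the frames (an illustrative simplification; real scalable streams mark base and enhancement layers explicitly):

```python
# One second of video at 30 frames per second, identified by frame index.
frames = list(range(30))

# A decoder with limited power can reconstruct a 15 fps version by
# decoding only the even-indexed frames (the "base layer" here).
base_layer = frames[::2]

print(len(frames), "frames at full rate;", len(base_layer), "at reduced rate")
```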
Streaming is a technique used for sending audiovisual content in a continuous stream and having it displayed as it arrives. The content is compressed and segmented into a sequence of packets. A user does not have to wait to download a large file before seeing the video or hearing the sound because content is displayed as it arrives, and additional content is downloaded as already downloaded content is displayed. Streaming can be applied to MPEG-4 media objects to enhance a user's audiovisual experience.
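A minimal sketch of this segment-and-play-as-it-arrives idea (names here are illustrative, not part of any real streaming protocol):

```python
def packetize(content, packet_size):
    """Segment content into a sequence of fixed-size packets."""
    for i in range(0, len(content), packet_size):
        yield content[i:i + packet_size]

# Each packet is "displayed" as soon as it arrives, instead of waiting
# for the whole file to download first.
displayed = []
for packet in packetize("streamed audiovisual content", 8):
    displayed.append(packet)          # display/play immediately

print(displayed)
print("".join(displayed))
```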
H.261 is a standard that was developed for transmission of video at rates that are multiples of 64 kbps; videophone and videoconferencing are among its applications. The H.261 standard is similar to the JPEG still image compression standard and uses motion-compensated temporal prediction.
H.263 is a standard that was designed for very low bit rate coding applications. H.263 uses block motion-compensated Discrete Cosine Transform (“DCT”) structures for encoding and has higher coding efficiency than H.261. H.263 is based on H.261 but is significantly optimized for coding at low bit rates. Video coding is performed by partitioning each picture into macroblocks. Each macroblock consists of a 16×16 luminance block and two 8×8 chrominance blocks, Cb and Cr. Cb and Cr are the color difference signals in ITU-R 601 coding; the two color difference signals are sampled at 6.75 MHz, co-sited with a luminance sample. Cr is the digitized version of the analogue component (R-Y); likewise, Cb is the digitized version of (B-Y). Each macroblock can be coded as intra or as inter. Spatial redundancy is exploited by DCT coding, and temporal redundancy is exploited by motion compensation. H.263 includes motion compensation with half-pixel accuracy and bidirectionally coded macroblocks. Overlapped 8×8 block motion compensation, an unrestricted motion vector range at picture boundaries, and arithmetic coding are also used in H.263. These features, which are useful mainly for low bit rate applications, are not included in MPEG-1 and MPEG-2. H.263 decoding is based on H.261 with enhancements to improve coding efficiency. Four negotiable options are supported to improve performance: unrestricted motion vector mode, syntax-based arithmetic coding mode, advanced prediction mode, and PB-frames mode. Unrestricted motion vector mode allows motion vectors to point outside a picture. Syntax-based arithmetic coding mode allows using arithmetic coding instead of Huffman coding. Advanced prediction mode uses overlapped block motion compensation with four 8×8 block vectors instead of a single 16×16 macroblock motion vector. PB-frames mode allows a P-frame and a B-frame to be coded together as a single PB-frame.
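The macroblock arithmetic described above can be sketched as follows (QCIF, 176×144, is used here only as a familiar example picture size; the 4:2:0 layout of one 16×16 luminance block plus one 8×8 block each of Cb and Cr is as stated in the text):

```python
def macroblock_grid(width, height):
    """Number of 16x16 macroblocks covering a width x height picture."""
    mbs_x = (width + 15) // 16         # round up to whole macroblocks
    mbs_y = (height + 15) // 16
    return mbs_x, mbs_y

def samples_per_macroblock():
    luma = 16 * 16                     # Y: one 16x16 luminance block
    chroma = 2 * (8 * 8)               # one 8x8 block each for Cb and Cr
    return luma + chroma               # 384 samples per macroblock

mbs_x, mbs_y = macroblock_grid(176, 144)   # QCIF picture size
print(mbs_x * mbs_y, "macroblocks,", samples_per_macroblock(), "samples each")
```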
Model-based video-coding schemes define three-dimensional structural models of a scene; the same model is used by a coder to analyze an image and by a decoder to generate the image. Traditionally, research in model-based video coding (“MBVC”) has focused on head modeling, head tracking, local motion tracking, and expression analysis and synthesis. MBVC has been used mainly for videoconferencing and videotelephony, since in those applications the focus is on modeling the human head. MBVC has concentrated its modeling on images of heads and shoulders, because they are commonly occurring shapes in certain video applications (e.g., videotelephony). In model-based approaches, a parameterized model is used for each object (e.g., a head) in the scene. Coding and transmission are done using the parameters associated with the objects. Tools from image analysis and computer vision are used to analyze the images.
