Decoder for a software-implemented end-to-end scalable video...

Interactive video distribution systems – Video distribution system components – Headend


Details

Classifications: C375S240110, C375S240220, C348S413100, C348S402100, C348S403100

Type: Reexamination Certificate

Status: active

Patent number: 06266817

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates generally to client decoders for use with video delivery systems, and more specifically to client decoders for use in video delivery systems in which video may be delivered scalably, so as to maximize use of network resources and to minimize user-contention conflicts.
BACKGROUND OF THE INVENTION
It is known in the art to use server-client networks to provide video to end users, wherein the server issues a separate video stream for each individual client.
A library of video sources is maintained at the server end. Chosen video selections are signal processed by a server encoder, stored on digital media, and then transmitted over a variety of networks, perhaps on a basis that allows a remote viewer to interact with the video. The storage media may include magnetic disk and CD-ROM, and the stored information can include video, speech, and images. The source video information may have been stored in one of several spatial resolutions (e.g., 160×120, 320×240, 640×480 pixels) and temporal resolutions (e.g., 1 to 30 frames per second). The source video may present bandwidths whose dynamic range can vary from 10 Kbps to 10 Mbps. The signal processed video is transmitted to the clients (or decoders) over one or more delivery networks that may be heterogeneous, e.g., have widely differing bandwidths. For example, telephone delivery lines can transmit at only a few tens of Kbps, an ISDN network can handle 128 Kbps, and ethernet 10 Mbps, whereas ATM networks handle even higher transmission rates.
Although the source video has varying characteristics, prior art video delivery systems operate with a system bandwidth that is static or fixed. Yet in practice the general-purpose computing environments associated with such systems are dynamic, and variations in the networks can also exist. These variations can arise from an outright lack of resources (e.g., limited network bandwidth and processor cycles), contention for available resources due to congestion, or a user's unwillingness to allocate needed resources to the task.
Prior art systems tend to be very computationally intensive, especially with respect to decoding images of differing resolutions. An encoder may transmit a bit stream of, say, 320×240 pixel resolution, whereas the decoder requires 160×120 pixel resolution. Decoding, in the prior art, requires that several processes be invoked, including decompression, entropy coding, quantization, discrete cosine transformation and down-sampling. Collectively, these process steps take too long to be accomplished in real-time.
Color conversions, e.g., YUV-to-RGB, are especially computationally intensive in the prior art. In another situation, an encoder may transmit 24 bits, representing 16 million colors, but a recipient decoder may be coupled to a PC having an 8-bit display capable of only 256 colors. The decoder must then dither the incoming data, which is a computationally intensive task. Prior art decoders also take too long to complete operations such as color palettization to operate in real-time.
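To make that burden concrete, the sketch below shows a straightforward per-pixel YUV-to-RGB conversion using standard BT.601-style coefficients (the exact conversion any given prior art decoder uses is not stated here, so this is illustrative only); every pixel incurs several multiplies, additions, and a clamp, which is precisely the arithmetic a table-driven decoder seeks to avoid.

    import numpy as np

    def yuv_to_rgb_naive(y, u, v):
        # Per-pixel conversion with BT.601-style coefficients. The repeated
        # multiplies, adds, and clamping are what make software color
        # conversion expensive at full frame rate.
        y = y.astype(np.float32)
        u = u.astype(np.float32) - 128.0
        v = v.astype(np.float32) - 128.0
        r = y + 1.402 * v
        g = y - 0.344 * u - 0.714 * v
        b = y + 1.772 * u
        rgb = np.stack([r, g, b], axis=-1)
        return np.clip(rgb, 0.0, 255.0).astype(np.uint8)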
Unfortunately, fixed bandwidth prior art systems cannot make full use of such dynamic environments and system variations. The result is slower throughput and more severe contention for a given level of expenditure for system hardware and software. When congestion (e.g., a region of constrained bandwidth) is present on the network, packets of transmitted information will be randomly dropped, with the result that no useful information may be received by the client.
Video information is extremely storage intensive, and compression is necessary during storage and transmission. Although scalable compression would be beneficial, especially for browsing in multimedia video sources, existing compression systems do not provide the desired properties for scalable compression. By scalable compression it is meant that a full dynamic range of spatial and temporal resolutions should be provided on a single embedded video stream that is output by the server over the network(s). Acceptable software-based scalable techniques are not found in the prior art. For example, the MPEG-2 compression standard offers only limited scalability, lacks sufficient dynamic range of bandwidth, is costly to implement in software, and uses variable-length codes that require additional error-correction support.
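As a rough illustration of what a bandwidth-scalable, embedded stream implies for a receiver, the sketch below (layer names and bit rates are purely hypothetical) keeps layers of a single embedded stream, in order, until a bandwidth budget is exhausted: a telephone-line client retains only the base layer, while a LAN client retains everything.

    from dataclasses import dataclass

    @dataclass
    class Layer:
        name: str           # hypothetical layer label
        bits_per_sec: int   # additional bandwidth this layer consumes

    def select_layers(embedded_stream, budget_bps):
        # Keep layers in embedded order until the bandwidth budget is spent.
        chosen, used = [], 0
        for layer in embedded_stream:
            if used + layer.bits_per_sec > budget_bps:
                break
            chosen.append(layer)
            used += layer.bits_per_sec
        return chosen

    stream = [Layer("base 160x120", 8_000),
              Layer("higher frame rate", 24_000),
              Layer("spatial 320x240", 96_000),
              Layer("spatial 640x480", 900_000)]
    print([l.name for l in select_layers(stream, 10_000)])      # telephone line
    print([l.name for l in select_layers(stream, 10_000_000)])  # fast LAN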
Further, prior art compression standards typically require dedicated hardware at the encoding end, e.g., an MPEG board for the MPEG compression standard. While some prior art encoding techniques are software-based and operate without dedicated hardware (other than a fast central processing unit), known software-based approaches are too computationally intensive to operate in real-time. For example, JPEG software running on a Sparcstation 10 workstation can handle only 2-3 frames/second, e.g., about 1% of the frame/second capability of the present invention.
Considerable video server research in the prior art has focused on scheduling policies for on-demand situations, admission control, and RAID issues. Prior art encoder operation typically is dependent upon the characteristics of the client decoders. Simply stated, relatively little work has been directed to video server systems operable over heterogeneous networks having differing bandwidth capabilities, where host decoders have various spatial and temporal resolutions.
In summary, there is a need for a video delivery system that provides end-to-end video encoding such that the server outputs a single embedded data stream from which decoders may extract video having different spatial resolutions, temporal resolutions and data rates. The resultant video compression would be bandwidth scalable and thus deliverable over heterogeneous networks whose transmission rates vary from perhaps 10 Kbps to 10 Mbps. Such a system should accommodate lower bandwidth links or congestion, and should permit the encoder to operate independently of decoder capability or requirements.
Preferably the decoder should be software-based (e.g., not require specialized dedicated hardware beyond a computing system) and should permit real-time decompression. Alternatively, the decoder should be implementable in hardware, using relatively inexpensive components. The system should permit user selection of a delivery bandwidth to choose the most appropriate point in spatial resolution, temporal resolution, data-rate and in quality space. The system should also provide subjective video quality enhancement, and should include error resilience to allow for communication errors.
The present invention provides such a decoder for an end-to-end scalable video delivery system.
SUMMARY OF THE INVENTION
In a first embodiment, the present invention provides a software-based, rapidly operating decoder of low computational complexity for use with an end-to-end scalable video delivery system whose software-based server-encoder operates independently of the capabilities and requirements of the software-based decoder(s). The decoder operates in conjunction with a central processor unit (“CPU”) and relies upon stored look-up tables containing preprocessed operations including color conversion, dithering, color palettization, interpolation, decimation, edge enhancement, and the like. In a second embodiment, the decoder functions are permanently stored in a read-only memory (“ROM”) that operates in conjunction with a relatively simple central processor unit.
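A minimal sketch of the table-driven idea follows, assuming (purely for illustration) 5 bits per YUV component and an arbitrary 256-entry RGB palette supplied by the caller; the actual table contents, sizes, and the dithering, interpolation, and edge-enhancement tables of the invention are not specified in this excerpt. The point is that all arithmetic is paid once, offline, and each decoded pixel then costs a single table lookup.

    import numpy as np

    def build_yuv_to_palette_table(palette_rgb):
        # Precompute, once, the nearest palette index for every quantized
        # (Y, U, V) triple (5 bits per component -> 32 x 32 x 32 entries).
        table = np.empty((32, 32, 32), dtype=np.uint8)
        for yi in range(32):
            for ui in range(32):
                for vi in range(32):
                    y, u, v = yi * 8 + 4, ui * 8 + 4, vi * 8 + 4
                    r = y + 1.402 * (v - 128)
                    g = y - 0.344 * (u - 128) - 0.714 * (v - 128)
                    b = y + 1.772 * (u - 128)
                    rgb = np.clip([r, g, b], 0, 255)
                    table[yi, ui, vi] = np.argmin(
                        ((palette_rgb - rgb) ** 2).sum(axis=1))
        return table

    def palettize_frame(y, u, v, table):
        # Runtime cost per pixel: one table lookup, no arithmetic.
        return table[y >> 3, u >> 3, v >> 3]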
The encoder uses a scalable compression algorithm based upon Laplacian pyramid decomposition. An original 640×480 pixel image is decimated to produce a 320×240 pixel image that is itself decimated to yield a 160×120 pixel base image that is encoder-transmitted. This base image is then compressed to form a 160×120 pixel base layer that is decompressed and up-sampled to produce an up-sampled 320×240 pixel image. The up-sampled 320×240 pixel image is then subtracted from the 320×240 pixel image.
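The pyramid step just described can be sketched as follows; plain 2×2 averaging and pixel replication stand in for whatever decimation and interpolation filters the encoder actually uses (they are not specified in this excerpt), and compression of the base layer is omitted.

    import numpy as np

    def decimate(img):
        # Halve each dimension by averaging 2x2 blocks.
        h, w = img.shape
        return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    def upsample(img):
        # Double each dimension by pixel replication.
        return img.repeat(2, axis=0).repeat(2, axis=1)

    original = np.random.rand(480, 640)   # stands in for a 640x480 luma frame
    mid      = decimate(original)         # 320x240 image
    base     = decimate(mid)              # 160x120 base image (would be compressed)
    residual = mid - upsample(base)       # error layer carried above the base

    # Decoder side: up-sample the base and add the error layer back.
    reconstructed = upsample(base) + residual
    assert np.allclose(reconstructed, mid)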
