Method for synchronizing audio and video streams
Classification: Pulse or digital communications – Bandwidth reduction or expansion
Type: Reexamination Certificate (active)
Filed: 1998-05-22
Issued: 2004-06-01
Examiner: An, Shawn S. (Department: 2613)
U.S. Classes: C375S240270, C375S240280, C375S355000, C348S423100, C348S439100, C348S461000, C348S462000, C348S464000, C348S515000, C348S521000, C348S500000
Patent number: 06744815
FIELD OF THE INVENTION
The present invention relates to a method and a system for synchronizing audio and video signals in general and to a method and a system for synchronizing MPEG video and audio streams in particular.
BACKGROUND OF THE INVENTION
Methods and systems for providing synchronized audio and video streams are known in the art. For example, MPEG specifications ISO/IEC 11172-1,2,3 (MPEG1) and ISO/IEC 13818-1,2,3 (MPEG2) describe methods of encoding and decoding analog audio and video.
The encoding process consists of three stages. The first stage is digitizing the analog audio/video signals. The second stage is compression of the digital signals to create elementary streams. The third stage is multiplexing the elementary streams into a single stream.
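The three stages above can be sketched as a minimal pipeline. This is an illustrative assumption only: the function names, the 8-bit quantizer, and the trivial run-length "compression" are hypothetical stand-ins, not taken from the MPEG specifications.

```python
# Sketch of the three encoding stages: digitize, compress, multiplex.
# All names and the toy run-length coder are illustrative only.

def digitize(analog_samples, levels=256):
    """Stage 1: quantize analog values (floats in [-1, 1]) to 8-bit codes."""
    return [min(levels - 1, int((s + 1.0) / 2.0 * levels)) for s in analog_samples]

def compress(samples):
    """Stage 2: produce an elementary stream (here, a trivial run-length code)."""
    stream, run = [], 1
    for prev, cur in zip(samples, samples[1:]):
        if cur == prev:
            run += 1
        else:
            stream.append((prev, run))
            run = 1
    stream.append((samples[-1], run))
    return stream

def multiplex(audio_es, video_es):
    """Stage 3: interleave the elementary streams into one tagged stream."""
    return [("A", p) for p in audio_es] + [("V", p) for p in video_es]

audio = compress(digitize([0.0, 0.0, 0.5]))
video = compress(digitize([0.1, 0.1, 0.1]))
mux = multiplex(audio, video)
```

Decoding, as the next paragraph notes, applies the inverse of each stage in the reverse order: de-multiplex, decompress, then convert back to analog.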
The decoding process consists of inverting each of these stages and applying them in the reverse order. Reference is now made to FIG. 1, which is a schematic illustration of an encoding and decoding system, generally referenced 10, known in the art.
System 10 includes an encoding device 20 and a decoding device 40. The encoding device 20 includes an audio encoder 12, a video encoder 22 and a multiplexor 18. The audio encoder 12 includes an audio analog to digital converter (A/D) 14 and an audio compressor 16. The video encoder 22 includes a video A/D 24 and a video compressor 26. The audio compressor 16 is connected to the audio A/D 14 and to the multiplexor 18. The video compressor 26 is connected to the video A/D 24 and to the multiplexor 18. An A/D converter is also known as a digitizer.
The decoding section 40 includes an audio decoder 32, a video decoder 42 and a de-multiplexor 38. The audio decoder 32 includes an audio digital to analog converter (D/A) 34 and an audio decompressor 36. The video decoder 42 includes a video D/A 44 and a video decompressor 46. The audio decompressor 36 is connected to the audio D/A 34 and to the de-multiplexor 38. The video decompressor 46 is connected to the video D/A 44 and to the de-multiplexor 38.
Each of the A/D converters 14 and 24 is driven by an independent sampling clock. The origin of this clock differs between the audio and video encoders 12 and 22. Each of the respective compressors 16 and 26 is affected by the sampling clock of the A/D converter connected thereto.
Analog audio is a continuous, one-dimensional function of time. Digitization of analog audio amounts to temporal sampling and the quantization of each sampled value. It will be appreciated by those skilled in the art that the audio digitizer clock is not derived from the analog source signal.
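The two operations named above, temporal sampling and quantization, can be made concrete with a short sketch. The function names and parameter choices (48 kHz, 16-bit) are illustrative assumptions; the key point, reflected in the comments, is that the sampling instants come from the digitizer's own clock rather than from the analog signal.

```python
import math

def sample_audio(signal, sample_rate_hz, duration_s, levels=65536):
    """Temporal sampling plus uniform quantization of a continuous signal.
    The sampling instants are generated by the digitizer's own clock
    (sample_rate_hz); nothing in the analog signal drives them."""
    n = int(sample_rate_hz * duration_s)
    out = []
    for i in range(n):
        t = i / sample_rate_hz                     # instant from the local clock
        v = signal(t)                              # continuous value in [-1, 1]
        q = round((v + 1.0) / 2.0 * (levels - 1))  # quantize to a 16-bit code
        out.append(q)
    return out

tone = lambda t: math.sin(2 * math.pi * 1000 * t)   # 1 kHz test tone
pcm = sample_audio(tone, sample_rate_hz=48000, duration_s=0.001)
# one millisecond at 48 kHz yields 48 samples
```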
Analog video is a two-dimensional function of time, temporally sampled to give frames (or fields) and spatially sampled to give lines. The broadcasting standard of the analog video source signal (e.g. PAL, NTSC) defines the number of frames/fields per second and the number of lines in each frame/field.
Analog video is therefore a discrete collection of lines which are, like analog audio signals, one-dimensional functions of time. Timing information is modulated into the analog video signal to mark the start of fields/frames and the start of lines. The timing of the pixel samples within each line is left to the digitizer, but the digitizer must begin sampling lines at the times indicated by the signal.
Video digitizers typically feed analog timing information into a phase locked loop to filter out noise on the video signal and divide the clock accordingly to derive the pixel clock for digitizing each line. Thus the timing of video sampling is derived from the analog source signal. In the case of video, digitization refers only to the quantization of pixels and CCIR 601 is an example of a video digitizing standard that describes such a process.
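The clock division described above reduces to simple arithmetic: the PLL recovers and filters the line frequency from the analog signal, then multiplies it by a fixed samples-per-line count to obtain the pixel clock. The numbers below are the standard CCIR 601 (ITU-R BT.601) figures; the function itself is only an illustrative sketch of that arithmetic, not of a real PLL.

```python
# Arithmetic behind deriving a pixel clock from analog line timing,
# as a PLL-based video digitizer does. Figures per CCIR 601 / BT.601.

def pixel_clock_hz(line_rate_hz, samples_per_line):
    """The PLL multiplies the recovered line frequency by a fixed
    samples-per-total-line count to produce the pixel clock."""
    return line_rate_hz * samples_per_line

# PAL: 625 lines x 25 frames/s = 15625 lines/s, 864 samples per total line.
pal_pixel_clock = pixel_clock_hz(15625, 864)            # 13.5 MHz

# NTSC: 4.5 MHz / 286 = ~15734.27 lines/s, 858 samples per total line.
ntsc_pixel_clock = pixel_clock_hz(4500000 / 286, 858)   # ~13.5 MHz
```

Both standards land on the same 13.5 MHz luminance sampling rate, which is why a single CCIR 601 digitizer front end can serve either source.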
The input of a video or audio compression module, such as compressors 16 and 26, is samples or sets of samples. The output is a compressed bit-stream.
As the compressor consumes the samples produced by its respective digitizer, its timing is slaved to that digitizer. In a simple model of the system, the compressor has no clock of its own. Instead, it uses the bit-rate specification to calculate the number of bits required per sample or set of samples. As samples appear at the input of the encoder, they are compressed and the compressed bits appear at the output.
It will be appreciated by those skilled in the art that the actual timing of audio or video compressed bit emission by an encoder is determined by the digitizer clock which times the arrival of samples at the compressor input.
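The "simple model" above can be worked through numerically. The bit-rate and frame-rate values here are illustrative assumptions; the point is that the compressor merely converts the bit-rate specification into a per-frame budget, so bits leave the encoder only as fast as the digitizer clock delivers frames.

```python
# A clockless compressor's bookkeeping: the bit-rate specification
# becomes a bit budget per set of samples (here, per video frame).
# The 4 Mbit/s and 25 frames/s figures are illustrative.

def bits_per_frame(bit_rate_bps, frame_rate_hz):
    """Bit budget for one compressed frame at a constant bit-rate."""
    return bit_rate_bps / frame_rate_hz

budget = bits_per_frame(4_000_000, 25)   # 160,000 bits per frame

def bits_emitted(frames_consumed, bit_rate_bps, frame_rate_hz):
    """Bits appear at the output only as frames arrive at the input,
    so emission timing is slaved to the digitizer clock."""
    return frames_consumed * bits_per_frame(bit_rate_bps, frame_rate_hz)
```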
The timing of the video digitizer 24 is derived from the analog video source, and the video compressor 26 derives its own timing from the digitizer 24. Thus the timing of the video compressor is derived from the analog video source. If the timing information in the analog source is missing or incomplete, then the compressor 26 will be subject to abnormal timing constraints.
The following are examples of problematic video input sources:
The analog source is not a professional one (cheap VCR).
Noise is present on the line that carries the video signal.
The source is detached from the input for some time.
The video source is a VCR without a TBC (Time Base Corrector) and fast forward or rewind are applied.
The effects of problematic video input sources on the compressed stream depend on the nature of the problem and the implementation of the encoder.
Among the timing information present in the analog video signal are pulses that indicate the start of a field, the start of a frame and the start of a line.
If, for instance, noise is interpreted by the digitizer as a spurious pulse marking the start of a field, such that the pulse is not followed by a complete set of lines, then the timing information will become inconsistent.
One encoder might interpret the pulse as an extra field, somehow producing a complete compressed field. Another encoder might react to the glitch by discarding the field it was encoding. In both these cases, the ratio between the number of bits in the stream and compressed frames in the stream may be correct, but one encoder will have produced more frames than the other within the same interval and from the same source.
To an observer at the output of the encoders, this would appear to be caused by a variance between the clocks that drive the video encoders.
As will be appreciated by those skilled in the art, each video and audio encoder may be driven by its own clock. Decoders may also be driven by independent clocks.
As an example of the operation of system 10, suppose that the video encoder and the audio encoder are fed from the same PAL source (analog video combined with analog audio).
The number of frames that are compressed within a given time interval can be calculated by multiplying the interval measured in seconds by twenty five (according to the PAL broadcasting standard).
In this example, the clocks of the video and audio decoders and the clock of the audio encoder have identical timing. The clock of the video encoder is running slightly fast with respect to the others.
Thus, within a given interval measured by the video decoder clock, the video encoder will produce more frames than the number calculated from the duration of that interval. The video decoder will play the compressed stream at a slower rate than the rate at which the video encoder produces that stream. The result will be that over any given interval, the video display will be slightly delayed.
As the timing of the audio encoder and audio decoder are identical, audio decoding will progress at the same rate as audio encoding. The result will be a loss of audio video synchronization at the decoder display.
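The drift described in this example is easy to quantify. The 0.1% clock error below is an illustrative assumption; the 25 frames/s rate is the PAL figure used above. The video encoder produces extra frames, the decoder still plays exactly 25 frames/s, so video falls progressively behind the audio.

```python
# Worked version of the example above: the video encoder clock runs
# slightly fast while the audio encoder and both decoders agree.
# The 0.1% error is an illustrative figure.

PAL_FPS = 25

def av_skew_seconds(interval_s, encoder_clock_error):
    """Audio/video skew accumulated over `interval_s` of decoder time when
    the video encoder clock is fast by `encoder_clock_error` (fractional).
    Audio stays in step, so the extra video frames translate directly
    into seconds of video delay relative to the audio."""
    frames_expected = interval_s * PAL_FPS
    frames_produced = interval_s * PAL_FPS * (1 + encoder_clock_error)
    extra_frames = frames_produced - frames_expected
    return extra_frames / PAL_FPS

# A clock only 0.1% fast accumulates ~3.6 s of A/V skew over one hour.
skew = av_skew_seconds(3600, 0.001)
```

Even a very small frequency error therefore produces a clearly visible loss of lip-sync within minutes, which is why the synchronization guarantee discussed next is essential.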
It is a basic requirement to be able to guarantee that the decoded audio and video at the output of MPEG decoders are synchronized with each other despite the relative independence of the timings of the units in the system.
One of the methods known in the art to synchronize audio and video streams is called end-to-end synchronization. This
Inventors: Elmaliach, Yehuda; Sackstein, David
Examiner: An, Shawn S.
Attorney: Eitan, Pearl, Latzer & Cohen Zedek LLP
Assignee: Optibase Ltd.