Speech coder using an orthogonal search and an orthogonal...

Classification: Data processing: speech signal processing, linguistics, language – Speech signal processing – For storage or transmission

Reexamination Certificate


Details

C704SE19035
Status: active
Patent number: 07925501

ABSTRACT:
Speech is coded using an orthogonal search that calculates a search reference value. An adaptive codevector representing a pitch component is generated, as is a random codevector representing a random component. The orthogonal search further includes generating a synthetic speech signal by exciting a synthesis filter with the adaptive codevector and the random codevector. The distortion between the input speech signal and the synthetic speech signal is calculated, and the one random codevector that minimizes this distortion is selected.
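
Below is a minimal sketch of this kind of search loop, assuming a standard CELP-style formulation: each filtered random candidate is orthogonalized against the filtered adaptive (pitch) contribution, and a normalized squared correlation with the target signal serves as the search reference value. All names (synthesize, orthogonal_search, target, codebook, and so on) are illustrative assumptions, not terms taken from the patent.

```python
# Sketch of an orthogonalized random-codebook search (CELP-style).
# Names and structure are assumptions for illustration only.
import numpy as np

def synthesize(h, v):
    """Pass codevector v through the synthesis filter's impulse response h
    (zero-state response, truncated to the subframe length)."""
    return np.convolve(h, v)[: len(v)]

def orthogonal_search(target, h, adaptive_cv, codebook):
    """Select the random codevector that minimizes distortion after each
    filtered candidate is orthogonalized against the filtered adaptive
    (pitch) contribution."""
    p = synthesize(h, adaptive_cv)          # filtered pitch component
    p_energy = p @ p
    best_idx, best_score = -1, -np.inf
    for i, c in enumerate(codebook):
        q = synthesize(h, c)                # filtered random candidate
        if p_energy > 0.0:
            # Gram-Schmidt step: keep only what the pitch component
            # cannot already represent.
            q = q - (q @ p) / p_energy * p
        energy = q @ q
        if energy <= 0.0:
            continue
        # Search reference value: squared correlation with the target,
        # normalized by candidate energy. Maximizing it minimizes the
        # waveform distortion for the optimal codevector gain.
        score = (target @ q) ** 2 / energy
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx
```

Because the optimal gain is folded into the criterion, the loop needs no explicit gain search; the selected index, together with the adaptive-codebook contribution, describes the excitation.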

REFERENCES:
patent: 4868867 (1989-09-01), Davidson et al.
patent: 5195137 (1993-03-01), Swaminathan
patent: 5245662 (1993-09-01), Taniguchi et al.
patent: 5307441 (1994-04-01), Tzeng
patent: 5327519 (1994-07-01), Haggvist et al.
patent: 5444816 (1995-08-01), Adoul et al.
patent: 5680507 (1997-10-01), Chen
patent: 5699477 (1997-12-01), McCree
patent: 5701392 (1997-12-01), Adoul et al.
patent: 5734790 (1998-03-01), Taguchi
patent: 5826226 (1998-10-01), Ozawa
patent: 5963896 (1999-10-01), Ozawa
patent: 6029125 (2000-02-01), Hagen et al.
patent: 6058359 (2000-05-01), Hagen et al.
patent: 6122608 (2000-09-01), McCree
patent: 6266632 (2001-07-01), Kato et al.
patent: 6301556 (2001-10-01), Hagen et al.
patent: 6415254 (2002-07-01), Yasunaga et al.
patent: 6453288 (2002-09-01), Yasunaga et al.
patent: 6564183 (2003-05-01), Hagen et al.
patent: 7546239 (2009-06-01), Yasunaga et al.
patent: 7580834 (2009-08-01), Ehara et al.
patent: 7590527 (2009-09-01), Yasunaga et al.
patent: 2004/0143432 (2004-07-01), Yasunaga et al.
patent: 2005/0203734 (2005-09-01), Yasunaga et al.
patent: 0577488 (1994-01-01), None
patent: 0684702 (1995-11-01), None
patent: 0714089 (1996-05-01), None
patent: 0778561 (1997-06-01), None
patent: 2238696 (1991-06-01), None
patent: 2-280200 (1990-11-01), None
patent: 2-282800 (1990-11-01), None
patent: 4-051200 (1992-02-01), None
patent: 5-108098 (1993-04-01), None
patent: 6-202699 (1994-07-01), None
patent: 7-28497 (1995-01-01), None
patent: 8-8753 (1996-01-01), None
patent: 9-34498 (1997-02-01), None
patent: 9-160596 (1997-06-01), None
patent: 10-63300 (1998-03-01), None
English Language Abstract of JP 2-280200.
English Language Abstract of JP 2-282800.
English Language Abstract of JP 5-108098.
English Language Abstract of JP 6-202699.
English Language Abstract of JP 7-28497.
English Language Abstract of JP 8-8753.
English Language Abstract of JP 9-34498.
English Language Abstract of JP 9-160596.
English Language Abstract of JP 10-63300.
Linde et al., “An Algorithm for Vector Quantizer Design”, IEEE Transactions on Communications, vol. Com-28, No. 1, pp. 84-95, (Jan. 1980).
Schroeder et al., “Code Excited Linear Prediction (CELP): High Quality Speech At Very Low Bit Rates”, Proc. ICASSP, pp. 937-940, (1985).
Johnson et al., “Pitch-Orthogonal Code-Excited LPC,” IEEE, pp. 0542-0546, (1990).
Gerson et al., “Vector Sum Excited Linear Prediction (VSELP) Speech Coding at 8 kbps”, IEEE, pp. 461-464, (1990).
Laflamme et al., “On Reducing Computational Complexity of Codebook Search in CELP Coder Through the Use of Algebraic Codes” IEEE, pp. 177-180, (1990).
Atal et al., “Advances in Speech Coding”, pp. 138-139, 145-147, 160-161, 172-173, 180-181 and 192-195, (1991).
Lee, “Study for QCELP Algorithm Performance”, pp. 25-26 and 33-36, (Dec. 1993), and an English language translation thereof.
Salami et al., “8 kbit/s ACELP Coding of Speech with 10 ms Speech Frame: A Candidate for CCITT Standardization”, ICASSP, pp. II-97-II-100, (1994).
Kim et al., “A Complexity Reduction Method for VSELP Coding Using Overlapped Sparse Basis Vectors”, Proceedings of the International Conference on Signal Processing Applications and Technology, vol. 2, pp. 1578-1582, (Oct. 1994).
Kataoka et al., “Improved CS-CELP Speech Coding in a Noisy Environment Using a Trained Sparse Conjugate Codebook,” Proc. of ICASSP-95, vol. 1, pp. 29-32, (May 1995).
Sluijter et al., “State of the Art and Trends in Speech Coding”, Philips Journal of Research, vol. 49, No. 4, pp. 455-488, Elsevier, Amsterdam, NL, (1995).
Ikedo et al., “Low Complexity Speech Coder for Personal Multimedia Communication,” IEEE, pp. 808-812, (Nov. 1995).
Kataoka et al., “An 8-kb/s Conjugate Structure CELP (CS-CELP) Speech Coder,” IEEE Transactions on Speech and Audio Processing, vol. 4, No. 6, pp. 401-411, (Nov. 1996).
Ikeda et al., “Error-Protected Twin VQ Audio-Coding Method,” IEICE, vol. J80, No. 5, pp. 1016-1025, (1997), together with an English language translation of the same.
Ikedo et al., “Low Complexity CELP Speech Coder Using Orthogonalized Search of Algebraic Code”, p. 255, together with a partial English translation thereof, (1995).
Yasunaga et al., “ACELP Coding with Dispersed-Pulse Codebook,” Proc. of IEICE Conf., p. 253, (Mar. 1997).
Skoglund et al., “Predictive VQ for Noisy Channel Spectrum Coding: AR or MA?” 1997 IEEE International Conference on Acoustics, Speech, and Signal Processing, IEEE Comput. Soc., U.S., vol. 2, pp. 1351-1354, (Apr. 1997).
