Automobile speech-recognition interface

Data processing: vehicles, navigation, and relative location; Vehicle control, guidance, operation, or indication; Vehicle subsystem or accessory control

Reexamination Certificate

Details

US classifications: 704/200, 704/231, 704/246, 704/273, 704/275, 715/728

Status: active

Patent number: 07826945

ABSTRACT:
An automotive system provides an integrated user interface for control and communication functions in an automobile or other vehicle. The user interface supports voice-enabled interactions as well as other modes of interaction, such as manual input through dashboard- or steering-wheel-mounted controls. The system includes interfaces to devices in the vehicle, such as wireless interfaces to mobile devices brought into the vehicle, and it also provides interfaces to information sources, such as a remote server, for accessing information.
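
The abstract describes an integrated front end that accepts both spoken and manual input and routes the resulting commands to in-vehicle subsystems, paired mobile devices, or remote information sources. The Python sketch below illustrates that dispatch pattern in minimal form; every class, method, and command name in it is hypothetical and is not taken from the patent itself.

```python
# Minimal, hypothetical sketch of the kind of integrated in-vehicle interface the
# abstract describes: one front end that accepts either spoken or manual (button)
# input and routes the resulting command to a vehicle subsystem, a paired mobile
# device, or a remote information source. Names are illustrative, not the patent's.

from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Command:
    """A normalized user request, independent of how it was entered."""
    action: str          # e.g. "climate.set_temp", "phone.dial", "nav.route"
    argument: str = ""


class IntegratedInterface:
    """Single point of control for voice and manual interactions."""

    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[Command], str]] = {}

    def register(self, prefix: str, handler: Callable[[Command], str]) -> None:
        """Attach a subsystem (climate, phone, navigation, ...) by action prefix."""
        self._handlers[prefix] = handler

    def on_voice(self, utterance: str) -> str:
        """Very crude stand-in for a speech recognizer and parser."""
        words = utterance.lower().split()
        if "call" in words:
            cmd = Command("phone.dial", words[-1])
        elif "temperature" in words:
            cmd = Command("climate.set_temp", words[-1])
        else:
            cmd = Command("nav.route", " ".join(words))
        return self.dispatch(cmd)

    def on_button(self, action: str, argument: str = "") -> str:
        """Manual input from a dashboard or steering-wheel control."""
        return self.dispatch(Command(action, argument))

    def dispatch(self, cmd: Command) -> str:
        """Route a command to whichever subsystem handles its action prefix."""
        prefix = cmd.action.split(".")[0]
        handler = self._handlers.get(prefix)
        if handler is None:
            return f"no subsystem registered for '{prefix}'"
        return handler(cmd)


# Example subsystems: a local vehicle function, a paired mobile device, and a
# remote information source, all reached through the same dispatch mechanism.
def climate(cmd: Command) -> str:
    return f"climate: set temperature to {cmd.argument}"

def phone(cmd: Command) -> str:       # would talk to a wireless-paired handset
    return f"phone: dialing {cmd.argument}"

def navigation(cmd: Command) -> str:  # would query a remote route/traffic server
    return f"navigation: routing to '{cmd.argument}'"


if __name__ == "__main__":
    ui = IntegratedInterface()
    ui.register("climate", climate)
    ui.register("phone", phone)
    ui.register("nav", navigation)

    print(ui.on_voice("set temperature to 21"))
    print(ui.on_voice("call home"))
    print(ui.on_button("nav.route", "downtown parking"))
```

In a real system the on_voice stub would be replaced by an actual speech recognizer and grammar, and the handler functions would communicate with vehicle electronics, a wireless-paired handset, or a remote server rather than returning strings.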

REFERENCES:
patent: 5212764 (1993-05-01), Ariyoshi
patent: 5452397 (1995-09-01), Ittycheriah et al.
patent: 5640485 (1997-06-01), Ranta
patent: 6061647 (2000-05-01), Barrett
patent: 6073101 (2000-06-01), Maes
patent: 6260012 (2001-07-01), Park
patent: 6587824 (2003-07-01), Everhart et al.
patent: 6622083 (2003-09-01), Knockeart et al.
patent: 6707421 (2004-03-01), Drury et al.
patent: 6711543 (2004-03-01), Cameron
patent: 6978237 (2005-12-01), Tachimori et al.
patent: 2002/0069071 (2002-06-01), Knockeart et al.
patent: 2002/0091518 (2002-07-01), Baruch et al.
patent: 2002/0143548 (2002-10-01), Korall et al.
patent: 2002/0152264 (2002-10-01), Yamasaki
patent: 2002/0184030 (2002-12-01), Brittan et al.
patent: 2003/0018475 (2003-01-01), Basu et al.
patent: 2003/0023371 (2003-01-01), Stephens
patent: 2003/0110057 (2003-06-01), Pisz
patent: 2003/0125943 (2003-07-01), Koshiba
patent: 2004/0049375 (2004-03-01), Brittan et al.
patent: 2005/0080613 (2005-04-01), Colledge et al.
patent: 2005/0135573 (2005-06-01), Harwood et al.
patent: 2006/0058947 (2006-03-01), Schalk
patent: 2007/0005206 (2007-01-01), Zhang et al.
patent: 1 054 387 (2000-11-01), None
patent: 1 054 387 (2001-11-01), None
patent: 1 246 086 (2002-10-01), None
patent: 1 403 852 (2005-03-01), None
patent: 2001-401615 (2001-12-01), None
patent: 99/14928 (1999-03-01), None
patent: WO 03/030148 (2003-04-01), None
Takao Watanabe, et al., “Unknown Utterance Rejection Using Likelihood Normalization Based on Syllable Recognition”, IEICE, DII, vol. J75-D-II, No. 12, Dec. 1992, pp. 2002-2009.
McAuley, “Optimum Speech Classification and Its Application to Adaptive Noise Cancellation”, M.I.T. Lincoln Lab, Lexington, Massachusetts, pp. 425-426.
Atal et al., "A Pattern Recognition Approach to Voiced-Unvoiced-Silence Classification with Applications to Speech Recognition", IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. ASSP-24, No. 3, Jun. 1976, pp. 201-202.
Rabiner et al., "Evaluation of a Statistical Approach to Voiced-Silence Analysis for Telephone-Quality Speech", The Bell System Technical Journal, vol. 56, No. 3, Mar. 1977, pp. 455-481.
A robust adaptive speech enhancement system for vehicular applications; Jwu-Sheng Hu; Chieh-Cheng Cheng; Wei-Han Liu; Chia-Hsing Yang; Consumer Electronics, IEEE Transactions on; vol. 52, Issue: 3; Digital Object Identifier: 10.1109/TCE.2006.1706509; Publication Year: 2006, pp. 1069-1077.
Environmental Sniffing: Noise Knowledge Estimation for Robust Speech Systems; Akbacak, M.; Hansen, J. H. L.; Audio, Speech, and Language Processing, IEEE Transactions on; vol. 15, Issue: 2; Digital Object Identifier: 10.1109/TASL.2006.881694; Publication Year: 2007, pp. 465-477.
Assessment of speech dialog systems using multi-modal cognitive load analysis and driving performance metrics; Kleinschmidt et al.; Vehicular Electronics and Safety (ICVES), 2009 IEEE International Conference on; Digital Object Identifier: 10.1109/ICVES.2009.5400232; Publication Year: 2009, pp. 162-167.
Improved Histogram Equalization (HEQ) for Robust Speech Recognition; Shih-Hsiang Lin et al.; Multimedia and Expo, 2007 IEEE International Conference on; Digital Object Identifier: 10.1109/ICME.2007.4285130; Publication Year: 2007, pp. 2234-2237.
Missing-Feature Reconstruction by Leveraging Temporal Spectral Correlation for Robust Speech Recognition in Background Noise Conditions; Kim, W.; Hansen, J.; Audio, Speech, and Language Processing, IEEE Transactions on; vol. 18, Issue: 8; Digital Object Identifier: 10.1109/TASL.2010.2041698; Publication Year: 2010, pp. 2111-2120.
On-Line Feature and Acoustic Model Space Compensation for Robust Speech Recognition in Car Environment; Miguel, A. et al.; Intelligent Vehicles Symposium, 2007 IEEE; Digital Object Identifier: 10.1109/IVS.2007.4290250; Publication Year: 2007, pp. 1019-1024.
FPGA implementation of spectral subtraction for in-car speech enhancement and recognition; Whittington, J. et al.; Signal Processing and Communication Systems, 2008. ICSPCS 2008. 2nd International Conference on; Digital Object Identifier: 10.1109/ICSPCS.2008.4813714; Publication Year: 2008, pp. 1-8.
Speech based emotion classification framework for driver assistance system; Tawari, Ashish et al.; Intelligent Vehicles Symposium (IV), 2010 IEEE; Digital Object Identifier: 10.1109/IVS.2010.5547956; Publication Year: 2010, pp. 174-178.
The Australian English Speech Corpus for In-Car Speech processing; Kleinschmidt, T. et al.; Acoustics, Speech and Signal Processing, 2009. ICASSP 2009. IEEE International Conference on; Digital Object Identifier: 10.1109/ICASSP.2009.4960549; Publication Year: 2009, pp. 4177-4180.
Joint Acoustic-Video Fingerprinting of Vehicles, Part II; Cevher, V.; Guo, F. et al.; Acoustics, Speech and Signal Processing, 2007. ICASSP 2007. IEEE International Conference on; vol. 2; Digital Object Identifier: 10.1109/ICASSP.2007.366344; Publication Year: 2007, pp. II-749-II-752.
Implementation of speech recognition on MCS51 microcontroller for controlling wheelchair; Thiang; Intelligent and Advanced Systems, 2007. ICIAS 2007. International Conference on; Digital Object Identifier: 10.1109/ICIAS.2007.4658573; Publication Year: 2007, pp. 1193-1198.
“User's Guide for Nokia 7610” [Online] Nokia, copyright 2004, retrieved from the Internet: <URL:http://ds2.nokia.com/files/support/apac/phones/guides/Nokia_7610_APAC_UG_en_v2.pdf>.
European Search Report dated May 14, 2008 from European Application No. 06116015.6.
Sharon Oviatt and Philip Cohen, MultiModal Interfaces That Process What Comes Naturally, Communications of the ACM, Mar. 2000/vol. 43 No. 3, pp. 45-53.
Stephanie Seneff and Chao Wang, Automatic Acquisition of Names Using Speak and Spell Mode in Spoken Dialogue Systems, MIT Laboratory for Computer Science, Mar. 2003, pp. 495-496.
Christian Gehrmann, Bluetooth Security White Paper, Bluetooth SIG Security Expert Group, Apr. 19, 2002, Revision 1.00, pp. 1-46.
Roberto Pieraccini, et al., A Multimodal Conversational interface for a Concept Vehicle, SpeechWorks International, 55 Broad Street, NY, NY, Eurospeech 2003.
Kris Demuynck and Tom Laureys, A Comparison of Different Approaches to Automatic Speech Segmentation, K.U. Leuven ESAT/PSI, Kasteelpark Arenberg 10, B-3001 Leuven, Belgium, http://www.esat.kuleuven.ac.be/~spch.
Search Report from counterpart European application No. 06116015.6 dated Mar. 19, 2008.
