Data processing: speech signal processing – linguistics – language – Speech signal processing – Recognition
Reexamination Certificate
1997-04-04
2002-05-14
Šmits, Tālivaldis Ivars (Department: 2641)
C704S242000, C704S256000
active
06389395
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to speech processing and in particular to speech recognition.
2. Description of Related Art
Developers of speech recognition apparatus have the ultimate aim of producing machines with which a person can interact in a completely natural manner, without constraints. The interface between man and machine would ideally be completely seamless.
This is a vision that is getting closer to achievement but full fluency between man and machine has not yet been achieved. For fluency, an automated recogniser would require an infinite vocabulary of words and would need to be able to understand the speech of every user, irrespective of their accent, enunciation etc. Present technology and our limited understanding of how human beings understand speech make this unfeasible.
Current speech recognition apparatus includes data which relates to the limited vocabulary that the apparatus is capable of recognising. The data generally relates to statistical models or templates representing the words of the limited vocabulary. During recognition an input signal is compared with the stored data to determine the similarity between the input signal and the stored data. If a close enough match is found the input signal is generally deemed to be recognised as that model or template (or sequence of models or templates) which provides the closest match.
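The compare-and-threshold decision described in this passage can be sketched as follows (a minimal illustration only, not the method of any particular apparatus; the distance measure, the names and the threshold value are all assumptions):

```python
def recognise(features, templates, threshold):
    """Return the word whose stored template best matches the input,
    or None when no template is a close enough match."""
    best_word, best_dist = None, float("inf")
    for word, template in templates.items():
        # City-block distance between feature vectors (an assumption;
        # real systems use more elaborate similarity measures).
        d = sum(abs(x - y) for x, y in zip(features, template))
        if d < best_dist:
            best_word, best_dist = word, d
    return best_word if best_dist <= threshold else None

templates = {"yes": [1.0, 2.0, 3.0], "no": [9.0, 9.0, 9.0]}
print(recognise([1.1, 2.0, 2.9], templates, threshold=0.5))  # yes
```

If no stored template comes within the threshold, the input is rejected rather than forced onto the nearest model.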
The templates or models are generally formed by measuring particular features of input speech. The feature measurements are usually the output of some form of spectral analysis technique, such as a filter bank analyser, a linear predictive coding analysis or a discrete transform analysis. The feature measurements of one or more training inputs corresponding to the same speech sound (i.e. a particular word, phrase etc.) are typically used to create one or more reference patterns representative of the features of that sound. The reference pattern can be a template, derived from some type of averaging technique, or it can be a model that characterises the statistics of the features of the training inputs for a particular sound.
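The filter bank analysis mentioned above can be sketched as follows (a simplified, hypothetical illustration: the filters are linearly spaced triangles and every parameter value is an assumption, not that of any real recogniser, which would typically use mel-spaced filters):

```python
import numpy as np

def filterbank_features(frame, n_filters=20):
    """Log filter-bank energies for one speech frame (simplified sketch)."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2        # power spectrum
    n_bins = len(spectrum)
    # Triangular filters spread evenly across the spectrum.
    centres = np.linspace(0, n_bins - 1, n_filters + 2)
    features = np.zeros(n_filters)
    for i in range(n_filters):
        lo, mid, hi = centres[i], centres[i + 1], centres[i + 2]
        bins = np.arange(n_bins)
        up = np.clip((bins - lo) / max(mid - lo, 1e-9), 0, None)    # rising edge
        down = np.clip((hi - bins) / max(hi - mid, 1e-9), 0, None)  # falling edge
        weights = np.clip(np.minimum(up, down), 0, 1)
        features[i] = np.log(weights @ spectrum + 1e-10)
    return features

# One 32 ms frame of a synthetic 440 Hz tone sampled at 8 kHz.
t = np.arange(256) / 8000
frame = np.sin(2 * np.pi * 440 * t)
print(filterbank_features(frame).shape)  # (20,)
```

Each frame of speech is thus reduced to a short vector of energies, and it is these vectors, not raw waveforms, that the templates or models describe.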
An unknown input is then compared with the reference pattern for each sound of the recognition vocabulary and a measure of similarity between the unknown input and each reference pattern is computed. This pattern classification step can include a global time alignment procedure (known as dynamic time warping, DTW) which compensates for different rates of speaking. The similarity measures are then used to decide which reference pattern best matches the unknown input and hence what is deemed to be recognised.
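The dynamic time warping alignment described above can be sketched in a few lines (a textbook-style illustration using scalar "features" for brevity; real systems align vectors of spectral features):

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two feature sequences.

    Returns the cost of the best time alignment, so that utterances
    spoken at different rates can still be compared."""
    INF = float("inf")
    n, m = len(a), len(b)
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])              # local distance
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # step together
    return cost[n][m]

# The same "word" spoken at two different rates still aligns cheaply:
fast = [1, 3, 5, 3, 1]
slow = [1, 1, 3, 3, 5, 5, 3, 3, 1, 1]
print(dtw_distance(fast, slow))  # 0.0
```

The quadratic table of alignment costs is one reason DTW templates are comparatively expensive at recognition time and in storage.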
The intended use of the speech recogniser can also determine the characteristics of the system. For instance a system that is designed to be speaker dependent only requires training inputs from a single speaker. Thus the models or templates represent the input speech of a particular speaker rather than the average speech for a number of users. Whilst such a system has a good recognition rate for the speaker from whom the training inputs were received, such a system is obviously not suitable for use by other users.
Speaker independent recognition relies on word models being formed from the speech signals of a plurality of speakers. Statistical models or templates representing all the training speech signals of each particular speech input are formed for subsequent recognition purposes. Whilst speaker independent systems perform relatively well for a large number of users, the performance of a speaker independent system is likely to be low for a user having an accent, intonation, enunciation etc. that differs significantly from the training samples.
In order to extend the acceptable vocabulary, sufficient training samples of the additional vocabulary have to be obtained. This is a time consuming operation, which may not be justified if the vocabulary is changing repeatedly.
It is known to provide speech recognition systems in which the vocabulary that a system is to be able to recognise may be extended by a service provider inputting the additional vocabulary in text form. An example of such a system is Flexword from AT&T. In such a system words are converted from text form into their phonetic transcriptions according to linguistic rules. It is these transcriptions that are used in a recogniser which has acoustic models of each of the phonemes.
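The text-to-transcription step can be illustrated as a simple lexicon lookup (the word list, transcriptions and function names here are hypothetical examples following ARPAbet-style symbols; a system such as Flexword derives transcriptions by linguistic letter-to-sound rules rather than a fixed table):

```python
# Hypothetical mini-lexicon mapping vocabulary words, entered as text,
# to phoneme transcriptions.
LEXICON = {
    "bait": ["B", "EY", "T"],
    "boat": ["B", "OW", "T"],
    "shut": ["SH", "AH", "T"],
}

def transcribe(word):
    """Return the phoneme sequence for a word added in text form."""
    try:
        return LEXICON[word.lower()]
    except KeyError:
        raise ValueError(f"no transcription available for {word!r}")

print(transcribe("boat"))  # ['B', 'OW', 'T']
```

Because the recogniser holds an acoustic model for each phoneme, any word that can be transcribed this way can be added to the vocabulary without collecting new training speech.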
The number of phonemes in a language is often a matter of judgement and may depend upon the particular linguist involved. In the English language there are around 40 phonemes as shown in Table 1.
TABLE 1
Phoneme  Transcription  Example      Phoneme  Transcription  Example
/i/      IY             beat         /ŋ/      NG             sing
/I/      IH             bit          /p/      P              pet
/eɪ/     EY             bait         /t/      T              ten
/ɛ/      EH             bet          /k/      K              kit
/æ/      AE             bat          /b/      B              bet
/a/      AA             Bob          /d/      D              debt
/ʌ/      AH             but          /g/      G              get
/ɔ/      AO             bought       /h/      HH             hat
/oʊ/     OW             boat         /f/      F              fat
/U/      UH             book         /θ/      TH             thing
/u/      UW             boot         /s/      S              sat
/ə/      AX             about        /ʃ/      SH             shut
/ɝ/      ER             bird         /v/      V              vat
/aʊ/     AW             down         /ð/      DH             that
/aɪ/     AY             buy          /z/      Z              zoo
/ɔɪ/     OY             boy          /ʒ/      ZH             azure
/y/      Y              you          /tʃ/     CH             church
/w/      W              wit          /dʒ/     JH             judge
/r/      R              rent         /m/      M              met
/l/      L              let          /n/      N              net
References herein to phonemes or sub-words relate to any convenient building block of words, for instance phonemes, strings of phonemes, allophones etc. The terms phoneme and sub-word are used interchangeably herein and refer to this broader interpretation.
For recognition purposes, a network of the phonemically transcribed text can then be formed from stored models representing the individual phonemes. During recognition, input speech is compared to the strings of reference models representing each allowable word or phrase. The models representing the individual phonemes may be generated in a speaker independent manner, from the speech signals of a number of different speakers. Any suitable models may be used, such as Hidden Markov Models.
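Forming a word reference from stored phoneme models can be sketched as follows (a minimal illustration: the per-phoneme "models" are placeholder vectors standing in for trained models such as Hidden Markov Models, and all names are assumptions):

```python
# Placeholder speaker-independent phoneme models; a real recogniser
# would store e.g. one trained Hidden Markov Model per phoneme.
PHONEME_MODELS = {
    "B": [0.1, 0.9],
    "OW": [0.7, 0.2],
    "T": [0.4, 0.4],
}

def build_network(lexicon):
    """Map each allowable word to the concatenation of the stored
    models for its phoneme transcription."""
    return {word: [PHONEME_MODELS[p] for p in phonemes]
            for word, phonemes in lexicon.items()}

net = build_network({"boat": ["B", "OW", "T"]})
print(net["boat"])  # [[0.1, 0.9], [0.7, 0.2], [0.4, 0.4]]
```

Input speech is then scored against each concatenated string of models, so new words cost only a transcription, not new acoustic training data.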
Such a system does not make any allowance for deviations from the standard phonemic transcriptions of words, for instance if a person has a strong accent. Thus, even though a user has spoken a word that is in the vocabulary of the system, the input speech may not be recognised as such.
It is desirable to be able to adapt a speaker independent system so that it is feasible for use by a user with a pronunciation that differs from the modelled speaker. European patent application no. 453649 describes such an apparatus in which the allowed words of the apparatus vocabulary are modelled by a concatenation of models representing sub-units of words e.g. phonemes. The “word” models i.e. the stored concatenations, are then trained to a particular user's speech by estimating new parameters for the word model from the user's speech. Thus known, predefined word models (formed from a concatenation of phoneme models) are adapted to suit a particular user.
Similarly European patent application no. 508225 describes a speech recognition apparatus in which words to be recognised are stored together with a phoneme sequence representing the word. During training a user speaks the words of the vocabulary and the parameters of the phoneme models are adapted to the user's input.
In both of these known systems, a predefined vocabulary is required in the form of concatenated sequences of phonemes. However, in many cases it would be desirable for a user to add words to the vocabulary, such words being specific to that user. One known means of providing an actual user with this flexibility involves using speaker dependent technology to form new word models, which are then stored in a separate lexicon. The user has to speak each word one or more times to train the system. These speaker dependent models are usually formed using DTW or similar techniques, which require relatively large amounts of memory to store each user's templates. Typically, each word for each user would occupy at least 125 bytes (and possibly over 2 kilobytes). This means that with a 20 word vocabulary, between 2.5 and 40 kilobytes must be stored for each user.
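The storage figures quoted above follow directly from the per-word template sizes (taking "over 2 kilobytes" as roughly 2000 bytes):

```python
# Per-user template storage, using the figures from the text above.
vocab_size = 20
per_word_min, per_word_max = 125, 2000   # bytes per word template
print(vocab_size * per_word_min)  # 2500 bytes,  i.e. about 2.5 kilobytes
print(vocab_size * per_word_max)  # 40000 bytes, i.e. about 40 kilobytes
```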
British Telecommunications public limited company