Method and apparatus for improved duration modeling of phonemes

Data processing: speech signal processing, linguistics, language – Speech signal processing – Synthesis

Reexamination Certificate

Details

C704S267000, C704S269000

active

06785652

ABSTRACT:

FIELD OF THE INVENTION
This invention relates to speech synthesis systems. More particularly, this invention relates to the modeling of phoneme duration in speech synthesis.
BACKGROUND OF THE INVENTION
Speech is used to communicate information from a speaker to a listener. Human speech production involves thought conveyance through a series of neurological processes and muscular movements that produce an acoustic sound pressure wave. To produce speech, a speaker converts an idea into a linguistic structure by choosing appropriate words or phrases to represent the idea, orders the words or phrases according to the grammatical rules of the language, and adds any additional local or global characteristics, such as pitch intonation, duration, and stress, to emphasize aspects important to the overall meaning. Therefore, once a speaker has formed a thought to be communicated to a listener, the speaker constructs a phrase or sentence by choosing from a finite collection of mutually exclusive sounds, or phonemes. Following phrase or sentence construction, the human brain produces a sequence of motor commands that move the various muscles of the vocal system to produce the desired sound pressure wave.
Speech can be characterized in terms of acoustic phonetics and articulatory phonetics. Acoustic phonetics describes the frequency-structure and time-waveform characteristics of speech. Acoustic phonetics shows the spectral characteristics of the speech wave to be time-varying, or nonstationary, since the physical system changes rapidly over time. Consequently, speech can be divided into sound segments that possess similar acoustic properties over short periods of time. A time waveform of a speech signal is used to determine signal periodicities, intensities, durations, and boundaries of individual speech sounds. This time waveform indicates that speech is not a string of discrete well-formed sounds, but rather a series of steady-state or target sounds with intermediate transitions. The preceding and succeeding sounds in a string can grossly affect whether a target is reached completely, how long it is held, and other finer details of the sound. Because the string of sounds forming a particular utterance is continuous, there exists an interplay between the sounds of the utterance called coarticulation. Coarticulation is the term used to refer to the change in phoneme articulation and acoustics caused by the influence of another sound in the same utterance.
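For illustration only, the following minimal Python sketch (assuming NumPy; the frame sizes and threshold are arbitrary assumptions, not values from the patent) shows how short-time energy computed from the time waveform can yield rough intensities and segment boundaries of the kind described above.
```python
import numpy as np

def short_time_energy(signal, frame_len=400, hop=160):
    """Short-time energy per frame (e.g., 25 ms frames with a 10 ms hop at 16 kHz)."""
    x = np.asarray(signal, dtype=float)
    starts = range(0, len(x) - frame_len + 1, hop)
    return np.array([np.sum(x[s:s + frame_len] ** 2) for s in starts])

def rough_boundaries(energy, threshold_ratio=0.1):
    """Frame indices where the energy crosses a threshold; a crude stand-in for
    locating the durations and boundaries of individual speech sounds."""
    active = energy > threshold_ratio * energy.max()
    return np.flatnonzero(np.diff(active.astype(int))) + 1
```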
Articulatory phonetics are described as the manner or place of articulation or the manner or place of adjustment and movement of speech organs involved in pronouncing an utterance. Changes found in the speech waveform are a direct consequence of movements of the speech system articulators, which rarely remain fixed for any sustained period of time. The speech system articulators are defined as the finer human anatomical components that move to different positions to produce various speech sounds. The speech system articulators comprise the vocal folds or vocal cords, the soft palate or velum, the tongue, the teeth, the lips, the uvula, and the mandible or jaw. These articulators determine the properties of the speech system because they are responsible for regions of emphasis, or resonances, and deemphasis, or antiresonances, for each sound in a speech signal spectrum. These resonances are a consequence of the articulators having formed various acoustical cavities and subcavities out of the vocal tract cavities. Therefore, each vocal tract shape is characterized by a set of resonant frequencies. Since these resonances tend to “form” the overall spectrum they are referred to as formants.
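As a purely illustrative aside (not part of the patent), vocal-tract resonances of the kind described above are commonly estimated by linear prediction; a minimal Python sketch, assuming NumPy and SciPy, might look like this:
```python
import numpy as np
from scipy.linalg import solve_toeplitz

def estimate_formants(frame, fs, order=12):
    """Rough formant estimates from one windowed, voiced frame via linear prediction:
    roots of the LPC polynomial near the unit circle approximate vocal-tract resonances."""
    frame = np.asarray(frame, dtype=float) * np.hamming(len(frame))
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]   # autocorrelation, lags 0..N-1
    a = solve_toeplitz(r[:order], r[1:order + 1])                  # LPC coefficients
    roots = np.roots(np.concatenate(([1.0], -a)))                  # poles of the all-pole model
    roots = roots[np.imag(roots) > 0]                              # keep one of each conjugate pair
    return np.sort(np.angle(roots) * fs / (2 * np.pi))             # candidate formant frequencies (Hz)
```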
One prior art approach to speech synthesis is the formant synthesis approach. The formant synthesis approach is based on a mathematical model of the human vocal tract in which a time domain speech signal is Fourier transformed. The transformed signal is evaluated for each formant, and the speech synthesis system is programmed to recreate the formants associated with particular sounds. The problem with the formant synthesis approach is that the transition between individual sounds is difficult to recreate. This results in synthetic speech that sounds contrived and unnatural.
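By way of a hedged illustration of the general idea (a generic textbook-style sketch, not the synthesizer discussed in this patent), a cascade of second-order resonators driven by an impulse train can approximate formant synthesis; assuming NumPy and SciPy, with purely illustrative formant values:
```python
import numpy as np
from scipy.signal import lfilter

def resonator(signal, freq, bandwidth, fs):
    """Second-order digital resonator modeling one formant."""
    r = np.exp(-np.pi * bandwidth / fs)
    theta = 2 * np.pi * freq / fs
    a = [1.0, -2.0 * r * np.cos(theta), r * r]
    b = [1.0 - 2.0 * r * np.cos(theta) + r * r]   # normalize gain at DC
    return lfilter(b, a, signal)

def synthesize_vowel(formants, bandwidths, f0=120, fs=16000, dur=0.3):
    """Crude formant synthesis of a sustained vowel: an impulse train at the
    pitch period passed through a cascade of formant resonators."""
    n = int(dur * fs)
    source = np.zeros(n)
    source[::int(fs / f0)] = 1.0          # glottal source approximated by impulses
    out = source
    for f, bw in zip(formants, bandwidths):
        out = resonator(out, f, bw, fs)
    return out / np.abs(out).max()

# An /a/-like vowel with three formants (values are illustrative, not from the patent)
vowel = synthesize_vowel(formants=[730, 1090, 2440], bandwidths=[90, 110, 170])
```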
While speech production involves a complex sequence of articulatory movements timed so that vocal tract shapes occur in a desired phoneme sequence order, expressive uses of speech depend on tonal patterns of pitch, syllable stresses, and timing to form rhythmic speech patterns. Timing and rhythms of speech provide a significant contribution to the formal linguistic structure of speech communication. The tonal and rhythmic aspects of speech are referred to as the prosodic features. The acoustic patterns of prosodic features are heard in changes in duration, intensity, fundamental frequency, and spectral patterns of the individual phonemes.
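Since duration is the prosodic feature at issue in this patent, the following toy Python sketch shows a classic rule-based duration model of the Klatt type (an inherent duration shrunk toward a minimum and scaled by contextual factors). All phoneme labels, millisecond values, and rules are illustrative assumptions and are not taken from the patent.
```python
# Illustrative inherent and minimum durations (ms) for a few phonemes.
BASE_MS = {"AA": 120, "IY": 100, "S": 90, "T": 60}
MIN_MS = {"AA": 60, "IY": 50, "S": 45, "T": 30}

def predict_duration(phoneme, stressed=False, phrase_final=False):
    """Klatt-style rule: duration = minimum + (inherent - minimum) * product of factors."""
    scale = 1.0
    if stressed:
        scale *= 1.2       # stressed syllables lengthen
    if phrase_final:
        scale *= 1.4       # phrase-final lengthening
    return MIN_MS[phoneme] + (BASE_MS[phoneme] - MIN_MS[phoneme]) * scale

print(predict_duration("AA", stressed=True, phrase_final=True))  # 160.8 ms
```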
A phoneme is the basic theoretical unit for describing how speech conveys linguistic meaning. As such, the phonemes of a language comprise a minimal theoretical set of units that are sufficient to convey all meaning in the language; this is to be compared with the actual sounds that are produced in speaking, which speech scientists call allophones. For American English, there are approximately 50 phonemes which are made up of vowels, semivowels, diphthongs, and consonants. Each phoneme can be considered to be a code that consists of a unique set of articulatory gestures. If speakers could exactly and consistently produce these phoneme sounds, speech would amount to a stream of discrete codes. However, because of many different factors including, for example, accents, gender, and coarticulatory effects, every phoneme has a variety of acoustic manifestations in the course of flowing speech. Thus, from an acoustical point of view, the phoneme actually represents a class of sounds that convey the same meaning.
The most abstract problem involved in speech synthesis is enabling the speech synthesis system with the appropriate language constraints. Whether phones, phonemes, syllables, or words are viewed as the basic unit of speech, language (or linguistic) constraints are generally concerned with how these fundamental units may be concatenated, in what order, in what context, and with what intended meaning. For example, if a speaker is asked to voice a phoneme in isolation, the phoneme will be clearly identifiable in the acoustic waveform. However, when spoken in context, phoneme boundaries become difficult to label because of the physical properties of the speech articulators. Since the vocal tract articulators consist of human tissue, their positioning from one phoneme to the next is executed by movement of muscles that control articulator movement. As such, the duration of a phoneme and the transition between phonemes can modify the manner in which a phoneme is produced. Therefore, associated with each phoneme is a collection of allophones, or variations on phones, that represent acoustic variations of the basic phoneme unit. Allophones represent the permissible freedom allowed within a particular language in producing a phoneme, and this flexibility is dependent on the phoneme as well as on the phoneme position within an utterance.
Another prior art approach to speech synthesis is the concatenation approach. The concatenation approach is more flexible than the formant synthesis approach because, in combining diphone sounds from different stored words to form new words, the concatenation approach better handles the transition between phoneme sounds. The concatenation approach is also advantageous because it eliminates the decision on which formant or which portion of the frequency band of a particular sound is to be used in the synthesis of the sound. The disadvantage of the concatenation approach is that discontinuities occur when the diphones from different words are combined to form new words. These discontinuities are the result of slight differences in frequency, magnitude, and phase between different diphones.
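For illustration (a generic smoothing technique, not the method of this patent), one common way to soften such join discontinuities is to cross-fade the overlapping edges of adjacent units; a minimal NumPy sketch, with an arbitrary overlap length:
```python
import numpy as np

def crossfade_concatenate(units, overlap=80):
    """Concatenate diphone waveforms with a linear cross-fade over `overlap` samples,
    a simple way to reduce magnitude and phase discontinuities at unit joins."""
    out = np.asarray(units[0], dtype=float)
    fade_in = np.linspace(0.0, 1.0, overlap)
    fade_out = 1.0 - fade_in
    for unit in units[1:]:
        unit = np.asarray(unit, dtype=float)
        blended = out[-overlap:] * fade_out + unit[:overlap] * fade_in
        out = np.concatenate([out[:-overlap], blended, unit[overlap:]])
    return out
```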
In using the concatenation approach for speech synthesis, four
