Classification: Electrical audio signal processing systems and devices – Hearing aids, electrical – Noise compensation circuit
Type: Reexamination Certificate (active)
Patent number: 6,647,123
Filed: 2002-03-04
Issued: 2003-11-11
Examiner: Kuntz, Curtis (Department: 2643)
U.S. classes: C381S312000; C381S313000; C381S317000; C381S320000; C381S321000; C704S209000; C704S225000; C704S234000; C379S008000
ABSTRACT:
FIELD OF THE INVENTION
The present invention relates generally to an electro-acoustic processing circuit for increasing speech intelligibility. More specifically, this invention relates to an audio device having signal processing capabilities for amplifying selected voice frequency bands without circuit instability and oscillation, thereby increasing speech intelligibility for persons with a sensory neural hearing disorder.
BACKGROUND OF THE INVENTION
Persons with a sensory neural hearing disorder find the speech of others to be less intelligible in a variety of circumstances where those with normal hearing would find the same speech to be intelligible. Many persons with sensory neural hearing disorder find that they can satisfactorily increase the intelligibility of the speech of others by cupping their auricle with their hand or by using an ear trumpet directed into the external auditory canal.
Many patients with sensory neural hearing disorder have normal or near-normal pure-tone sensitivity to some of the speech frequencies below about 1000 Hz. These frequencies generally comprise the first speech formant. Associated with many patients' sensory neural hearing disorder is a diminished absolute sensitivity for pure-tone frequencies higher than the first speech formant. This reduced sensitivity generally signifies a loss of perception of the second speech formant, which occupies the voice spectrum between about 1000 Hz and 2800 Hz. Not only is the patient's absolute sensitivity lost for the frequencies of the second formant, but the normal loudness relationship between the frequencies of the first and second formants is altered, with those of the second formant being less loud at ordinary suprathreshold speech levels of 40-60 phons. Thus, when electro-acoustical hearing aids amplify both formants by an approximately equal amount at normal speech input levels, the loudness of the second formant relative to the first is lacking, and voices sound unintelligible, muffled, and basso.
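The frequency-selective amplification this motivates can be illustrated with a simple band-gain table. This is a hypothetical sketch: the band edges follow the formant ranges described above, but the gain values are invented for illustration and are not taken from the patent.

```python
# Illustrative formant-weighted gain shaping (hypothetical gain values).
FIRST_FORMANT_HZ = (250.0, 1000.0)    # first speech formant band
SECOND_FORMANT_HZ = (1000.0, 2800.0)  # second speech formant band

def band_gain_db(freq_hz: float) -> float:
    """Return a gain (dB) that boosts the second formant relative to the first,
    compensating for the altered loudness relationship between the formants."""
    lo1, hi1 = FIRST_FORMANT_HZ
    lo2, hi2 = SECOND_FORMANT_HZ
    if lo2 <= freq_hz < hi2:
        return 20.0   # restore the relative loudness of the second formant
    if lo1 <= freq_hz < hi1:
        return 6.0    # modest gain where sensitivity is near normal
    return 0.0        # leave other frequencies unamplified

# The second formant receives more gain than the first:
assert band_gain_db(1500.0) > band_gain_db(500.0)
```

Applying equal gain to both bands, by contrast, would leave the second formant relatively too quiet, which is the failure mode the preceding paragraph describes.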
Patients with sensory neural hearing disorder often have difficulty following the spoken message of a given speaker in the presence of irrelevant speech or other sounds in the lower speech spectrum. They may hear constant or intermittent head sounds (tinnitus); they may have a reduced range of comfortable loudness (recruitment); they may hear a differently pitched sound from the same tone presented to each ear (diplacusis binauralis); or they may mishear what has been said to them.
It is well established that for those with normal hearing, the first and second speech formants, which together occupy the audio frequency band of about 250 Hz to 2800 Hz, are both necessary and sufficient for satisfactory speech intelligibility of a spoken message. This was demonstrated in telephonic communication equipment, e.g. the EE-8-A field telephone of WWII vintage, and by the development of the “vocoder” and its incorporation into WWII voice encryption systems (U.S. Pat. No. 3,967,067 to Potter and U.S. Pat. No. 3,967,066 to Mathes, as described by Kahn, IEEE Spectrum, September 1984, pp. 70-80).
The vocoding and encryption process analyzed the speech signal into a plurality of contiguous bands, each about 250-300 Hz wide. After rectification, digitization, and combination with a random digital code supplied for each band, the combined digitized signals were transmitted to a distant decoding and re-synthesizing system. This system first subtracted the random code using a recorded duplicate of the code. It then reconstituted the voice by separately modulating the output of each of the plurality of channels, which were supplied from a single “buzz” source rich in the harmonics of a variable-frequency fundamental centered near 60 Hz (if the voice were that of a male).
At no point in this voice transmission was any of the original (analogue) speech signal transmitted. The resynthesis of the speech signal was accomplished with a non-vocally produced fundamental frequency and its harmonics, which were used to produce the voiced sounds. The unvoiced speech sounds were derived from an appropriately supplied “hiss” source, also modulated, and used to produce the fricative sounds. Because of the limitations imposed by the number of channels and their widths, the synthesized voice contained information (frequencies) from the first and second reconstituted speech formants. Although it sounded robot-like to those with normal hearing, the reconstituted speech was entirely intelligible, and because no analogue signal was transmitted, the system could be used with perfect security.
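The add-then-subtract keying described above can be sketched as modular arithmetic on each band's quantized envelope. This is a deliberate simplification: the number of quantization levels and the key handling below are illustrative choices, not the historical systems' exact parameters.

```python
import random

LEVELS = 6  # quantization steps per band (illustrative choice)

def encrypt(envelopes, key):
    """Combine each band's quantized envelope with its random key value, mod LEVELS.
    Only this keyed signal is transmitted; the analogue speech never is."""
    return [(e + k) % LEVELS for e, k in zip(envelopes, key)]

def decrypt(ciphertext, key):
    """Subtract the same key (from a recorded duplicate) to recover the envelopes."""
    return [(c - k) % LEVELS for c, k in zip(ciphertext, key)]

rng = random.Random(42)
envelopes = [3, 0, 5, 2, 1, 4]                  # one quantized sample per channel
key = [rng.randrange(LEVELS) for _ in envelopes]

sent = encrypt(envelopes, key)
assert decrypt(sent, key) == envelopes          # exact round-trip recovery
```

The recovered envelopes then drive the per-channel modulation of the shared buzz or hiss source at the receiving end.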
It is also important to note that the content of each of the plurality of bands that make up vocoder speech is derived from the same harmonic-rich buzz source. The harmonic matrix thus forms the basis of an intercorrelated system of voice sounds throughout the speech range comprising the first and second formants. Intelligibility therefore depends, among other things, upon maintaining the integrity of the first and second speech formants in an appropriate loudness relationship to one another. These relationships were preserved in the encrypted vocoding process and in the subsequent resynthesizing process.
The diminished capability to decipher the speech of others is the principal reason that sensory-neural patients seek hearing assistance. Prior to the development of electro-acoustical hearing aids, hearing assistance was obtained largely by an extension of the auricle, either with a “louder please” gesture (ear cupping) or an ear trumpet. Both of these means are effective for many sensory-neural patients but have the disadvantage that they are highly conspicuous and therefore not readily acceptable, as means of assistance, to the patients who could be aided by them. Modern electro-acoustical hearing aids, in contrast, are much less conspicuous but bring with them undesirable features that make them objectionable to many patients.
The results of modern hearing aid speech signal processing differ greatly from the horn-like acoustical processing characteristics provided by either the passive device of an ear trumpet or a hand used for ear cupping. Especially for the frequencies of the second speech formant, the latter provide significant acoustic gain in the form of enhanced impedance matching between the air medium outside the ear and the outer ear canal. The passive devices moreover provide less gain for the first speech formant frequencies and do not introduce intrinsic, extraneous hearing-aid-generated sounds into the signals that are passed to the patient's eardrum. They also provide a signal free of ringing and of oscillation, or the tendency to oscillate, at audible frequencies, which occurs usually at about 2900 Hz and is called “howl” or “whistle” in the prior art. Moreover, passive devices, being intrinsically linear in an amplitude sense, convey their signals without extraneous intermodulation products. As stable systems, passive devices have excellent transient response characteristics, are free of the tendency to ring, have stable acoustic gain, and have stable bandwidth characteristics.
An electro-acoustic hearing aid, in contrast, consists basically of a microphone, an earphone or loudspeaker, and an electronic amplifier between the two, all connected together in one portable unit. Such electro-acoustical aids inevitably provide a short air path between the microphone and the earphone or loudspeaker, whether or not the two are housed in a single casing. If the unit is an in-the-ear type electro-acoustic hearing aid, there is almost inevitably a narrow vent channel or passageway through which the output of the earphone or loudspeaker may pass to the input microphone. This passageway provides a second pathway for the voice of the person speaking to the aid wearer, whereby audio signals traveling in it reach the patient's auditory system (eardrum) unmodified by the aid.
Significant acoustic coupling between the microphone and the earphone renders the entire electronic system marginally stable, with the potential for
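The marginal stability described above can be stated as the Barkhausen magnitude criterion: oscillation (“howl”) becomes possible when the product of the amplifier's forward gain and the feedback path's leakage reaches unity at a frequency where the loop phase is a multiple of 360°. A minimal sketch, with hypothetical gain figures:

```python
def loop_gain(forward_gain: float, feedback_path_gain: float) -> float:
    """Magnitude of the open-loop gain: amplifier gain times the fraction of
    output that leaks back to the microphone (e.g. through a vent channel)."""
    return forward_gain * feedback_path_gain

def can_howl(forward_gain: float, feedback_path_gain: float) -> bool:
    """Barkhausen magnitude condition: loop gain >= 1 permits sustained
    oscillation, assuming the phase condition is also met at some
    audible frequency (about 2900 Hz in the prior art's "howl")."""
    return loop_gain(forward_gain, feedback_path_gain) >= 1.0

# A 40 dB (x100) amplifier with a leakage path attenuating to 2% gives a
# loop gain of 2.0 (unstable); reducing leakage to 0.5% restores stability.
assert can_howl(100.0, 0.02)
assert not can_howl(100.0, 0.005)
```

This is why higher second-formant gain, which the patient needs, directly raises the risk of feedback oscillation unless the circuit compensates for it.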
Inventors: Kandel, Gillray L.; Ostrander, Lee E.
Assignee: Bioinstco Corp
Attorney: Harvey, Dionne N.