Automatic construction of conditional exponential models...

Data processing: speech signal processing, linguistics, language translation


Details

C705S001100

Reexamination Certificate

active

06304841

ABSTRACT:

BACKGROUND OF THE INVENTION
The invention relates to computerized language translation, such as computerized translation of a French sentence into an English sentence.
In U.S. Pat. No. 5,477,451, issued Dec. 19, 1995, entitled “Method and System For Natural Language Translation” by Peter F. Brown et al. (the entire content of which is incorporated herein by reference), there is described a computerized language translation system for translating a text F in a source language to a text E in a target language. The system described therein evaluates, for each of a number of hypothesized target language texts E, the conditional probability P(E|F) of the target language text E given the source language text F. The hypothesized target language text E having the highest conditional probability P(E|F) is selected as the translation of the source language text F.
Using Bayes' theorem, the conditional probability P(E|F) of the target language text E given the source language text F can be written as
P(E|F) = P(F|E) P(E) / P(F)   (1a)
Since the probability P(F) of the source language text F in the denominator of Equation 1a is independent of the target language text E, the target language text E having the highest conditional probability P(E|F) will also have the highest product P(F|E) P(E). We therefore arrive at
Ê = argmax_E P(F|E) P(E)   (2a)
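As a concrete illustration of this decision rule, the following minimal sketch (not the patent's implementation) picks, from a set of candidate target hypotheses, the one with the highest product of the two scores. The functions translation_score and language_model_score are hypothetical placeholders standing in for P(F|E) and P(E), respectively.

```python
import math

def decode(source_text, candidate_hypotheses,
           translation_score, language_model_score):
    """Return the hypothesis E maximizing P(F|E) * P(E), per Equation 2a.

    translation_score(f, e) and language_model_score(e) are hypothetical
    placeholders for P(F|E) and P(E); both are assumed to return nonzero
    probabilities, which are combined in log space to avoid underflow.
    """
    best_hypothesis, best_log_score = None, float("-inf")
    for e in candidate_hypotheses:
        log_score = (math.log(translation_score(source_text, e)) +
                     math.log(language_model_score(e)))
        if log_score > best_log_score:
            best_hypothesis, best_log_score = e, log_score
    return best_hypothesis
```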
In Equation 2a, the probability P(E) of the target language text E is a language model match score and may be estimated from a target language model. While any known language model may be used to estimate the probability P(E) of the target language text E, Brown et al. describe an n-gram language model comprising a 1-gram model, a 2-gram model, and a 3-gram model combined by parameters whose values are obtained by interpolated estimation.
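Such an interpolated n-gram model can be sketched as follows. This is an illustrative reconstruction rather than Brown et al.'s implementation: the interpolation weights are fixed constants here, whereas Brown et al. obtain them by interpolated estimation.

```python
from collections import Counter

class InterpolatedTrigramLM:
    """1-gram, 2-gram, and 3-gram estimates combined by interpolation weights.

    The weights are fixed for illustration; in Brown et al. they are
    obtained by interpolated estimation rather than set by hand.
    """

    def __init__(self, sentences, lambdas=(0.1, 0.3, 0.6)):
        self.l1, self.l2, self.l3 = lambdas
        self.uni, self.bi, self.tri = Counter(), Counter(), Counter()
        self.bi_ctx, self.tri_ctx = Counter(), Counter()
        self.total = 0
        for words in sentences:
            padded = ["<s>", "<s>"] + list(words) + ["</s>"]
            for i in range(2, len(padded)):
                w1, w2, w3 = padded[i - 2], padded[i - 1], padded[i]
                self.uni[w3] += 1
                self.bi[(w2, w3)] += 1
                self.tri[(w1, w2, w3)] += 1
                self.bi_ctx[w2] += 1
                self.tri_ctx[(w1, w2)] += 1
                self.total += 1

    def prob(self, w1, w2, w3):
        """Interpolated estimate of P(w3 | w1, w2)."""
        p1 = self.uni[w3] / self.total if self.total else 0.0
        p2 = self.bi[(w2, w3)] / self.bi_ctx[w2] if self.bi_ctx[w2] else 0.0
        p3 = (self.tri[(w1, w2, w3)] / self.tri_ctx[(w1, w2)]
              if self.tri_ctx[(w1, w2)] else 0.0)
        return self.l1 * p1 + self.l2 * p2 + self.l3 * p3

# The interpolation lets an unseen 3-gram inherit probability mass from
# its 2-gram and 1-gram parts.
lm = InterpolatedTrigramLM([["the", "cat", "sat"], ["the", "dog", "ran"]])
print(lm.prob("the", "cat", "ran"))  # nonzero despite the unseen 3-gram
```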
The conditional probability P(F|E) in Equation 2a is a translation match score. As described by Brown et al., the translation match score P(F|E) for a source text F comprising a series of source words, given a target hypothesis E comprising a series of target words, may be estimated by finding all possible alignments connecting the source words in the source text F with the target words in the target text E, including alignments in which one or more source words are not connected to any target words, but not including alignments in which a source word is connected to more than one target word. For each alignment and each target word e in the target text E connected to φ source words in the source text F, there is estimated the fertility probability n(φ|e) that the target word e is connected to the φ source words in the alignment. There is also estimated, for each source word f in the source text F and each target word e in the target text E connected to the source word f by the alignment, the lexical probability t(f|e) that the source word f would occur given the occurrence of the connected target word e.
For each alignment and each source word f, Brown et al. further estimate the distortion probability a(j|a_j, m) that the source word f is located in position j of the source text F, given that the target word e connected to the source word f is located in position a_j in the target text E, and given that there are m words in the source text F.
By combining the fertility probabilities for an alignment and for all target words e in the target text E, and multiplying the result by the probability n_0(φ_0|Σφ_i) of the number φ_0 of target words not connected with any source words in the alignment, given the sum of the fertilities φ of all of the target words in the target text E in the alignment, a fertility score for the target text E and the alignment is obtained.
By combining the lexical probabilities for an alignment and for all source words in the source text F, a lexical score for the alignment is obtained.
By combining the distortion probabilities for an alignment and for all source words in the source text F which are connected to a target word in the alignment, and by multiplying the result by 1/φ_0! (where φ_0 is the number of target words in the target text E that are not connected with any source words), a distortion score for the alignment is obtained.
Finally, by combining the fertility, lexical, and distortion scores for the alignment, and multiplying the result by the combinatorial factor Π(φ_i!), a translation match score for the alignment is obtained. (See Brown et al., Section 8.2.)
The translation match score P(F|E) for the source text F and the target hypothesis E may be the sum of the translation match scores for all permitted alignments between the source text F and the target hypothesis E. Preferably, the translation match score P(F|E) for the source text F and the target hypothesis E is the translation match score for the alignment estimated to be most probable.
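The combination of fertility, lexical, and distortion scores described above can be sketched as follows. This is an illustrative reconstruction, not the patent's or Brown et al.'s code: the probability tables n, t, a, and n0 are assumed to be pre-trained, dict-like lookups keyed by tuples; an alignment is represented as a list giving, for each source position, the connected target position (0 for unconnected); and phi0 here counts the source words left unconnected by the alignment, which the permitted alignments described above allow.

```python
from math import factorial

def alignment_score(source_words, target_words, alignment, n, t, a, n0):
    """Translation match score of one permitted alignment.

    alignment[j-1] is the target position connected to source position j,
    with 0 meaning that source word is connected to no target word.
    The tables n, t, a, n0 are hypothetical dict-like probability lookups.
    """
    m = len(source_words)
    # Fertility of each target position: how many source words connect to it.
    phi = [0] * (len(target_words) + 1)
    for aj in alignment:
        phi[aj] += 1
    phi0 = phi[0]  # source words connected to no target word

    score = 1.0
    # Fertility score: n(phi|e) for every target word e ...
    for i, e in enumerate(target_words, start=1):
        score *= n[(phi[i], e)]
    # ... times the probability of phi0 unconnected words given the sum of
    # the fertilities of the target words.
    score *= n0[(phi0, sum(phi[1:]))]
    # Lexical and distortion scores over the connected source words.
    for j, f in enumerate(source_words, start=1):
        aj = alignment[j - 1]
        if aj > 0:
            e = target_words[aj - 1]
            score *= t[(f, e)]          # lexical probability t(f|e)
            score *= a[(j, aj, m)]      # distortion probability a(j|a_j, m)
    # Combinatorial factor and the 1/phi0! correction.
    for i in range(1, len(target_words) + 1):
        score *= factorial(phi[i])
    score /= factorial(phi0)
    return score

def translation_match_score(source_words, target_words, alignments,
                            n, t, a, n0):
    """P(F|E), approximated by the most probable of the permitted alignments."""
    return max(alignment_score(source_words, target_words, al, n, t, a, n0)
               for al in alignments)
```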
Equation 2a may be used to directly estimate the target hypothesis match score P(F|E)P(E) for a hypothesized target language text E and a source language text F. However, in order to simplify the language model P(E) and the translation model P(F|E), and in order to estimate the parameters of these models from a manageable amount of training data, Brown et al. estimate the target hypothesis match score P(F|E)P(E) for simplified intermediate forms E′ and F′ of the target language text E and the source language text F, respectively. Each intermediate target language word e′ represents a class of related target language words. Each intermediate source language word f′ represents a class of related source language words. A source language transducer converts the source language text F to the intermediate form F′. The hypothesized intermediate-form target language text Ê′ having the highest hypothesis match score P(F′|E′)P(E′) is estimated from Equation 2a. A target language transducer converts the best matched intermediate target language text Ê′ to the target language text Ê.
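A minimal sketch of the conversion to an intermediate form, assuming a word-to-class mapping has already been learned; the class labels and the fall-back of leaving unknown words unchanged are illustrative choices, not the patent's transducers.

```python
def to_intermediate_form(words, word_to_class):
    """Replace each word by the label of its class of related words."""
    return [word_to_class.get(w, w) for w in words]

# Hypothetical example: days of the week share a single class, so the
# intermediate form collapses them onto one intermediate word.
word_to_class = {"monday": "<weekday>", "tuesday": "<weekday>"}
print(to_intermediate_form(["on", "monday", "morning"], word_to_class))
# -> ['on', '<weekday>', 'morning']
```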
In their language translation system, Brown et al. estimate the lexical probability of each source word f as the conditional probability t(f|e) of the source word f given solely the target word e connected to it in an alignment. Consequently, the lexical probability provides only a coarse estimate of the probability of the source word f.
STATISTICAL MODELLING
Statistical modelling addresses the problem of constructing a parameterized model to predict the future behavior of a random process. In constructing this model, we typically have at our disposal a sample of an output from the process. This sample output embodies our incomplete state of knowledge of the process; so the modelling problem is to parlay this incomplete knowledge into an accurate representation of the process. Statistical modelling is rarely an end in itself, but rather a tool to aid in decision-making.
Baseball managers (who rank among the better paid statistical modellers) employ batting average, compiled from a history of at-bats, to gauge the likelihood that a player will succeed in his next appearance at the plate. Thus informed, they manipulate their lineups accordingly. Wall Street speculators (who rank among the best paid statistical modellers) build models based on past stock price movements to predict tomorrow's fluctuations and alter their portfolios to capitalize on the predicted future. Natural language researchers, who reside at the other end of the pay scale, design language, translation and acoustic models for use in speech recognition, machine translation and natural language processing.
The past few decades have witnessed significant progress toward increasing the predictive capacity of statistical models of natural language.
These efforts, while varied in specifics, all conf
