Word boundary probability estimating, probabilistic language...

Data processing: speech signal processing – linguistics – language translation

Reexamination Certificate


Details

C704S008000, C704S009000, C704S010000

Reexamination Certificate

active

07917350

ABSTRACT:
A word n-gram probability is calculated with high accuracy in a situation where a first corpus, which is a relatively small corpus containing manually segmented word information, and a second corpus, which is a relatively large raw corpus, are given as training corpora, i.e., storage containing vast quantities of sample sentences. The vocabulary, including contextual information, is expanded from words occurring in the relatively small first corpus to words occurring in the relatively large second corpus by using a word n-gram probability estimated from an unknown-word model and the raw corpus. The first (word-segmented) corpus is used for calculating word n-grams and the probability that the boundary between two adjacent characters is the boundary between two words (the segmentation probability). The second (word-unsegmented) corpus, in which probabilistic word boundaries are assigned based on information in the first (word-segmented) corpus, is used for calculating word n-grams.
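The core idea in the abstract can be illustrated with a small sketch (illustrative only; the function names and the fallback probability `default=0.5` are assumptions, not taken from the patent): first estimate, from the word-segmented first corpus, the probability that a word boundary falls between each pair of adjacent characters, then use those probabilities to compute expected word counts in the unsegmented second corpus, treating each inter-character gap as an independent probabilistic boundary.

```python
from collections import defaultdict

def boundary_probs(segmented_sentences):
    """Estimate P(word boundary | c1, c2) from a word-segmented corpus,
    where (c1, c2) is a pair of adjacent characters."""
    boundary = defaultdict(int)
    total = defaultdict(int)
    for words in segmented_sentences:
        chars = "".join(words)
        # character positions at which a word boundary falls
        cuts, pos = set(), 0
        for w in words[:-1]:
            pos += len(w)
            cuts.add(pos)
        for i in range(len(chars) - 1):
            pair = (chars[i], chars[i + 1])
            total[pair] += 1
            if i + 1 in cuts:
                boundary[pair] += 1
    return {p: boundary[p] / total[p] for p in total}

def expected_word_count(text, word, probs, default=0.5):
    """Expected number of occurrences of `word` in unsegmented `text`,
    treating each inter-character gap as an independent boundary with
    probability looked up in `probs` (unseen pairs fall back to `default`)."""
    def p(i):  # P(boundary between text[i-1] and text[i])
        if i == 0 or i == len(text):
            return 1.0  # sentence edges are always word boundaries
        return probs.get((text[i - 1], text[i]), default)

    n, count = len(word), 0.0
    for i in range(len(text) - n + 1):
        if text[i:i + n] != word:
            continue
        # boundary before and after the candidate word, no boundary inside
        inside = 1.0
        for j in range(i + 1, i + n):
            inside *= 1.0 - p(j)
        count += p(i) * inside * p(i + n)
    return count
```

Such expected (fractional) counts over the large raw corpus can then feed into word n-gram estimation, which is the role the abstract assigns to the second, unsegmented corpus.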

REFERENCES:
patent: 5806021 (1998-09-01), Chen et al.
patent: 5987409 (1999-11-01), Tran et al.
patent: 6092038 (2000-07-01), Kanevsky et al.
patent: 6185524 (2001-02-01), Carus et al.
patent: 6363342 (2002-03-01), Shaw et al.
patent: 6411932 (2002-06-01), Molnar et al.
patent: 6738741 (2004-05-01), Emam et al.
patent: 6983248 (2006-01-01), Tahara et al.
patent: 7349839 (2008-03-01), Moore
patent: 2002/0111793 (2002-08-01), Luo et al.
patent: 2003/0093263 (2003-05-01), Chen et al.
patent: 2003/0097252 (2003-05-01), Mackie
patent: 2003/0152261 (2003-08-01), Hiroe et al.
patent: 2005/0071148 (2005-03-01), Huang et al.
patent: 2005/0091030 (2005-04-01), Jessee et al.
Luo et al., “An Iterative Algorithm to Build Chinese Language Models”, In Proc. of the 34th Annual Meeting of the Association for Computational Linguistics, pp. 139-143.
Nagata, “A Stochastic Japanese Morphological Analyzer Using a Forward-DP Backward-A* N-Best Search Algorithm”, 1994, In Proc. COLING '94, pp. 201-207.
Nagata, “Japanese OCR Error Correction Using Character Shape Similarity and Statistical Language Model”, 1998, In Proc. COLING '98, Montreal, Canada, pp. 922-928.
Teahan et al., “A Compression-Based Algorithm for Chinese Word Segmentation”, 2000, Computational Linguistics, vol. 26, No. 3, pp. 375-393.
Xue, “Chinese Word Segmentation as Character Tagging”, Feb. 2003, Computational Linguistics and Chinese Language Processing, vol. 8, No. 1, pp. 29-48.
Brent et al., “Chinese Text Segmentation with MBDP-1: Making the Most of Training Corpora”, 2001, In Proc. of the ACL 2001, pp. 90-97.
Lee et al., “Language Model Based Arabic Word Segmentation”, Jul. 2003, In Proc. of the 41st Annual Meeting of the ACL, pp. 399-406.
Nagata, Masaaki, “A Self-Organizing Japanese Word Segmenter Using Heuristic Word Identification and Re-Estimation”, 1997, pp. 203-215.
W. J. Teahan and John G. Cleary, “The Entropy of English Using PPM-Based Models”, 1996, pp. 53-62.
