Customized tokenization of domain specific text via rules...

Data processing: speech signal processing – linguistics – language – natural language

Reexamination Certificate


Details

U.S. Classes: 704/10; 704/257
Type: Reexamination Certificate
Status: active
Patent number: 6327561

ABSTRACT:

CROSS REFERENCE TO RELATED APPLICATIONS
(Not Applicable)
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
(Not Applicable)
BACKGROUND OF THE INVENTION
1. Technical Field
This invention relates to the field of text tokenization and more particularly to a method and system for supporting customized tokenization of domain-specific text.
2. Description of the Related Art
Tokenization is the process of separating text into words, punctuation and, optionally, phrases. Tokenization can include case folding of words at the beginning of a sentence and special formatting modifications to an input string, such as is sometimes done with numbers. Tokenization plays a critical role in the building of speech recognition vocabularies. Tokenization can also be used in coordination with other components of a speech recognition system, for instance with a speech correction tool or a speech analysis tool for updating a system language model. To ensure consistency, it is essential to have one common tokenizer for all applications needing a particular type of token processing, so that all applications agree on what constitutes a word.
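The basic operations described above can be illustrated with a minimal sketch. The function below is not the patented method, only a simplified example of general-purpose tokenization: it splits a string into word and punctuation tokens and case-folds the word beginning each sentence.

```python
import re

def tokenize(text):
    """Split text into word and punctuation tokens, case-folding the
    first word of each sentence (a simplified illustration only)."""
    # Separate runs of word characters from sentence punctuation.
    tokens = re.findall(r"[A-Za-z0-9']+|[.,!?;]", text)
    folded = []
    start_of_sentence = True
    for tok in tokens:
        # Case-fold a word that opens a sentence.
        if start_of_sentence and tok[0].isalpha():
            folded.append(tok.lower())
            start_of_sentence = False
        else:
            folded.append(tok)
        # Sentence-final punctuation starts a new sentence.
        if tok in ".!?":
            start_of_sentence = True
    return folded

print(tokenize("The dog barked. It ran."))
# → ['the', 'dog', 'barked', '.', 'it', 'ran', '.']
```

A real tokenizer for speech vocabularies would also handle written-to-spoken conversions (for instance, numbers), which is where the domain-specific complexity discussed below arises.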
The inherent difficulty of processing a variety of electronic text can greatly expand the complexity of the source code forming the tokenization program. Typically, several hundred lines of source code are needed to form a tokenization program able to convert written forms to spoken forms, dividing character streams at logical word boundaries. Tokenization code can become particularly complex and troublesome in view of the multiple uses for common symbols, such as the apostrophe, comma, period and numbers. Since general-purpose tokenizers cannot correctly process text in all domains, it is essential that vocabulary builders have the flexibility to customize this process. Specifically, when building a new vocabulary, it is common to make minor modifications to the general-purpose tokenizer in order to correctly tokenize domain-specific strings.
Present systems address the need for context-specific tokenization, also referred to as domain-specific tokenization, in two ways. First, a vocabulary requiring special tokenization can be distributed without a vocabulary-specific tokenizer. The general-purpose tokenizer can then be used by the speech recognition system when the vocabulary becomes active, for instance during correction. In consequence, the tokenization used in building the vocabulary can differ from the tokenization used for updating the system language model. Second, as an alternative, a vocabulary requiring special tokenization can be distributed with a vocabulary-specific tokenizer which includes general purpose rules in addition to domain-specific rules.
Where a vocabulary requiring special tokenization is distributed without a vocabulary-specific tokenizer, inconsistencies can arise between the vocabulary and the personal language model. In the alternative case, where a vocabulary is distributed with a vocabulary-specific tokenizer, improvements or bug fixes directed toward future versions of the general-purpose tokenizer will require rebuilding and redistributing the vocabulary-specific tokenizer of the domain-specific vocabulary. Furthermore, external software developers building vocabularies will not be able to develop vocabulary-specific tokenizers, because they lack the proprietary knowledge of the speech recognition system necessary to develop the general-purpose tokenizer. Thus, no present system can process vocabulary-specific tokenization schemes while treating all vocabularies uniformly. Accordingly, there is a long-felt need for a flexible tokenization system with these capabilities.
SUMMARY OF THE INVENTION
A flexible tokenization system, in accordance with the inventive arrangements, satisfies the long-felt need of the prior art by processing vocabulary-specific tokenization schemes while treating all vocabularies uniformly. Thus, the inventive arrangements provide a method and system for supporting customized tokenization of domain-specific text. The inventive arrangements have advantages over all known tokenization methods, and provide a novel and nonobvious system, including apparatus and method, for supporting customized tokenization of domain-specific text. A method for supporting customized tokenization of domain-specific text in accordance with the inventive arrangements comprises the steps of: loading domain-specific tokenization rules corresponding to the customized tokenization of the domain-specific text; tokenizing the domain-specific text using the loaded domain-specific tokenization rules; and, further tokenizing the domain-specific text using general-purpose tokenization rules.
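The two-pass structure of the claimed method (domain-specific rules first, then general-purpose rules) can be sketched as follows. This is only an illustration under assumed rule formats; the example medical rules and rule representation as (regular expression, replacement) pairs are hypothetical, not taken from the patent.

```python
import re

def apply_rules(text, rules):
    """Apply an ordered list of (regular expression, replacement) rules."""
    for pattern, replacement in rules:
        text = re.sub(pattern, replacement, text)
    return text

def two_pass_tokenize(text, domain_rules, general_rules):
    """Tokenize with domain-specific rules first, then with the
    general-purpose rules, mirroring the two-pass method above."""
    text = apply_rules(text, domain_rules)   # domain-specific pass
    return apply_rules(text, general_rules)  # general-purpose pass

# Hypothetical medical-domain rule: expand "q.d." to its spoken form.
domain_rules = [(r"\bq\.d\.", "once daily")]
# Hypothetical general-purpose rule: expand a dosage written form.
general_rules = [(r"(\d+)mg\b", r"\1 milligrams")]

print(two_pass_tokenize("Take 10mg q.d.", domain_rules, general_rules))
# → Take 10 milligrams once daily
```

Because the domain pass runs first, a domain rule can claim a substring before the general-purpose rules see it, which is what keeps domain-specific written forms from being mangled by the generic rules.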
The loading step can comprise: loading a speech recognition vocabulary; and, loading domain-specific tokenization rules corresponding to the speech recognition vocabulary. In addition, the tokenizing step can comprise identifying each substring in the domain-specific text matching a regular expression having a corresponding replacement pattern in the loaded domain-specific tokenization rules, and replacing each substring identified in the identifying step with the replacement pattern corresponding to the matched regular expression. However, to accommodate the circumstance where the exclusion of particular domain-specific text from domain-specific tokenization is desired, the tokenizing step can comprise identifying substrings in the domain-specific text matching a regular expression having a corresponding replacement pattern in the loaded domain-specific tokenization rules; excluding from further processing the identified substrings having a do-not-replace marker associated therewith; and, replacing each non-excluded identified substring with the replacement pattern corresponding to the matched regular expression.
Alternatively, the loading step can comprise first loading an active vocabulary, identifying domain-specific tokenization rules corresponding to the active vocabulary, and second loading the domain-specific tokenization rules identified in the identifying step. In the alternative case, the tokenizing step can comprise checking for the second loaded domain-specific tokenization rules, and processing the domain-specific text using the second loaded domain-specific tokenization rules only if the second loaded domain-specific tokenization rules are identified in the checking step. Moreover, the processing step can comprise identifying each substring in the domain-specific text matching a regular expression having a corresponding replacement pattern in the second loaded domain-specific tokenization rules, and replacing each substring identified in the identifying step with the replacement pattern corresponding to the matched regular expression. As before, to accommodate the circumstance where the exclusion of particular domain-specific text from domain-specific tokenization is desired, the tokenizing step can comprise identifying substrings in the domain-specific text matching a regular expression having a corresponding replacement pattern in the second loaded domain-specific tokenization rules; excluding from further processing the identified substrings having a do-not-replace marker associated therewith; and, replacing each non-excluded identified substring with the replacement pattern corresponding to the matched regular expression.
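The alternative loading path (look up rules for the active vocabulary, and apply them only if they exist) can be sketched as below. The registry, vocabulary names, and rules here are hypothetical placeholders, not part of the patent.

```python
import re

# Hypothetical registry mapping vocabulary names to their domain rules.
DOMAIN_RULES = {
    "radiology": [(r"\bCXR\b", "chest x-ray")],
}

def tokenize_for_vocabulary(text, active_vocabulary, general_rules):
    """Check for domain rules keyed by the active vocabulary; apply them
    only when found, then always apply the general-purpose rules."""
    rules = DOMAIN_RULES.get(active_vocabulary)  # the "checking" step
    if rules is not None:                        # process only when found
        for pattern, replacement in rules:
            text = re.sub(pattern, replacement, text)
    for pattern, replacement in general_rules:
        text = re.sub(pattern, replacement, text)
    return text

general = [(r"\s+", " ")]  # hypothetical general rule: collapse whitespace
print(tokenize_for_vocabulary("CXR  normal", "radiology", general))
# → chest x-ray normal
print(tokenize_for_vocabulary("CXR  normal", "legal", general))
# → CXR normal
```

Keying the rules to the active vocabulary is what lets one shared tokenizer treat every vocabulary uniformly: a vocabulary without special rules simply falls through to the general-purpose pass.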


REFERENCES:
patent: 4991094 (1991-02-01), Fagan et al.
patent: 5687384 (1997-11-01), Nagase
patent: 5721939 (1998-02-01), Kaplan
patent: 5774888 (1998-06-01), Light
patent: 5890103 (1999-03-01), Carus
patent: 5937422 (1999-08-01), Nelson et al.
patent: 5960384 (1999-09-01), Brash
patent: 6125377 (2000-09-01), Razin
patent: EP 287 310 A2 (1988-10-01), None


Profile ID: LFUS-PAI-O-2575726
