System and method for interpreting document contents

Data processing: database and file management or data structures – Database design – Data structure types

Reexamination Certificate


Details

C707S793000

Reexamination Certificate

active

06772170

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates generally to a method for automatically analyzing and characterizing the content of electronically formatted natural language based documents.
BACKGROUND OF THE INVENTION
As the global economy has become increasingly driven by the skillful synthesis of information across all disciplines, be they scientific, economic, or otherwise, the sheer volume of information available for use in such a synthesis has rapidly expanded. This has resulted in an ever-increasing value for systems or methods which are able to analyze information and separate information relevant to a particular problem or useful in a particular inquiry from information that is not relevant or useful. The vast majority of information available for such synthesis, 95% according to estimates by the National Institute of Standards and Technology (NIST), is in the form of written natural language. The traditional method of analyzing and characterizing information in the form of written natural language is simply to read it. However, this approach is increasingly unsatisfactory as the sheer volume of information outpaces the time available for manual review. Thus, several methodologies for automating the analysis and characterization of such information have arisen. Typical for such schemes is the requirement that the information be presented in, or converted to, an electronic form or database, thereby allowing the database to be manipulated by a computer system according to a variety of algorithms designed to analyze and/or characterize the information available in the database. For example, vector-based systems using first-order statistics have been developed which attempt to define relationships between documents based upon simple characteristics of the documents, such as word counts.
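For concreteness, a minimal sketch of such a first-order approach, relating documents by nothing more than their word counts, might look like the following; the function names and the cosine comparison are illustrative choices, not part of the patented system.

```python
from collections import Counter

def word_count_vector(document: str) -> Counter:
    """First-order statistics: a bag-of-words count for one document."""
    return Counter(document.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Relate two documents by the cosine of their word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm_a = sum(v * v for v in a.values()) ** 0.5
    norm_b = sum(v * v for v in b.values()) ** 0.5
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```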
The simplest of these methodologies is a simple search wherein a word or a word form is entered into the computer as a query and the computer compares the query to words contained in the documents in the database to determine if matches exist. If there are matches, the computer then returns a list of those documents within the database which contain a word or word form matching the query. This simple search methodology may be expanded by the addition of other Boolean operators to the query. For example, the computer may be asked to search for documents which contain both a first query and a second query, or a second query within a predetermined number of words from the first query, or for documents containing a query which consists of a series of terms, or for documents which contain a particular query but not another query. Whatever the particular parameters, the computer searches the database for documents which fit the required parameters, and those documents are then returned to the user.
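As an illustration only, such a Boolean keyword search over a document database might be sketched as follows; the parameter names (all_of, none_of, near) are hypothetical and simply mirror the AND, NOT, and proximity operators described above.

```python
def matches(document: str, all_of=(), none_of=(), near=None) -> bool:
    """Return True if the document satisfies a simple Boolean query.

    all_of : terms that must all appear (AND)
    none_of: terms that must not appear (NOT)
    near   : optional (term_a, term_b, max_distance) proximity constraint
    """
    words = document.lower().split()
    word_set = set(words)
    if any(term.lower() not in word_set for term in all_of):
        return False
    if any(term.lower() in word_set for term in none_of):
        return False
    if near is not None:
        a, b, max_distance = near
        pos_a = [i for i, w in enumerate(words) if w == a.lower()]
        pos_b = [i for i, w in enumerate(words) if w == b.lower()]
        if not any(abs(i - j) <= max_distance for i in pos_a for j in pos_b):
            return False
    return True

def search(database, **query):
    """Return the documents in the database that fit the query parameters."""
    return [doc for doc in database if matches(doc, **query)]
```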
Among the drawbacks of such schemes is the possibility that in a large database, even a very specific query may match a number of documents that is too large to be effectively reviewed by the user. Additionally, given any particular query, there exists the possibility that documents which would be relevant to the user may be overlooked because the documents do not contain the specific query term identified by the user; in other words, these systems often ignore word-to-word relationships, and thus require exacting queries to ensure meaningful search results. Because these systems tend to require exacting queries, these methods suffer from the drawback that the user must have some concept of the contents of the documents in order to draft a query which will generate the desired results. This presents the users of such systems with a fundamental paradox: in order to become familiar with a database, the user must ask the right questions or enter relevant queries; however, to ask the right questions or enter relevant queries, the user must already be familiar with the database.
To overcome these and other drawbacks, a number of methods have arisen which are intended to compare the contents of documents in an electronic database and thereby determine relationships between the documents. In this manner, documents that address similar subject matter but do not share common key words may be linked, and queries to the database are able to return relevant documents without requiring exacting specificity in the query parameters. For example, vector-based systems using higher-order statistics may be characterized by the generation of vectors which can be used to compare documents. By measuring conditional probabilities between and among words contained within the database, different terms may be linked together. However, these systems suffer from the drawback that they are unable to discern which words provide insight into the meaning of the documents that contain them. Other systems have sought to overcome this limitation by utilizing neural networks or other methods to capture the higher-order statistics required to compress the vector space. These systems suffer from considerable computational lag due to the large amount of information that they process. Thus, there exists a need for an automated system which will analyze and characterize a database of electronically formatted natural language based documents in a manner wherein the system output correlates documents within the database according to their meaning while minimizing required system resources.
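A minimal sketch of the kind of document-level conditional probability such higher-order approaches rely on, assuming each document is represented simply as a set of its words, might be:

```python
def conditional_probability(documents, word_a, word_b):
    """Estimate P(document contains word_a | document contains word_b)
    from document-level co-occurrence counts."""
    with_b = [doc for doc in documents if word_b in doc]
    if not with_b:
        return 0.0
    return sum(1 for doc in with_b if word_a in doc) / len(with_b)
```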
SUMMARY OF THE INVENTION
Accordingly, the present invention is a System for Information Discovery (SID). The overriding intent of the present invention is to provide a system for analyzing and characterizing a database of electronically formatted natural language based documents wherein the output is information concerning the content and structure of the underlying database in a form that correlates the meaning of the individual documents within the database. A sequence of word filters is used to eliminate terms in the database which do not discriminate document content, resulting in a filtered word set whose members are highly predictive of content. The filtered word set is then further reduced to determine a subset of topic words, characterized as the set of filtered words which best discriminate the content of the documents that contain them. These two word sets, the filtered word set and the topic word set, are then formed into a two-dimensional matrix. Matrix entries are then calculated as the conditional probability that a document will contain the word in a given row, given that it contains the word in the corresponding column of the matrix. The number of word correlations which must be computed is thus significantly reduced because each word in the filtered set is related only to the topic words, the topic word set being smaller than the filtered word set. The matrix representation thus captures the context of the filtered words and allows the resultant vectors to be utilized to interpret document contents with a wide variety of querying schemes. Surprisingly, while computational efficiency gains are realized by utilizing the reduced topic word set (as compared with creating a matrix in which the filtered word set forms both the columns and the rows), the ability of the resultant vectors to predict content is comparable or superior to approaches which consider word sets that have not been reduced either in the number of terms considered or in the number of correlations between terms.
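As a rough illustration of the matrix construction described above, and under the assumption that each document is supplied as a set of its words, the filtered-word-by-topic-word matrix of conditional probabilities could be computed as follows; this is a sketch of the idea, not the patented implementation, and each row of the result serves as a context vector for one filtered word.

```python
import numpy as np

def build_context_matrix(documents, filtered_words, topic_words):
    """Entry [i, j] estimates P(a document contains filtered_words[i]
    | it contains topic_words[j]) from document-level counts."""
    docs = [set(doc) for doc in documents]
    matrix = np.zeros((len(filtered_words), len(topic_words)))
    for j, topic in enumerate(topic_words):
        with_topic = [d for d in docs if topic in d]
        if not with_topic:
            continue
        for i, word in enumerate(filtered_words):
            both = sum(1 for d in with_topic if word in d)
            matrix[i, j] = both / len(with_topic)
    return matrix
```

Because the columns are restricted to the smaller topic word set, the number of conditional probabilities computed grows with |filtered| x |topic| rather than |filtered| squared, which is the source of the efficiency gain noted above.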
The first step of the system is to compress the vocabulary of the database through a series of filters. Three filters are employed: the frequency filter, the topicality filter, and the overlap filter. The frequency filter first measures the absolute number of occurrences of each of the words in the database and eliminates those which fall outside a predetermined range bounded by upper and lower frequency limits.
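A minimal sketch of the frequency filter step, assuming documents are supplied as lists of words and the bounds are absolute occurrence counts, might be:

```python
from collections import Counter

def frequency_filter(documents, lower, upper):
    """Keep only words whose absolute number of occurrences across the
    entire database falls within the predetermined [lower, upper] range."""
    counts = Counter(word for doc in documents for word in doc)
    return {word for word, count in counts.items() if lower <= count <= upper}
```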
The topicality filter then compares the placement of each word within the database with the expected placement assuming the word was randomly distributed throughout the database. By expressing the ratio between a value representing the actual pla
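A loose sketch of the comparison the topicality filter makes, under the assumption that the ratio contrasts the observed spread of a word across documents with the spread expected if its occurrences were placed at random throughout the database, might look like the following; the expected-count formula here is an illustrative choice, not the patent's.

```python
def topicality_ratio(documents, word):
    """Illustrative observed-versus-expected ratio for one word.

    Observed: how many documents actually contain the word.
    Expected: how many documents would contain it if its occurrences were
    scattered at random across all word positions in the database.
    """
    docs = [list(doc) for doc in documents]
    total_words = sum(len(d) for d in docs)
    if not total_words:
        return 0.0
    occurrences = sum(d.count(word) for d in docs)
    observed = sum(1 for d in docs if word in d)
    expected = sum(
        1 - (1 - len(d) / total_words) ** occurrences for d in docs
    )
    return observed / expected if expected else 0.0
```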
