Data processing: database and file management or data structures – Database design – Data structure types
Reexamination Certificate
2001-03-01
2003-11-04
Metjahic, Safet (Department: 2171)
active
06643646
BACKGROUND OF THE INVENTION
The present invention is related to the general field of database processing and in particular is directed to analysis, interpretation and comparison of data of large databases or data warehouse environments. Specifically, a major objective of the invention is to provide a method for automatically and efficiently interpreting such data.
Recent years have seen a proliferation of data collection. Due in large part to the growing use of computers in every facet of the business world, massive amounts of data can now be assembled and maintained in very large databases and data warehouses. (A data warehouse is a collection of data designed to support management decision making, together with a database system for maintaining the data and systems for extracting desired portions of the data.) Unfortunately, as capable as today's computing technology may be at gathering, organizing, and maintaining such large accumulations of data, it fails miserably at processing and analyzing such massive amounts of data. This inability to unearth and pull relevant business insights, interesting patterns, and like observations from large quantities of data makes collecting such data less useful. Even searching such databases and data warehouses for relevant relationships, in an effort to gain some insight into and observations about buying patterns, for example, can be a daunting task, due not only to the huge amounts of data available for review and analysis but also to the limited capability of today's search technology.
There are tools and techniques available today, e.g., association rules designed to find relationships among millions of database records, to help business and data analysts gain a better understanding of their data. But it is not intuitive or obvious how an analysis of such a large amount of data should be focused, what new knowledge should be extracted from the database, or how to then interpret and evaluate that new knowledge.
Among the tools available are those employing various “data mining” algorithms that operate to sift through a large database or data warehouse in an effort to locate patterns. Data mining techniques typically classify and/or cluster data, find associations between data items, or perform time series analysis. (See, for example, Agrawal, R. et al., in Advances in Knowledge Discovery and Data Mining, AAAI/MIT Press, Ch. 12:307-328 (1996); Ashok Sarasere et al., in 21st Int'l Conf. on Very Large Databases (VLDB), Zurich, Switzerland (September 1995).) For example, a data mining technique may be used to sift through a database maintained by a retail outlet for records relating to expensive purchases, in an effort to market to a specific customer segment. However, use of data mining tools requires careful selection of search variables in order to obtain meaningful data and/or data relationships. Lack of a key variable in a search can result in an output that may be incorrectly interpreted, or simply undecipherable.
One data mining technique, termed the “Patient Rule Induction Method” or “PRIM,” is structured to find a high-average region within a very large collection of data records. Typically, a data record will consist of variables. To employ PRIM, a user selects certain of the variables to form a set of input variables and one output variable. The user also selects a minimum size for the desired region. The selected variables and region size are then input to PRIM. PRIM then finds regions where the output variable has a high average value compared to the average value over the entire set of records. PRIM can also be used to find regions with minimum average value by maximizing the negated values of the output variable. The region found by PRIM is defined by a subset of attribute values. For an analytical description of PRIM and the algorithms it employs, see Friedman, J. et al., Statistics and Computing, 9:2, pp. 123-143 (April 1999).
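The region-finding idea described above can be sketched as a greedy “peeling” loop, following the general approach of Friedman et al. This is an illustrative simplification, not the patented method: the peel fraction ALPHA, the minimum region size MIN_SUPPORT, and the record layout are all hypothetical choices made for the example.

```python
# Sketch of PRIM-style peeling (illustrative only; see Friedman, J. et al.,
# Statistics and Computing 9:2 (1999) for the actual algorithm).
# Records are dicts of numeric input variables plus one output variable.

ALPHA = 0.10        # hypothetical: fraction of records peeled per step
MIN_SUPPORT = 0.20  # hypothetical: smallest allowed region, as a fraction

def prim_peel(records, input_vars, output_var):
    """Shrink a box around the high-output region by greedy peeling."""
    box = list(records)
    n_total = len(records)
    while len(box) / n_total > MIN_SUPPORT:
        best_mean, best_box = None, None
        k = max(1, int(ALPHA * len(box)))
        for var in input_vars:
            ordered = sorted(box, key=lambda r: r[var])
            # candidate peels: drop the lowest-k or highest-k values of var
            for candidate in (ordered[k:], ordered[:-k]):
                mean = sum(r[output_var] for r in candidate) / len(candidate)
                if best_mean is None or mean > best_mean:
                    best_mean, best_box = mean, candidate
        current = sum(r[output_var] for r in box) / len(box)
        if best_mean <= current:
            break  # no peel improves the average; stop
        box = best_box
    return box
```

With a toy data set in which the output variable is high only where an input variable exceeds a threshold, the loop peels away low-output records until only the high-average region remains.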
Another data mining tool is Weighted Item Sets (WIS), a type of association rule. This tool finds relationships between various attributes in a database; some of the attributes can be derived measures. The relationships are defined in terms of if-then rules that show the frequency of records appearing in the database that satisfy the rule. An example of WIS can be found in U.S. Pat. No. 5,173,280.
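The if-then rules described above can be illustrated with the standard support and confidence measures of association rules. The optional per-record weight gestures at the weighted-item-set idea only in spirit; the actual WIS scoring is defined in U.S. Pat. No. 5,173,280 and is not reproduced here.

```python
# Illustrative if-then association rule over database records (not the WIS
# method of U.S. Pat. No. 5,173,280; weight() is a hypothetical hook).

def rule_stats(records, if_cond, then_cond, weight=lambda r: 1.0):
    """Return (support, confidence) of the rule if_cond => then_cond."""
    total = sum(weight(r) for r in records)
    body = sum(weight(r) for r in records if if_cond(r))
    both = sum(weight(r) for r in records if if_cond(r) and then_cond(r))
    support = both / total if total else 0.0       # how often both sides hold
    confidence = both / body if body else 0.0      # how often "then" follows "if"
    return support, confidence
```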
Another analysis tool for databases or data warehouses with massive amounts of data items or records is the On-line Analytical Processing (“OLAP”) technique. A number of commercially available products have been built to employ this technique, e.g., Cognos' Enterprise OLAP and PowerPlay, Business Objects Inc.'s Business Objects, Informix's MetaCube, Platinum's InfoBeacon, MicroStrategy's DSS Agent, Oracle's Express, etc. All of these products offer similar functionality.
OLAP typically includes the following kinds of analyses: simple (view one or more measures which can be sorted and totaled), comparison or cross-tab (view one measure and sort or total based upon two dimensions), trend (view a measure over time), variance (compare one measure at different times such as sales and sales a year ago), and ranking (top 10 or bottom 10 products sold) [Peterson, T. et al., SAMS Publishing (1999)]. OLAP enables users to drill down within a dimension to see more detailed data at various levels of aggregation.
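The comparison or cross-tab analysis mentioned above, viewing one measure totaled over two dimensions, can be sketched as a simple group-and-sum. The field names (region, quarter, sales) are illustrative, not taken from the patent or any particular OLAP product.

```python
from collections import defaultdict

# Sketch of an OLAP-style cross-tab: total one measure over two dimensions.
# Field names are illustrative placeholders.

def cross_tab(records, dim1, dim2, measure):
    """Total `measure` for each (dim1, dim2) pair of attribute values."""
    totals = defaultdict(float)
    for r in records:
        totals[(r[dim1], r[dim2])] += r[measure]
    return dict(totals)
```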
Users can also filter data with the OLAP technique, i.e., focus their analysis on a subset of records in the database. For example, a user interacting with a retail chain store database may only be interested in “West Coast” stores. Users need to know which attribute or attributes they want to use to set up filter conditions. Users also need to know how to define the filtering conditions; OLAP enables users to filter records based only upon arithmetic conditions on one or more database attributes or a “where” clause in a SQL statement.
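The “West Coast” filtering example above amounts to applying a predicate to each record, the equivalent of a SQL “where” clause. The store records and field names here are hypothetical.

```python
# Sketch of OLAP-style filtering: restrict analysis to a subset of records,
# equivalent to:  SELECT * FROM stores WHERE region = 'West Coast'
# The data and field names are illustrative placeholders.

stores = [
    {"store": "S1", "region": "West Coast", "sales": 120.0},
    {"store": "S2", "region": "Midwest",    "sales": 95.0},
    {"store": "S3", "region": "West Coast", "sales": 80.0},
]

def filter_records(records, predicate):
    """Keep only the records satisfying the filter condition."""
    return [r for r in records if predicate(r)]

west_coast = filter_records(stores, lambda r: r["region"] == "West Coast")
```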
In addition to the analysis tools and techniques described above, there is also what is known as the Knowledge Discovery in Databases (KDD) process. KDD and data mining conferences have been held since 1989. This new field has produced a widely followed and accepted KDD process, comprising selecting data, pre-processing or editing data, transforming data, performing data mining, and evaluating/interpreting the findings. See Fayyad, U. et al., “The KDD Process for Extracting Useful Knowledge from Volumes of Data,” Communications of the ACM 39, 11, pp. 27-34 (November 1996). The KDD process is “The nontrivial process of identifying valid, novel, potentially useful, and ultimately understandable patterns in data.” (Fayyad, U. et al., supra.) This process assumes that a knowledge engineer or domain expert will evaluate and interpret the findings.
There is a recent trend in the industry to integrate data mining techniques with OLAP tools. See, for example, “OLAP Vendors Increasingly See Data Mining Integration as Potent Differentiator”, http://www.idagroup.com/v2n0701.html; “OLAP and Data Mining: Bridging the Gap”, http://www.dbpd.com/vault/parsfeb.html. The purpose of the integration of these tools is to give analysts the flexibility to choose whether to use OLAP to view and aggregate data, or data mining techniques to better analyze attributes. Users can use these tools in any combination.
There are limitations to this integration approach. Typically, these tools are packaged in a software product, but little or no guidance is given to users on how they should use the tools in conjunction with one another. These tools solve different types of problems, so it is difficult to use them to support one another. Also, these tools do not always present results in an easy-to-understand manner. For example, a user can look at a WIS rule or PRIM region definition and understand the attributes and values. However, users may miss the meaning of the pattern or an explanation for its occurrence. That is, a user cannot easily look at a SQL statement describing a PRIM region and in
Dolins Steven
Herman Charles
Sakata Taiki
Su Hua-Ching
Al-Hashemi Sana
Metjahic Safet
Townsend and Townsend and Crew LLP