Method for performing clustering in very large databases

Data processing: database and file management or data structures – Database design – Data structure types

Details

Type: Reexamination Certificate
Status: active
Patent number: 06496834

ABSTRACT:

FIELD OF THE INVENTION
This invention relates in general to a relational database management system, and in particular, to an analytic algorithm implemented in SQL for performing cluster analysis in very large databases.
BACKGROUND OF THE INVENTION
Relational databases are the predominant form of database management system used in computer systems. Relational database management systems are often used in so-called “data warehouse” applications, where enormous amounts of data are stored and processed. In recent years, several trends have converged to create a new class of data warehousing applications known as data mining applications. Data mining is the process of identifying and interpreting patterns in databases, and can be generalized into three stages.
Stage one is the reporting stage, which analyzes the data to determine what happened. Generally, most data warehouse implementations start with a focused application in a specific functional area of the business. These applications usually focus on reporting historical snapshots of business information that was previously difficult or impossible to access. Examples include Sales Revenue Reporting, Production Reporting, and Inventory Reporting, to name a few.
Stage two is the analyzing stage, which analyzes the data to determine why it happened. As stage one end-users gain previously unseen views of their business, they quickly seek to understand why certain events occurred, for example, a decline in sales revenue. After discovering a reported decline in sales, data warehouse users will then ask, “Why did sales go down?” Learning the answer to this question typically involves probing the database through an iterative series of ad hoc or multidimensional queries until the root cause of the condition is discovered. Examples include Sales Analysis, Inventory Analysis, and Production Analysis.
Stage three is the predicting stage, which tries to determine what will happen. As stage two users become more sophisticated, they begin to extend their analysis to include prediction of unknown events. For example, “Which end-users are likely to buy a particular product?” or “Who is at risk of leaving for the competition?” It is difficult for humans to see or interpret subtle relationships in data; hence, as data warehouse users move toward sophisticated predictive analysis, they soon reach the limits of traditional query and reporting tools. Data mining helps end-users break through these limitations by leveraging intelligent software tools to shift some of the analysis burden from the human to the machine, enabling the discovery of relationships that were previously unknown.
Many data mining technologies are available, from single-algorithm solutions to complete tool suites. Most of these technologies, however, are used in desktop environments where only small amounts of data are captured and maintained. Therefore, most data mining tools analyze small data samples gathered from various sources into proprietary data structures or flat files. On the other hand, organizations are beginning to amass very large databases, and end-users are asking more complex questions that require access to these large databases.
Unfortunately, most data mining technologies cannot be used with large volumes of data. Further, most analytical techniques used in data mining are algorithm-based rather than data-driven, and as such, there is currently little synergy between data mining and data warehouses. Moreover, from a usability perspective, traditional data mining techniques are too complex for database administrators and application programmers to use, and are too difficult to adapt to a different industry or a different customer.
One analytic algorithm that performs the task of modeling multidimensional data is “cluster analysis.” Cluster analysis finds groupings in the data and identifies homogeneous groupings as clusters. If the database is large, the cluster analysis must be scalable so that it can be completed within a practical time limit.
In the prior art, cluster analysis typically does not work well with large databases, due to memory limitations and the execution times required. Because existing techniques cannot handle large volumes of data, the problem of finding clusters in massive amounts of detailed data has often been addressed by data reduction or sampling. However, data reduction or sampling results in the potential loss of information.
Thus, there is a need in the art for data mining applications that directly operate against data warehouses, and that allow non-statisticians to benefit from advanced mathematical techniques available in a relational environment.
SUMMARY OF THE INVENTION
To overcome the limitations in the prior art described above, and to overcome other limitations that will become apparent upon reading and understanding the present specification, the present invention discloses a method for performing cluster analysis in a relational database management system using an analytic algorithm implemented in SQL. The method defines a plurality of tables for the storage of data points and Gaussian mixture parameters, and executes a series of SQL statements implementing a clustering algorithm to update the Gaussian mixture parameters stored within the tables.
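For context, the standard Expectation-Maximization updates for a Gaussian mixture model are sketched below. The notation is ours, chosen to mirror the table names used in the embodiment that follows (y_i for the data points, and C_j, R_j, w_j for the per-cluster means, covariances, and weights); it is a gloss on the general algorithm, not the patent's claim language. The mixture models the data as

    p(y_i) = \sum_{j=1}^{k} w_j \, N(y_i \mid C_j, R_j), \qquad
    N(y \mid C, R) = (2\pi)^{-p/2} \, |R|^{-1/2} \exp\!\Big( -\tfrac{1}{2} (y - C)^{\top} R^{-1} (y - C) \Big).

The E-step computes the responsibility of cluster j for point y_i,

    x_{ij} = \frac{ w_j \, N(y_i \mid C_j, R_j) }{ \sum_{l=1}^{k} w_l \, N(y_i \mid C_l, R_l) },

and the M-step re-estimates the parameters from those responsibilities:

    C_j = \frac{ \sum_{i=1}^{n} x_{ij} \, y_i }{ \sum_{i=1}^{n} x_{ij} }, \qquad
    R_j = \frac{ \sum_{i=1}^{n} x_{ij} \, (y_i - C_j)(y_i - C_j)^{\top} }{ \sum_{i=1}^{n} x_{ij} }, \qquad
    w_j = \frac{1}{n} \sum_{i=1}^{n} x_{ij}.

Each of these quantities is a sum over the data, which is what makes a formulation in terms of SQL grouped aggregates natural.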
In the described embodiment, the method is applied within a relational database management system to group a set of n data points into k clusters, each data point having dimensionality p. A first table, C, having p columns and k rows, is established for the storage of means values, each row of C representing a cluster; a second table, R, having p columns and p rows, is established for the storage of covariance values; and a third table, W, having k columns and one row, is established for the storage of weight values. SQL commands implementing an Expectation-Maximization clustering algorithm are executed to iteratively update the means, covariance, and weight values stored in the first, second, and third tables.
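As a concrete illustration of this layout, here is a minimal SQL sketch for p = 2 dimensions and k = 3 clusters. Only the table names C, R, and W come from the text above; every column name, the data table Y, and the responsibility table X are hypothetical placeholders, and the single UPDATE shown is just the weight update w_j from the EM equations above, not the patent's actual statement series.

    -- Hypothetical layout for p = 2 dimensions and k = 3 clusters.
    CREATE TABLE C (d1 FLOAT, d2 FLOAT);                -- means: k rows, one per cluster; p columns
    CREATE TABLE R (d1 FLOAT, d2 FLOAT);                -- covariance values: p rows by p columns
    CREATE TABLE W (w1 FLOAT, w2 FLOAT, w3 FLOAT);      -- weights: one row with k columns

    -- Hypothetical supporting tables (not part of the claimed layout):
    CREATE TABLE Y (i INTEGER, d1 FLOAT, d2 FLOAT);     -- the n data points
    CREATE TABLE X (i INTEGER, j INTEGER, resp FLOAT);  -- E-step responsibilities x_ij

    -- Illustrative M-step fragment: recompute one mixture weight,
    -- w_1 = (sum over i of x_i1) / n, entirely inside the database.
    UPDATE W
    SET w1 = (SELECT SUM(resp) FROM X WHERE j = 1)
           / (SELECT COUNT(*) FROM Y);

Note that the bare p-column layout of C leaves the cluster identity implicit in row order; a practical implementation would likely add an explicit cluster-key column. Because each step reduces to scans and aggregates like the one above, the computation runs inside the database engine and scales with the RDBMS rather than with client memory.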
The above and other objects, features, and advantages of the present invention will become apparent from the following description and the attached drawings.


REFERENCES:
patent: 5872850 (1999-02-01), Klein et al.
patent: 6012058 (2000-01-01), Fayyad et al.
patent: 6049777 (2000-04-01), Sheena et al.
patent: 6421665 (2002-07-01), Brye et al.
patent: 6438552 (2002-08-01), Tate

Profile ID: LFUS-PAI-O-2971387