Method and mechanism for profiling a system
Type: Reexamination Certificate
Filed: 2003-02-26
Issued: 2004-07-06
Examiner: Marc S. Hoff (Department: 2857)
US Class: Data processing: measuring, calibrating, or testing; Measurement system; Performance or efficiency evaluation
Cross-Reference Classes: C702S108000, C702S113000, C702S118000, C702S183000, C702S184000
Status: Active
Patent Number: 6,760,684
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to the sizing of a computer system for a computer application or activity.
2. Background
Since the introduction of computer systems, there has been a need for a method of predicting the hardware investment needed to run software applications. For example, there has been a need to determine the central processing unit (CPU) requirements, volatile memory requirements (e.g., cache memory or random access memory (RAM)), and mass storage requirements (e.g., hard disk capacity) of a computing system that is capable of running a given software application at acceptable performance levels. The process of determining the appropriate hardware is referred to as “sizing” the system.
If the system is not adequately equipped, or is sized too small, the system may suffer from performance problems or be nonfunctional. On the other hand, if the system is oversized, then investments are being made for superfluous hardware and performance capability.
Specific component benchmarks can be used to test the speed of components in a system. These benchmark tests can include tests for processor, disk, and memory performance. However, such benchmarks test only the raw speed of individual components, not the speed of the system as a whole. The performance of software applications is often constrained by the interaction between the components in a system. Thus, component benchmarks by themselves do not provide an accurate basis for sizing a system.
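To make this limitation concrete, the following is a minimal, hypothetical sketch of two component microbenchmarks in Python; the function names and workload sizes are illustrative assumptions, not part of the patent. Each reports a raw speed for one component in isolation, and neither number reflects how CPU, memory, and disk interact under a real application.

```python
import time

def bench_cpu(iterations=10_000_000):
    """Raw CPU benchmark: time a tight arithmetic loop in isolation."""
    start = time.perf_counter()
    total = 0
    for i in range(iterations):
        total += i * i
    return iterations / (time.perf_counter() - start)   # ops/sec

def bench_memory(size_mb=64):
    """Raw memory benchmark: time a large in-memory copy in isolation."""
    data = bytearray(size_mb * 1024 * 1024)
    start = time.perf_counter()
    _ = bytes(data)                                      # force the copy
    return size_mb / (time.perf_counter() - start)       # MB/sec

# Each figure describes one component by itself; the interactions that
# dominate real workloads (CPU stalls on memory, I/O waits) are invisible.
print(f"CPU:    {bench_cpu():,.0f} ops/sec")
print(f"Memory: {bench_memory():,.1f} MB/sec")
```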
One approach for sizing a system is to use benchmark simulations to determine how a particular system would handle a potential or projected system workload. In this approach, a benchmark system is constructed having a particular configuration of components, such as CPU, RAM, and hard disk components. The benchmark system is run at operating parameters corresponding to the projected workload. This process may encompass the construction and testing of multiple benchmark systems at various levels of operating capacity. By observing the operation of each system, it can be determined whether any of the benchmark systems would be suitable for the projected workload. The operating efficiencies of the multiple systems are compared so that an appropriate system configuration can be selected.
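As a sketch of the final compare-and-select step of this approach (the class and function names here are hypothetical), note that the code is trivial; the expensive work of physically building and running each benchmark system happens before this point.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkResult:
    """One physically constructed benchmark system and its measured behavior."""
    config: str          # e.g., "4 CPUs / 8 GB RAM / RAID disk array"
    cost_usd: float      # cost to construct this benchmark system
    throughput: float    # measured transactions/sec at the projected workload
    utilization: float   # measured peak resource utilization (0.0-1.0)

def select_system(results, required_throughput, max_utilization=0.8):
    """Return the cheapest benchmark system that handled the projected workload."""
    suitable = [r for r in results
                if r.throughput >= required_throughput
                and r.utilization <= max_utilization]
    return min(suitable, key=lambda r: r.cost_usd, default=None)
```

Producing each BenchmarkResult requires constructing and operating a real system, which is the drawback discussed next.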
Although such an approach may eventually identify a computing system that is appropriate for the needs of the projected workload, such benchmark simulations are typically costly in terms of time and budget. That is, the equipment and labor costs of constructing multiple systems at different capability levels may be prohibitively expensive. Moreover, such a process is inefficient to perform, since each variation of component combinations must be built if it is to be benchmarked. If a customer seeks to determine, for example, how a particular brand or type of CPU would function with the projected workload, a new system having that brand or type of CPU would have to be constructed and tested. In addition, the information derived from this type of benchmark quickly becomes obsolete as the hardware used to derive the information becomes obsolete.
As the cost of candidate systems decreases, the cost effectiveness of benchmark simulations dramatically diminishes. For example, if a candidate system costs several million dollars, then spending several hundred thousand dollars on a benchmark simulation can be justified. However, if a candidate system costs several tens of thousands of dollars, then spending that same amount is no longer feasible. Thus, benchmark simulations may not be suitable for many system purchasers.
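The arithmetic behind this claim, using the paragraph's illustrative figures:

```python
def benchmark_cost_ratio(system_cost_usd, benchmark_cost_usd):
    """Benchmark cost expressed as a fraction of the candidate system's cost."""
    return benchmark_cost_usd / system_cost_usd

# A $300k benchmark against a $3M system: 10% of the purchase price.
print(f"{benchmark_cost_ratio(3_000_000, 300_000):.0%}")  # 10%
# The same $300k benchmark against a $30k system: 10x the purchase price.
print(f"{benchmark_cost_ratio(30_000, 300_000):.0%}")     # 1000%
```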
Another approach is to size a system based upon a comparison with existing computing systems. In this approach, a determination is made whether other systems exist that run workloads equivalent to the projected workload. If such similar systems exist, then a comparison is made of the performance and operating requirements between the workload of the existing systems and that of the projected workload. Based upon these comparisons, the configuration of the existing system is modified in an attempt to meet the requirements of the projected workload.
This approach also suffers from drawbacks. First, it may be extremely difficult to find existing systems whose applications, organizations, and workloads are even remotely similar to those of the projected workload. Furthermore, attempting to match or modify a system used by others to meet the particular needs of the current workload is challenging and inexact.
Another approach is to characterize a workload in terms of the quantity of hardware elements that would be used by that workload. In this approach, the workload is analyzed and characterized as a set of hardware components or units, and the quantity of hardware units for that workload then determines the configuration of a system capable of operating that workload. The analysis is performed with respect to a specific set of known hardware capabilities, so that an accurate characterization can be performed. For example, consider a candidate CPU that operates at 300 MHz. By breaking the workload into fundamental CPU elements (i.e., the quantity of 300 MHz CPUs needed to perform the workload), the sizing method may determine that two such CPUs are needed for the workload.
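A minimal sketch of this hardware-unit calculation, assuming the workload's CPU demand has already been estimated in megacycles per second (a hypothetical unit chosen for this illustration):

```python
import math

def cpus_required(workload_mcycles_per_sec, reference_cpu_mhz=300.0):
    """Express a workload's CPU demand as a count of reference CPUs."""
    return math.ceil(workload_mcycles_per_sec / reference_cpu_mhz)

# A workload demanding ~450 megacycles/sec maps to two 300 MHz CPUs.
print(cpus_required(450.0))  # 2
```

The result is meaningful only relative to the 300 MHz reference CPU; substituting a different candidate CPU invalidates every derived count, which is the drawback described next.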
The drawback to this approach is that the analysis is directly tied to the capabilities of the particular hardware components that are used to derive the hardware units. Since the analysis is tied to a particular set of hardware components, changing the candidate hardware system requires the entire analysis of the workload to be repeated. This is particularly inefficient when only minor changes to the system are needed.
Standard benchmarks are performed for certain software applications. For example, TPC benchmarks can be performed to determine the relative performance of hardware systems on database operations. There are benchmarks for different kinds of database workloads: TPC-C benchmarks simulate online transaction processing (“OLTP”) workloads, and TPC-D benchmarks simulate decision support services (“DSS”) workloads. These benchmarks indicate how a system performs on a specific benchmark workload. However, real-world workloads may differ significantly from these standard benchmarks; thus, sizing a system using these benchmark results may yield a faulty sizing analysis. This approach is suitable for comparing different types of hardware for general database operations, but it is not well suited for matching a specific system with a specific workload.
Application-specific benchmarks may be used to size specific software applications. However, there is a large overhead associated with creating and implementing these specific benchmarks. Thus, it is not economically feasible to implement such benchmarks for all software applications.
Therefore, there is a need for a method and mechanism to address these and other issues regarding the sizing of a computer system.
SUMMARY OF THE INVENTION
The invention is directed to a method and mechanism for sizing a hardware system for a software workload. An aspect of the invention is directed to modeling a workload into a set of generic system activities, in which the generic system activities are not directly tied to a specific hardware platform. Another aspect of the invention is directed to profiling hardware systems or hardware components into the generic system activities. Suitable hardware systems or components are selected by analyzing the workload and hardware profiles in terms of the generic system activities.
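The patent does not specify an implementation, but a minimal sketch of the matching idea might look like the following, where the activity names, units, and covering test are illustrative assumptions rather than the patent's actual method:

```python
# Generic system activities: hardware-neutral units of work.
WORKLOAD_PROFILE = {              # demand, in activity-units per second
    "sequential_read": 1200.0,
    "random_read": 300.0,
    "compute": 5000.0,
}

HARDWARE_PROFILES = {             # capacity, in the same activity-units
    "system_a": {"sequential_read": 2000.0, "random_read": 250.0, "compute": 8000.0},
    "system_b": {"sequential_read": 1500.0, "random_read": 400.0, "compute": 6000.0},
}

def suitable_systems(workload, hardware_profiles):
    """Select systems whose capacity covers the workload in every generic activity."""
    return [name for name, capacity in hardware_profiles.items()
            if all(capacity.get(activity, 0.0) >= demand
                   for activity, demand in workload.items())]

print(suitable_systems(WORKLOAD_PROFILE, HARDWARE_PROFILES))  # ['system_b']
```

Because the workload is profiled once in hardware-neutral terms, evaluating a new candidate system only requires adding its hardware profile; the workload analysis is not repeated.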
Inventors: Andrew Rist; Tao-Heng Yang
Attorney, Agent, or Firm: Bingham & McCutchen LLP
Primary Examiner: Marc S. Hoff
Assistant Examiner: Carol S. W. Tsai
Assignee: Oracle International Corporation