Methods for predicting instruction execution efficiency in a...
Patent Number: 06807522
Reexamination Certificate (active)
Filed: 2001-02-16
Issued: 2004-10-19
Examiner: Frejd, Russell (Department: 2128)
Classification: Data processing: structural design, modeling, simulation, and emulation; Simulating electronic device or electrical system; Software program
U.S. Classes: 703/2; 709/224; 718/105
BACKGROUND OF THE INVENTION
1. Technical Field of the Invention
This invention relates generally to methods of predicting the instruction execution efficiency in a computer system, and more specifically, to methods of predicting the instruction execution efficiency in a proposed computer system having a proposed computer system architecture and configuration.
2. Description of the Prior Art
Modern computer systems can have a wide variety of computer architectures and configurations. To optimize efficiency, a computer system should have an architecture and configuration that is suitable for an expected load. If the architecture or configuration is excessive for a particular load, some of the computer resources will be wasted. If the architecture or configuration is not sufficiently robust for a particular load, the computer system will not provide adequate performance.
A high-performance desktop computer designed for multi-media or graphical applications often has a standard PC architecture, with a relatively large amount of Random Access Memory (RAM), large hard drives, and one or more processors with fairly high clock rates. Multi-media and graphical applications are often computationally and/or memory intensive, thereby requiring relatively large amounts of memory and processing capability. In contrast, a desktop computer system designed for office use may have a standard PC architecture, but will often have far less RAM, a smaller hard drive, and a single, lower-performance processor. The reduced computer resources of office-type systems are appropriate because of the fairly light load of many office applications, such as word processing.
For more complex computer systems, such as on-line transaction processing systems, both the architecture and the configuration of the computer system are often designed to accommodate the expected load. The overall throughput of such systems often depends on a number of inter-related factors including, for example, the overall architecture of the computer system, the configuration of the computer resources within the architecture, and the expected load and load type.
The architecture of a computer system may include, for example, the location of cache memory, the number of cache memory levels, the location of main memory, the location of processors within the system, the internal bus structure, the I/O structure, as well as other architectural details. The configuration of computer resources within the architecture may include, for example, the size and speed of each level of cache memory, and the number and speed of the processors.
The expected load should be taken into account when designing a computer system, and in particular, when selecting an architecture and/or configuration for the computer system. During the development of a computer system, the developer typically has some idea of the expected load for the system. Often, the expected load is estimated by examining the software that will be run on the system. To help design a robust computer system that can efficiently handle the expected load, it is important for the developer to have some way of evaluating the performance of a proposed computer system under that load before the system is fully developed. This allows the developer to evaluate many different computer architectures and/or configurations before selecting one for the particular application.
One measure of a computer system's performance is the computation time required to process a transaction. This can be derived from the computer system's minimum latency period and its queuing time, sometimes using an analytical model as described below. The queuing time can be computed from the resource utilization, which, in turn, can be computed from the computer system's speed in processing the transactions.
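As a hedged illustration of that derivation (a standard single-server queuing result from the modeling literature, not a formula recited in this patent): if a resource has mean service time S and transaction arrival rate \lambda, then

    \rho = \lambda S, \qquad T_q = \frac{\rho}{1 - \rho}\, S, \qquad T = S + T_q = \frac{S}{1 - \rho}

Here \rho is the resource utilization, T_q the queuing time, and T the total time per transaction. Utilization computed from transaction throughput thus yields the queuing time, which, added to the minimum latency, gives the computation time per transaction.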
Two elements play a key role in determining how efficiently a computer system executes user queries: the amount of memory and the processing capability of the system. The amount of memory affects how many instructions are required to retrieve the information necessary to complete a transaction. If, for example, the amount of memory in the computer system is relatively large, the information necessary to complete a transaction is more likely to be present in memory, and therefore it is less likely that the operating system will have to generate and submit additional instructions to access a disk or the like via an I/O channel.
The number and speed of the processors in the computer system can also affect how fast and efficiently a computer system executes user queries. As more processors are added, the instructions necessary to complete a particular transaction are executed faster. However, adding more processors increases the load on the memory, which increases the chance that the requested information will not be in memory. This, in turn, increases the chance that the operating system will have to generate and submit additional instructions to access a disk or the like via an I/O channel.
In addition, as more processors are added, more instruction cycles tend to be dedicated to overhead because of conflicts or other interactions between processors. For example, as more processors are added, more interrupts, dispatches, conflicts resulting in spin/lock loops, I/O locking conflicts, etc. are typically encountered, all of which reduce the efficiency of the computer system. Thus, there is an interplay between the amount of memory and the processing capability of a computer system that affects how fast and efficiently the computer system can execute user queries.
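The interplay described above can be made concrete with a rough, illustrative model. All parameter names and values below are hypothetical, chosen only to show the shape of the trade-off; they are not taken from the patent.

    def relative_throughput(num_cpus,
                            base_miss_rate=0.02,     # memory-miss rate with one CPU (hypothetical)
                            miss_growth=0.005,       # added miss rate per extra CPU (hypothetical)
                            miss_penalty=300,        # cycles lost per miss (hypothetical)
                            base_cpi=1.0,            # cycles per instruction on a hit (hypothetical)
                            overhead_per_cpu=0.03):  # locking/interrupt overhead per extra CPU
        """Illustrative throughput model: each added processor increases
        memory pressure (a higher miss rate) and coordination overhead
        (interrupts, dispatches, spin/lock loops), so total throughput
        scales sub-linearly with the number of processors."""
        miss_rate = base_miss_rate + miss_growth * (num_cpus - 1)
        cycles_per_instruction = base_cpi + miss_rate * miss_penalty
        overhead = 1.0 + overhead_per_cpu * (num_cpus - 1)
        return num_cpus / (cycles_per_instruction * overhead)

With these assumed numbers, eight processors deliver well under three times the throughput of one processor, not eight times, which is the qualitative behavior the preceding paragraphs describe.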
A primary way for a developer to evaluate and predict computer system performance is to develop computer performance models. Such models have traditionally been developed using either probabilistic evaluation (analytic models) or discrete event simulation programs (simulation models).
An analytic model is often defined to be a model that accepts moment estimators (such as mean arrival and service times) as its input and, using a closed form or iterative method, produces moment estimators for the desired statistics (such as average wait time). Analytic modeling has proven to be applicable in a wide range of computer system performance evaluation problems, and is the primary method used commercially today.
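A minimal sketch of such an analytic model, assuming a multi-server queue with Poisson arrivals and exponential service (the classic M/M/c result from queuing theory, offered only as a representative example of a closed-form model, not the patent's own method):

    from math import factorial

    def mmc_average_wait(arrival_rate, service_rate, num_servers):
        """Closed-form analytic model: accepts moment estimators (mean
        arrival rate and mean service rate) and returns a moment estimator
        (average wait in queue), using the Erlang C formula for M/M/c."""
        a = arrival_rate / service_rate   # offered load
        rho = a / num_servers             # per-server utilization
        if rho >= 1.0:
            raise ValueError("system is unstable: utilization must be < 1")
        # Erlang C: probability that an arriving job must wait
        top = a ** num_servers / (factorial(num_servers) * (1.0 - rho))
        bottom = sum(a ** k / factorial(k) for k in range(num_servers)) + top
        p_wait = top / bottom
        return p_wait / (num_servers * service_rate - arrival_rate)

Note that the model is evaluated in a single pass over a short formula; no individual events are simulated, which is why analytic models are comparatively cheap to run.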
There are some fundamental drawbacks to analytic modeling. One drawback is that each analytic model is based on the actual design specifications of a computer system. Thus, the computer system must already be sufficiently designed before any meaningful evaluations can be performed. In addition, direct measurements have shown that many computer systems seriously violate the underlying assumptions of analytic models, and the actual distributions of the analytic modeling parameters must often be simplified; both of which tend to compromise the accuracy of the results. Finally, significant time and expense are required to develop an analytic model, which, as indicated above, is typically designed for a particular computer system and configuration. To calculate the performance for another computer system or configuration, the analytic model must typically be redesigned to fit the characteristics of the new system. This can be a time-consuming, tedious, and expensive task.
Simulation models are primarily useful in studying computer performance at a high level of detail. A simulation model may be defined to be a model that accepts a set of measured or generated events (such as arrival or service requests) as its input and produces corresponding performance data. Unfortunately, the processing requirements needed to run the simulations scale with the level of detail of such models. Because many of today's systems are very large and complex, detailed simulations are often impractical because of the inordinate amount of processing time required to produce performance data.
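By contrast with the closed-form sketch above, a simulation model replays individual events. Below is a minimal sketch of a single-server FIFO queue simulation using Lindley's recurrence; the exponential arrival and service distributions are assumptions chosen for illustration, not a method taken from the patent.

    import random

    def simulate_average_wait(arrival_rate, service_rate,
                              num_jobs=100_000, seed=1):
        """Discrete-event simulation model: generates arrival and service
        events and measures the average wait directly. Each job's wait
        follows Lindley's recurrence:
        wait[n+1] = max(0, wait[n] + service[n] - interarrival[n+1])."""
        rng = random.Random(seed)
        wait, total_wait = 0.0, 0.0
        for _ in range(num_jobs):
            total_wait += wait
            service = rng.expovariate(service_rate)
            interarrival = rng.expovariate(arrival_rate)
            wait = max(0.0, wait + service - interarrival)
        # Example: with arrival_rate=0.8 and service_rate=1.0, the result
        # converges toward the analytic M/M/1 average wait of 4.0.
        return total_wait / num_jobs

Even this toy simulation must generate and process one event per job, so its run time grows with the number of simulated events, whereas the analytic formula above is evaluated in constant time. This is the scaling problem the preceding paragraph describes.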
Statistical techniques have also been used to augment and assist conventional...
Agent: Crompton Seager & Tufte LLC
Inventors: Johnson, Charles A.; Starr, Mark T.
Assignee: Unisys Corporation