Restructuring of executable computer code and large data sets

Data processing: software development, installation, and management – Software program development tool – Testing or debugging


Details

Classification: C717S151000

Type: Reexamination Certificate

Status: active

Patent number: 06742179

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention generally relates to the optimization of computer programs and, more particularly, to ordering portions of a computer program, and the data used therein, for improved execution order.
2. Background Description
A typical state of the art computer, whether a large mainframe computer or a small personal computer (PC), includes a hierarchical memory system. The memory system may include nonvolatile storage, main memory and cache memory. Nonvolatile storage is typically disk storage, i.e., a hard disk. Main memory is typically relatively low cost, dense dynamic random access memory (DRAM). Cache memory is usually faster, more expensive, less dense static random access memory (SRAM). Typically, cache memory is a small portion of the entire memory system, e.g., 1-10%.
Memory paging systems are designed to exploit spatial and temporal locality. Temporal locality refers to the tendency of programs to execute instructions repeatedly; thus the performance of fetching instructions from main memory can be improved by saving recently executed instructions in a small high-speed cache. Instructions are said to exhibit good spatial locality in a program if execution of an instruction tends to be followed quickly by execution of instructions packaged nearby. A program with poor spatial locality will cause unneeded instructions to be fetched into the cache, preventing cache operation at its full potential. For these hierarchical memory systems, volatile memory may be thought of as a medium-speed cache for low-speed persistent memory, such as a disk. Recently used pages are kept in memory to take advantage of temporal locality. Again, good spatial locality is required to avoid bringing unneeded instructions and data into memory. Poor spatial locality thus reduces the efficiency of memory paging.
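The effect of spatial locality can be illustrated with a simple sketch. The following C example (illustrative only; the array size N and the element values are arbitrary) traverses the same two-dimensional array twice: the row-major loop touches consecutive addresses and uses every fetched cache line fully, while the column-major loop strides across memory and wastes most of each line, causing many more cache misses for exactly the same arithmetic.

#include <stdio.h>
#include <stddef.h>

#define N 1024

static int a[N][N];   /* static so the 4 MB array does not live on the stack */

/* Row-major traversal touches consecutive addresses, so each cache line
 * fetched from memory is fully used (good spatial locality). */
static long sum_row_major(void)
{
    long sum = 0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}

/* Column-major traversal strides N elements between accesses, so most of
 * each fetched cache line goes unused (poor spatial locality). */
static long sum_col_major(void)
{
    long sum = 0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            sum += a[i][j];
    return sum;
}

int main(void)
{
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            a[i][j] = (int)(i + j);
    printf("%ld %ld\n", sum_row_major(), sum_col_major());
    return 0;
}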
Further, processor performance is increasing much more rapidly than the performance of the attached memory subsystems. So, it is increasingly difficult to feed data and instructions to processors rapidly enough to keep them utilized to their maximum capacity. As a result, a great deal of ingenuity has been expended on hardware solutions that improve access time and memory reference throughput, including caches, prefetch buffers, branch prediction hardware, memory module interleaving, very wide buses, and so forth. Also, software may be optimized to take the best possible advantage of these hardware advances.
Unfortunately, “naive” code generation often results in programs that have poorer spatial locality than is achievable. It is typical, for example, to generate code that branches around infrequently executed error paths. This results in poor utilization of the instruction cache, since some of the error path code will usually be fetched into the cache along with the branch that bypasses it. It is also typical for computational procedures to be packaged without consideration for locality, so that although procedure A frequently calls procedure B, A and B are located in different memory pages. Accordingly, it is becoming more common to use profiling information (profiling) to analyze program behavior during execution.
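As an illustration of the problem, and of one common remedy, the following C sketch (assuming a GCC- or Clang-style compiler; the function names are hypothetical) moves a rarely executed error path out of line and marks the error branch as unlikely, so that the hot path is packaged contiguously and the error-handling code is not dragged into the instruction cache alongside it.

#include <stdio.h>
#include <stdlib.h>

/* Rarely executed error path, packaged out of line so it does not share
 * cache lines with the hot path (GCC/Clang attributes; other compilers
 * provide different mechanisms). */
__attribute__((cold, noinline))
static void report_error(const char *msg)
{
    fprintf(stderr, "error: %s\n", msg);
    exit(EXIT_FAILURE);
}

/* Hot path: __builtin_expect marks the error branch as unlikely, so the
 * compiler keeps the fall-through code contiguous and the error handling
 * out of the frequently fetched cache lines. */
static int checked_divide(int num, int den)
{
    if (__builtin_expect(den == 0, 0))
        report_error("division by zero");
    return num / den;
}

int main(void)
{
    printf("%d\n", checked_divide(84, 2));
    return 0;
}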
Optimization procedures have been developed to optimize program code by selecting the segments that are most likely to be used. Those selected code segments are typically stored in cache memory. Also, a program which itself fits in cache may use data sets so large that they do not fit completely into cache memory. Program execution in each of these cases can be improved by more efficient segment caching.
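One common way to cache data segments efficiently when the data set exceeds the cache is blocking (tiling), shown in the C sketch below (illustrative only; the block size is an assumption, not a measured cache size, and the function names are hypothetical). Both operations are applied to one cache-sized segment while it is still resident, rather than making two full passes over the whole data set.

#include <stdio.h>
#include <stdlib.h>

#define N      (1u << 20)   /* elements in the large data set                */
#define BLOCK  (1u << 13)   /* chunk assumed to fit comfortably in the cache */

/* Two separate passes over the whole array: by the time the second pass
 * begins, the start of the array has long been evicted from the cache,
 * so every element is brought in from main memory twice. */
static void scale_then_offset_naive(float *a, float s, float c)
{
    for (size_t i = 0; i < N; i++) a[i] *= s;
    for (size_t i = 0; i < N; i++) a[i] += c;
}

/* Blocked (tiled) version: both operations are applied to one cache-sized
 * segment while it is still resident, so each element is fetched from
 * main memory only once. */
static void scale_then_offset_blocked(float *a, float s, float c)
{
    for (size_t base = 0; base < N; base += BLOCK) {
        size_t end = (base + BLOCK < N) ? base + BLOCK : N;
        for (size_t i = base; i < end; i++) a[i] *= s;
        for (size_t i = base; i < end; i++) a[i] += c;
    }
}

int main(void)
{
    float *a = calloc(N, sizeof *a);
    if (!a) return 1;
    scale_then_offset_naive(a, 2.0f, 1.0f);
    scale_then_offset_blocked(a, 2.0f, 1.0f);
    printf("%f\n", (double)a[0]);
    free(a);
    return 0;
}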
With the introduction of instruction caches, which are designed to exploit temporal and spatial locality, the focus of profiling shifted to reordering code at a finer granularity. Most successful approaches to improving instruction cache performance have used profile data to predict branch outcomes. In contrast to most of the foregoing work on virtual memory performance, these techniques were implemented within the framework of optimizing compilers. Profiling gathers data about the frequencies with which different execution paths in a program are traversed. These profile data can then be fed back into the compiler to guide optimization of the code. One of the proven uses of profile data is in determining the order in which instructions should be packaged. By discovering the “hot traces” through a procedure, the optimizer can pack the instructions in those traces consecutively into cache lines, resulting in greater cache utilization and fewer cache misses. Similarly, profile data can help determine which procedures call other procedures most frequently, permitting the called procedures to be reordered in memory to reduce page faults. Thus, profile information has been used to reduce conflict misses in set-associative caches, for example. A reduction in conflict misses has also been achieved using only static analysis of the program. Further, basic blocks have been traced to reduce the number of unexecuted instructions brought into the instruction cache (cache pollution), and to order basic blocks. It is also known that infrequently executed traces can be separated entirely from the main procedure body for additional efficiency. Other methods of reordering instructions are based on the presence of loops, branch points, and join points in the control flow graph, as well as directly on the control dependence graph.
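A greatly simplified sketch of trace-based packing is given below (the data structures, block counts and edge frequencies are hypothetical). Starting from the entry block of a profiled control-flow graph, a greedy pass repeatedly follows the most frequently taken edge to a block that has not yet been placed; the blocks visited in this way form a hot trace that can be laid out consecutively in memory.

#include <stdio.h>

#define NBLOCKS 6

/* Hypothetical, greatly simplified control-flow graph annotated with
 * profile counts: succ[k] is a successor block index (or -1 if absent)
 * and count[k] is how often that edge was taken during the profiling run. */
struct block {
    int  succ[2];
    long count[2];
};

/* Greedy trace selection: starting from the entry block, repeatedly follow
 * the most frequently taken edge to a block that has not yet been placed.
 * The blocks visited in this way form a "hot trace" that can be laid out
 * consecutively so it occupies as few cache lines as possible. */
static void layout_hot_trace(const struct block *b, int entry,
                             int order[], int *n)
{
    int placed[NBLOCKS] = {0};
    int cur = entry;

    *n = 0;
    while (cur >= 0 && !placed[cur]) {
        placed[cur] = 1;
        order[(*n)++] = cur;

        int  next = -1;
        long best = -1;
        for (int k = 0; k < 2; k++) {
            int s = b[cur].succ[k];
            if (s >= 0 && !placed[s] && b[cur].count[k] > best) {
                best = b[cur].count[k];
                next = s;
            }
        }
        cur = next;
    }
}

int main(void)
{
    /* Toy graph: block 0 usually falls through to 1; block 2 is a rarely
     * taken error path; block 5 is the common exit. */
    const struct block b[NBLOCKS] = {
        { {1, 2},   {9900, 100} },
        { {3, 4},   {9000, 900} },
        { {5, -1},  {100, 0}    },
        { {5, -1},  {9000, 0}   },
        { {5, -1},  {900, 0}    },
        { {-1, -1}, {0, 0}      },
    };
    int order[NBLOCKS], n;

    layout_hot_trace(b, 0, order, &n);
    for (int i = 0; i < n; i++)
        printf("place block %d\n", order[i]);   /* prints 0, 1, 3, 5 */
    return 0;
}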
Thus, it is a goal of a good cache management procedure to take advantage of locality properties. Ideally, blocks of instructions or data that are expected to be used together are stored together within the cache memory. Program slowdowns occur when the code currently being executed needs code that is outside of the cache or, even worse, outside of main memory, stored in non-volatile storage, e.g., on disk. This can happen, for example, with a branch or a call, or when a calculation on a database that is only partially held in cache requires data from that database that is not in the cache. Each branch to code outside the cache and, even more so, to code outside main memory slows execution.
Accordingly, optimizing compilers have been developed which convert source code into object code in what is hoped to be an optimum instruction order, such that execution stays within the code held in cache at any particular point in time. These optimizing compilers typically attempt to group code into manageable groups, compartmentalized or contained within a reasonably sized segment, such that the segment may be maintained in cache memory while it is being executed.
However, the cache optimization program cannot improve the code itself; i.e., if blocks of instructions are not organized such that related blocks are in close execution proximity to each other, the cache optimization program cannot guess which blocks are more likely to be executed or what the optimum execution order is. R. R. Heisch in “FDPR for AIX Executables,” AIXpert, No. 4 (August 1994), pp. 16-20, and “Trace-Directed Program Restructuring for AIX Executables,” IBM Journal of Research and Development 38, No. 5, pp. 595-603 (September 1994), teaches that instruction cache performance can be maximized by considering it as a whole-program optimization. Heisch's methods differ from previous approaches by operating as a post-processor on executable program objects and by allowing basic blocks to migrate without being constrained by procedure boundaries. The reordered code was appended to the original executable program objects, resulting in reported growth in executable file size of between 5 and 41 percent; this growth had negligible impact on performance. I. Nahshon and D. Bernstein, “FDPR-A Post-Pass Object Code Optimization Tool,” Proceedings of the Poster Session of CC '96—International Conference on Compiler Construction, Sweden (April 1996), pp. 97-104, produced an improved algorithm that required less code growth. A FDPR (fee
