Trace ranking in a dynamic translation system

Data processing: software development, installation, and management – Software program development tool – Translation of code

Reexamination Certificate


Details

C711S133000, C711S134000, C711S135000, C711S136000

active

06219827

ABSTRACT:

BACKGROUND OF THE INVENTION
Dynamic translation systems are a relatively new field of endeavor in the computer and software art. A technique used in dynamic translation systems is the analysis of traces.
A trace is a sequence of “blocks” or “basic blocks” of machine code. A block, or basic block, is a sequence of non-branching machine instructions ending with an exit to another block. The exit instruction may be a branch, or a conditional branch, or a fall-through to the next basic block. A program consists of multiple blocks.
During program execution, a program executes a series of basic blocks in paths. A trace records the path followed in the series. For example, with reference to FIG. 1, a program fragment is illustrated in which the program may execute along different paths. In FIG. 1, the program may execute either block 3 or block 4, depending on a conditional test in block 2. Two possible traces included in the fragment illustrated in FIG. 1 are blocks “1-2-3-5” or blocks “1-2-4-5.”
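The FIG. 1 fragment can be sketched as a successor map over basic blocks; enumerating every path through it yields the two traces named above. This is an illustrative sketch, not from the patent; the block numbering follows FIG. 1, but the representation and function names are assumptions.

```python
# Each block lists its possible successors; block 2 ends in a
# conditional branch, so it has two exits (block 3 or block 4).
successors = {
    1: [2],
    2: [3, 4],   # conditional test: either branch may be taken
    3: [5],
    4: [5],
    5: [],       # the fragment ends here
}

def traces(block, path=()):
    """Enumerate every path (trace) through the fragment."""
    path = path + (block,)
    if not successors[block]:
        yield path
        return
    for nxt in successors[block]:
        yield from traces(nxt, path)

print(list(traces(1)))  # [(1, 2, 3, 5), (1, 2, 4, 5)]
```

A dynamic translator observes which of these paths the running program actually takes, rather than enumerating them statically as above.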
A dynamic translation system monitors the execution of an application to identify the traces that the application executes, and then translates those traces into more effective code.
Of course, the dynamic translation techniques as described must take place in an environment of a computer architecture and data structure of a fixed size. Cache memory is always advantageous to expedite processing, but in a fixed environment, cache devoted to storing traces must be managed as yet another limited and potentially scarce resource.
The preferred embodiment of the invention described herein is implemented using software cache (or “code cache”) rather than hardware cache. It will be appreciated that a code cache is simply a section of memory allocated to the job of being a cache, and is typically bigger than a machine cache. However, for performance reasons, a code cache must still be limited in size and therefore managed like any other resource.
The code cache may be limited by the amount of physical memory in the system, or by the amount of hardware cache available. If a code cache is large compared to the physical memory, the system performance is likely to suffer since less memory would be available to run other applications. Depending on the system, for example, if the hardware cache is small and access to physical memory is slow, the code cache should not be too much larger than the hardware cache either.
Traces stored in cache will ideally execute faster than traces that are stored in main memory. Selection of the traces to store in cache as the program executes may be according to any of numerous memory protocols specific to various architectures and operating systems. Once placed in cache, however, a cache retention and discard protocol becomes highly advantageous in order to make room for new traces to be stored in the cache as program execution continues.
Ideally, traces will be retained in cache memory according to their retention value to the execution of the program. When the program executes a trace frequently, it is advantageous to keep it in cache memory. A trace executed infrequently need not be retained in cache, and in fact should be discarded to make room for incoming traces that may potentially be used more frequently.
Various methods are known in the art for selecting items to be retained in cache and items to be discarded. One fairly common method is simply to empty the entire cache when it is full and then start over. Although easy to implement, this method ignores the value of frequently-used items. After emptying the cache, processing must repopulate the cache with these items even though they were already in cache perhaps only moments before. This method is therefore like “throwing out the baby with the bath water.”
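The “empty out and start over” policy can be sketched in a few lines. The class and field names here are hypothetical, chosen only to make the drawback above concrete: a flush discards frequently-used traces along with everything else.

```python
class FlushWhenFullCache:
    """Discard the entire cache contents whenever it fills up."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.traces = {}          # trace id -> translated code

    def insert(self, trace_id, code):
        if len(self.traces) >= self.capacity:
            # "Throwing out the baby with the bath water":
            # hot traces are discarded along with cold ones.
            self.traces.clear()
        self.traces[trace_id] = code

cache = FlushWhenFullCache(capacity=2)
cache.insert("hot", "...")
cache.insert("warm", "...")
cache.insert("new", "...")       # cache was full: "hot" is gone too
print("hot" in cache.traces)     # False
```

The frequently-executed trace must then be re-translated and re-inserted, even though it was resident only moments before.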
A better system would rank items stored in cache to determine which ones to keep and which ones to discard.
Ranking is used with advantage in other related fields of endeavor. For example, Operating System (OS) kernels must decide which virtual pages to keep in physical memory, and which to throw out. It is known to rank virtual pages for this purpose, typically on a “Most Recently Used” (MRU) basis. Whenever there is an access to an address in a particular virtual page, that page's rank is promoted to the top. Frequently-accessed pages thus stay near the top, and lesser-accessed ones fall to the bottom. The top of the list is generally kept in physical memory.
A related ranking system uses a “Least Recently Used” (LRU) system, in which the time between accesses is monitored and forms the basis for a ranking. Items used less frequently (i.e., with a high time between accesses) receive a lower rank. Although perhaps more accurately predictive, the LRU method is less favored than the MRU method because it requires higher processing overhead to maintain.
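The MRU promotion scheme described above can be sketched as a list in which an accessed page moves to the front, so frequently-accessed pages cluster near the top. This is an illustrative sketch with hypothetical names, not code from any particular OS kernel.

```python
def access(mru_list, page):
    """Promote `page` to the top of the ranking on each access."""
    if page in mru_list:
        mru_list.remove(page)
    mru_list.insert(0, page)

# "a" is accessed most often and ends up at the top;
# lesser-accessed pages fall toward the bottom.
pages = []
for p in ["a", "b", "a", "c", "a"]:
    access(pages, p)
print(pages)  # ["a", "c", "b"]
```

In an OS kernel, the pages near the top of such a list would be the ones kept in physical memory.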
A further ranking system known in the art simply counts the number of times an item is used. The higher the number, the higher the rank.
All of the foregoing methods of ranking are inappropriate for cache memory management in dynamic translation systems. By its nature, the behavior of a dynamic translation system may change over time. In particular, traces may vary in size, or go “dormant” for a period before becoming very active again. The “empty out and start over” method is too inherently inefficient to account for this dynamic behavior: it simply has no mechanism to determine what is important and what is not. The MRU, LRU and “counting” methods are too inflexible to account for this dynamic behavior. For example, assume a trace in a dynamic system executes very frequently in “bursts.” An MRU or LRU algorithm may assign a low rank to a trace when the trace is in its dormant phase, even though it is shortly to become “hot.” A “counting” method may assign a falsely high ranking to a trace that is “hot” at the time but shortly thereafter never (or rarely) executes again.
There is therefore a need in the dynamic translation systems art for a trace ranking system which, via accurate predictive ranking, enables cache management to determine which traces to keep in cache and which to discard.
SUMMARY OF THE INVENTION
These and other objects, features and technical advantages are achieved by a method and system of trace ranking that applies artificial life principles to determine a predictively accurate ranking for cache retention or discard of traces. In a preferred embodiment, the ranking system is based upon biological modeling of ant trails. The system can enable a trace ranking policy where valuable traces can be identified to be kept in the cache as much as possible, while less valuable traces can be marked to discard. The system is further dynamic over time, whereby ranking emphasis changes in accordance with the dynamic behavior of the program.
Although other artificial life principles may be applicable to trace ranking, a preferred embodiment applies the behavior of ants in forming and strengthening food source trails to develop the inventive trace ranking system. Ants carrying food leave a strong pheromone trail. If an ant encounters a trail that has a strong pheromone signature, it will follow the trail to the food source. The pheromone chemical nonetheless dissipates into the air over time, and so if the pheromone trail is not reinforced by a succession of ants carrying food, it will weaken and make it less likely to be followed by a later ant.
According to a preferred embodiment of the inventive trace ranking system, and analogous to the ant trail model, each trace contains counting code so that each time it is executed it increments a ranking counter by one. At predetermined intervals a dissipation factor is applied uniformly to reduce the value of all counters. By tuning the period between successive dissipation adjustments, and by tuning the amount by which the counters are reduced at selected dissipation operations, ranking becomes tunably predictive of the dynamic behavior of the program, analogous to the way in which the strength of a pheromone trail is predictive of the route followed by later ants.
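The counter-plus-dissipation mechanism described above can be sketched as follows. This is a hedged illustration, not the patent's implementation: the class and method names are hypothetical, and the dissipation factor and interval values are arbitrary examples of the tunable parameters the patent describes.

```python
class PheromoneRanking:
    """Rank traces like ant trails: executions strengthen a trace's
    'pheromone' counter; periodic dissipation weakens all counters."""

    def __init__(self, dissipation=0.5):
        self.rank = {}                # trace id -> pheromone strength
        self.dissipation = dissipation

    def executed(self, trace_id):
        """Counting code run on each execution of a trace."""
        self.rank[trace_id] = self.rank.get(trace_id, 0.0) + 1.0

    def dissipate(self):
        """Applied at predetermined intervals to reduce all counters."""
        for t in self.rank:
            self.rank[t] *= self.dissipation

    def by_rank(self):
        """Highest-ranked traces first; the lowest are discard candidates."""
        return sorted(self.rank, key=self.rank.get, reverse=True)

r = PheromoneRanking(dissipation=0.5)
for _ in range(10):
    r.executed("bursty")   # hot for a while...
r.dissipate()              # ...then dormant through two intervals
r.dissipate()
r.executed("steady")
r.executed("steady")
print(r.by_rank())         # ["bursty", "steady"]
```

Note how the dormant “bursty” trace retains residual rank (10 × 0.5 × 0.5 = 2.5) rather than dropping immediately to the bottom, mirroring a pheromone trail that fades gradually instead of vanishing.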
