Data structure partitioning to optimize cache utilization

Data processing: database and file management or data structures – Database design – Data structure types

Reexamination Certificate


Details

C711S129000, C711S128000


active

06330556

ABSTRACT:

FIELD OF THE INVENTION
This invention relates generally to the field of computer memory management and in particular to optimizing cache utilization by modifying data structures.
COPYRIGHT NOTICE/PERMISSION
A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever. The following notice applies to the software and data as described below and in the drawing hereto: Copyright © 1999, Microsoft Corporation, All Rights Reserved.
BACKGROUND
Users are demanding increased performance from the applications running on their computers. Computer hardware, including central processing units (CPUs), is becoming increasingly fast. However, performance is limited by the speed at which data can be delivered to the processor. Several classes of devices provide that data. Disk drives, compact disks, and other secondary storage devices store great amounts of data cost-effectively, but introduce long delays in providing it, because the physical media on which the data is stored must be moved into a position where it can be read. Such physical motion takes an enormous amount of time compared to processor cycle times. The next fastest common data storage device, random access memory (RAM), is much faster. However, as processor speeds have increased, even RAM cannot supply data fast enough to keep up.
In a typical computer, Level 1 (L1) and Level 2 (L2) cache memories are similar to RAM, but are even faster, and are physically close to a processor to provide data at very high rates. The cache memory is typically divided into 32, 64, or 128 byte cache lines. The size of a cache line normally corresponds to a common unit of data retrieved from memory. When data required by a processor is not available in L1 cache, a cache line fault occurs and the data must be loaded from lower speed L2 cache memory, or from relatively slow RAM. The application is often effectively stalled while this data is loaded, until the data is available to the CPU. By decreasing the number of cache faults, an application will run faster. There is a need to reduce the number of cache line faults and provide data to processors even faster to keep applications from waiting.
Computer applications utilize data structures that are often implemented as classes, which are instantiated as objects. Classes define containers of data or information, along with code that operates on that data in response to method calls from other users or classes. Some classes can be very large, occupying several cache lines, yet the portion of each class actually used may be significantly smaller than the amount of data the class stores. If an entire object is brought into the cache even though only a small part of it is actually needed, many cache line misses will occur, because the cache lines are filled with unneeded data from the objects. Since an application has only a limited number of cache lines available, it is important to use them efficiently. If there is insufficient space for the desired data, time is spent fetching the data from slower storage and populating cache lines before the desired data is available to the processor.
There is a need for a better way to manage cache lines so that data commonly needed by applications is available with a minimal number of cache line misses.
SUMMARY OF THE INVENTION
Data structures are partitioned into heavily referenced and less heavily referenced portions. The partitioning is based on profile information about field access counts. In one embodiment, the most heavily referenced portions or hot portions of an object are kept in a hot object, while the remaining portions of the object are placed in a subordinate or cold object which is referenced by the original hot object as needed.
In a further aspect of the invention, the heavily referenced portions of a hot object are then placed next to each other in memory so that they are likely combined into common cache lines. The “cold” portions which were extracted from the class are placed in a new class that can be referenced from the original class. Accesses to hot portions remain unchanged. One aspect of the invention involves the selection of classes that show the greatest potential and least compatibility problems. A further aspect of the invention involves the use of a cache-conscious object co-location scheme designed to minimize cache misses. Yet a further aspect of the invention involves the application of the partitioning to Java programs.


REFERENCES:
patent: 5133061 (1992-07-01), Melton et al.
patent: 5471602 (1995-11-01), Delano
patent: 5752038 (1998-05-01), Blake et al.
patent: 5896517 (1999-04-01), Wilson
patent: 5897651 (1999-04-01), Cheong et al.
patent: 5978888 (1999-11-01), Arimilli et al.
patent: 5987460 (1999-11-01), Niwa et al.
patent: 6058456 (2000-05-01), Arimilli et al.
patent: 6128613 (2000-10-01), Wong et al.
patent: 6178482 (2001-01-01), Sollars
patent: 6182194 (2001-01-01), Uemura et al.
patent: 6205519 (2001-03-01), Aglietti et al.
Fraser, C.W., et al., “A Retargetable C Compiler: Design and Implementation”, Benjamin/Cummings, Redwood City, California, (1995).
