Method and system for allocating cache memory for a network...
Classification: Electrical computers and digital processing systems: memory – Storage accessing and control – Memory configuring
Type: Reexamination Certificate
Filed: 2000-04-28
Issued: 2002-12-10
Examiner: Kim, Matthew (Department: 2186)
Other classes: C711S130000, C711S147000
Status: active
Patent number: 06493810
ABSTRACT:
TECHNICAL FIELD OF THE INVENTION
This invention relates generally to capacity planning for computer systems, and more particularly to the allocation of cache memory for a network database service, such as a directory service.
BACKGROUND OF THE INVENTION
A computer system typically has limited resources, such as random-access memory (RAM), storage disk space, processing speed, communication bandwidth, etc. Moreover, at any given time, the system resources may have to be shared by multiple applications. To ensure the most efficient use of the system resources, an application should be allowed to occupy only as many system resources as it needs to accomplish its tasks with acceptable performance. Allocating too many resources to an application not only wastes valuable resources but may also interfere with other applications' need for those resources. On the other hand, not giving an application sufficient system resources can significantly hinder its operation, resulting in unacceptably poor performance.
It is therefore important to give careful consideration to how many resources should be allocated to the various applications in a computer system. The process of estimating the resource requirements that meet the business objectives of a computer system, commonly referred to as “capacity planning,” typically involves predicting the CPU, I/O, memory, and network resources required for a given set of application profiles. Predicting computing resource requirements from application profiles is central to the process of capacity planning.
In particular, predicting the memory requirements of an application for optimal performance has been a long-standing problem. Computer memory is one of the most fundamental types of system resources. Many applications require a large amount of memory to achieve adequate performance. For instance, network database services, such as directory services, often require a significant amount of memory for use as a cache storing entries retrieved in response to database queries. Caching the query results is necessary to ensure adequate response performance of a directory service or the like, because the database is typically stored on a mass storage device, such as a disk, that is significantly slower at data retrieval than processor memory. If the clients of the network database service often request the same database entries, as in the case of a directory service, caching query results in the computer memory can avoid many slow disk I/O operations, thereby significantly enhancing the performance of the service.
As with any cache management problem, the central concern is the “cache miss,” i.e., the case in which a requested entry cannot be found in the cache memory. In the event of a cache miss, the service must retrieve the entry from the disk. It is well known that in virtual-memory-based systems inadequate memory usually results in increased page faults and, consequently, increased disk operations. The question is how to determine the optimal amount of computer memory needed to keep the cache-miss rate low enough to ensure acceptable performance. Simply allocating all available memory as cache memory is obviously not a solution. Moreover, in some environments, such as Internet service providers, there can be potentially millions of users, and it is neither possible nor advisable to fit all of their records in memory. How much cache memory is adequate for a network database service, such as a directory service, is often a complicated question, and the answer may differ significantly from application to application.
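To illustrate why the cache-miss rate matters so much, the following back-of-the-envelope calculation compares average lookup times at a few miss rates. The access costs used here (0.5 ms for a cached entry, 10 ms for a disk fetch) are assumed round numbers chosen only for illustration, not figures from the invention.

```python
# Back-of-the-envelope illustration with assumed, round-number access costs:
# even a modest cache-miss rate lets the slow disk dominate average latency.
MEM_ACCESS_S = 0.0005   # assumed time to serve an entry from the in-memory cache
DISK_ACCESS_S = 0.010   # assumed time to fetch the entry from disk on a miss

for miss_rate in (0.01, 0.05, 0.20):
    avg = (1.0 - miss_rate) * MEM_ACCESS_S + miss_rate * DISK_ACCESS_S
    print(f"miss rate {miss_rate:4.0%}: average lookup {avg * 1000:.2f} ms")
```

Under these assumed costs the average lookup grows from roughly 0.6 ms at a 1% miss rate to about 2.4 ms at a 20% miss rate, which is why the cache size must be chosen so that the miss rate stays acceptably low.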
In the past, a typical way to address the question of memory requirements has been to provide empirical data on memory usage from selected typical environments and let customers draw their own conclusions. This approach is obviously inadequate. Network applications such as a directory service operate in widely varying environments, including small businesses, Internet service providers (ISPs), and large enterprises. Each environment has its own operating conditions and performance requirements. Since the simple conventional approach cannot take those important differences into account, it is unlikely to provide satisfactory estimates of the memory requirements. Several models relating memory size to page faults have been proposed in the literature. Most of these memory estimation efforts, however, either require extensive operational information that cannot realistically be obtained, such as record reference strings or patterns, locality, etc., or are based on assumptions that are too generic to provide meaningful results. As a result, they are of limited usefulness for estimating the optimal cache memory size for a network database service such as a directory service.
SUMMARY OF THE INVENTION
In view of the foregoing, the present invention provides an effective method for estimating and allocating the amount of memory required for a network database service, such as a directory service, to provide optimal performance. The method in accordance with the invention involves an iterative process. In this process, the memory size N for best-case performance (i.e., a memory size sufficiently large to cache all query results for a peak number of users so as to avoid any disk I/O re-reading operations) is first estimated. The allocated cache memory size is then given a starting value. The cache-miss probability (p) is then estimated for that memory size. Another probability (q), the probability that a record requested by a frequent user of the service is not in the cache, is also estimated for that memory size. The performance impact of the disk I/O rate determined by p and q is then evaluated, for example by analytic modeling or other performance modeling methods. If the performance is not adequate, the cache memory size is adjusted to a different value. The miss probabilities p and q are estimated again, the performance impact is evaluated, and the allocated cache size is adjusted again if the performance is still not adequate. This iterative process continues until adequate estimated performance is achieved.
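The following minimal Python sketch restates the iterative sizing loop summarized above under stated assumptions. The helper models for the miss probabilities p and q (a linear miss model) and for the performance impact (a simple average-access-time formula) are illustrative placeholders, not the estimators of the invention, and all function and parameter names are hypothetical.

```python
# Minimal sketch of the iterative cache-sizing loop summarized above.
# The models below for the miss probabilities (p, q) and for performance
# are illustrative placeholders, not the estimators of the invention.

def estimate_miss_probabilities(cache_size, best_case_size):
    """Assumed toy model: the overall miss probability p shrinks linearly as
    the cache approaches the best-case size N; the frequent-user miss
    probability q is taken to be lower, since frequently requested records
    tend to stay cached."""
    fraction = min(cache_size / best_case_size, 1.0)
    p = 1.0 - fraction                   # overall cache-miss probability
    q = 0.5 * p                          # frequent-user miss probability
    return p, q


def estimate_response_time(p, q, mem_time=0.0005, disk_time=0.010):
    """Assumed toy performance model: expected per-query time (seconds)
    given the miss probabilities and memory vs. disk access costs."""
    miss_rate = 0.5 * (p + q)            # blend of casual and frequent users
    return (1.0 - miss_rate) * mem_time + miss_rate * disk_time


def size_cache(best_case_size, target_time, step=0.05):
    """Grow the allocated cache size in fixed steps until the estimated
    performance is adequate, mirroring the iterative process above."""
    cache_size = step * best_case_size   # starting value
    while cache_size < best_case_size:
        p, q = estimate_miss_probabilities(cache_size, best_case_size)
        if estimate_response_time(p, q) <= target_time:
            return cache_size            # adequate estimated performance
        cache_size += step * best_case_size
    return best_case_size                # fall back to the best-case size N


if __name__ == "__main__":
    # Example: assume 2048 MB would cache all query results at peak load;
    # find the smallest candidate size that keeps a query under 2 ms.
    print(f"{size_cache(best_case_size=2048, target_time=0.002):.0f} MB")
```

In this sketch the loop simply grows the candidate size in fixed fractions of the best-case size N; the invention's adjustment strategy and its actual models for p, q, and performance may differ.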
Additional features and advantages of the invention will be made apparent from the following detailed description of illustrative embodiments, which proceeds with reference to the accompanying figures.
REFERENCES:
patent: 5590308 (1996-12-01), Shih
patent: 5802600 (1998-09-01), Smith et al.
patent: 5835928 (1998-11-01), Auslander et al.
patent: 6098152 (2000-08-01), Mounes-Toussi
patent: 6154767 (2000-11-01), Altschuler et al.
patent: 6282613 (2001-08-01), Hsu et al.
Coumeri, Sari L. and Donald E. Thomas, "Memory Modeling for Systems Synthesis," Proceedings of the 1998 International Symposium on Low Power Electronics and Design, 1998, pp. 179-184.
Voelker, G. M. et al., "Managing Server Load in Global Memory Systems," Performance Evaluation Review, vol. 25, no. 1, pp. 127-138, Jun. 1997.
"A Probabilistic Method for Calculating Hit Ratios in Direct Mapped Caches," Journal of Network and Computer Applications, vol. 19, no. 3, pp. 309-319, Jul. 1996.
Pang Jee Fung
Raghuraman Melur K.
Tay Yong Chiang
Microsoft Corporation
Peugh Brian R.