Computing system for implementing a shared cache

Electrical computers and digital processing systems: memory – Storage accessing and control – Shared memory area

Reexamination Certificate

Details

Subclass: C711S137000
Type: Reexamination Certificate
Status: active
Patent number: 06324623

ABSTRACT:

BACKGROUND
In a large scale computer system, such as a database management system (DBMS), it is important to be able to support a number of different users concurrently. Without such a capability, the system would be little more than a standalone computer. To implement multi-user support, several different processing models have been utilized. One model that has been used is the multi-processing model. In multi-processing, each time a new user requests access to the system, a separate process is started. This process is in essence a separate execution of the software. Once started, the process services all of the requests from the user that spawned it. Under the multi-processing model, each process has its own separate memory space for use in storing and processing data.
Multi-processing is effective for supporting multiple users concurrently; however, it has severe scalability limitations. This is due mainly to two factors. First, spawning and maintaining a process involves a significant amount of overhead. Because of the high cost, only a small number of processes can be maintained at any one time. Second, the same set of data, used by multiple processes, may be stored redundantly: once in each process's memory space. This redundancy can waste a significant amount of system resources.
To overcome some of the limitations of multi-processing, the multi-thread model was developed. According to the multi-thread model, there is only one execution of the software. That is, only one process is spawned. From this one process, multiple threads can be spawned to perform the work necessary to service user requests.
Multi-threading has several advantages over multi-processing. First, because only one process is spawned, overhead is kept to a minimum. It is true that each thread carries with it some overhead cost, but this cost is negligible when compared with the cost of maintaining an entire process. Because multi-threading significantly reduces system overhead, many more users can be supported. Another advantage of multi-threading is that it minimizes the redundant storage of data. Because all of the threads are part of the same process, all of the threads can share the same memory space. This in turn makes it easier to implement a shared cache.
With a shared cache, it is only necessary to store a set of data once. After the data is cached, all of the threads can access it. By reducing redundant storage of data, multi-threading makes more efficient use of system resources.
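As a rough illustration of this idea (a minimal sketch, not taken from the patent; the names SharedCache, get, and loader are assumptions), a shared cache in a multi-threaded process can store each data set once and serve it to every thread:

    import threading

    class SharedCache:
        """Stores each data set once; every thread in the process can read it."""
        def __init__(self):
            self._entries = {}
            self._lock = threading.Lock()

        def get(self, key, loader):
            with self._lock:
                if key not in self._entries:
                    # First request loads the data; later threads reuse it.
                    self._entries[key] = loader(key)
                return self._entries[key]

    cache = SharedCache()

    def worker():
        # Every thread receives the same cached object; nothing is duplicated.
        cache.get("sales", lambda key: {"Q1": 100})

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()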
Implementing a shared cache gives rise to increased efficiencies. However, a shared cache is not without its disadvantages. One of the drawbacks of a shared cache is that it can be difficult to render a set of data in the cache visible to only one user (i.e. to make the entry “private” to the user). As noted above, once an entry is stored in the shared cache, that entry is accessible to all threads. In certain applications, it is important to be able to make a cache entry private to a single user. One application in which this ability is important is in on-line analytical processing (OLAP).
An OLAP system is typically used to provide decision support services, such as forecasting, financial modeling, and what-if analysis. In performing what-if analysis, an OLAP system typically performs at least three operations: (1) it retrieves historical data from a database; (2) it changes the data in accordance with a what-if scenario posed by the user; and (3) based on the changed data, it determines what other data changes as a result. What-if analysis is a powerful tool because it allows a user to forecast how a change in one area may affect another. For example, a user may use what-if analysis to predict how sales in a region may change if the sales force is increased by ten percent.
As noted above, one of the operations performed in a what-if analysis is to change the retrieved historical data. This change typically is not an actual change but a proposed “what-if” change. Because it is not an actual change to the data in the database, only the user making the change should see it. All other users should still see the original data. In such a situation, there is a need to make the proposed change private to the user making the change. In a typical shared cache, however, the only mechanism for making a cache entry private is to employ locks, which typically require additional overhead and can result in deadlocking between threads competing for the resource. Hence, there exists a need for a mechanism that can support private data without hindering performance.
SUMMARY OF THE INVENTION
In accordance with a preferred embodiment of the invention, a public memory structure is utilized to store data which is sharable between a plurality of users in a multi-threaded computing environment. In contrast to the prior art, a cache memory area on a server is used to store public, sharable data and private, non-sharable data without using locks to negotiate resource ownership. Consequently, there are public and private pages stored in global memory. The private pages are those that are modifiable by a user and the public pages are those that are only readable by one or more users.
One aspect of the invention is managing memory on a computer. The memory provides a plurality of cache memory blocks cooperatively shared by processing threads executing on the computer. These processing threads include user sessions and resource managers.
The user threads consume page data stored in the cache memory blocks. Each user thread has a public view of unmodified cached pages and can have modified cached pages in a private view. During on-line analytical processing (OLAP), the user threads process the cached pages. For pages that are only read by the user thread, the public view is used to access the necessary cache memory block, which may be read by multiple users. When an analysis requires modifying data, however, access through a public view is inappropriate. Instead, the cache memory block pointed to by the public view is copied to a new cache memory block. The user thread is then assigned a private pointer to the copied pages, and can modify the data in this private view without affecting data viewed by other threads.
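A minimal Python sketch of this copy-on-write behavior follows (PageCache and its methods are illustrative assumptions, not the patent's design; the lock below only protects the sketch's own dictionaries, not cache-entry ownership in the sense of the lock-based prior art discussed above):

    import threading

    class PageCache:
        """Public pages are shared and read-only; the first write by a
        session copies the page into a private block for that session."""
        def __init__(self):
            self._public = {}               # page id -> shared, read-only page data
            self._private = {}              # (session, page id) -> private copy
            self._lock = threading.Lock()   # guards the dictionaries themselves

        def load_public(self, page_id, page):
            # Stage a page retrieved from storage in a shared public block.
            self._public[page_id] = page

        def read(self, session, page_id):
            # A session's private copy, when present, shadows the public page.
            return self._private.get((session, page_id), self._public.get(page_id))

        def write(self, session, page_id, field, value):
            with self._lock:
                key = (session, page_id)
                if key not in self._private:
                    # Copy-on-write: clone the public page into a new private block.
                    self._private[key] = dict(self._public[page_id])
                self._private[key][field] = value   # other sessions are unaffected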
The resource managers ensure that the user threads cooperate to function effectively. In particular, a paging manager interfaces the user threads with the cache memory space to retrieve pages from disk.
In accordance with a preferred embodiment of the invention, a computer-implemented program manages memory in a computer having a plurality of memory blocks. These memory blocks preferably form a cache memory area. Data is stored in the memory blocks, including a first memory block and a second memory block. First and second user sessions or user threads execute in the computer, with the first user session having a global view of the first memory block data and the second user session having a global view of the first memory block data and a private view of the second memory block data. In particular, the first and second user sessions are threads in a multi-threaded computer system.
The user threads preferably execute resource manager instructions that map data stored in a cache memory block to the location of that cache memory block in the computer. The resource manager also transfers data from a database into a cache memory block and stores generational views of the data. Preferably, the data is retrieved from a multi-dimensional database.
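A hedged sketch of how such a paging manager might be organized (PagingManager, fetch, and storage.read_page are hypothetical names, not taken from the patent):

    class PagingManager:
        """Maps page ids to cache block locations; on a miss, transfers the
        page from backing storage into a newly allocated block."""
        def __init__(self, storage):
            self._storage = storage     # e.g., wraps a multi-dimensional database
            self._page_table = {}       # page id -> index of the cache block
            self._blocks = []           # the cache memory blocks themselves

        def fetch(self, page_id):
            if page_id not in self._page_table:
                # Cache miss: read the page from storage into a new block.
                self._blocks.append(self._storage.read_page(page_id))
                self._page_table[page_id] = len(self._blocks) - 1
            return self._blocks[self._page_table[page_id]]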
A particular method facilitates simultaneous analysis of data in multiple sessions in a computer. First, data is retrieved from storage into public blocks of a shared memory space. These public blocks store data for global read access by a plurality of user sessions. Second, public blocks of data are selectively copied into private blocks of the shared memory space. Each private block stores data for private read and write access by a single user session. Upon read access to a data item by a user session, the data item is read if present from a private block accessible by the user session. If the data item is not present in a private block, it is read from a public block.
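This read path can be traced with the hypothetical PageCache sketch given earlier: a session's private block, when present, shadows the public block; otherwise the public block is read.

    cache = PageCache()
    cache.load_public("p1", {"sales": 100})   # page staged in a public block

    a, b = "session-A", "session-B"
    print(cache.read(b, "p1")["sales"])   # 100 -- both sessions share the public page
    cache.write(a, "p1", "sales", 110)    # A's what-if change triggers copy-on-write
    print(cache.read(a, "p1")["sales"])   # 110 -- A reads its private block first
    print(cache.read(b, "p1")["sales"])   # 100 -- B still sees the original data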
