Patent No.: 06799253 (Reexamination Certificate; active)
Classification: Electrical computers and digital processing systems: memory – Storage accessing and control – Shared memory area
Filed: 2002-05-30
Issued: 2004-09-28
Examiner: Sparks, Donald (Department: 2187)
U.S. Classes: C711S132000, C711S148000, C707S793000
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of Invention
The present invention relates generally to database systems. More particularly, the present invention relates to a storage management technique which reduces the path lengths associated with allocating and freeing storage by implementing a push-only stack.
2. Description of the Related Art
Within any computing system, the amount of memory available for data storage purposes is typically limited. As such, storage management components or subsystems are often included within the computing system to manage the allocation and deallocation of memory space. Such storage management components provide general-purpose support for arbitrary usages of storage in which the storage management components have substantially no knowledge of the semantics of the data that consumes the storage.
A storage management system may generally be contained within an application server or system, a database system, or an operating system.
FIG. 1 is a diagrammatic representation of a typical system which uses a storage management system. A system 100 includes a client 104, an application server 108, and a database system 110. Client 104, which may include a web browser 106, communicates with application server 108, or a middle tier, which may contain a web server (not shown). When client 104 communicates with application server 108, application server 108 may access information associated with database system 110. Typically, client 104, application server 108, and database system 110 are in communication over a network.
While client 104 and application server 108 may each have an associated operating system (not shown) within which a general-purpose storage management system may be contained, such storage management systems may allow storage to be consumed within client 104 and application server 108. Typically, the performance of such storage management systems may greatly affect the overall performance of their associated systems, i.e., client 104 and application server 108.
For procedural computing languages such as the C language, storage is typically allocated dynamically for data that is not understood at compile time, e.g., through calls to an operating system service or an application programming interface. The dynamically allocated storage may be allocated from application address space, which may be part of a run-time heap, as will be appreciated by those skilled in the art.
FIG. 2 is a diagrammatic representation of an application address space. An application address space 200, which may be managed by an operating system, is typically contained within virtual storage and is partitioned. Virtual storage is generally a resource that is available to a process at run-time, and is partially located in real storage, e.g., random access memory (RAM), and partially located in auxiliary storage, e.g., disk space. Executable code 204, which includes threads 220, may be included in application address space 200. Threads 220 are generally associated with concurrent, independent tasks 224, or units of work, each with their own storage requirements in a storage heap 208. As will be appreciated by those skilled in the art, dynamic storage allocation occurs in storage heap 208. Application address space 200 also includes stack storage 212 and static storage 216.
When storage heap 208 fills up, i.e., when it is no longer possible to allocate storage in storage heap 208, then a garbage collection process is typically performed to free space within storage heap 208. In order for garbage collection to occur, tasks 224 include accounting information relating to garbage collection processes.
In a typical system, an average of approximately 100 storage allocations or deallocations per second may be considered a relatively low rate. However, at a cost of approximately five hundred to approximately one thousand instructions per storage allocation or deallocation, the number of instructions executed each second on storage allocations and deallocations may be substantial. That is, the path length associated with each storage allocation or deallocation may be in the range of approximately five hundred to approximately one thousand instructions. As such, when an operating system is required to grow virtual storage through additional page and segment table entries, the number of instructions to be executed may make such a process relatively expensive. In high volume transaction processing environments where path length is critical, the high number of instructions to be executed may significantly degrade the overall performance of a system. By way of example, the performance associated with Java Database Connectivity (JDBC) systems such as the Oracle JDBC Driver, available commercially from Oracle Corporation of Redwood Shores, Calif., may be compromised when the number of allocations or deallocations per unit of time is substantial.
Reducing the path length associated with storage allocations and deallocations may improve the overall performance of a system, particularly when the system is a high volume transaction processing system. One approach to reducing the path length associated with storage allocations and deallocations may include implementing an overall storage pool system.
FIG. 3 is a diagrammatic representation of an overall storage pool system. An overall storage pool system 300, which is located in virtual memory, includes storage pools 308. A first storage pool 308a is arranged to store data of a first type and, hence, has semantics which support data of the first type. Similarly, a second storage pool 308b is arranged to store data of a second type, and has semantics which support data of the second type.
By providing different storage pools, the number of machine instructions for each storage type may be substantially optimized with respect to an appropriate storage pool 308. For example, if it is known that data of a first type is to be stored within first storage pool 308a, then the number of instructions needed to store data of the first type may be optimized accordingly. When data is needed dynamically by code 304, a storage management algorithm may access storage pools 308 to retrieve the data.
Storage pools 308 may often be of a fixed size, i.e., storage pool 308a is often of substantially the same size as storage pool 308b, although the size of individual storage pools 308 may differ based upon the size of the data items to be stored. The use of storage pools 308 of a fixed size is generally inefficient, as one storage pool 308 may be completely full while another storage pool 308 may be relatively empty. Once a storage pool 308 for a data type is full, e.g., when storage pool 308a for a first data type is full, another storage pool 308 is typically allocated to store data of the first type. Allocating an additional storage pool 308 may be relatively expensive. In addition, allocating an additional storage pool 308 while other storage pools 308, such as storage pool 308b for a second data type, remain relatively unused is generally a waste of storage.
At least some of the instructions associated with storage allocation and deallocation are related to tracking, which allows ownership of data to be determined. For instance, allocating storage in procedural languages generally requires that the allocated storage be accounted for by the owner of the storage so that the allocated storage may be deallocated, or freed, at a later time. That is, information relating to the owner of the stored data, the location at which the data is stored, and garbage collection is tracked to enable the owner to later free the used storage space for other uses. The overhead associated with tracking, i.e., maintaining tracking information, generally creates a significant performance penalty.
While the use of overall storage pool system 300 may reduce the number of instructions associated with st
Dinh Ngoc; Oracle Corporation; Ritter Lang & Kaplan LLP; Sparks, Donald
Title: Stochastic scratchpad storage management technique