Caching dynamically allocated objects

Electrical computers and digital processing systems: memory – Address formation – Address mapping

Details

Type: Reexamination Certificate
Status: active
U.S. classification: C711S119000
Patent number: 06446188

ABSTRACT:

BACKGROUND
A. Technical Field
The present invention relates generally to computer memory allocation and management, and more particularly to efficiently managing the dynamic allocation, access, and release of memory used in a computational environment.
B. Background of the Invention
Historically, memory used in a computational environment, such as a computer, has been expensive and of questionable reliability. The general belief was that this memory should be utilized or "packed" as fully as possible. Methods for the efficient use of memory (efficient here in the sense of fully utilized) became standard and were not seriously questioned before this invention, though attempts were made to reduce the performance impact of such usage and to make the operations more deterministic.
U.S. Pat. No. 5,687,368 ("the '368 patent") teaches the conventional view of the methods for efficient memory implementation. The '368 patent addresses a major shortcoming of the prior art, which is loss of computational performance due to the need for memory management, also called housekeeping, to achieve efficient use of memory. The '368 patent teaches the use of a hardware implementation to alleviate the problem of loss of performance in the computational unit. However, the '368 patent does not teach reducing or eliminating housekeeping functions, or mapping a large, sparsely populated logical memory address space onto a smaller, denser physical memory address space, as in this invention. The '368 patent also does not teach making housekeeping functions more deterministic in the way, or to the extent, that the present invention does.
Traditional methods in the prior art, such as the '368 patent, copy data from memory location to memory location in order to compact and "garbage collect" the data. Garbage collection is the term for the processes in a computer that recover previously used memory space once it is no longer in use. Garbage collection also includes re-organizing memory to reduce the unused gaps left within the stored information when memory space is recovered, a condition known as fragmentation. The prior art inherently reduces the performance of the computational unit because of the need to perform these operations and the time they consume. Further, these operations are inherently not substantially deterministic, since the iterative steps required have no easily determinable limit on the number of iterations.
Basic assumptions in the prior art have been that memory should be optimized with respect to the utilization of the memory address space, rather than of the actual memory itself. Reliability was also considered a reason to utilize available memory space as efficiently as possible. As a consequence, the atomic memory management data size was set to small blocks, usually 1024 bytes. Memory management systems (MMS) of the prior art then searched for memory not in use, often down to the individual block, so that memory space could be freed as expeditiously, and to as small a unit size, as possible.
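A sketch of the kind of block-level search such systems perform is given below; the singly linked free list, the first-fit policy, and all names here are illustrative assumptions, not taken from the patent or any particular prior-art system.

```c
#include <stddef.h>

#define BLOCK_BYTES 1024u   /* the small atomic unit described above */

/* Hypothetical free-list node: prior-art managers tracked memory
   down to individual blocks like this one. */
typedef struct free_block {
    struct free_block *next;
    size_t             nblocks;   /* contiguous free blocks here */
} free_block_t;

/* First-fit search: walk the list until a run of free blocks large
   enough for the request is found.  The walk length is unbounded
   in general, which is one source of the non-determinism the text
   describes. */
static free_block_t *find_first_fit(free_block_t *head, size_t need_bytes)
{
    size_t need_blocks = (need_bytes + BLOCK_BYTES - 1) / BLOCK_BYTES;
    free_block_t *cur;

    for (cur = head; cur != NULL; cur = cur->next)
        if (cur->nblocks >= need_blocks)
            return cur;           /* caller splits/claims this run */
    return NULL;                  /* no fit: compaction needed     */
}
```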
The small size of the atomic memory unit often causes small pieces of in-use memory to become interspersed with unused, or "garbage," locations, a process known as "fragmentation" of memory. Since this can create significant problems when accessing streams of data, owing to the necessity of accessing small locations that are not contiguous, a technique known as "compaction" or "defragmentation" has been employed. This requires special commands and routines that are frequently used. In the UNIX operating system environment, when programming in ANSI C, for example, the function calls that directly or indirectly invoke these routines by allocating and releasing dynamic memory are known as "malloc( )", "calloc( )", "realloc( )", and "free( )". Again, these functions and the routines they directly or indirectly invoke require a substantially indefinite number of iterations, and are substantially not deterministic.
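For illustration, a minimal ANSI C fragment using these standard calls follows; the alternating free( ) pattern is contrived purely to leave non-contiguous holes in the heap, i.e., the fragmentation just described.

```c
#include <stdlib.h>

int main(void)
{
    char *blocks[8];
    char *big;
    int  *zeroed;
    int   i;

    /* Allocate eight small blocks; a typical allocator places
       them at adjacent heap addresses. */
    for (i = 0; i < 8; i++)
        blocks[i] = malloc(1024);

    /* Free every other block, leaving in-use blocks interspersed
       with 1 KB holes: fragmentation. */
    for (i = 0; i < 8; i += 2) {
        free(blocks[i]);
        blocks[i] = NULL;
    }

    /* A 4 KB request cannot be satisfied from the holes, even
       though 4 KB of freed space exists in total; the allocator
       must take fresh memory instead. */
    big = malloc(4096);

    /* calloc() and realloc() route through the same machinery. */
    zeroed = calloc(256, sizeof(int));
    zeroed = realloc(zeroed, 512 * sizeof(int));

    free(big);
    free(zeroed);
    for (i = 1; i < 8; i += 2)
        free(blocks[i]);
    return 0;
}
```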
Additionally, to aid the functions above and to better utilize available memory, various concepts such as "relocatable memory" were developed and implemented, allowing more efficient routines for memory management functions such as compaction and defragmentation. Memory management functions using relocatable memory work by copying atomic memory units (objects) from one location in memory to another, so that garbage fragments between valid objects can be combined into larger free memory areas. However, while improving the flexibility of the allocation process, relocatable memory also requires an indefinite number of iterations, and further makes the time required for housekeeping functions substantially non-deterministic. Accordingly, it is desirable to provide a system and method for a dynamic memory manager that overcomes these and other limitations of the prior art.
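A compacting pass of this kind can be sketched as a single sliding copy over an arena of relocatable objects. The object_t layout and the live flag below are assumptions made for the sketch, not structures taken from the patent or the '368 patent.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical relocatable-object header. */
typedef struct {
    size_t size;   /* total bytes of the object, header included */
    int    live;   /* nonzero if still reachable                 */
} object_t;

/* Slide every live object toward the base of the arena so the
   garbage fragments between them coalesce into one free region
   at the end.  Returns the new end of the live data. */
static size_t compact(unsigned char *arena, size_t used)
{
    size_t read = 0, write = 0;

    while (read < used) {
        object_t *obj = (object_t *)(arena + read);
        size_t step = obj->size;

        if (obj->live) {
            if (read != write)        /* relocate the object   */
                memmove(arena + write, arena + read, step);
            write += step;
        }
        read += step;                 /* skip garbage in place */
    }
    return write;   /* everything past here is now free space */
}
```

Note that the number of copies performed depends on how many live objects the arena happens to hold, which is exactly the unbounded, non-deterministic housekeeping cost the passage criticizes.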
Additionally, prior art memory management systems require extensive memory resources. None of the memory management systems in the prior art employ a caching technique. Caching is a process that stores frequently accessed data and programs in high speed memory local (or internal) to a computer processing unit for improved access time, resulting in enhanced system performance. Caching relies on "locality of reference," the statistical probability that if a computer is accessing one area of memory, future accesses will be to nearby addresses. A cache gains much of its performance advantage from the statistical probability that if a computer is accessing one part of an object, future accesses will be to other parts of the same object. Cache memories are classified by the type of association used to access the data (e.g. direct mapped, set associative, or fully associative), the replacement algorithm (e.g. Least Recently Used ("LRU") or Least Frequently Used ("LFU")), and the write algorithm (e.g. write back or write through). Cache memories are typically much smaller than the main system memory. The size of a cache memory, the type of association, and the access statistics of the program(s) executing determine the probability that a piece of data is in the cache when an access to that data occurs. This "hit rate" is a key determinant of system performance.
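As a concrete illustration of the association and hit-rate concepts, here is a minimal direct-mapped cache lookup in C. The line size, line count, and miss handling are arbitrary assumptions for the sketch, not parameters from the patent.

```c
#include <stdint.h>

#define LINE_BYTES 32u            /* bytes per cache line (assumed) */
#define NUM_LINES  256u           /* lines in the cache (assumed)   */

typedef struct {
    uint32_t tag;
    int      valid;
    uint8_t  data[LINE_BYTES];
} cache_line_t;

static cache_line_t cache[NUM_LINES];
static unsigned long hits, accesses;

/* Direct mapped: each address maps to exactly one line, selected
   by the index bits; the remaining high bits form the tag. */
static int cache_lookup(uint32_t addr)
{
    uint32_t index = (addr / LINE_BYTES) % NUM_LINES;
    uint32_t tag   = addr / (LINE_BYTES * NUM_LINES);

    accesses++;
    if (cache[index].valid && cache[index].tag == tag) {
        hits++;
        return 1;                 /* hit: data already cached */
    }
    /* Miss: fetch the line from main memory (omitted here) and
       install it, evicting whatever occupied this index before. */
    cache[index].tag   = tag;
    cache[index].valid = 1;
    return 0;
}

/* Hit rate: the key performance determinant mentioned above. */
static double hit_rate(void)
{
    return accesses ? (double)hits / (double)accesses : 0.0;
}
```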
Accordingly, it is desirable to provide a system and method for dynamic memory management technology in conjunction with caching techniques to reduce on-chip memory requirements for dynamic memory management.
SUMMARY OF THE INVENTION
The present invention overcomes the deficiencies and limitations of the prior art with a novel system and method for dynamic memory management technology. A system for dynamic memory management maps a sparsely populated virtual address space of memory objects to a more densely populated physical address space of fixed-size memory elements for use by a host processor. In one aspect, the system comprises an object cache for caching frequently accessed memory elements and an object manager for managing the memory objects used by the host processor. The object manager may further comprise an address translation table for translating virtual space addresses for a memory object received from the host processor to a physical space address for a memory element, and a management table for storing data associated with the memory objects and memory elements. In one embodiment, the address translation table and the management table are stored in the physical system memory. In another embodiment, the present invention further comprises an address translation table cache for caching the most recently or most frequently used address translation table entries. In yet another embodiment, the present invention further comprises a management table cache for caching the most recently or most frequently used management table entries.
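To make the address translation table cache concrete, here is a minimal sketch; the direct-mapped organization merely approximates the most-recently-used behavior described above, and the entry layout, sizes, and the placeholder table walk are all assumptions, not the patent's design.

```c
#include <stdint.h>

#define ATTC_ENTRIES  64u     /* cache slots (assumed)               */
#define ELEMENT_BYTES 256u    /* fixed memory-element size (assumed) */

/* One cached address translation table (ATT) entry: object ID ->
   physical base address of the object's memory element. */
typedef struct {
    uint32_t object_id;
    uint32_t phys_base;
    int      valid;
} attc_entry_t;

static attc_entry_t attc[ATTC_ENTRIES];

/* Stand-in for walking the translation table held in physical
   system memory; a real implementation would read the table. */
static uint32_t att_fetch_from_memory(uint32_t object_id)
{
    return object_id * ELEMENT_BYTES;   /* placeholder mapping */
}

/* Direct-mapped lookup of a recently used translation, refilling
   the slot from system memory on a miss. */
static uint32_t attc_lookup(uint32_t object_id)
{
    uint32_t slot = object_id % ATTC_ENTRIES;

    if (!attc[slot].valid || attc[slot].object_id != object_id) {
        attc[slot].object_id = object_id;               /* miss */
        attc[slot].phys_base = att_fetch_from_memory(object_id);
        attc[slot].valid     = 1;
    }
    return attc[slot].phys_base;
}
```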
In another aspect, a method for mapping a memory object used by a host processor to a memory element stored in physical memory comprises the steps of receiving a virtual space address for a memory object used by a host processor, translating the virtual space address to a physical space address for a memory element using an address translation table, and accessing the memory element at the physical space address.
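A minimal sketch of the virtual-to-physical translation such a method implies follows. It assumes, hypothetically, that a virtual space address decomposes into an object ID and a byte offset, and that one object's fixed-size elements are physically contiguous; none of these field names or simplifications come from the patent itself.

```c
#include <stdint.h>

#define ELEMENT_BYTES 256u    /* fixed memory-element size (assumed)  */
#define MAX_OBJECTS   4096u   /* translation-table capacity (assumed) */

/* One entry of the (assumed) address translation table: maps an
   object to the physical address of its first memory element. */
typedef struct {
    uint32_t phys_base;       /* physical address of first element */
    int      mapped;          /* entry is in use                   */
} att_entry_t;

static att_entry_t att[MAX_OBJECTS];

/* Translate a virtual space address, treated here as an object ID
   plus an offset within the object, to a physical address.
   Returns 0 on success, -1 if the object has no mapping. */
static int translate(uint32_t object_id, uint32_t offset, uint32_t *phys)
{
    uint32_t element, byte;

    if (object_id >= MAX_OBJECTS || !att[object_id].mapped)
        return -1;

    /* Element index within the object, then the byte within it. */
    element = offset / ELEMENT_BYTES;
    byte    = offset % ELEMENT_BYTES;

    /* Simplification: assumes one object's elements are contiguous
       in physical memory; a fuller table could map each element
       separately. */
    *phys = att[object_id].phys_base + element * ELEMENT_BYTES + byte;
    return 0;
}
```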
