Classification: Data processing: database and file management or data structures – Database design – Data structure types
Type: Reexamination Certificate
Filed: 1999-10-13
Issued: 2003-09-30
Examiner: Bragdon, Reginald G. (Department: 2188)
U.S. Classes: C711S171000, C711S203000
Status: active
Patent Number: 06629111
ABSTRACT:
FIELD OF THE INVENTION
The present invention relates generally to memory allocation. More particularly, the present invention relates to methods and apparatus for allocating portions of pages, thereby conserving physical memory and virtual memory.
BACKGROUND OF THE INVENTION
Computer software designers and programmers often find it desirable to re-use modules of code or data within multiple applications. Moreover, there are a variety of circumstances in which it is desirable to share code, text, or data among multiple processes. A segment of memory that stores this shared code, text, or data is typically referred to as a shared memory segment. When memory is allocated, the memory is associated with a physical address. However, when an application accessing a shared memory segment is running on a virtual memory system, the CPU accesses a virtual address associated with the shared memory segment rather than a physical address.
Virtual address space in virtual memory is associated with an address range that is typically much larger than that of physical memory. The virtual memory address range starts at a base address and ends at an upper boundary address. This virtual memory address range is divided into pages, which may correspond during the execution of an application to various physical addresses. In other words, the same virtual address may be mapped to many different physical addresses over time. In this manner, virtual memory allows for very effective multiprogramming and relieves the user of unnecessarily tight constraints of main memory.
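For concreteness, a virtual address on such a system decomposes into a virtual page number and an in-page offset. The following C fragment is an illustration only; the 32-bit address width and 4K page size are assumptions, not details from the patent:

```c
#include <stdint.h>

/* Illustrative decomposition of a virtual address, assuming 32-bit
 * addresses and 4 KB (2^12-byte) pages. */
#define PAGE_SHIFT 12
#define PAGE_SIZE  (1u << PAGE_SHIFT)

static inline uint32_t vpn_of(uint32_t vaddr)    { return vaddr >> PAGE_SHIFT; }
static inline uint32_t offset_of(uint32_t vaddr) { return vaddr & (PAGE_SIZE - 1); }
```

Only the page number participates in the virtual-to-physical mapping; the offset is the same in both address spaces, which is why the same virtual page can be re-mapped to different physical pages over time without affecting the addresses within it.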
FIG. 1A is a diagram illustrating a prior art implementation of a conventional virtual memory system. During run-time, a shared memory segment is accessed by its virtual address. More particularly, CPU 102 generates a virtual address which is converted to a physical address using a translation look-aside buffer (TLB) 104. Specifically, the virtual page number is mapped by a memory management module 105 to a physical page number using the TLB 104. Thus, the TLB 104 is maintained to translate virtual addresses to physical addresses. In a typical virtual memory system, memory is allocated in units of pages. Thus, each entry 106 in the TLB is required by hardware to correspond to a page. In addition, the number of entries in the TLB is restricted by hardware. For instance, the maximum number of entries in a TLB may be 96.

During the execution of an application 108, the application may access code 110, text 112, and/or data 114 that are grouped within a library 116. Each shared memory segment containing an accessed portion of the library 116 is loaded and stored in one or more pages in physical memory. Once loaded, each page is accessed by a virtual address via a corresponding entry in the TLB 104.
In order to retrieve data from memory, the virtual-to-physical address translation is performed via the TLB 104. As shown in FIG. 1B, each entry 106 in the TLB 104 typically associates a virtual address 118 of a page in virtual memory with a corresponding physical address 120 of a page in physical memory. In addition, a page size 122 may be optionally provided. Moreover, attributes 124 associated with the page are typically specified. For instance, attributes of the page may include caching attributes (e.g., cached, uncached) as well as indicate whether a page can be read, written, and/or executed (e.g., read/write, read/execute, read/write/execute, read only).
A page table (not shown to simplify illustration) is used to translate a virtual address, consisting of a page number and offset, into a physical address, consisting of a frame number and offset. The page number of a virtual address is used to index the page table. Each entry in the TLB 104 represents a page table entry that has been most recently used and, thus, comprises a subset of the complete page table.
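As a sketch of that translation in C, assuming a single-level page table and 4K pages for simplicity (real page tables are commonly multi-level):

```c
#include <stdint.h>

#define PAGE_SHIFT 12                        /* assume 4 KB pages */
#define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)

/* page_table[vpn] holds the physical frame number for virtual page vpn. */
uint32_t translate(const uint32_t *page_table, uint32_t vaddr)
{
    uint32_t vpn    = vaddr >> PAGE_SHIFT;   /* page number indexes the table */
    uint32_t offset = vaddr & PAGE_MASK;     /* offset carries through        */
    uint32_t frame  = page_table[vpn];       /* frame number for that page    */
    return (frame << PAGE_SHIFT) | offset;   /* physical = frame + offset     */
}
```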
When all entries in the TLB have been used and a new entry is needed, an existing entry is overwritten and allocated for use as the new entry. As a result, a subsequent access to the virtual address of the overwritten entry produces a TLB miss. The TLB miss generates a trap, which interrupts execution of the calling application until the application is resumed. The virtual-to-physical mapping associated with the needed page is then loaded into the TLB, and the TLB entry for that page is updated or created. Due to the limited capacity of a TLB, it is typically necessary to limit the number of libraries that are simultaneously accessed in order to reduce the occurrence of TLB misses.
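A hedged sketch of this lookup-and-refill behavior for a software-managed TLB follows; the round-robin eviction policy and the 96-entry limit are illustrative choices, not details from the patent:

```c
#define TLB_ENTRIES 96u              /* example hardware limit cited above */

struct tlb_slot {
    unsigned long vpn;               /* virtual page number   */
    unsigned long pfn;               /* physical frame number */
    int           valid;
};

static struct tlb_slot tlb[TLB_ENTRIES];
static unsigned next_victim;         /* simple round-robin eviction cursor */

/* Look up a virtual page number; on a miss, walk the page table and
 * overwrite an existing entry with the needed mapping. */
unsigned long tlb_lookup(unsigned long vpn,
                         unsigned long (*page_table_walk)(unsigned long))
{
    for (unsigned i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].vpn == vpn)
            return tlb[i].pfn;                    /* TLB hit */

    /* TLB miss: the trap handler loads the mapping from the page table. */
    unsigned long pfn = page_table_walk(vpn);
    struct tlb_slot *victim = &tlb[next_victim];
    next_victim = (next_victim + 1u) % TLB_ENTRIES;
    victim->vpn   = vpn;
    victim->pfn   = pfn;
    victim->valid = 1;
    return pfn;
}
```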
In addition to the limited size of the TLB, the size of a page is architecture specific. In the past, these page sizes were relatively small (e.g., on the order of 512 bytes). However, in newer systems, the page sizes are typically much larger. For instance, page sizes may range from 4K to 16M. Since the TLB requires that virtual addresses be mapped to physical addresses in units of pages, the memory for a new memory segment must also be allocated in units of pages. In other words, it is not possible to allocate a portion of a page. Thus, while the allocation of memory in units of pages makes the task of memory page management more scalable for very large memory systems, it also tends to contribute to wasted memory when an object (e.g., library portion, code, data, text) stored in memory occupies much less than a single page or only a small portion of one of the pages allocated for its storage. This is particularly problematic in embedded systems, where memory resources are limited, as will be shown and described with reference to FIG. 2.
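In effect, every request is rounded up to a whole number of pages. A minimal C illustration of that rounding, assuming a 4K page size:

```c
#include <stddef.h>

#define PAGE_SIZE 4096u                     /* example 4 KB page */

/* Number of whole pages a conventional page-granular allocator
 * must reserve for a request of the given size. */
static size_t pages_needed(size_t bytes)
{
    return (bytes + PAGE_SIZE - 1) / PAGE_SIZE;
}
/* pages_needed(128) == 1: even a 128-byte object consumes a full page. */
```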
FIG. 2 is a diagram illustrating the allocation of memory when a library is loaded according to conventional methods. As shown, a memory 202 stores a first application 204 and a second application 206. During execution, the first application 204 accesses a first library 208 such as a dynamically loaded library (DLL), “LIBINFRA”, and the second application 206 accesses a second library 210, “LIBC”. When the first library 208 is loaded into memory, a new page 212 is allocated. Even where the first library 208 requires only a small portion 214 of the newly allocated page 212, the entire page is allocated and dedicated in its entirety to the first library 208. Similarly, when the second library 210 is loaded into the memory 202, a second page 216 is allocated, even where the second library 210 occupies only a small portion 218 of the second page 216. As one example, a library may occupy only 128 bytes of a 4K page. Due to the limitations of current memory allocation schemes, the remaining memory within this page would remain unused. As another example, a library may occupy 3.05 pages. Since 4 pages must be allocated according to current memory allocation schemes, almost an entire page is wasted. Thus, allocation of memory for shared memory segments (e.g., libraries) according to conventional systems results in the wasting of a substantial amount of memory. As shown, this unnecessary memory consumption is particularly problematic where the page sizes are much larger than the libraries being stored in memory.
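The waste in both examples can be checked directly; the short program below assumes a 4K page, as in the examples above:

```c
#include <stdio.h>

#define PAGE_SIZE 4096u

int main(void)
{
    /* Example 1: a 128-byte library stored in one 4 KB page. */
    unsigned wasted1 = PAGE_SIZE - 128u;             /* 3968 bytes unused */

    /* Example 2: a library occupying 3.05 pages is given 4 pages. */
    unsigned used2   = (unsigned)(3.05 * PAGE_SIZE); /* 12492 bytes used  */
    unsigned wasted2 = 4u * PAGE_SIZE - used2;       /* 3892 bytes, about
                                                        0.95 of a page    */
    printf("wasted: %u bytes and %u bytes\n", wasted1, wasted2);
    return 0;
}
```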
In view of the above, it would be desirable to optimize the use of available memory in a system such as an embedded system. Moreover, it would be beneficial if the probability of a cache miss could be reduced without limiting the simultaneous access of multiple libraries, thereby maximizing the number of shared memory segments that can be simultaneously accessed.
SUMMARY OF THE INVENTION
An invention is described herein that provides methods and apparatus for allocating memory. According to one embodiment, this is accomplished, in part, through the use of a memory manager that accesses a memory segment list of a plurality of memory segments. The memory segment list and memory manager are designed so that regions of allocated memory pages that might normally go unused may actually be used to store data or other information. In this manner, physical
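A minimal, hypothetical sketch of the kind of sub-page reuse the summary describes follows: a memory manager keeps a list of partially used pages (the “memory segment list”) and satisfies new requests from their unused regions before allocating fresh pages. The names and structure below are illustrative assumptions, not the patent's claimed implementation, and alignment handling is omitted:

```c
#include <stdlib.h>

#define PAGE_SIZE 4096u

/* One node of an illustrative memory segment list: an allocated page
 * plus a watermark recording how much of it has been handed out. */
struct segment {
    char           *page;   /* base address of the page       */
    size_t          used;   /* bytes already consumed from it */
    struct segment *next;
};

static struct segment *segments;    /* head of the memory segment list */

/* Satisfy a small request from the unused region of an existing page
 * when possible; otherwise allocate a fresh page and add it to the list. */
void *subpage_alloc(size_t bytes)
{
    if (bytes == 0 || bytes > PAGE_SIZE)
        return NULL;                        /* keep the sketch simple */

    for (struct segment *s = segments; s != NULL; s = s->next)
        if (PAGE_SIZE - s->used >= bytes) {
            void *p = s->page + s->used;    /* reuse the unused region */
            s->used += bytes;
            return p;
        }

    struct segment *s = malloc(sizeof *s);
    if (s == NULL)
        return NULL;
    s->page = malloc(PAGE_SIZE);            /* stands in for a
                                               page-granular allocation */
    if (s->page == NULL) { free(s); return NULL; }
    s->used = bytes;
    s->next = segments;
    segments = s;
    return s->page;
}
```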
Inventors: Belair, Stephen P.; Kathail, Pradeep K.; Stine, Arthur B.
Attorney/Agent: Beyer Weaver & Thomas LLP
Examiner: Bragdon, Reginald G.
Assignee: Cisco Technology Inc.