Address pipelined stack caching method
Patent number: 06289418
Filed: 1997-03-31
Issued: 2001-09-11
Examiner: Kim, Matthew (Department: 2186)
Type: Reexamination Certificate (active)
Classification: Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
Cross-reference classes: C711S169000, C711S220000
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to computer systems and, in particular, to caching for stack-based memory architectures.
2. Discussion of Related Art
A typical computing system includes a processing unit and a memory unit. Most computing systems use random access memory architectures for the memory unit. Typically, fast memory circuits cost significantly more than slower memory circuits. Therefore, most memory units include a small but fast memory buffer called a cache and a slower main memory buffer. Various caching architectures for random access memory are well known in the art.
However, some computing systems use a stack architecture for the memory unit. A classical stack memory unit uses a last-in, first-out (LIFO) access model. Conceptually, new data entering a stack memory unit is placed on top of the existing data, i.e., in the next available memory location. If data is requested from the stack, the last piece of data placed “on top of” the stack comes out first. For certain applications, stack-based memory architectures provide several advantages over random access memory architectures. For example, a stack memory architecture is well suited to a calculator using reverse Polish notation (RPN).
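For illustration only (this sketch is not part of the patent text), the LIFO access model is the familiar stack abstraction; a few lines of Java show the access order:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal illustration of last-in, first-out (LIFO) access.
public class LifoDemo {
    public static void main(String[] args) {
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(1);   // 1 goes on top
        stack.push(2);   // 2 goes on top of 1
        stack.push(3);   // 3 goes on top of 2
        System.out.println(stack.pop()); // 3: the last word in comes out first
        System.out.println(stack.pop()); // 2
    }
}
```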
Like random access memory based computing systems, many stack-based computing systems, including those implementing the JAVA virtual machine, use relatively slow memory devices to store the stack. In general, adding a cache in front of slow memory devices increases overall memory performance only if the vast majority of memory requests result in cache hits, i.e., the requested memory address is within the cache. Conventional caches are designed for random access memory architectures and do not perform well with stack-based memory architectures. Therefore, a caching method and a caching apparatus targeted at stack-based memory architectures are desirable.
SUMMARY OF THE INVENTION
Accordingly, the present invention provides a stack management unit including a stack cache to accelerate data retrieval from a stack and data storage into the stack. In one embodiment, the stack management unit includes a stack cache, a dribble manager unit, and a stack control unit. The dribble manager unit maintains a cached portion of the stack, typically the top portion, in the stack cache. Specifically, when the stack-based computing system is pushing data onto the stack and the stack cache is almost full, the dribble manager unit transfers data from the bottom of the stack cache to the stack. When the stack-based computing system is popping data off the stack and the stack cache is becoming empty, the dribble manager unit transfers data from the stack to the bottom of the stack cache.
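As a rough behavioral sketch of the dribbling policy just described (class and field names here are invented; the patent describes a hardware unit, which is only modeled in software):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Keeps the top of the stack in a small fast buffer and "dribbles"
// words to and from slow memory as the buffer nears full or empty.
class DribbleManagerSketch {
    static final int CACHE_CAPACITY = 8;                   // stack cache size (assumed)
    final Deque<Integer> stackCache = new ArrayDeque<>();  // fast buffer: top of the stack
    final Deque<Integer> slowMemory = new ArrayDeque<>();  // rest of the stack in slow memory

    void push(int word) {
        if (stackCache.size() == CACHE_CAPACITY) {
            // Cache almost full: spill the bottom-most cached word to memory.
            slowMemory.push(stackCache.removeLast());
        }
        stackCache.push(word);
    }

    int pop() {
        if (stackCache.isEmpty() && !slowMemory.isEmpty()) {
            // Cache becoming empty: fill the bottom of the cache from memory.
            stackCache.addLast(slowMemory.pop());
        }
        return stackCache.pop();
    }
}
```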
The stack cache includes a stack cache memory circuit, one or more read ports, and one or more write ports. The stack cache memory circuit contains a plurality of memory locations, each of which can contain one data word. In one embodiment, the stack cache memory circuit is a register file configured with a circular buffer memory architecture, in which the registers are addressed using modulo addressing. Typically, an optop pointer defines and points to the first free memory location in the stack cache memory circuit, and a bottom pointer defines and points to the bottom memory location in the stack cache memory circuit. As data words are pushed onto or popped off the stack, the optop pointer is incremented or decremented, respectively. Similarly, as data words are spilled or filled between the stack cache memory circuit and the stack, the bottom pointer is incremented or decremented, respectively.
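The pointer behavior described above can be sketched as follows (the register count and method names are illustrative assumptions, not taken from the patent):

```java
// Sketch of a register file used as a circular buffer with modulo
// addressing, managed by an optop pointer and a bottom pointer.
class CircularStackCacheSketch {
    static final int SIZE = 64;              // number of registers (assumed)
    final int[] registers = new int[SIZE];
    int optop = 0;    // first free location above the cached top of the stack
    int bottom = 0;   // bottom-most valid location in the stack cache

    int index(int pointer) {                 // modulo addressing into the register file
        return Math.floorMod(pointer, SIZE);
    }

    void push(int word) {                    // push: write at optop, then increment it
        registers[index(optop)] = word;
        optop++;
    }

    int pop() {                              // pop: decrement optop, then read
        optop--;
        return registers[index(optop)];
    }

    int spillBottomWord() {                  // spill: the bottom word leaves the cache,
        int word = registers[index(bottom)]; // so the bottom pointer is incremented
        bottom++;
        return word;
    }

    void fillBottomWord(int word) {          // fill: a word re-enters below the cached
        bottom--;                            // region, so the bottom pointer is decremented
        registers[index(bottom)] = word;
    }
}
```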
Some embodiments of the stack management unit include an overflow/underflow unit. The overflow/underflow unit detects and resolves overflow conditions, i.e., when the number of used data words required in the stack cache exceeds an overflow threshold or the capacity of the stack cache, and underflow conditions, i.e., when the number of used data words in the stack cache appears to be negative. If an overflow occurs, the overflow/underflow unit suspends operation of the stack cache and causes the spill control unit to store the valid data words in the slow memory unit or a data cache unit. Typically, overflows and underflows are caused by a large change in the value of the optop pointer or by many frequent changes to that value. Therefore, some embodiments of the overflow/underflow unit maintain the old value of the optop pointer in an old optop register to determine the amount of valid data in the stack cache after an overflow. After the valid data in the stack cache are spilled to the stack, the overflow/underflow unit equates the cache bottom pointer to the optop pointer. The overflow/underflow unit then resumes normal operation of the stack cache.
If an underflow condition occurs, the overflow/underflow unit suspends operation of the stack cache. In most underflow conditions, the data in the stack cache are no longer valid and are not saved. Therefore, the overflow/underflow unit equates the cache bottom pointer to the optop pointer and resumes operation of the stack cache. However, for underflows caused by context switches, the data in the stack cache must be saved. Therefore, on context-switched underflows, the overflow/underflow unit suspends operation of the stack cache and causes the spill control unit to store the valid data words in the stack. After the valid data in the stack cache are saved, the overflow/underflow unit equates the cache bottom pointer to the optop pointer.
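A combined sketch of the overflow and underflow resolution in the two paragraphs above (again a software model with invented names; the patent describes hardware):

```java
// Models overflow/underflow resolution for a circular stack cache:
// spill any data that must survive, then empty the cache by setting
// the cache bottom pointer equal to the optop pointer.
class OverflowUnderflowSketch {
    static final int SIZE = 64;              // stack cache capacity (assumed)
    final int[] registers = new int[SIZE];
    int optop, bottom;                       // cache pointers, as in the circular buffer
    int oldOptop;                            // saved optop: bounds valid data after an overflow

    void resolveOverflow() {
        // Operation is suspended; spill every valid word (bottom up to the
        // saved old optop) to the stack in slow memory, then empty the cache.
        for (int p = bottom; p < oldOptop; p++) {
            spillToStack(registers[Math.floorMod(p, SIZE)]);
        }
        bottom = optop;                      // cache empty; normal operation resumes
    }

    void resolveUnderflow(boolean contextSwitch) {
        if (contextSwitch) {
            // Context-switched underflow: the cached words are live and must be saved.
            for (int p = bottom; p < optop; p++) {
                spillToStack(registers[Math.floorMod(p, SIZE)]);
            }
        }
        bottom = optop;  // in ordinary underflows the stale words are simply dropped
    }

    void spillToStack(int word) { /* one-word write to the stack in slow memory */ }
}
```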
Furthermore, some embodiments of the stack management unit include an address pipeline that allows the spill control unit and the fill control unit to transfer multiple data words, improving the throughput of spill and fill operations. The address pipeline contains an incrementor/decrementor circuit, a first address register, and a second address register. An address multiplexer drives either the output signal of the incrementor/decrementor or the cache bottom pointer to the first address register. The output terminals of the first address register are coupled to the input terminals of the second address register. A stack cache multiplexer drives either the address in the first address register or the address in the second address register to the stack cache. A memory multiplexer drives either the output of the address multiplexer or the address in the first address register to the slow memory unit or a data cache unit of the slow memory unit. Furthermore, the address in the second address register can be used to adjust the value in the cache bottom pointer.
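Because the pipeline is a hardware datapath, its behavior is easiest to see as a cycle-by-cycle register model (a loose sketch; all signal names here are invented):

```java
// Two-stage address pipeline: the address multiplexer feeds the first
// address register, which feeds the second, so two transfer addresses
// are in flight at once during a multi-word spill or fill.
class AddressPipelineSketch {
    int cacheBottom;   // cache bottom pointer
    int addrReg1;      // first address register
    int addrReg2;      // second address register
    int amuxOut;       // output of the address multiplexer

    // One clock edge of an ascending (spill-direction) transfer.
    void step(boolean startFromBottom) {
        // Address mux: the cache bottom pointer, or the incrementor
        // output (previous address + 1), is selected into register 1.
        amuxOut = startFromBottom ? cacheBottom : addrReg1 + 1;
        addrReg2 = addrReg1;   // register 1 shifts into register 2
        addrReg1 = amuxOut;
    }

    int stackCacheAddress(boolean useReg2) {  // stack cache mux: register 1 or 2
        return useReg2 ? addrReg2 : addrReg1;
    }

    int slowMemoryAddress(boolean useMux) {   // memory mux: amux output or register 1
        return useMux ? amuxOut : addrReg1;
    }

    void adjustBottom() {                     // register 2 can update the bottom pointer
        cacheBottom = addrReg2;
    }
}
```

A fill-direction transfer would select the decrementor output (previous address minus 1) instead of the incrementor output.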
The stack management unit also includes a fill control unit and a spill control unit. If the fill control unit detects a fill condition, the fill control unit transfers data from the stack to the stack cache memory circuit. In one embodiment of the stack management unit, a stack cache status circuit, typically a subtractor, calculates the number of used data words in the stack cache memory circuit from the optop pointer and the cache bottom pointer. A fill condition occurs if the number of used memory locations in the stack cache memory circuit is less than a low cache threshold. Typically, the low cache threshold is stored in a programmable register. In embodiments of the stack management unit with an address pipeline, the fill control unit is typically configured to fill multiple data words for each fill condition.
If the spill control unit detects a spill condition, the spill control unit transfers data from the stack cache memory circuit to the stack. In one embodiment, a spill condition occurs if the number of used locations in the stack cache memory circuit is greater than a high cache threshold. Typically, the high cache threshold is stored in a programmable register. In embodiments of the stack management unit with an overflow/underflow unit, the overflow/underflow unit can cause the spill control unit to perform spill operations. Furthermore, in embodiments of the stack management unit with an address pipeline, the spill control unit is typically configured to spill multiple data words for each spill condition.
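Taken together, the status circuit and the two thresholds of the last two paragraphs reduce to a small amount of logic (the threshold values and names below are illustrative assumptions):

```java
// Sketch of the stack cache status circuit: a subtractor derives the
// number of used words from the optop and cache bottom pointers, and
// programmable thresholds trigger fill and spill operations.
class StackCacheStatusSketch {
    int lowCacheThreshold = 16;    // programmable low cache threshold (assumed value)
    int highCacheThreshold = 48;   // programmable high cache threshold (assumed value)

    int usedDataWords(int optop, int cacheBottom) {
        return optop - cacheBottom;                    // the subtractor
    }

    boolean fillCondition(int optop, int cacheBottom) {
        return usedDataWords(optop, cacheBottom) < lowCacheThreshold;   // too empty: fill
    }

    boolean spillCondition(int optop, int cacheBottom) {
        return usedDataWords(optop, cacheBottom) > highCacheThreshold;  // too full: spill
    }
}
```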
Attorneys: Gunnison, Forrest; Gunnison, McKay & Hodgson, L.L.P.
Examiners: Kim, Matthew; Tzeng, Fred F.
Assignee: Sun Microsystems Inc.