System for combining adjacent push/pop stack program...

Electrical computers and digital processing systems: processing – Processing control – Instruction modification based on condition

Reexamination Certificate

Details

U.S. Classification: C712S225000, C712S202000
Status: active
Patent number: 06349383

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates in general to the field of data processing in computers, and more particularly to an apparatus and method for performing double push/pop stack accesses with a single micro instruction.
2. Description of the Related Art
Software programs that execute on a microprocessor consist of macro instructions, which together direct the microprocessor to perform a function. Each instruction directs the microprocessor to perform a specific operation, which is part of the function, such as loading data from memory, storing data in a register, or adding the contents of two registers.
In a desktop computer system, a software program is typically stored on a mass storage device such as a hard disk drive. When the software program is executed, its constituent instructions are copied into a portion of random access memory (RAM). Present day memories in computer systems consist primarily of devices utilizing dynamic RAM (DRAM) technologies.
Early microprocessors fetched instructions and accessed associated data directly from DRAM because the speed of these microprocessors was roughly equivalent to the DRAM speed. In more recent years, however, improvements in microprocessor speed have far outpaced improvements in DRAM speed. Consequently, today's typical processing system contains an additional memory structure known as a cache. The cache is used to temporarily store a subset of the instructions or data that are in DRAM. The cache is much faster than the DRAM memory, but it is also much smaller in size. Access to a memory location whose data is also present in the cache is achieved much faster than having to access the memory location in DRAM memory.
Cache memory is typically located between main memory (i.e., DRAM memory) and the microprocessor. In addition, some microprocessors incorporate cache memory on-chip. Wherever the cache resides, its role is to store a subset of the instructions/data that are to be processed by the microprocessor.
When a processing unit in a microprocessor requests data from memory, the cache unit determines if the requested data is present and valid within the cache. If so, then the cache unit provides the data directly to the processing unit. This is known as a cache hit. If the requested data is not present and valid within the cache, then the requested data must be fetched from main memory (i.e., DRAM) and provided to the processing unit. This is known as a cache miss.
Structurally, a cache consists of a number of cache lines, a typical cache line being 32 bytes in length. Each cache line is associated with, or mapped to, a particular region in main memory. Thus, when a cache miss occurs, the entire cache line is filled; that is, multiple locations in memory are transferred to the cache to completely fill the cache line. This is because a large block of memory can be accessed much faster in a single access operation than by accessing smaller blocks sequentially.
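To make the cache hit/miss check and whole-line fill described in the preceding two paragraphs concrete, here is a minimal C sketch of a direct-mapped cache. The line count, the memory[] backing store, and the function names are illustrative assumptions, not details taken from this patent.

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    /* Illustrative direct-mapped cache model (sizes are assumptions). */
    #define LINE_SIZE 32            /* bytes per cache line */
    #define NUM_LINES 64            /* cache lines in the cache */

    typedef struct {
        bool     valid;
        uint32_t tag;               /* identifies the memory region mapped to this line */
        uint8_t  data[LINE_SIZE];
    } cache_line_t;

    static cache_line_t cache[NUM_LINES];
    static uint8_t memory[1 << 20]; /* stand-in for DRAM main memory */

    /* Return a pointer to the cached copy of 'addr', filling the entire
     * 32-byte line from main memory on a miss.                          */
    static uint8_t *cache_access(uint32_t addr)
    {
        uint32_t line_base = addr & ~(uint32_t)(LINE_SIZE - 1);
        uint32_t index     = (line_base / LINE_SIZE) % NUM_LINES;
        uint32_t tag       = line_base / (LINE_SIZE * NUM_LINES);

        cache_line_t *line = &cache[index];
        if (!line->valid || line->tag != tag) {
            /* Cache miss: transfer the whole line in one block access. */
            memcpy(line->data, &memory[line_base], LINE_SIZE);
            line->valid = true;
            line->tag   = tag;
        }
        /* Cache hit (or a freshly filled line): data is served from the cache. */
        return &line->data[addr - line_base];
    }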
In addition, typical microprocessor operands range in size from one byte to eight bytes. But for a cache to provide the capability to selectively address and transfer individual bytes of data would require the addition of complex and costly hardware to a microprocessor design. To simplify cache designs, a present day processing unit within a microprocessor accesses cache lines in subdivisions called cache sub-lines. Thus, when a processing unit accesses an operand at a given memory address, the entire cache sub-line to which the operand is mapped is accessed by the processing unit; data logic in the processing unit places the operand at its specified location within the cache sub-line. Typical cache sub-lines are eight bytes in length. For instance, an instruction directing access to a 4-byte operand would result in access to the operand's associated 8-byte cache sub-line. Hence, by fixing the size of a cache access to be equal to the size of a cache sub-line, microprocessor designers are able to produce designs which are less complex and costly.
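As a worked example of the sub-line mapping just described, the following C helpers compute which 8-byte sub-line contains a given operand address; the helper names are hypothetical.

    #include <stdint.h>

    #define SUBLINE_SIZE 8   /* 8-byte cache sub-lines, as in the example above */

    /* Start address of the sub-line containing 'addr' (clears the low 3 bits). */
    static uint32_t subline_base(uint32_t addr)
    {
        return addr & ~(uint32_t)(SUBLINE_SIZE - 1);
    }

    /* Byte position of 'addr' within its sub-line. */
    static uint32_t subline_offset(uint32_t addr)
    {
        return addr & (SUBLINE_SIZE - 1);
    }

For example, a 4-byte operand at address 0x1004 maps to the sub-line starting at 0x1000 at byte offset 4, so the full sub-line 0x1000 through 0x1007 is accessed even though only four bytes are needed.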
In general, accessing more data than is strictly required in a single access does not impose a burden on microprocessor performance. Typically, only one cycle of a microprocessor clock is required to access an entire cache sub-line and locate an operand within it. Yet microprocessor designers continue to be challenged to produce designs having increased execution speed without increased power consumption, complexity, or cost.
One approach to improving the overall execution speed of a design is to improve the execution efficiency of particular frequently used instructions. Frequently used instructions are those instructions which are found in significant quantities within a meaningful number of application programs. By improving the performance of frequently used instructions, the overall performance of a microprocessor is notably improved.
Most present day microprocessors provide instructions to store/retrieve data to/from a common data structure known as a stack. These instructions are called stack access instructions. A stack is a data structure occupying a designated location in memory. The stack is used to pass operands between application programs, or between subroutines within a given application program. A stack structure is unique in the sense that locations within the stack are prescribed with reference to a top of stack location, which is maintained by register logic within a microprocessor. Hence, to place operands on the stack or to retrieve operands from the stack, an application need only reference the current top of stack location. When a stack access instruction is executed, a top of stack pointer register in the microprocessor is automatically incremented or decremented to adjust the top of stack accordingly.
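A minimal C model of this stack behavior, assuming 4-byte operands and a stack that grows toward lower addresses (as on x86); the names and sizes are illustrative, not taken from the patent.

    #include <stdint.h>
    #include <string.h>

    static uint8_t  stack_mem[4096];                  /* memory region holding the stack */
    static uint32_t top_of_stack = sizeof(stack_mem); /* empty stack: TOS at the high end */

    /* PUSH: decrement the top of stack, then store the operand there. */
    static void push32(uint32_t value)
    {
        top_of_stack -= 4;
        memcpy(&stack_mem[top_of_stack], &value, 4);
    }

    /* POP: load the operand at the top of stack, then increment the pointer. */
    static uint32_t pop32(void)
    {
        uint32_t value;
        memcpy(&value, &stack_mem[top_of_stack], 4);
        top_of_stack += 4;
        return value;
    }

In both cases the caller only ever references the current top of stack; the pointer adjustment is implicit in the push or pop itself.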
Two frequently used stack access instructions are PUSH and POP. A PUSH instruction directs a microprocessor to store an operand at the current top of stack. A POP instruction directs the microprocessor to retrieve an operand from the current top of stack. The PUSH and POP instructions are the instructions most commonly employed to pass operands between application programs as described above. Moreover, in desktop applications it is highly likely to find sections of a given application program wherein several sequential PUSH or POP instructions are executed. This is because, rather than passing a single operand, desktop applications normally pass several operands on the stack. Furthermore, the architecture for addressing operands within a stack, i.e., with reference to the top of stack, dictates that successive stack accesses will access adjacently located operands in the stack. Consequently, it is not uncommon to observe instances in a given application program where the execution of sequential stack access instructions results in repeated access to a particular cache sub-line. Although two operands may reside within the same cache sub-line, the stack access instructions themselves each prescribe only a single access to a single operand.
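The following small C program works through a hypothetical instance of that pattern: with an 8-byte-aligned top of stack at 0x2000, two back-to-back 4-byte PUSHes store to 0x1FFC and 0x1FF8, and both addresses fall inside the single sub-line that begins at 0x1FF8.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t tos = 0x2000;                        /* hypothetical top of stack */

        uint32_t first  = tos - 4;                    /* first PUSH stores at 0x1FFC */
        uint32_t second = tos - 8;                    /* second PUSH stores at 0x1FF8 */

        uint32_t subline1 = first  & ~(uint32_t)7;    /* 0x1FF8 */
        uint32_t subline2 = second & ~(uint32_t)7;    /* 0x1FF8 */

        printf("same sub-line: %s\n", subline1 == subline2 ? "yes" : "no");
        return 0;
    }

The two PUSH instructions nevertheless prescribe two separate accesses to that one sub-line.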
Repeatedly accessing the same cache sub-line to store/retrieve data prescribed by successive stack access instructions is an inefficient use of microprocessor resources which manifests itself in unnecessary program delays. In particular, the execution of repeated PUSH/POP instructions in a present day microprocessor wastes a great deal of valuable execution time, because it is highly probable that at least two successive PUSH/POP instructions will result in access to the same cache sub-line. One skilled in the art will appreciate that the performance of a microprocessor can be significantly improved by combining successive accesses to the same cache sub-line into a single access.
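Here is a minimal sketch of that combining idea, assuming 4-byte PUSH operands, 8-byte sub-lines, and a 4-byte-aligned stack pointer; it is not the patent's micro-architecture, and the backing store and helper names are invented for illustration.

    #include <stdint.h>
    #include <string.h>

    #define SUBLINE_SIZE 8

    static uint8_t memory[1 << 16];   /* hypothetical backing store */

    /* Stand-in for a cache access of 'len' bytes at 'addr'. */
    static void write_bytes(uint32_t addr, const void *src, uint32_t len)
    {
        memcpy(&memory[addr], src, len);
    }

    /* Push two 4-byte operands; *tos is the top-of-stack pointer.  If both
     * operands fall in the same 8-byte sub-line, issue one combined 8-byte
     * access instead of two 4-byte accesses.                              */
    static void push_pair(uint32_t *tos, uint32_t a, uint32_t b)
    {
        uint32_t addr_a = *tos - 4;   /* where the first PUSH stores a  */
        uint32_t addr_b = *tos - 8;   /* where the second PUSH stores b */

        if (addr_a / SUBLINE_SIZE == addr_b / SUBLINE_SIZE) {
            /* Same sub-line (addr_b is the sub-line base): merge both
             * operands into a single 8-byte write.                        */
            uint8_t combined[SUBLINE_SIZE];
            memcpy(&combined[0], &b, 4);   /* lower addresses hold b */
            memcpy(&combined[4], &a, 4);
            write_bytes(addr_b, combined, SUBLINE_SIZE);
        } else {
            /* Operands straddle a sub-line boundary: two separate writes. */
            write_bytes(addr_a, &a, 4);
            write_bytes(addr_b, &b, 4);
        }
        *tos -= 8;                    /* adjust TOS for both PUSHes */
    }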
Therefore, what is needed is a microprocessor capable of combining the two access operations prescribed by two successive stack access instructions into a single access to a cache sub-line that transfers both operands.
In addition, what is needed is an apparatus in a microprocessor to combine two
