Linear address extension and mapping to physical memory...

Electrical computers and digital processing systems: processing – Instruction decoding – Decoding instruction to generate an address of a microroutine

Reexamination Certificate


Classification: C712S208000, C711S205000, C711S206000, C711S207000, C711S208000, C711S209000

Status: active

Patent number: 06349380

FIELD
The present invention relates to microprocessor and computer systems, and more particularly, to virtual memory systems with extended linear address generation and translation.
BACKGROUND
Most microprocessors make use of virtual or demand-paged memory schemes, where sections of a program's execution environment are mapped into physical memory as needed. Virtual memory schemes allow the use of physical memory much smaller in size than the linear address space of the microprocessor, and also provide a mechanism for memory protection so that multiple tasks (programs) sharing the same physical memory do not adversely interfere with each other.
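As a minimal sketch of the demand-paging idea just described (not taken from the patent), the fragment below models a flat page table whose entries record, for each virtual page, whether the page is resident and which physical frame holds it; a non-present page or a protection violation invokes a fault handler that can map the page in on demand. The constants and the page_fault callback are illustrative assumptions.

    #include <stdint.h>
    #include <stdbool.h>

    #define PAGE_SHIFT  12u                  /* 4 KB pages                      */
    #define PAGE_SIZE   (1u << PAGE_SHIFT)
    #define NUM_VPAGES  (1u << 20)           /* 32-bit linear space / 4 KB      */

    /* One page-table entry: frame number plus present/writable bits.
       Real hardware packs these bits differently; this is only a model. */
    typedef struct {
        unsigned frame    : 20;   /* physical page frame number  */
        unsigned present  : 1;    /* page is resident in memory  */
        unsigned writable : 1;    /* simple protection bit       */
    } pte_t;

    static pte_t page_table[NUM_VPAGES];     /* hypothetical flat table */

    /* Translate a 32-bit linear address, faulting the page in if needed. */
    uint32_t translate(uint32_t linear, bool is_write,
                       void (*page_fault)(uint32_t vpn))
    {
        uint32_t vpn    = linear >> PAGE_SHIFT;
        uint32_t offset = linear & (PAGE_SIZE - 1);

        if (!page_table[vpn].present)        /* not mapped yet: demand-load */
            page_fault(vpn);
        if (is_write && !page_table[vpn].writable)
            page_fault(vpn);                 /* protection violation        */

        return ((uint32_t)page_table[vpn].frame << PAGE_SHIFT) | offset;
    }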
Physical memory is part of a memory hierarchy system, which may be illustrated as part of a computer system shown in FIG. 1. Microprocessor 102 has a first level cache comprising instruction cache 104 and data cache 106. Microprocessor 102 communicates with unified second level cache 108 via backside bus 110. Second level cache 108 contains both instructions and data, and may physically reside on the chip die of microprocessor 102. Caches 104 and 106 comprise the first level of the memory hierarchy, and cache 108 comprises the second level.
The third level of the memory hierarchy for the exemplary computer system of FIG. 1 is indicated by memory 112. Microprocessor 102 communicates with memory 112 via host processor (front side) bus 114 and chipset 116. Chipset 116 may also provide graphics bus 118 for communication with graphics processor 120, and serves as a bridge to other busses, such as peripheral component bus 122. Secondary storage, such as disk unit 124, provides yet another level in the memory hierarchy.
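As a rough illustration of the hierarchy just described (first-level caches 104 and 106, second-level cache 108, memory 112, disk unit 124), the sketch below models an access that falls through successive levels; the level names track FIG. 1, but the cycle counts are placeholder assumptions, not figures from the patent.

    #include <stdio.h>

    /* Illustrative levels of the memory hierarchy in FIG. 1; the
       latencies are placeholder values, not measured numbers.       */
    typedef struct {
        const char *name;
        unsigned    latency_cycles;
    } level_t;

    static const level_t hierarchy[] = {
        { "L1 instruction/data cache (104/106)", 3       },
        { "L2 unified cache (108)",              12      },
        { "main memory (112)",                   150     },
        { "disk unit (124)",                     5000000 },
    };

    /* Walk the hierarchy until some level hits; hit_level (0..3) is the
       index of the first level that holds the data in this toy model.  */
    unsigned access_cost(unsigned hit_level)
    {
        unsigned cycles = 0;
        for (unsigned i = 0; i <= hit_level; ++i)
            cycles += hierarchy[i].latency_cycles;   /* miss penalties add up */
        return cycles;
    }

    int main(void)
    {
        printf("hit in L2 costs about %u cycles\n", access_cost(1));
        return 0;
    }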
FIG. 2 illustrates some of the functional units within microprocessor 102, including the instruction and data caches. In microprocessor 102, fetch unit 202 fetches instructions from instruction cache 104, and decode unit 206 decodes these instructions. For a CISC (Complex Instruction Set Computer) architecture, decode unit 206 decodes a complex instruction into one or more micro-instructions. Usually, these micro-instructions define a load-store type architecture, so that micro-instructions involving memory operations are simple load or store operations. However, the present invention may be practiced for other architectures, such as, for example, RISC (Reduced Instruction Set Computer) or VLIW (Very Long Instruction Word) architectures.
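As a sketch of the decode step described above, the fragment below breaks a hypothetical CISC register-memory instruction such as ADD r_dst, [r_base + disp] into a load micro-instruction followed by an ALU micro-instruction; the structures, opcode names, and the temporary register are invented for illustration.

    #include <stdint.h>

    /* Hypothetical micro-instruction format for a load-store machine. */
    typedef enum { UOP_LOAD, UOP_STORE, UOP_ADD } uop_kind;

    typedef struct {
        uop_kind kind;
        int      dst;      /* destination register          */
        int      src1;     /* first source register         */
        int      src2;     /* second source reg (or unused) */
        int32_t  disp;     /* displacement for loads/stores */
    } uop_t;

    /* Decode "ADD r_dst, [r_base + disp]" into two micro-instructions:
       a load into a temporary register, then a register-register add. */
    int decode_add_mem(int r_dst, int r_base, int32_t disp,
                       int r_tmp, uop_t out[2])
    {
        out[0] = (uop_t){ UOP_LOAD, r_tmp, r_base, -1, disp };  /* tmp <- mem[base+disp] */
        out[1] = (uop_t){ UOP_ADD,  r_dst, r_dst,  r_tmp, 0 };  /* dst <- dst + tmp      */
        return 2;   /* number of micro-instructions produced */
    }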
For a RISC architecture, instructions are not decoded into micro-instructions. Because the present invention may be practiced for RISC architectures as well as CISC architectures, we shall not make a distinction between instructions and micro-instructions unless otherwise stated, and will simply refer to these as instructions.
Most instructions operate on several source operands and generate results. They name, either explicitly or through an indirection, the source and destination locations where values are read from or written to. A name may be either a logical (architectural) register or a location in memory. Renaming logical registers as physical registers may allow instructions to be executed out of order. In FIG. 2, register renaming is performed by renamer unit 208, where RAT (Register Allocation Table) 210 stores current mappings between logical registers and physical registers. The physical registers are indicated by register file 212.
Every logical register has a mapping to a physical register in physical register file 212, where the mapping is stored in RAT 210 as an entry. An entry in RAT 210 is indexed by a logical register and contains a pointer to a physical register in physical register file 212. Some registers in physical register file 212 may be dedicated for integers whereas others may be dedicated for floating point numbers, but for simplicity these distinctions are not indicated in FIG. 2.
During renaming of an instruction, the current RAT provides the required mapping for renaming the source logical register(s) of the instruction, and a new mapping is created for the destination logical register of the instruction. This new mapping evicts the old mapping in the RAT.
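A minimal sketch of the renaming step just described, assuming a flat RAT indexed by logical register number and a free list of physical registers (the internal organization of RAT 210 is not specified at this level of detail in the text; the sizes and names below are illustrative):

    #define NUM_LOGICAL   16
    #define NUM_PHYSICAL  64

    static int rat[NUM_LOGICAL];                  /* logical -> physical (RAT 210) */
    static int free_list[NUM_PHYSICAL], free_top; /* available physical registers  */

    typedef struct { int src1, src2, dst; } renamed_t;

    /* Rename one instruction "dst <- op(src1, src2)": sources are read
       through the current RAT, then a fresh physical register is allocated
       for the destination, evicting the old mapping.  The sketch assumes a
       free physical register is available; the evicted mapping is typically
       released once the instruction retires.                               */
    renamed_t rename(int log_src1, int log_src2, int log_dst, int *evicted)
    {
        renamed_t r;
        r.src1 = rat[log_src1];              /* current mappings for sources     */
        r.src2 = rat[log_src2];

        *evicted     = rat[log_dst];         /* old mapping, freed later         */
        r.dst        = free_list[--free_top];/* new physical destination         */
        rat[log_dst] = r.dst;                /* new mapping replaces the old one */
        return r;
    }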
Renamed instructions are placed in instruction window buffer 216. All instructions “in-flight” have an entry in instruction window buffer 216, which operates as a circular buffer. Instruction window buffer 216 allows for memory disambiguation so that memory references are made correctly, and allows for instruction retirement in original program order. (For CISC architectures, a complex instruction is retired when all micro-instructions making up the complex instruction are retired together.)
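The circular-buffer behavior attributed to instruction window buffer 216 can be sketched as follows, with allocation at the tail when an instruction is renamed and in-order retirement from the head; the entry fields and capacity are assumptions for illustration only.

    #include <stdbool.h>

    #define WINDOW_SIZE 64          /* illustrative capacity */

    typedef struct {
        bool valid;                 /* entry is occupied              */
        bool completed;             /* execution finished, may retire */
        int  dst_physical;          /* renamed destination register   */
    } window_entry_t;

    static window_entry_t window[WINDOW_SIZE];
    static unsigned head, tail, count;   /* head = oldest in-flight instruction */

    /* Allocate an entry at the tail when an instruction is renamed. */
    int window_allocate(int dst_physical)
    {
        if (count == WINDOW_SIZE) return -1;          /* window full: stall  */
        int idx = tail;
        window[idx] = (window_entry_t){ true, false, dst_physical };
        tail = (tail + 1) % WINDOW_SIZE;              /* circular wraparound */
        count++;
        return idx;
    }

    /* Retire from the head only, preserving original program order. */
    bool window_retire(void)
    {
        if (count == 0 || !window[head].completed) return false;
        window[head].valid = false;
        head = (head + 1) % WINDOW_SIZE;
        count--;
        return true;
    }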
For an instruction that writes its result to a memory location, data cache 106 (part of the memory hierarchy) is updated upon instruction retirement. For an instruction that writes its result to a logical register, no write need be done upon retirement because there are no registers dedicated as logical registers. (Physical register file 212 has the result of the retiring instruction in the physical register to which the destination logical register was mapped when the instruction was renamed.)
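The retirement rule in this paragraph can be restated as a small sketch: stores update the data cache when they retire, while register-writing instructions need no copy because the RAT already points at the physical register holding the result. The names below, including the dcache_write stand-in for data cache 106, are illustrative.

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        bool     is_store;      /* writes a memory location       */
        uint32_t store_addr;    /* physical address for the store */
        uint32_t store_data;
    } retiring_t;

    /* Stand-in for a write into data cache 106 (hypothetical hook). */
    static void dcache_write(uint32_t addr, uint32_t data) { (void)addr; (void)data; }

    /* Commit architectural state at retirement. */
    void retire(const retiring_t *inst)
    {
        if (inst->is_store) {
            /* Memory results become visible only now, at retirement. */
            dcache_write(inst->store_addr, inst->store_data);
        }
        /* Register results need no action: the physical register the
           destination was mapped to at rename time already holds them. */
    }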
Scheduler 218 schedules instructions to execution units 220 for execution. For simplicity, only memory execution unit 224 is explicitly indicated in execution units 220. A load or store instruction is dispatched by scheduler 218 to AGU (Address Generation Unit) 222 for computation of a linear address, and memory execution unit 224 translates the linear address into a physical address and executes the load or store instruction. Memory execution unit 224 may send data to or receive data from a forwarding buffer (not shown) rather than data cache 106, where the forwarding buffer stores objects that may eventually be written to data cache 106 upon instruction retirement. The scheduling function performed by scheduler 218 may, for example, be realized by reservation stations (not shown) implementing Tomasulo's algorithm (or variations thereof) or by a scoreboard. Execution units 220 may retrieve data from or send data to register file 212, depending upon the instruction to be executed.
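A sketch of the address computation AGU 222 might perform for a load or store, using the conventional base + index x scale + displacement form of effective addressing; the patent text does not spell out the formula here, so the structure and function below are illustrative assumptions.

    #include <stdint.h>

    /* Operands of one load/store micro-instruction, as the AGU might see them. */
    typedef struct {
        uint32_t base;      /* value of the base register               */
        uint32_t index;     /* value of the index register (or 0)       */
        uint8_t  scale;     /* 1, 2, 4 or 8                             */
        int32_t  disp;      /* signed displacement from the instruction */
    } agu_operands_t;

    /* Compute the address the memory execution unit will go on to translate.
       (Segment-based logical-to-linear translation is sketched separately
       at the end of this section.)                                          */
    uint32_t agu_effective_address(const agu_operands_t *op)
    {
        return op->base + op->index * op->scale + (uint32_t)op->disp;
    }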
In other embodiments of the present invention, the information content contained in the data structures of physical register file 212 and instruction window buffer 216 may be realized by different functional units. For example, a re-order buffer may replace instruction window buffer 216 and physical register file 212, so that results are stored in the re-order buffer and, in addition, registers in a register file are dedicated as logical registers. For this type of embodiment, the result of an instruction that writes to a logical register is written to a logical register upon instruction retirement.
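For contrast with the physical-register-file scheme above, the re-order-buffer embodiment can be sketched like this, with the result held in the ROB entry and copied to a dedicated logical register at retirement; the field names and sizes are assumptions.

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_LOGICAL 16

    /* Dedicated architectural (logical) registers in this embodiment. */
    static uint32_t arch_regs[NUM_LOGICAL];

    /* One re-order buffer entry holding the instruction's result. */
    typedef struct {
        bool     completed;
        int      dst_logical;   /* which logical register to update, or -1 */
        uint32_t result;        /* value produced by execution             */
    } rob_entry_t;

    /* At retirement the result leaves the ROB and becomes architectural state. */
    bool rob_retire(rob_entry_t *e)
    {
        if (!e->completed) return false;
        if (e->dst_logical >= 0)
            arch_regs[e->dst_logical] = e->result;   /* write-back happens now */
        return true;
    }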
With most modern computer systems, a microprocessor refers to a memory location by generating a linear address, but an object is retrieved from a specific memory location by providing its physical address on an address bus, such as bus 114 in FIG. 1. Linear addresses may be the same as physical addresses, in which case address translation is not required. However, usually a virtual memory scheme is employed in which linear addresses are translated into physical addresses. In this case, a linear address may also be referred to as a virtual address. The linear address space is the set of all linear addresses generated by a microprocessor, whereas the physical address space is the set of all physical addresses.
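The linear-to-physical translation mentioned here is typically performed by a page-table walk. The sketch below follows the classic two-level 32-bit layout with 4 KB pages (10-bit directory index, 10-bit table index, 12-bit offset); it is a simplified model of conventional paging, not the extended mechanism claimed by the patent, and read_phys32 is a hypothetical memory accessor.

    #include <stdint.h>

    /* Simplified two-level page-table walk for a 32-bit linear address:
       bits 31..22 index the page directory, 21..12 the page table,
       11..0 are the offset within the 4 KB page.                       */
    #define PTE_PRESENT  0x1u
    #define FRAME_MASK   0xFFFFF000u

    /* Hypothetical accessor that reads a 32-bit entry at a physical address. */
    extern uint32_t read_phys32(uint32_t phys_addr);

    int linear_to_physical(uint32_t cr3, uint32_t linear, uint32_t *phys_out)
    {
        uint32_t dir_index   = (linear >> 22) & 0x3FFu;
        uint32_t table_index = (linear >> 12) & 0x3FFu;
        uint32_t offset      =  linear        & 0xFFFu;

        uint32_t pde = read_phys32((cr3 & FRAME_MASK) + dir_index * 4);
        if (!(pde & PTE_PRESENT)) return -1;            /* page fault */

        uint32_t pte = read_phys32((pde & FRAME_MASK) + table_index * 4);
        if (!(pte & PTE_PRESENT)) return -1;            /* page fault */

        *phys_out = (pte & FRAME_MASK) | offset;
        return 0;
    }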
For some microprocessor architectures, such as Intel® Architecture 32-bit (IA-32) microprocessors (Intel® is a registered trademark of Intel Corporation, Santa Clara, Calif.), there is also another type of address translation, in which a logical address is translated into a linear address. For these types of architectures, the instructions provide logical address offsets, which are then translated to linear addresses by AGU 222 in FIG. 2. This extra stage of address translation may provide additional security, e.g., where application code cannot mod
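The logical-to-linear step described in this paragraph, in which a segment base is added to the offset supplied by the instruction and a limit check provides protection, can be sketched as follows; the structure and function names are illustrative, and the full segment-descriptor format is omitted.

    #include <stdint.h>

    /* Minimal view of a segment: base, limit and a writable flag.      */
    typedef struct {
        uint32_t base;       /* added to the logical (effective) offset */
        uint32_t limit;      /* highest valid offset within the segment */
        int      writable;   /* simple protection attribute             */
    } segment_t;

    /* Translate a logical offset into a linear address, enforcing the
       segment limit so code cannot reach outside its segment.          */
    int logical_to_linear(const segment_t *seg, uint32_t offset,
                          int is_write, uint32_t *linear_out)
    {
        if (offset > seg->limit)        return -1;  /* protection fault */
        if (is_write && !seg->writable) return -1;  /* protection fault */
        *linear_out = seg->base + offset;           /* AGU adds the segment base */
        return 0;
    }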
