Classification: Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
Type: Reexamination Certificate
Filed: 2001-09-28
Issued: 2004-04-06
Examiner: Kim, Matthew (Department: 2186)
Other classes: C711S117000, C711S122000, C711S154000
Status: active
Patent number: 06718440
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to the field of computer systems. More particularly, the present invention relates to the field of memory access for computer systems.
2. Description of Related Art
A processor typically executes instructions at a faster clock speed than that of external memory, such as dynamic random access memory (DRAM). Accessing external memory therefore introduces delays into instruction execution, as the processor must fetch both the instructions to be executed and the data those instructions operate on from memory running at a relatively slower clock speed.
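The cost of this clock-speed mismatch can be put in concrete terms: every external access stalls the core for roughly the memory latency expressed in core clock cycles. The sketch below illustrates that arithmetic; the 1 GHz clock and 60 ns DRAM latency are assumed example figures, not values from the patent.

```python
# Illustrative parameters only -- assumed for this sketch, not from the patent.
CORE_CLOCK_GHZ = 1.0    # processor core clock frequency
MEM_LATENCY_NS = 60.0   # round-trip latency of one external DRAM access

def stall_cycles(core_clock_ghz: float, mem_latency_ns: float) -> int:
    """Core clock cycles the processor waits for one external memory access.

    Cycles = frequency (cycles/ns at GHz scale) * latency (ns).
    """
    return round(core_clock_ghz * mem_latency_ns)

# A 60 ns DRAM access costs a 1 GHz core about 60 cycles of waiting,
# and a 2 GHz core about 120 cycles -- the faster the core relative to
# memory, the more execution time each external access wastes.
```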
A typical processor may help minimize delays due to this memory access latency by processing instructions through a pipeline that fetches instructions from memory, decodes each instruction, executes the instruction, and retires the instruction. The operation of each pipeline stage typically overlaps in time with that of the other stages, which helps hide memory access latencies in fetching instructions and data for instruction execution.
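The benefit of overlapping the stages can be shown with the standard pipeline timing formula. This is a minimal sketch of an ideal four-stage pipeline (the fetch/decode/execute/retire stages named above) with no stalls assumed:

```python
STAGES = ["fetch", "decode", "execute", "retire"]

def pipelined_cycles(n_instructions: int, n_stages: int = len(STAGES)) -> int:
    """Cycles for an ideal stall-free pipeline: after the first instruction
    fills the pipeline, one instruction completes every cycle."""
    return n_stages + n_instructions - 1

def unpipelined_cycles(n_instructions: int, n_stages: int = len(STAGES)) -> int:
    """Cycles if each instruction ran through all stages before the next
    instruction began -- no overlap between stages."""
    return n_stages * n_instructions

# 10 instructions: 13 cycles pipelined vs. 40 cycles without overlap.
```

In practice a memory access that misses on-chip storage stalls the fetch stage for many cycles, which is why the further techniques described below are needed.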
A typical processor may also help minimize delays due to memory access latency by identifying instructions that may be executed regardless of whether one or more previously fetched instructions have executed. Such instructions may be executed in parallel, that is, with their execution overlapping in time, and/or out of order. In this manner the processor helps hide memory access latencies by continuing to execute instructions while waiting, for example, to fetch data for other instructions. Regardless of the order in which instructions are executed, the processor retires each instruction in program order.
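The key invariant here is that completion order and retirement order are decoupled: results may arrive out of order, but an instruction retires only once every older instruction has retired. A minimal model of that in-order retirement logic (the instruction names and completion order are made up for illustration):

```python
from collections import deque

def retire_order(program, completion_order):
    """Instructions may complete out of order, but retire strictly in
    program order: an instruction retires only once it has completed
    and every older instruction has already retired."""
    completed = set()
    window = deque(program)        # in-flight instructions, oldest at the left
    retired = []
    for insn in completion_order:  # results arrive in arbitrary order
        completed.add(insn)
        # Drain every instruction at the head of the window that is done.
        while window and window[0] in completed:
            retired.append(window.popleft())
    return retired

# Even though "c" finishes first, retirement follows program order:
# retire_order(["a", "b", "c", "d"], ["c", "a", "d", "b"])
#   -> ["a", "b", "c", "d"]
```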
The processor may further help minimize memory latency delays by managing the out-of-order execution of more instructions at any one time, widening the window over which instructions and/or data may be fetched from memory without introducing significant delays. The processor may, for example, use a larger instruction reorder buffer to track more instructions for out-of-order execution, a larger memory order buffer to track more outstanding data requests for out-of-order data fetching, and/or a larger memory request queue to allow more memory requests to be issued at any one time.
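Why larger buffers and queues help can be seen from Little's law: with a fixed memory latency, the bandwidth a processor can sustain is bounded by how many requests it can keep in flight at once. A sketch of that bound, with assumed example parameters (a 64-byte request size and 64 ns latency are illustrative, not from the patent):

```python
def sustained_bandwidth_gb_s(max_outstanding: int, line_bytes: int,
                             latency_ns: float) -> float:
    """Little's law bound: with at most `max_outstanding` memory requests
    in flight, sustained bandwidth <= outstanding * bytes_per_request /
    latency. Bytes per nanosecond equals GB/s."""
    return max_outstanding * line_bytes / latency_ns

# With 64-byte requests and 64 ns latency, 8 outstanding requests sustain
# at most 8 GB/s; doubling the request queue to 16 doubles the bound.
```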
A typical processor may further help minimize memory access latency delays by using one or more relatively large internal cache memories to store frequently accessed instructions and data. Because the processor can then access such instructions and data internally, it reduces accesses to external memory.
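A minimal model makes the caching benefit concrete: once a line has been filled from external memory, repeated accesses to the same data are serviced internally. The direct-mapped organization and the sizes below are assumptions chosen for illustration, not details of the patented design.

```python
class DirectMappedCache:
    """Minimal direct-mapped cache model: a hit is serviced internally,
    a miss fills the line from external memory (parameters are assumed
    illustrative values, not from the patent)."""

    def __init__(self, n_lines: int = 64, line_bytes: int = 32):
        self.n_lines = n_lines
        self.line_bytes = line_bytes
        self.tags = [None] * n_lines   # one tag slot per cache line
        self.hits = 0
        self.misses = 0

    def access(self, address: int) -> bool:
        """Return True on a hit, False on a miss (and fill the line)."""
        line = address // self.line_bytes
        index = line % self.n_lines    # which cache slot this line maps to
        tag = line // self.n_lines     # identifies which line occupies it
        if self.tags[index] == tag:
            self.hits += 1
            return True
        self.tags[index] = tag         # fill from external memory on a miss
        self.misses += 1
        return False

# A small loop that re-touches the same 1 KB of data: only the first
# touch of each 32-byte line goes to external memory.
cache = DirectMappedCache()
for _ in range(3):
    for addr in range(0, 1024, 4):
        cache.access(addr)
```

After the run, only 32 of the 768 accesses (one per distinct line, on the first pass) miss; the remaining 736 are internal hits, which is exactly the reduction in external accesses the paragraph describes.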
Using larger buffers, queues, and/or cache memories, however, increases the cost and size of the processor.
REFERENCES:
patent: 5325508 (1994-06-01), Parks et al.
patent: 5732242 (1998-03-01), Mowry
patent: 5813030 (1998-09-01), Tubbs
patent: 5822790 (1998-10-01), Mehrotra
patent: 5845101 (1998-12-01), Johnson et al.
patent: 6237064 (2001-05-01), Kumar et al.
patent: 6240488 (2001-05-01), Mowry
patent: 6292871 (2001-09-01), Fuente
patent: 2002/0010838 (2002-01-01), Mowry
Intel Architecture Optimization Reference Manual, Intel Corporation, Chapter 6, “Optimizing Cache Utilization for Pentium III Processors,” pp. 6-1 to 6-30, 1999.*
Young et al., “On Instruction and Data Prefetch Mechanisms,” pp. 239-246, IEEE, 1995.*
Tomkins et al., “Informed Multi-Process Prefetching and Caching,” pp. 100-114, ACM, 1997.*
Intel® Architecture Optimization Manual, Intel® Corporation, Order No. 242816-003, pp. 1-1 to 1-3 and 2-1 to 2-16 (1997).
Intel® Architecture Optimization Reference Manual, Intel® Corporation, Order No. 245127-001, pp. i-xx and 1-1 to 1-16 (1998, 1999).
Intel® Architecture Software Developer's Manual vol. 1: Basic Architecture, Intel® Corporation, Order No. 243190, pp. i-xvi, 1-1 to 1-10, and 2-1 to 2-14 (1999).
Intel® Architecture Software Developer's Manual vol. 3: System Programming, Intel® Corporation, Order No. 243192, pp. i-xxii, 1-1 to 1-10, and 9-1 to 9-40 (1999).
P6 Family of Processors Hardware Developer's Manual, Intel® Corporation, Order No. 244001-001, pp. i-vii, 1-1 to 1-2, and 2-1 to 2-7 (Sep. 1998).
Inventors: Mohammad A. Abdallah, Vivek Garg, Jagannath Keshava, Subramaniam Maiyuran
Attorney: Blakely, Sokoloff, Taylor & Zafman LLP
Examiners: Matthew Kim, Stephen Elmore
Assignee: Intel Corporation
Patent title: Memory access latency hiding with hint buffer