Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
Reexamination Certificate
1999-05-24
2003-09-16
Bragdon, Reginald G. (Department: 2188)
Electrical computers and digital processing systems: memory
Storage accessing and control
Hierarchical memories
C711S112000, C711S113000
Reexamination Certificate
active
06622212
ABSTRACT:
BACKGROUND
1. Field of the Invention
This invention relates generally to methods and apparatus for prefetching I/O data blocks. In particular, the present invention relates to methods and apparatus for adaptively prefetching data blocks from the input/output devices of a server.
2. Description of the Related Art
The latency incurred when reading I/O data can greatly diminish performance since the requesting device usually requires the requested data in order to perform some pending process or instruction. It is conventional to attempt to effectively shorten the latency by prefetching the data expected to be requested in a read operation. However, most systems and methods for prefetching data in anticipation of a read operation operate either by design or by mode bit programming. In a prefetch by design, the data is always prefetched. Since a prefetch of data utilizes system resources, it can be extremely disadvantageous to always prefetch in applications where the prefetched data is rarely the data that is requested in the next read operation. In prefetch by mode bit programming, there is a bit in the read request that is programmatically set by processor instructions to indicate whether or not data should be prefetched. For example, a mode-bit value of “1” indicates prefetch and a value of “0” indicates no prefetch. The condition for setting the prefetch mode bit is static and usually predefined in an I/O interface of the processor requesting the data or in a memory controller for the memory from which the data is read. For example, in the 450GX chipset available from Intel Corporation, Santa Clara, Calif., read cycles for a PCI bus are divided into several different types (memory read, memory read line, memory read multiple, etc.) and prefetching is done only for certain designated types of cycles. When a command is issued, the mode bit is set according to the designation for that command.
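The static "prefetch by mode bit" scheme described above can be sketched as a fixed lookup from command type to prefetch decision. The command names follow the PCI read cycle types mentioned in the text; the specific mapping is an illustrative assumption, not taken from the 450GX documentation.

```python
# Illustrative sketch (not from the patent): a static per-command-type
# prefetch policy, fixed at design time. The mapping below is assumed
# for illustration only.
PREFETCH_BY_COMMAND = {
    "memory_read": False,          # single read: no prefetch
    "memory_read_line": True,      # cache-line read: prefetch
    "memory_read_multiple": True,  # multi-line read: prefetch
}

def mode_bit(command):
    # The "mode bit" is derived statically from the command type; it
    # cannot adapt to the access pattern actually observed at run time.
    return PREFETCH_BY_COMMAND.get(command, False)
```

The limitation the patent targets is visible here: the decision depends only on the command type, never on whether past prefetches were actually useful.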
Whether the prefetch is by design or by mode bit programming, it is carried out universally for all data requests made by the processor I/O interface or for all read operations handled by the memory controller for the memory. This can be quite disadvantageous, for example, in servers where a large amount of I/O data is frequently transferred between the processor, memory and several different I/O devices in different block sizes, and where the lack of efficiency in transferring I/O data blocks may have a larger effect on overall performance than the speed of the microprocessor. It also may be that the buses and/or I/O cards connecting the I/O devices to the microprocessor are the bottleneck, in which case the performance of these I/O subsystem components needs to be improved.
Conventional servers typically have a significant number of I/O devices and a bus master for each I/O device. If there is any prefetch routine, it is carried out in common for all bus masters and in all circumstances. Even in those servers where prefetch is available, it is preset at design time or is set as a user option in response to a prompt during the set-up configuration of the system and is static from that time forward. Neither the user nor a processor in the system can change the prefetch mode bit during operation.
Even though the prefetch option may be selected during configuration, the performance of the server is still less than optimum because the I/O devices in the server may be of radically different types, store different kinds of data and/or vary from each other in the addressing sequence by which the data blocks containing the data are read out. For example, a pre-recorded CD-ROM may store large contiguous blocks of image data and the read out of such image data by an optical disk drive may consist of many sequential addresses. Another I/O device may store heavily fragmented user data and the data readout from such a device rarely consists of sequential addresses. A prefetch system designed for a single microprocessor, such as that described in U.S. Pat. No. 5,537,573, is not suitable for use in a server.
SUMMARY
The present invention is directed to adaptive prefetch of I/O data blocks. In a first aspect of an example embodiment, an adaptive method of prefetching data blocks from an input/output device comprises predicting the address of each read operation reading a data block from the input/output device, the prediction based on the address of the immediately preceding read operation from the input/output device; tracking, for each read operation, whether each read operation reads a data block from the same address of the input/output device predicted for the read operation; and prefetching a data block for a read operation from the input/output device in accordance with the state of a state machine, the state of the state machine depending upon whether immediately preceding read operations read a data block from the same address of the input/output device predicted for the read operations.
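The method summarized above can be sketched as a per-device predictor plus a small state machine: predict that each read is sequential to the previous one, track whether the prediction was correct, and prefetch only while recent predictions have been hitting. The two-bit saturating state encoding and the unit block size below are assumptions for illustration; the patent text specifies only that the state depends on whether preceding reads matched their predicted addresses.

```python
# Illustrative sketch of the adaptive prefetch method (details assumed):
# a per-I/O-device predictor with a two-bit saturating state machine.
class AdaptivePrefetcher:
    # States 0-1: don't prefetch; states 2-3: prefetch.
    def __init__(self, block_size=1):
        self.block_size = block_size
        self.predicted = None   # predicted address of the next read
        self.state = 0          # start in a non-prefetching state

    def should_prefetch(self):
        return self.state >= 2

    def on_read(self, address):
        """Record a read, update the state machine from the hit/miss
        outcome, and return whether to prefetch the next block."""
        if self.predicted is not None:
            if address == self.predicted:
                self.state = min(self.state + 1, 3)  # prediction hit
            else:
                self.state = max(self.state - 1, 0)  # prediction miss
        # Predict the next read as sequential to this one.
        self.predicted = address + self.block_size
        return self.should_prefetch()
```

A run of sequential reads walks the state machine into the prefetching states, while scattered (fragmented) reads walk it back out, matching the contrast drawn in the background between a CD-ROM streaming contiguous blocks and a device holding heavily fragmented data.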
REFERENCES:
patent: 5146578 (1992-09-01), Zangenehpour
patent: 5600817 (1997-02-01), Macon, Jr. et al.
patent: 5623608 (1997-04-01), Ng
patent: 5765213 (1998-06-01), Ofer
patent: 5867685 (1999-02-01), Fuld et al.
patent: 6078996 (2000-06-01), Hagersten
patent: 6170030 (2001-01-01), Bell
Antonelli Terry Stout & Kraus LLP
Bragdon Reginald G.
Intel Corp.