System and method for managing data in an I/O cache

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate

Details

US Classification: C711S154000
Type: Reexamination Certificate
Status: active
Patent number: 06542968

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention generally relates to a system and method for fetching data from a system memory to a device, over a Peripheral Component Interconnect (PCI) bus. More specifically, the present invention is directed to a system and method for efficiently fetching data from a system memory to a device communicating over a PCI bus, based upon hints that are observed from PCI bus transactions.
2. Discussion of the Related Art
In computer system design, a principal objective is to design ever faster and more efficient computer systems. In this regard, most conventional high-performance computer systems include cache memories. As is known, a cache memory is a high-speed memory positioned between a microprocessor and main memory in order to improve system performance. Typically, cache memories (or caches) store copies of the portions of main memory data that are actively being used by the central processing unit (CPU) while a program is running. Since the access time of a cache is shorter than that of main memory, the overall memory access time is reduced.
Even though cache memories typically increase system performance, further improvements are desired. For example, consider a computer system having separate busses, such as a system bus that interconnects a central processing unit (e.g., a microprocessor), memory, etc., and an I/O bus (e.g., an ISA bus, PCI bus, etc.). One of the bottlenecks that has limited the performance of personal computers in the past has been the maximum specified speed of the ISA bus. In the original IBM PC AT computers manufactured by IBM Corp., the I/O bus operated at a clock rate of 8 MHz (BCLK=8 MHz). This was an appropriate rate at the time, since it was approximately equivalent to the highest rates at which the CPUs of that era could operate on the host bus. CPU speeds are many times greater today, however, so the slow speed of the I/O bus severely limits the throughput of modern systems. One solution to this problem has been the development of local bus standards, by which certain devices that were traditionally located on the I/O bus can instead be located on the host bus, e.g., the VESA VL-Bus Local Bus Standard.
Another solution to the problem has been the development of a further standard, referred to herein as the PCI standard. As is known, PCI is an acronym for Peripheral Component Interconnect. The PCI standard is a set of guidelines that defines a way to connect peripheral devices to a computer; it was originally developed as a Local Bus standard to curb the rapid proliferation of incompatible bus architectures being developed in the early 1990s. In this regard, the PCI bus replaces the ISA, EISA, VL-Local Bus, MicroChannel, NuBus, and other Local Bus architectures as the preferred primary Local Bus in computer systems.
The PCI bus achieves very high performance, in part because its basic data transfer mode is by burst. That is, data is always transferred to or from a PCI device in a known sequence of data units defined by a known sequence of data unit addresses in an address space. In a “linear” burst mode, any number of transfers (including 1) can take place to/from linearly sequential addresses until either the initiator or the target terminates the transaction. The initiator need only specify the starting address because both parties know the sequence of addresses which follow.
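To make the linear burst concrete, the following sketch (an illustration of the behavior described above, not code from the patent; a 32-bit bus with 4-byte data units is assumed) shows how every address in a burst follows from the starting address alone:

```c
/* Illustrative model of PCI linear-burst addressing: the initiator
 * drives only the starting address, and each subsequent data phase
 * implicitly targets the next sequential data unit (4 bytes on a
 * 32-bit PCI bus), so both parties can derive the whole sequence. */
#include <stdint.h>
#include <stdio.h>

#define PCI_DATA_UNIT 4u /* bytes transferred per data phase */

static void linear_burst(uint32_t start_addr, unsigned n_phases)
{
    for (unsigned i = 0; i < n_phases; i++)
        printf("data phase %u -> address 0x%08x\n",
               i, (unsigned)(start_addr + i * PCI_DATA_UNIT));
}

int main(void)
{
    /* A four-phase burst starting at 0x1000 touches
     * 0x1000, 0x1004, 0x1008, and 0x100c. */
    linear_burst(0x1000u, 4);
    return 0;
}
```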
The implementation of the PCI bus is well known in the industry, and its specifications are available to the public. When transferring data to and from a high-speed, industry-standard common bus, it is often desirable to provide an intermediate local cache buffer for the data so that the bus can maintain full bandwidth. That is, it is desirable to maintain full utilization of the I/O bus that interfaces the PCI bus to the cache, without overtaxing the system bus. For example, when data is fetched from memory into the cache, it is fetched one cache line at a time.
When data is first requested by a device on the PCI bus, there is an initial latency period (idle I/O clock cycles) while the first cache line of data is retrieved from memory into the cache. If the PCI transfer requires more than one cache line of data, another latency period is encountered while the next cache line of data is retrieved from system memory into the cache. Such intermittent latency periods occur each time a new line of data is read from memory into the cache. It would therefore be desirable to eliminate or significantly reduce these latency periods. One way of achieving this goal is to always pre-fetch an additional cache line of data, as sketched below. For example, two cache lines of data could initially be retrieved from memory into the cache. After the first line of data has been transferred from the cache to the PCI bus and while the second line is being transferred, an additional cache line of data could be fetched from memory into the cache.
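The following sketch models this blind read-ahead policy (our own illustration, with assumed names and an assumed 64-byte line size; the patent supplies no code). Because the cache cannot know where the burst will end, it always fetches one line that is never consumed:

```c
#include <stdint.h>
#include <stdio.h>

#define CACHE_LINE_BYTES 64u /* assumed line size for illustration */

/* Stands in for a system-bus read of one cache line into the cache. */
static void fetch_line(uint32_t line_addr)
{
    printf("  fetch line at 0x%08x from system memory\n",
           (unsigned)line_addr);
}

/* Blind read-ahead: fetch two lines up front, then keep one line
 * staged ahead of the line currently draining to the PCI bus. */
static void stream_with_readahead(uint32_t addr, uint32_t lines_needed)
{
    fetch_line(addr);                    /* line 0: demand fetch       */
    fetch_line(addr + CACHE_LINE_BYTES); /* line 1: initial look-ahead */

    printf("drain line 0 to PCI bus\n");
    for (uint32_t n = 1; n < lines_needed; n++) {
        /* While line n drains, line n+1 is fetched. When the master
         * disconnects after line lines_needed-1, one fetched line was
         * never used: the over-fetch criticized below. */
        fetch_line(addr + (n + 1) * CACHE_LINE_BYTES);
        printf("drain line %u to PCI bus\n", (unsigned)n);
    }
}

int main(void)
{
    stream_with_readahead(0x2000u, 3); /* fetches 4 lines, drains 3 */
    return 0;
}
```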
While this approach would reduce the idle cycles encountered on the I/O bus, it makes inefficient use of system resources. The problem with this approach is that it over-fetches data from memory into the cache (by one cache line); it therefore unnecessarily consumes bandwidth on the system bus. In addition, it wastes a portion of the cache memory. Such poor utilization of the cache memory space degrades overall system performance.
Accordingly, there is a desire to provide an improved system and method for interfacing a cache to a PCI bus that overcomes the above-identified and other shortcomings.
SUMMARY OF THE INVENTION
Certain objects, advantages and novel features of the invention will be set forth in part in the description that follows and in part will become apparent to those skilled in the art upon examination of the following or may be learned with the practice of the invention. The objects and advantages of the invention may be realized and obtained by means of the instrumentalities and combinations particularly pointed out in the appended claims.
To achieve the advantages and novel features, the present invention is generally directed to a system and method for fetching data from system memory to a device in communication with the system over a PCI bus, via an I/O cache. Broadly, the present invention may be viewed as a novel way to communicate certain fetching hints; namely, hints that specify certain qualities about the data that is to be fetched from the system memory. In operation, the I/O cache may use such hints to more effectively manage the data that passes through it. As one simple example, if, based upon the hints, the controller for the I/O cache knew (or assumed) that the data being fetched was ATM data, then it would also know (based upon the nature of ATM data) that precisely a forty-eight byte data payload was to be sent to the requesting device, and the I/O cache could pre-fetch precisely this amount of data (typically one or two cache lines).
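As a sketch of how such a hint could bound the pre-fetch (the hint encoding and function below are hypothetical; the patent does not supply code), a controller that knows a request is for an ATM cell payload can fetch exactly the lines covering forty-eight bytes rather than reading blindly ahead:

```c
#include <stdint.h>

/* Hypothetical hint encoding, for illustration only. */
enum fetch_hint {
    HINT_NONE,        /* no knowledge: fall back to blind read-ahead */
    HINT_ATM_PAYLOAD  /* exactly one 48-byte ATM cell payload        */
};

/* Number of cache lines the controller should fetch for a request. */
static uint32_t lines_to_fetch(enum fetch_hint hint, uint32_t line_bytes)
{
    switch (hint) {
    case HINT_ATM_PAYLOAD:
        /* ceil(48 / line_bytes), assuming a line-aligned payload:
         * one line for 64-byte lines, two lines for 32-byte lines,
         * matching the "one or two cache lines" noted above. */
        return (48u + line_bytes - 1u) / line_bytes;
    default:
        return 2u; /* blind read-ahead depth */
    }
}
```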
In accordance with one aspect of the invention, such a system includes an input/output (I/O) cache memory interposed between the system memory and the PCI bus, wherein the cache memory has internal memory space in the form of a plurality of data lines within the cache memory. The system further includes a plurality of registers for each PCI master that are configured to define fetching criteria. Finally, the system includes a register selector that is configured to select an active register among the plurality of registers, wherein fetching criteria for the device is specified by the active register.
More particularly, in such a system constructed in accordance with the preferred embodiment of the invention, the registers contain contents that specify certain hints with regard to data fetching. For example, one such hint may be a pre-fetch depth, whereby the registers may contain differing values of pre-fetch depth. A first register may specify a pre-fetch depth of two cache lines, while a second register may specify a pre-fetch depth of three cache lines.
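A minimal sketch of this arrangement follows (the field and function names are our own assumptions; the patent claims the structure, not an implementation): each PCI master owns a small bank of hint registers, each of which can encode, for example, a different pre-fetch depth, and a register selector designates the active register whose criteria govern fetching for that device.

```c
#include <stdint.h>
#include <stdio.h>

#define REGS_PER_MASTER 4u /* assumed bank size for illustration */

/* One fetching-hint register; other hinted qualities of the data
 * (beyond pre-fetch depth) could be encoded in further fields. */
struct fetch_hint_reg {
    uint8_t prefetch_depth; /* cache lines to read ahead */
};

/* Per-PCI-master state: a bank of hint registers plus a selector. */
struct pci_master_ctx {
    struct fetch_hint_reg regs[REGS_PER_MASTER];
    uint8_t active; /* register selector: index of the active register */
};

/* Fetching criteria for the device come from the active register. */
static const struct fetch_hint_reg *
active_hints(const struct pci_master_ctx *m)
{
    return &m->regs[m->active % REGS_PER_MASTER];
}

int main(void)
{
    /* Register 0 hints a depth of two lines, register 1 a depth of
     * three; the selector picks register 1 for the next transaction. */
    struct pci_master_ctx m = {
        .regs   = { { .prefetch_depth = 2 }, { .prefetch_depth = 3 } },
        .active = 1,
    };
    printf("pre-fetch depth: %u lines\n",
           (unsigned)active_hints(&m)->prefetch_depth);
    return 0;
}
```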
