Performance enhancement implementation through buffer...

Electrical computers and digital data processing systems: input/output – Input/output data processing – Input/output data buffering

Reexamination Certificate


Details

C710S036000, C710S054000, C710S065000, C709S241000

Reexamination Certificate

active

06665753

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Technical Field
The present invention relates to an improved data processing system and, more particularly, to methods of managing bus traffic generated by I/O devices.
2. Description of Related Art
With the recent rapid expansion of the Internet, as well as the increased use of networked computers by both small and large businesses, the number of computers utilized as servers has increased. A server is a computer within a network that is shared by multiple users. A server may be used, for example, as a file server in a small network, giving multiple users within a company access to common files, or as a web server providing content to numerous users who access the information via the Internet.
Because servers may be accessed by numerous users, they typically include many input/output (I/O) devices to accommodate those users. In many computers, these I/O devices are connected to the central processor and other system resources within the computer via an I/O adapter attached to a peripheral component interconnect (PCI) bus. The PCI bus is connected to a main system I/O bus via PCI-to-PCI bridges and PCI host bridges, which include circuitry for placing data from the PCI bus onto the system I/O bus and vice versa. The system I/O bus is shared by numerous I/O adapters and carries data between system resources, such as the central processing unit (CPU) or main system memory, and the various I/O devices. However, only one I/O device at a time may utilize the system I/O bus, so other devices must wait until the bus is free.
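The data path described above can be pictured as a small tree: adapters sit beneath PCI-to-PCI bridges, which sit beneath a PCI host bridge on the shared system I/O bus. The following sketch is purely illustrative; the class names, adapter types, and sizes are assumptions, not details taken from this patent.

```python
# Illustrative model of the I/O hierarchy described above; all names and sizes are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class IOAdapter:
    name: str
    buffer_bytes: int              # depth of the adapter's own receive buffers

@dataclass
class PciToPciBridge:
    prefetch_bytes: int            # fixed amount prefetched on behalf of the adapter
    adapter: IOAdapter

@dataclass
class PciHostBridge:
    buffer_bytes: int              # PHB buffer space for data gathered from the system I/O bus
    bridges: List[PciToPciBridge] = field(default_factory=list)

# One PHB on the system I/O bus with two adapters of different buffer depths beneath it.
phb = PciHostBridge(
    buffer_bytes=2048,
    bridges=[
        PciToPciBridge(prefetch_bytes=512, adapter=IOAdapter("ethernet", buffer_bytes=128)),
        PciToPciBridge(prefetch_bytes=512, adapter=IOAdapter("scsi", buffer_bytes=512)),
    ],
)
```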
When data is requested by an I/O device, a PCI-to-PCI bridge prefetches a certain amount of the requested data to fill the I/O adapter's buffers. Once this data has been provided to the adapter, the next portion of the requested data is prefetched. The amount of data prefetched by the PCI-to-PCI bridge is fixed and independent of the type of I/O adapter. If the adapter has shallow buffers and the PCI-to-PCI bridge prefetches more data than the adapter can accept, the bridge is forced to throw away the extra data to avoid coherency issues. The adapter may then ask for the additional data, and the PCI-to-PCI bridge must re-request it from the PCI host bridge (PHB). The PHB may already have gathered the next piece of data, which it in turn must throw away in order to re-gather the previous data.
For example, suppose a PCI-to-PCI bridge prefetches 512 bytes of data. The PHB gives the bridge the 512 bytes and then gathers another 512 bytes in anticipation of a request for the next piece of data. The bridge offers the data to the adapter, but the adapter takes only 128 bytes because that is the limit of its buffer, so the bridge throws away the remaining 384 bytes. When the adapter requests the next 128 bytes, the bridge must go back to the PHB for the previous data again, and the PHB must throw away the next 512 bytes it had gathered so that it can retrieve the previous data.
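Below is a minimal sketch of the cycle just described, using the same numbers (a 512-byte fixed prefetch against a 128-byte adapter buffer). The function name and the accounting are assumptions for illustration only, not a model specified by the patent.

```python
def bus_bytes_to_deliver(request_bytes: int, prefetch_bytes: int, adapter_buffer_bytes: int) -> int:
    """Count bytes moved across the system I/O bus to satisfy one request when the
    bridge's fixed prefetch size may exceed the adapter's buffer depth."""
    bus_bytes = 0
    delivered = 0
    while delivered < request_bytes:
        # The PHB hands a full prefetch block to the PCI-to-PCI bridge...
        bus_bytes += prefetch_bytes
        # ...but the adapter absorbs only its buffer's worth; the rest is discarded
        # and must be fetched again on the next pass.
        delivered += min(adapter_buffer_bytes, prefetch_bytes)
    return bus_bytes

# With the numbers from the example above: four 512-byte transfers (2048 bytes on the bus)
# to deliver 512 bytes, versus 512 bytes when the prefetch matches the adapter's buffer.
print(bus_bytes_to_deliver(512, prefetch_bytes=512, adapter_buffer_bytes=128))  # 2048
print(bus_bytes_to_deliver(512, prefetch_bytes=128, adapter_buffer_bytes=128))  # 512
```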
Fetching the same data over and over generates a great deal of wasted traffic on the system I/O bus, slowing the performance of the server. Therefore, a method, system, and apparatus for reducing the traffic on the system I/O bus caused by multiple requests for the same data by an I/O adapter would be desirable.
SUMMARY OF THE INVENTION
The present invention provides a method, system, and apparatus for modifying bridges within a data processing system to provide improved performance. In one embodiment, the data processing system determines the number of input/output adapters connected beneath each PCI host bridge, as well as the type of each adapter. The size and number of buffers within the PCI host bridge are then modified based on the number and type of adapters beneath it, improving data throughput and preventing data from being needlessly discarded. The PCI host bridge is also modified to give load and store operations priority over DMA operations. Each PCI-to-PCI bridge is modified based on the type of adapter connected to it, so that it prefetches only an amount of data consistent with that adapter type; excess data is therefore not discarded, and the system buses are not used repeatedly to retrieve the same data.
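The sketch below illustrates the kind of tuning the summary describes: capping each PCI-to-PCI bridge's prefetch at its adapter's buffer depth, and sizing the PHB's buffers from the number and type of adapters beneath it. The adapter-profile table, sizes, and function names are assumptions for illustration; the patent does not give concrete values or interfaces.

```python
# Hypothetical per-adapter-type profiles; the values are illustrative, not from the patent.
ADAPTER_PROFILES = {
    "ethernet": {"buffer_bytes": 128},   # shallow-buffered adapter
    "scsi":     {"buffer_bytes": 512},
    "fibre":    {"buffer_bytes": 1024},
}

def bridge_prefetch_size(adapter_type: str) -> int:
    """Limit the PCI-to-PCI bridge's prefetch to what the attached adapter can absorb,
    so prefetched data is not discarded and re-fetched."""
    return ADAPTER_PROFILES[adapter_type]["buffer_bytes"]

def phb_settings(adapter_types):
    """Derive PCI host bridge settings from the number and type of adapters beneath it,
    and give load/store operations priority over DMA."""
    sizes = [ADAPTER_PROFILES[t]["buffer_bytes"] for t in adapter_types]
    return {
        "buffer_count": len(adapter_types),          # one buffer per adapter beneath the PHB
        "buffer_bytes_each": max(sizes),
        "total_buffer_bytes": sum(sizes),
        "load_store_priority_over_dma": True,
    }

# Example: a PHB with two Ethernet adapters and one SCSI adapter beneath it.
print(bridge_prefetch_size("ethernet"))                  # 128-byte prefetch
print(phb_settings(["ethernet", "ethernet", "scsi"]))
```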


REFERENCES:
patent: 5983292 (1999-11-01), Nordstrom et al.
patent: 6023736 (2000-02-01), Lambeth et al.
patent: 6078970 (2000-06-01), Nordstrom et al.
patent: 6081851 (2000-06-01), Futral et al.
patent: 6101557 (2000-08-01), Movall et al.
patent: 6223299 (2001-04-01), Bossen et al.
patent: 6477610 (2002-11-01), Willenborg
