Data transfer in host expansion bridge – Electrical computers and digital data processing systems: input/output – Input/output data processing – Peripheral adapting
Reexamination Certificate
2000-04-25
2003-04-29
Gaffin, Jeffrey (Department: 2182)
Electrical computers and digital data processing systems: input/output
Input/output data processing
Peripheral adapting
C710S004000, C710S011000, C710S033000, C710S034000, C710S036000, C710S065000, C707S793000
Reexamination Certificate
active
06557060
ABSTRACT:
BACKGROUND
1. Field of the Invention
This invention relates generally to the transfer of data between a host processing device and a network connection. In particular, the present invention relates to methods for transferring data in a host expansion bridge between a network connection and a host interface or bus.
2. Description of the Related Art
Many computer systems, such as workstations or personal computers (PCs) with a Pentium® processor (manufactured by Intel Corporation), typically use Peripheral Component Interconnect (PCI) buses as an interconnect transport mechanism to transfer data between different internal components, such as one or more processors, memory subsystems and input/output (I/O) devices including, for example, keyboards, mice, disk controllers, serial and parallel ports to printers, scanners, and display devices. The PCI buses are high-performance 32- or 64-bit synchronous buses with automatic configurability and multiplexed address, control and data lines, as described in the latest version of the “PCI Local Bus Specification, Revision 2.2” set forth by the PCI Special Interest Group (SIG) on Dec. 18, 1998. Currently, the PCI architecture provides the most common method used to extend computer systems for add-on arrangements (e.g., expansion cards) with new video, networking, or disk memory storage capabilities.
When PCI buses are used to transfer data in a host processing system such as a server, bridges may be provided to interface and buffer transfers of data between the processor, the memory subsystem, the I/O devices and the PCI buses. Examples of such bridges include PCI-to-PCI bridges, as described in detail in the “PCI-to-PCI Bridge Architecture Specification, Revision 1.1” set forth by the PCI Special Interest Group (SIG) on Apr. 5, 1995. However, the performance of such a host processing system may be burdened by the demands of I/O devices to access processors and memory locations of the processing system during data transfer operations.
When connected to a network, host processing systems may need to be able to serve as a source (initiator) system which initiates a data transfer or as a destination (target) system which participates in a data transfer initiated by another system. Furthermore, the data traffic on a network is usually quite asynchronous and unpredictable. Each physical link of the network may support a number of logical channels. Each channel may be a bidirectional communication path allowing commands and data to flow between a processing system and the network. The data may be transmitted across the network in packet form, often in organized groups of packets according to various communication protocols and often through intermediate nodes.
Each processing system connected to the network has a network interface which acts as the communications intermediary between the asynchronous network traffic and its own, usually synchronous, I/O subsystems. In a host processing system, such as a server, there may be a large amount of data storage and communications functionality, and the demand for access to the system may be complex. Typically, data transfers between a processing system and a network are highly asynchronous, and the bit size of the payload data on the network may not be the same as the bit sizes for host processors, memory subsystems, I/O subsystems, PCI devices behind or on one side of a host bridge such as a PCI-to-PCI bridge, etc. As a result, transfer operations over a PCI bus or other synchronous I/O subsystem may not be optimized for network data, and the wait time for processing data transferred over the network may be unnecessarily lengthened.
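By way of illustration only, the following C sketch (not part of the patent disclosure; the bus width, structure, and function names are assumptions made for this example) shows one simple way a bridge-like component might repack bytes that arrive asynchronously from a network link into fixed-width words for a synchronous host-side bus, using a small staging buffer to absorb the size mismatch.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BUS_BYTES 8u                     /* assumed 64-bit host bus word */

struct stage {
    uint8_t  buf[BUS_BYTES];             /* partially filled bus word    */
    uint32_t fill;                       /* bytes currently staged       */
};

/* Consume len payload bytes; call emit() once for each complete bus word. */
static void stage_bytes(struct stage *s, const uint8_t *p, uint32_t len,
                        void (*emit)(const uint8_t *word))
{
    while (len > 0) {
        uint32_t take = BUS_BYTES - s->fill;
        if (take > len)
            take = len;
        memcpy(s->buf + s->fill, p, take);
        s->fill += take;
        p       += take;
        len     -= take;
        if (s->fill == BUS_BYTES) {      /* a full, bus-width word is ready */
            emit(s->buf);
            s->fill = 0;
        }
    }
}

static void print_word(const uint8_t *word)
{
    for (uint32_t i = 0; i < BUS_BYTES; i++)
        printf("%02x", word[i]);
    putchar('\n');
}

int main(void)
{
    struct stage s = { {0}, 0 };
    const uint8_t payload[] = "asynchronous network bytes";

    /* Feed the payload in two uneven chunks, as a link might deliver it. */
    stage_bytes(&s, payload, 5, print_word);
    stage_bytes(&s, payload + 5, (uint32_t)sizeof(payload) - 5, print_word);

    /* Any bytes left staged (s.fill) would need padding out to bus width. */
    return 0;
}

In practice the emit step would hand each word to the host interface rather than print it; the point is only that the staging buffer decouples the link's arbitrary payload sizes from the bus's fixed granularity.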
Unlike PCI and other I/O buses, some host processor interfaces and host buses require that the amount of data associated with a transfer be known at the beginning of the transfer. The amount of data must be specified in the naturally aligned granularity of the host processor interface and/or bus. In many cases, the hardware of the network interface does not operate at the same alignment and/or granularity as the host bus, and it would be convenient to have a simple, efficient mechanism for converting data length counts between granularities and aligning the data. Accordingly, there is a need for a scalable solution for converting data bytes received from a network communication link into naturally aligned data formats and pre-counting the data to make it ready for a host processor or bus.
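A minimal sketch of such a conversion follows (again illustrative only and not drawn from the patent; a power-of-two bus granularity of eight bytes is assumed): it pre-counts how many naturally aligned bus words a byte count will occupy and computes the leading offset of a misaligned start address.

#include <stdint.h>
#include <stdio.h>

/* Round a byte count up to whole bus words; bus_bytes must be a power of
 * two. For a misaligned start, add the leading offset to byte_count first. */
static uint32_t bus_words_for_bytes(uint32_t byte_count, uint32_t bus_bytes)
{
    return (byte_count + bus_bytes - 1) / bus_bytes;
}

/* Offset of a start address within its naturally aligned bus word; this is
 * the number of pad bytes preceding the first payload byte in that word. */
static uint32_t leading_offset(uint32_t start_addr, uint32_t bus_bytes)
{
    return start_addr & (bus_bytes - 1);
}

int main(void)
{
    uint32_t payload_bytes = 1500;   /* e.g., one Ethernet payload (illustrative) */
    uint32_t bus_bytes     = 8;      /* assumed 64-bit host bus granularity       */

    printf("bus words to declare up front: %u\n",
           bus_words_for_bytes(payload_bytes, bus_bytes));   /* prints 188 */
    printf("leading offset at address 0x1003: %u\n",
           leading_offset(0x1003u, bus_bytes));              /* prints 3   */
    return 0;
}

A hardware implementation would compute the same quantities with shifts and masks, but the arithmetic is the same: the transfer length is declared to the host interface in whole, naturally aligned bus words before any payload moves.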
REFERENCES:
patent: 5574923 (1996-11-01), Heeb et al.
patent: 6286005 (2001-09-01), Cannon
patent: 6480913 (2002-11-01), Monteiro
Antonelli Terry Stout & Kraus LLP
Farooq Mohammad O.
Gaffin Jeffrey