Data channel architecture for parallel SCSI host adapters

Electrical computers and digital data processing systems: input/output – Intrasystem connection – Bus interface architecture

Reexamination Certificate


Details

Classification: C710S310000

Type: Reexamination Certificate

Status: active

Patent number: 06408354


BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates generally to host adapter integrated circuits for interfacing I/O buses, and more particularly to data channels for a parallel host adapter integrated circuit.
2. Description of Related Art
A variety of parallel host adapter architectures are available. See, for example, U.S. Pat. No. 5,655,147 or U.S. Pat. No. 5,659,690. Each parallel host adapter provides connectivity between two I/O buses, e.g., between a parallel SCSI bus and a host I/O bus, such as a PCI bus.
Originally, a parallel host adapter typically had a single channel that handled both data and administrative information. Data was either read from a memory of the host computer and written to a SCSI device, or read from a SCSI device and written to the memory of the host computer over the single channel. Administrative information that was transferred to and from the host computer memory using the single channel was used internally by the parallel host adapter in the course of managing data transfers, and included sequencer command blocks (SCBs), scatter/gather information, and command completion status.
Hence, data, as used herein, refers to information that is written to, or read from a storage device. Administrative information is information that is used to control the transfer of data, and to control operation of the parallel host adapter.
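The distinction between data and administrative information can be illustrated with a small sketch. The field names below are hypothetical; actual SCB layouts are adapter-specific.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SequencerCommandBlock:
    """Hypothetical SCB: administrative information describing one transfer.

    The SCB never carries the data itself; it tells the host adapter
    where the data lives and how to move it.
    """
    target_id: int                         # SCSI target for this command
    scatter_gather: List[Tuple[int, int]]  # (host address, length) pairs
    direction: str                         # "read" or "write"
    completion_status: int = 0             # filled in when the command finishes

# A single SCB describing a 3 KB read split across two host buffers.
scb = SequencerCommandBlock(
    target_id=4,
    scatter_gather=[(0x1000, 2048), (0x3000, 1024)],
    direction="read",
)
total = sum(length for _, length in scb.scatter_gather)
print(total)  # total bytes the data channel will move: 3072
```

The scatter/gather list is the administrative description of where data goes; the 3072 bytes themselves travel only through the data channel.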
The use of a single channel for both data and administrative information limited the data transfer throughput. Consequently, a new architecture was introduced that separated the administrative information flow from the data flow. A high level block diagram of a parallel host adapter 100 that separated the channel architecture into an administrative information channel 101 and a data channel 102 is illustrated in FIG. 1.
Administrative information was transferred to and from the host I/O bus via administrative information channel 101. Administrative information channel 101 coupled a SCSI array memory 160 to PCI bus 110. Specifically, in channel 101, a command direct memory access (DMA) engine 151 coupled PCI bus 110 to SCSI array memory 160. SCSI array memory 160 could be either memory onboard the parallel host adapter, or memory external to the parallel host adapter.
Data channel 102 coupled SCSI bus 120 to PCI bus 110 so that data could be transferred between the two buses. A SCSI module 130 coupled SCSI bus 120 to a first-in-first-out (FIFO) data buffer 140. SCSI module 130 transferred data on SCSI bus 120 to FIFO data buffer 140, and transferred data from FIFO data buffer 140 to SCSI bus 120.
A data DMA engine 150, typically included in a host interface circuit within the parallel host adapter, coupled FIFO data buffer 140 to PCI bus 110. Data DMA engine 150 transferred data on PCI bus 110 to FIFO data buffer 140, and transferred data from FIFO data buffer 140 to PCI bus 110. As is known to those of skill in the art, DMA engines 151 and 150 were typically configured by an onboard sequencer (not shown) using administrative information stored in SCSI array memory 160.
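The configuration step performed by the sequencer can be sketched as follows. The register and field names are invented for illustration; real DMA engine programming interfaces are adapter-specific.

```python
# Hypothetical sketch: a sequencer programs a DMA engine from
# administrative information held in the SCSI array memory.

class DMAEngine:
    """Minimal model of a DMA engine with address/count registers."""
    def __init__(self):
        self.host_address = 0
        self.byte_count = 0
        self.enabled = False

    def configure(self, host_address, byte_count):
        # The sequencer writes these registers before starting a transfer.
        self.host_address = host_address
        self.byte_count = byte_count

    def start(self):
        self.enabled = True

# Administrative information for one transfer, as it might sit in
# SCSI array memory (fields are hypothetical).
admin_info = {"host_address": 0x8000, "byte_count": 4096}

data_dma = DMAEngine()
data_dma.configure(**admin_info)
data_dma.start()
print(data_dma.enabled, hex(data_dma.host_address))  # True 0x8000
```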
The channel configuration of FIG. 1 enabled the concurrent flow of data and administrative information, in contrast to the earlier single channel configuration that allowed only the flow of one or the other at a given instant in time. However, both prior art channel configurations allowed only one data context in the data channel at a time. As used herein, data context means the data transfers associated with a particular command, e.g., a particular SCB.
FIFO data buffer 140 was designed to minimize the time that parallel host adapter 100 required access to PCI bus 110, and to accept data from SCSI bus 120 without introducing delay on SCSI bus 120. For example, in a receive operation where data was transferred from SCSI bus 120 to PCI bus 110, data from SCSI bus 120 was collected in FIFO data buffer 140 until there was sufficient data in FIFO data buffer 140 to justify requesting access to PCI bus 110. Typically, data was burst to the host from FIFO data buffer 140 using the highest speed PCI transfer mode.
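The threshold behavior described above can be sketched as a small simulation. The capacity, threshold, and burst values are illustrative, not taken from any actual adapter.

```python
from collections import deque

class ThresholdFIFO:
    """FIFO data buffer that requests the host bus only once enough
    data has accumulated to justify a burst, as in the receive path
    described above."""

    def __init__(self, capacity, burst_threshold):
        self.buf = deque()
        self.capacity = capacity
        self.burst_threshold = burst_threshold

    def push_from_scsi(self, byte):
        # If the buffer ever filled, the SCSI bus would stall.
        if len(self.buf) >= self.capacity:
            raise OverflowError("SCSI bus would stall: FIFO full")
        self.buf.append(byte)

    def ready_to_burst(self):
        # Only ask for PCI bus access when a full burst is waiting.
        return len(self.buf) >= self.burst_threshold

    def burst_to_host(self):
        # Empty the buffer in one high-speed burst.
        out = list(self.buf)
        self.buf.clear()
        return out

fifo = ThresholdFIFO(capacity=512, burst_threshold=128)
for b in range(100):
    fifo.push_from_scsi(b)
print(fifo.ready_to_burst())      # False: too little data to justify a burst
for b in range(28):
    fifo.push_from_scsi(b)
print(fifo.ready_to_burst())      # True: 128 bytes queued
print(len(fifo.burst_to_host()))  # 128
```

The trade-off the text goes on to describe follows directly from this sketch: a larger buffer bursts more efficiently, but takes longer to drain.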
As SCSI bus data transfer rates increased, the size of FIFO data buffer 140 typically also increased to maintain or even improve PCI efficiency, and to prevent SCSI bus stalls. However, the larger size of FIFO data buffer 140 required a longer time for buffer 140 to complete a transfer to the host, i.e., a longer time to empty, when the SCSI bus data transfer was either suspended or completed.
FIFO data buffer 140 was unavailable for another data transfer until emptying of buffer 140 was completed. Consequently, another data context was allowed access to channel 102 only after the previous data context was completely flushed out of channel 102. In some cases, the delay introduced by the wait for flushing of channel 102 was five microseconds or more.
During this time delay, another SCSI device could be ready to transfer data to parallel host adapter 100, but the transfer was held off while buffer 140 was flushing. This resulted in an appreciable time delay on SCSI bus 120. Hence, while parallel host adapter 100 was an improvement over the single channel parallel host adapter, the data path could still introduce significant delays because a new data context was held off until the old data context was flushed from the data channel. As I/O bus speeds increase, further advances in parallel host adapter data throughput are required, or the parallel host adapter will become a major I/O bottleneck that will limit overall system performance.
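The cost of the single-context restriction can be made concrete with a timing sketch. The five-microsecond flush figure comes from the text above; the event model itself is hypothetical.

```python
# Hypothetical timing sketch: with a single data context per channel,
# a second SCSI device's transfer must wait for the old context to
# flush, as described above.

FLUSH_TIME_US = 5.0   # time to empty the FIFO to the host (from the text)

def next_transfer_start(flush_done_at_us, device_ready_at_us):
    """A new data context may enter the channel only after the
    previous one has been completely flushed."""
    return max(flush_done_at_us, device_ready_at_us)

# The old context finishes on the SCSI side at t=0 and starts flushing.
flush_done = 0.0 + FLUSH_TIME_US
# A second device is ready almost immediately afterward...
device_ready = 0.5
start = next_transfer_start(flush_done, device_ready)
print(start - device_ready)  # the new context is held off 4.5 us
```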
SUMMARY OF THE INVENTION
According to the principles of this invention, a new parallel host adapter channel architecture eliminates the I/O bottlenecks of the prior art parallel host adapters. The novel parallel host adapter channel architecture includes a plurality of data channels. In one embodiment, at least one dedicated receive data channel in the plurality supports multiple data contexts at the same time. In another embodiment, each of the plurality of data channels is a bi-directional data channel. The parallel host adapter channel architecture of this invention provides a new level of data throughput that is compatible with, and enhances performance on, high speed I/O buses by eliminating the prior art I/O bottleneck.
According to the principles of this invention, a parallel host adapter that interfaces two I/O buses includes at least two data channels that can be used concurrently as a receive data channel and a send data channel, or alternatively, in one embodiment, as two receive channels. When the two data channels are a dedicated receive data channel and a dedicated send data channel, the receive data channel supports at least two data contexts. This permits the parallel host adapter to transmit an old data context to one of the I/O buses at the same time that the parallel host adapter is receiving a new data context from the other of the I/O buses.
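The two-context receive arrangement just described can be sketched as a receive channel that accepts a new context while the old one drains. The class and method names are invented for illustration.

```python
from collections import deque

class MultiContextReceiveChannel:
    """Sketch of a dedicated receive data channel that holds at least
    two data contexts: an old context draining to one I/O bus while a
    new context arrives from the other I/O bus."""

    def __init__(self, max_contexts=2):
        self.contexts = deque()
        self.max_contexts = max_contexts

    def accept_new_context(self, scb_tag):
        # Unlike the prior art, there is no wait for the old context
        # to flush before a new one is admitted.
        if len(self.contexts) >= self.max_contexts:
            return False
        self.contexts.append({"scb": scb_tag, "data": []})
        return True

    def receive_from_scsi(self, byte):
        self.contexts[-1]["data"].append(byte)   # newest context fills

    def drain_to_host(self):
        return self.contexts.popleft()           # oldest context empties

chan = MultiContextReceiveChannel()
chan.accept_new_context("SCB-7")
chan.receive_from_scsi(0xAA)
# The old context is still queued for the host, yet a new one is accepted:
accepted = chan.accept_new_context("SCB-8")
chan.receive_from_scsi(0xBB)
old = chan.drain_to_host()
print(accepted, old["scb"], len(chan.contexts))  # True SCB-7 1
```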
In one embodiment, the parallel host adapter of this invention also includes an administrative information channel that couples one of the I/O buses to a memory where administrative information for the parallel host adapter is stored.
The dedicated send data channel of this invention includes a send buffer memory and a data transfer engine. The data transfer engine is coupled to a first port of the send buffer memory and to a first I/O bus coupled to the parallel host adapter. The send buffer memory is a single data context buffer memory.
Data is transferred from the first I/O bus to the send buffer memory by the data transfer engine. The data stored in the send buffer memory is transferred through a second port of the send buffer memory through a second I/O bus interface circuit to a second I/O bus coupled to the parallel host adapter, or alternatively purged depending on the event or events that occur on the second I/O bus. The second I/O bus interface circuit is connected to the second I/O bus via a single I/O bus data port.
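The transfer-or-purge behavior of the send data channel can be sketched as follows. The event names are hypothetical; the text above does not name the specific events on the second I/O bus.

```python
class SendChannel:
    """Sketch of the dedicated send data channel: a single-context
    send buffer filled by a data transfer engine from the first I/O
    bus, then either transferred out through the second port or
    purged, depending on events on the second I/O bus."""

    def __init__(self):
        self.buffer = []          # single data context send buffer

    def dma_from_first_bus(self, data):
        self.buffer.extend(data)  # data transfer engine fills the first port

    def handle_second_bus_event(self, event):
        if event == "target_ready":
            sent, self.buffer = self.buffer, []
            return sent           # transferred out through the second port
        if event == "target_abort":
            self.buffer = []      # purged without reaching the second bus
            return []
        return None

chan = SendChannel()
chan.dma_from_first_bus([1, 2, 3])
print(chan.handle_second_bus_event("target_ready"))  # [1, 2, 3]
chan.dma_from_first_bus([4, 5])
print(chan.handle_second_bus_event("target_abort"))  # []
print(chan.buffer)  # [] - buffer is empty either way
```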
