Processor for determining physical lane skew order

Electrical computers and digital data processing systems: input/output – Input/output data processing – Input/output process timing

Details

Classification: C710S052000, C710S118000, C713S400000, C713S600000
Type: Reexamination Certificate
Status: active
Patent number: 06625675

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention generally relates to input/output (I/O) data transmission devices, and more particularly to first-in-first-out (FIFO) buffer devices in I/O data transmission paths that compensate for lane skew order.
Description of the Related Art
InfiniBand (registered trademark of the InfiniBand Trade Association, Portland, Oreg.) architecture is a new common I/O specification to deliver a channel-based, switched-fabric technology that the entire hardware and software industry can adopt. A network and components associated with an InfiniBand network 100 are shown in FIG. 1a.
InfiniBand based networks are designed to satisfy bandwidth-hungry network applications, such as those combining voice, data, and video on the Internet. InfiniBand architecture is being developed by the InfiniBand Trade Association that includes many hardware and software companies. Its robust layered design enables multiple computer systems and peripherals to work together more easily as a single high-performance and highly available server.
Being a fabric-centric, message-based architecture, InfiniBand is ideally suited for clustering, input/output extension, and native attachment in diverse network applications. InfiniBand technology can be used to build remote card cages 15 or connect to attached hosts 35, routers 40, or disk arrays 50. InfiniBand also features enhanced fault isolation, redundancy support, and built-in failover capabilities to provide high network reliability and availability. Featuring high performance and reliability, these devices provide solutions for a range of network infrastructure components, including servers and storage area networks.
In FIG. 1b, a block diagram is shown in exemplary form of InfiniBand components in a portion of the network shown in FIG. 1a. These components have input/output interfaces, each forming part of a target channel adapter (TCA) 10, a host channel adapter (HCA) 20, an interconnect switch device 30, or a router 40, each of which has application-specific integrated circuit (ASIC) core interfaces that include InfiniBand Technology Link Protocol Engine (IBT-LPE) cores connecting the ASICs of these components through links 25 in an InfiniBand Technology (IBT) network 100. The IBT-LPE core supports a range of functionality that is required by all IBT devices in the upper levels of the physical layer and the lower link layer. It also handles the complete range of IBT bandwidth requirements, up to and including a 4-wide link operating at 2.5 gigabits per second. The IBT-LPE core (a large integrated circuit design) in the upper levels of the physical layer and the link-layer core of the ASIC comply with standards established by the InfiniBand Trade Association in the IBTA 1.0 specifications (2001). Such architectures decouple the I/O subsystem from memory by using channel-based point-to-point connections rather than shared-bus, load-and-store configurations.
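To put the quoted rate in context, InfiniBand signaling of this generation uses 8b/10b encoding at the physical layer, so only eight of every ten transmitted bits carry data. The sketch below is illustrative arithmetic only; the 2.5 gigabit-per-second lane rate and the link widths are the figures quoted above, and the 8b/10b efficiency is an assumption drawn from the InfiniBand physical layer rather than from this text:

    #include <stdio.h>

    /* Illustrative link-rate arithmetic for IBTA 1.0-era links.
     * Raw signaling: 2.5 Gb/s per lane; 8b/10b coding carries
     * 8 data bits in every 10-bit symbol (80% efficiency).     */
    int main(void)
    {
        const double lane_rate_gbps = 2.5;        /* per-lane signaling rate */
        const double coding_eff     = 8.0 / 10.0; /* 8b/10b                  */
        const int    widths[]       = { 1, 4 };   /* 1-wide and 4-wide links */

        for (int i = 0; i < 2; i++) {
            double raw  = widths[i] * lane_rate_gbps;
            double data = raw * coding_eff;
            printf("%d-wide link: %.1f Gb/s raw, %.1f Gb/s data\n",
                   widths[i], raw, data);
        }
        return 0;   /* prints 2.5/2.0 for 1-wide and 10.0/8.0 for 4-wide */
    }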
The TCA 10 provides an interface for InfiniBand-type data storage and communication components. Creating InfiniBand adapters that leverage the performance benefits of the InfiniBand architecture is accomplished through a cooperative, coprocessing approach to the design of an InfiniBand and native I/O adapter. The TCA 10 provides a high-performance interface to the InfiniBand fabric, and the host channel communicates with a host-based I/O controller using a far less complex interface consisting of queues, shared memory blocks, and doorbells. Together, the TCA and the I/O controller function as an InfiniBand I/O channel adapter. The TCA implements in hardware the entire mechanism required to move data between queues and shared memory on the host bus and packets on the InfiniBand network. The combination of hardware-based data movement with optimized queuing and interconnect switch priority arbitration schemes working in parallel with the host-based I/O controller functions maximizes the InfiniBand adapter's performance.
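One way to picture the queue-and-doorbell interface just described is sketched below. The structure, field names, and register layout are hypothetical (the patent specifies none of them); the point is only that the host posts work descriptors to a shared-memory ring and then writes a doorbell register to notify the adapter:

    #include <stdint.h>

    /* Hypothetical work queue shared between a host I/O controller and
     * a TCA. Names and layout are illustrative, not from the patent.  */
    #define QUEUE_DEPTH 64

    struct work_descriptor {
        uint64_t buffer_addr;   /* shared-memory block holding the data */
        uint32_t length;        /* bytes to move                        */
        uint32_t opcode;        /* e.g., send or receive                */
    };

    struct work_queue {
        struct work_descriptor ring[QUEUE_DEPTH];
        volatile uint32_t head; /* consumer (adapter) index             */
        uint32_t tail;          /* producer (host) index                */
    };

    /* The doorbell is typically a memory-mapped device register; writing
     * it tells the adapter hardware that new descriptors are waiting.  */
    static inline void ring_doorbell(volatile uint32_t *doorbell,
                                     uint32_t tail)
    {
        *doorbell = tail;
    }

    void post_work(struct work_queue *q, volatile uint32_t *doorbell,
                   struct work_descriptor d)
    {
        q->ring[q->tail % QUEUE_DEPTH] = d;  /* publish the descriptor  */
        q->tail++;
        ring_doorbell(doorbell, q->tail);    /* notify the adapter      */
    }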
The HCA 20 enables connections from a host bus to a dual 1× or 4× InfiniBand network. This allows an existing server to be connected to an InfiniBand network and communicate with other nodes on the InfiniBand fabric. The host-bus-to-InfiniBand HCA integrates a dual InfiniBand interface adapter (physical, link, and transport levels), a host bus interface, a direct memory access (DMA) engine, and management support. It implements a layered memory structure in which connection-related information is stored in either on-device memory or off-device memory attached directly to the HCA. It features pipelined header and data processing in both directions. Two embedded InfiniBand microprocessors and separate DMA engines permit concurrent receive and transmit data-path processing.
The interconnect switch 30 can be an 8-port 4× switch that incorporates eight InfiniBand ports and a management interface. Each port can connect to another switch, the TCA 10, or the HCA 20, enabling configuration of multiple servers and peripherals that work together in a high-performance InfiniBand-based network. The interconnect switch 30 integrates the physical and link layers for each port and performs filtering, mapping, queuing, and arbitration functions. It includes multicast support, as well as performance and error counters. The management interface connects to a management processor that performs configuration and control functions. The interconnect switch 30 typically can provide a maximum aggregate channel throughput of 64 gigabits per second (consistent with eight 4× ports, each delivering 8 gigabits per second of 8b/10b-decoded data), integrates buffer memory, and supports up to four data virtual lanes (VL) and one management VL per port.
FIG. 2 illustrates the core logic 210 that connects an InfiniBand transmission media 280 (the links 25 shown in FIG. 1b) to an application-specific integrated circuit (ASIC) 240 (such as the TCA 10, the HCA 20, the switch 30, the router 40, etc., as shown in FIG. 1b). The core logic 210 illustrated in FIG. 2 is improved using the invention described below. However, the core logic 210 shown in FIG. 2 is not prior art and may not be generally known to those ordinarily skilled in the art at the time of filing of this invention. Data received through the SERDES 225 over the lanes 200 in the data links can be in data packets.
To accommodate the different speeds of the data signals being handled, the core logic 210 includes a serialization portion 270 that includes the serialization/deserialization (SERDES) units 225. The structure and operation of such serialization/deserialization units is well known to those ordinarily skilled in the art and will not be discussed in detail herein, so as not to unnecessarily obscure the salient features of the invention.
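Although the SERDES is treated as a black box here, a minimal software model conveys the idea: the receive side shifts in one bit per serial clock and emits a 10-bit parallel symbol every tenth cycle, which matches the 10-to-1 example in the next paragraph. The model below is purely illustrative and does not represent any particular SERDES design:

    #include <stdint.h>
    #include <stdio.h>

    /* Toy model of the receive half of a SERDES: accumulate 10 serial
     * bits, then emit one 10-bit parallel symbol. The parallel side
     * therefore runs at one tenth of the serial bit rate.             */
    struct deserializer {
        uint16_t shift;  /* partial symbol being assembled */
        int      nbits;  /* bits collected so far          */
    };

    /* Shift in one bit; returns 1 and stores a symbol every 10th bit. */
    int deser_push_bit(struct deserializer *d, int bit, uint16_t *symbol)
    {
        d->shift = (uint16_t)((d->shift << 1) | (bit & 1));
        if (++d->nbits == 10) {
            *symbol  = d->shift;
            d->shift = 0;
            d->nbits = 0;
            return 1;
        }
        return 0;
    }

    int main(void)
    {
        struct deserializer d = { 0, 0 };
        uint16_t sym;
        /* Feed 20 alternating bits -> two 10-bit symbols emitted. */
        for (int i = 0; i < 20; i++)
            if (deser_push_bit(&d, i & 1, &sym))
                printf("symbol: 0x%03x\n", sym);
        return 0;
    }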
The InfiniBand transmission media 280 is made up of a large number of byte-striped serial transmission lanes 200 that form the links 25. The receive serialization/deserialization units 225 deserialize the signals from the transmission media 280 and perform sufficient conversion to a frequency that is acceptable to the core logic 210. For example, if the serialization/deserialization receive units 225 operate to deserialize 10 bits at a time, a 10-to-1 reduction occurs that reduces the 2.5 gigabit per second speed on the transmission media 280 to a 250 MHz frequency that is acceptable to the core logic 210.
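Because a link's bytes are striped round-robin across its lanes, each lane's deserialized stream must be recombined in the correct order; if one lane's path delays its data relative to the others (lane skew), bytes arrive misaligned. The following sketch is hypothetical and for illustration only; it shows the striping rule and why per-lane delays would scramble the reassembled stream:

    #include <stdio.h>
    #include <string.h>

    #define LANES 4
    #define N     16   /* bytes in the example payload */

    int main(void)
    {
        unsigned char payload[N], lane_buf[LANES][N / LANES];

        for (int i = 0; i < N; i++)
            payload[i] = (unsigned char)i;

        /* Byte striping: byte i travels on lane i % LANES,
         * in symbol slot i / LANES of that lane.           */
        for (int i = 0; i < N; i++)
            lane_buf[i % LANES][i / LANES] = payload[i];

        /* Reassembly assumes all lanes are aligned. If one lane were
         * delayed by a symbol time (skew), its bytes would pair with
         * the wrong bytes from the other lanes -- hence the need for
         * deskew compensation such as the FIFO buffers 261 below.   */
        unsigned char rebuilt[N];
        for (int i = 0; i < N; i++)
            rebuilt[i] = lane_buf[i % LANES][i / LANES];

        printf("round trip %s\n",
               memcmp(payload, rebuilt, N) == 0 ? "ok" : "mismatch");
        return 0;
    }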
The core logic 210 also includes a data control unit 260. The frequency of the signal propagating along the transmission media 280 may not always occur at this wire speed, but instead may be slightly above or below the desired frequency (e.g., by up to 100 parts per million), which makes it difficult to deliver properly synchronized data to the upper link layer. This inconsistency in the frequency is transferred through the serialization/deserialization units 225. The data control unit 260 includes FIFO buffers 261 that buffer the data being received by the serialization/deserialization units (SERDES) 225.
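A common way to absorb such a parts-per-million frequency offset is an elastic FIFO: the write side runs on the recovered receive clock, the read side on the local clock, and the buffer's fill level absorbs the short-term rate difference, with idle or skip symbols inserted or deleted when the level drifts too far. The sketch below is a generic illustration of that technique under those assumptions, not a reproduction of the patent's FIFO 261 design:

    #include <stdint.h>
    #include <stdio.h>

    #define FIFO_DEPTH 16   /* power of two, so indices can wrap */

    /* Generic elastic FIFO sketch. Writes are driven by the recovered
     * receive clock, reads by the local clock; with a +/-100 ppm
     * offset the fill level drifts slowly, and real hardware recenters
     * it by inserting or deleting idle symbols between packets.       */
    struct elastic_fifo {
        uint16_t data[FIFO_DEPTH];
        unsigned wr;   /* total symbols written (receive-clock domain) */
        unsigned rd;   /* total symbols read (local-clock domain)      */
    };

    static unsigned fill(const struct elastic_fifo *f)
    {
        return f->wr - f->rd;   /* valid while fill <= FIFO_DEPTH */
    }

    static int push(struct elastic_fifo *f, uint16_t sym)
    {
        if (fill(f) == FIFO_DEPTH)
            return -1;                        /* overrun: writer too fast  */
        f->data[f->wr++ % FIFO_DEPTH] = sym;
        return 0;
    }

    static int pop(struct elastic_fifo *f, uint16_t *sym)
    {
        if (fill(f) == 0)
            return -1;                        /* underrun: reader too fast */
        *sym = f->data[f->rd++ % FIFO_DEPTH];
        return 0;
    }

    int main(void)
    {
        struct elastic_fifo f = { {0}, 0, 0 };
        uint16_t sym;

        /* Keep the FIFO near half full; a steady drift toward empty or
         * full reveals the clock offset and triggers idle insert/delete. */
        for (unsigned t = 0; t < 8; t++)
            push(&f, (uint16_t)t);
        while (pop(&f, &sym) == 0)
            printf("read %u (fill now %u)\n", sym, fill(&f));
        return 0;
    }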
