Method of validation and host buffer allocation for unmapped...

Multiplex communications – Pathfinding or routing – Switching a message which includes an address header

Reexamination Certificate


Details

C370S412000, C370S474000, C709S214000, C712S001000


active

06314100

ABSTRACT:

TECHNICAL FIELD
This invention relates to the transferring of data in computer networks, and more particularly to processing and transmitting sequences of non-interlocked frames of data across a computer network boundary.
BACKGROUND
The number of computers and peripherals has mushroomed in recent years. This has created a need for improved methods of interconnecting these devices. A wide variety of networking paradigms have been developed to enable different kinds of computers and peripheral components to communicate with each other.
There exists a bottleneck in the speed with which data can be exchanged along such networks. This is not surprising because increases in network architecture speeds have not kept pace with faster computer processing speeds. The processing power of computer chips has historically doubled about every 18 months, creating increasingly powerful machines and “bandwidth hungry” applications. It has been estimated that one megabit per second of input/output is generally required per “MIPS” (millions of instructions per second) of processing power. With CPUs now easily exceeding 200 MIPS, it is difficult for networks to keep up with these faster speeds.
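The rule of thumb above is simple arithmetic, and can be sketched as follows (an illustrative calculation only; the constant is the estimate quoted in the text, not a measured value):

```python
def required_io_mbps(mips: float) -> float:
    """Estimate the I/O bandwidth (Mbps) a CPU needs, per the rule of
    thumb that roughly one megabit per second of input/output is
    required per MIPS of processing power."""
    MBPS_PER_MIPS = 1.0  # rule-of-thumb constant from the text
    return mips * MBPS_PER_MIPS

# A 200-MIPS CPU would need on the order of 200 Mbps of I/O --
# most of the bandwidth of a 266 Mbps Fibre Channel link.
print(required_io_mbps(200))  # 200.0
```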
Area-wide networks and channels are two approaches that have been developed for computer network architectures. Traditional networks (e.g., LANs and WANs) offer a great deal of flexibility and relatively large distance capabilities. Channels, such as the Enterprise System Connection (ESCON) and the Small Computer System Interface (SCSI), have been developed for high performance and reliability. Channels typically use dedicated short-distance connections between computers or between computers and peripherals.
Features of both channels and networks have been incorporated into a new network standard known as “Fibre Channel”. Fibre Channel systems combine the speed and reliability of channels with the flexibility and connectivity of networks. Fibre Channel products currently can run at very high data rates, such as 266 Mbps or 1062 Mbps. These speeds are sufficient to handle quite demanding applications, such as uncompressed, full motion, high-quality video.
There are generally three ways to deploy a Fibre Channel network: simple point-to-point connections; arbitrated loops; and switched fabrics. The simplest topology is the point-to-point configuration, which simply connects any two Fibre Channel systems directly. Arbitrated loops are Fibre Channel ring connections that provide shared access to bandwidth via arbitration. Switched Fibre Channel networks, called “fabrics”, yield the highest performance by leveraging the benefits of cross-point switching.
The Fibre Channel fabric works something like a traditional phone system. The fabric can connect varied devices such as work stations, PCs, servers, routers, mainframes, and storage devices that have Fibre Channel interface ports. Each such device can have an origination port that “calls” the fabric by entering the address of a destination port in a header of a frame. The Fibre Channel specification defines the structure of this frame. (This frame structure raises data transfer issues that will be discussed below and addressed by the present invention). The Fibre Channel fabric does all the work of setting up the desired connection, so the frame originator does not need to be concerned with complex routing algorithms. There are no complicated permanent virtual circuits (PVCs) to set up. Fibre Channel fabrics can handle more than 16 million addresses and thus are capable of accommodating very large networks. The fabric can be enlarged by simply adding ports. The aggregate data rate of a fully configured Fibre Channel network can be in the terabit-per-second range.
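For reference, the frame header defined at the FC-2 level is a 24-byte structure carrying, among other fields, the 24-bit destination (D_ID) and source (S_ID) port addresses and the 16-bit OX_ID/RX_ID exchange identifiers discussed later. The sketch below parses that header; the field layout follows the author's reading of the FC-PH specification (six big-endian 32-bit words) and is offered as an illustration, not as the patent's own mechanism:

```python
import struct
from typing import NamedTuple

class FcHeader(NamedTuple):
    r_ctl: int      # routing control
    d_id: int       # 24-bit destination port address ("called" port)
    cs_ctl: int     # class-specific control
    s_id: int       # 24-bit source port address
    type: int       # upper-layer protocol type
    f_ctl: int      # frame control
    seq_id: int     # sequence identifier
    df_ctl: int     # data field control
    seq_cnt: int    # sequence count
    ox_id: int      # originator exchange identifier
    rx_id: int      # responder exchange identifier
    parameter: int  # e.g., relative offset of the payload

def parse_fc_header(buf: bytes) -> FcHeader:
    """Unpack a 24-byte FC-2 frame header (six big-endian 32-bit words)."""
    w0, w1, w2, w3, w4, w5 = struct.unpack(">6I", buf[:24])
    return FcHeader(
        r_ctl=w0 >> 24, d_id=w0 & 0xFFFFFF,
        cs_ctl=w1 >> 24, s_id=w1 & 0xFFFFFF,
        type=w2 >> 24, f_ctl=w2 & 0xFFFFFF,
        seq_id=w3 >> 24, df_ctl=(w3 >> 16) & 0xFF, seq_cnt=w3 & 0xFFFF,
        ox_id=w4 >> 16, rx_id=w4 & 0xFFFF,
        parameter=w5,
    )
```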
Each of the three basic types of Fibre Channel connections is shown in FIG. 1, which shows a number of ways of using Fibre Channel technology. In particular, point-to-point connections 100 are shown connecting mainframes to each other. A Fibre Channel arbitrated loop 102 is shown connecting disk storage units. A Fibre Channel switch fabric 104 connects work stations 106, mainframes 108, servers 110, disk drives 112, and local area networks (LANs) 114. Such LANs include, for example, Ethernet, Token Ring and FDDI networks.
An ANSI specification (X3.230-1994) defines the Fibre Channel network. This specification distributes Fibre Channel functions among five layers. As shown in FIG. 2, the five functional layers of the Fibre Channel are: FC-0, the physical media layer; FC-1, the coding and encoding layer; FC-2, the actual transport mechanism, including the framing protocol and flow control between nodes; FC-3, the common services layer; and FC-4, the upper layer protocol.
While the Fibre Channel operates at a relatively high speed, it would be desirable to increase speeds further to meet the needs of faster processors. One way to do this would be to eliminate, or reduce, delays that occur at interface points. One such delay occurs during the transfer of a frame from the FC-1 layer to the FC-2 layer. At this interface, devices linked by a Fibre Channel data link receive Fibre Channel frames serially. A protocol engine receives these frames and processes them at the next layer, the FC-2 layer shown in FIG. 2. The functions of the protocol engine include validating each frame; queuing up direct memory access (DMA) operations to transfer each frame to the host; and building transmit frames. Each frame includes a header and a payload portion.
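The per-frame duties of such a protocol engine can be summarized in a short sketch. All names here are hypothetical; the real engine is typically hardware, and this only illustrates the validate / queue-DMA / build-ACK flow described above:

```python
def process_received_frame(frame, host_dma_queue, validate, build_ack):
    """Illustrative per-frame flow of an FC-2 protocol engine:
    validate the header, queue a DMA transfer of the payload toward
    host memory, and (for acknowledged classes of service) build an
    acknowledgment frame for transmission."""
    header, payload = frame
    if not validate(header):
        return None                  # invalid frame: discard/report
    host_dma_queue.append(payload)   # schedule DMA to a host buffer
    return build_ack(header)         # transmit-side frame building
```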
Conventional approaches to handling frames generally rely on the involvement of a host CPU on a frame-by-frame basis. For example, the validation of received frames and setting up DMA operations and acknowledgments typically involve the host CPU, which limits frame transmission and reception rates and prevents the host CPU from performing other tasks. Further, a host CPU with software protocol “stacks” may have difficulty keeping up with fast networks such as Fibre Channel.
Typically in Fibre Channel, all received frames are mapped to a context that allows a protocol engine to validate the received frame header against expected values. In particular, in most classes of Fibre Channel service, there is an interlocking frame that allows the transmitter and receiver to map a sequence of frames to an exchange using the “RXID” and “OXID” header fields. However, in certain classes of service and profiles (e.g., Class 3, TCP/IP), sequences of Fibre Channel frames are not interlocked. Thus, received frames have no associated context so the protocol engine cannot validate the header. In conventional designs, the protocol engine must pass both the received frame header and the payload to the host memory, so that the host CPU can validate the header and copy the payload data into the proper host buffer. Each transfer of a frame to the host memory generates an interrupt to the host CPU. This method burdens the host, consumes host buffer space for header storage, and wastes transfer bus bandwidth.
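The interlocked versus unmapped distinction above can be sketched as a context lookup keyed by the exchange identifiers. When a context exists, the engine validates the header in place; when none exists (e.g., Class 3 TCP/IP traffic), the conventional fallback is to hand both header and payload to the host. Names and the fields checked are illustrative assumptions, not the patent's design:

```python
def handle_frame(header, payload, contexts, pass_to_host):
    """contexts maps (ox_id, rx_id) -> expected header values.
    Interlocked frames are validated against their exchange context by
    the protocol engine. Unmapped frames must be passed whole to the
    host, costing an interrupt, header storage in host buffers, and
    transfer-bus bandwidth for every frame."""
    ctx = contexts.get((header["ox_id"], header["rx_id"]))
    if ctx is not None:
        # Engine-side validation: compare against expected values.
        return header["s_id"] == ctx["s_id"]
    pass_to_host(header, payload)  # no context: host must validate
    return None
```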
FIG. 3 shows a simplified block diagram of a typical prior art host data structure for unmapped frames. A first frame 300 and a second frame 302 are assembled from serial data received from a network data link. Each frame consists of a header and a payload portion, indicated in the diagram as “HDR1” and “PL1”, and “HDR2” and “PL2”, respectively. Since the received frames do not have an associated context, they cannot be validated by a protocol engine. Thus, both the received frame header and payload data of each frame 300 and 302 must be passed to the host memory 304 for header validation and proper buffer storage of the associated payload data. Frames stored in the host memory 304 are simply concatenated, as shown. The host must then serially examine each frame header to determine if the frame is part of the current sequence.
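The prior-art host-side work just described amounts to a linear scan over the concatenated frames, with the host CPU matching every header against the current sequence. A minimal sketch, with hypothetical header fields chosen for illustration:

```python
def collect_sequence_payloads(host_memory, s_id, seq_id):
    """host_memory is a list of (header, payload) pairs stored in
    arrival order, headers included. To reassemble one sequence, the
    host CPU must serially examine every frame header, even for frames
    that belong to other sequences."""
    return [payload
            for header, payload in host_memory
            if header["s_id"] == s_id and header["seq_id"] == seq_id]
```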
In view of the foregoing, objects of the invention include: increasing data transfer processing speeds in high speed networks such as the Fibre Channel network; providing a technique that can speed up a protocol...
