Fencepost descriptor caching mechanism and method therefor

Electrical computers and digital data processing systems: input/output – Input/output data processing – Direct memory accessing

Reexamination Certificate


Details

Classification codes: C710S024000, C709S213000, C711S129000, C711S147000

Type: Reexamination Certificate

Status: active

Patent number: 06691178

ABSTRACT:

RELATED APPLICATIONS
This application is also related to co-pending U.S. Patent Applications entitled “METHOD AND SYSTEM OF CONTROLLING TRANSFER OF DATA BY UPDATING DESCRIPTORS IN DESCRIPTOR RINGS”; “METHOD AND SYSTEM OF ROUTING NETWORK BASED DATA USING FRAME ADDRESS NOTIFICATION”; and “METHOD AND APPARATUS FOR CONTROLLING NETWORK DATA CONGESTION” all filed Sep. 30, 1998, in the name of the same inventor as this application and assigned to the same assignee. Accordingly, the contents of all the related applications are incorporated by reference into this application.
1. Field of the Invention
This invention relates generally to a method and system for controlling the transfer of network based data arranged in frames between a host and controller having shared memory and, more particularly, to a fencepost caching mechanism for receiving and transmitting descriptor ring updates.
2. Background of the Prior Art
Data networks have become increasingly important in day-to-day activities and business applications. Most of these networks are packet-switched networks, such as the Internet, which use a transmission control protocol, frequently referred to as TCP/IP. The Transmission Control Protocol (TCP) manages the reliable reception and transmission of network traffic, while the Internet Protocol (IP) is responsible for routing and ensuring that packets are sent to the correct destination.
In a typical network, a mesh of transmission links is provided, as well as switching nodes and end nodes. End nodes typically ensure that any packet is received and transmitted on the correct outgoing link to reach its destination. The switching nodes are typically referred to as packet switches, routers, or intermediate systems. The sources and destinations of data traffic (the end nodes) can be referred to as hosts and end systems. These hosts and end systems typically consist of personal computers, workstations, and other terminals.
The network infrastructure employs routers that determine optimum paths by using routing algorithms. The routers also switch packets arriving at an input port to an output port based on the routing path for each packet. The routing algorithms (or routing protocols) are used to initialize and maintain routing tables, whose entries point to the next router to which a packet with a given destination address should be sent. Typically, fixed costs are assigned to each link in the network, reflecting factors such as link bandwidth. The least-cost paths can be determined by a router after it exchanges network topology and link-cost information with other routers. A traditional router is basically a computer dedicated to the task of moving packets of data from one channel to the next. Like most computers, it consists of a central MPU, bus, and memory, but it is distinguished by its possession of multiple I/O channels, usually managed by dedicated communication controllers.
A communication controller relieves a central MPU of many of the tasks associated with transmitting and receiving frames. A frame (sometimes referred to as a packet) is a single communication element which can be used for both link-control and data transfer purposes.
Most controllers include a direct memory access (DMA) device or function which provides access to an external shared memory resource. The controller allows either DMA or non-DMA data transfers. The controller accepts a command from the MPU, executes the command, and provides an interrupt and result back to the MPU.
These command operations often entail movement or control of various data structures. Data structures are used for the temporary storage of frames. They play a key role in the architecture of any successful router. Implementations vary, but generally one finds two species: ring buffers and linked lists.
A ring buffer consists of two components: descriptors and frame data buffers. Descriptors both describe and point to a respective frame data buffer within a shared system memory between the host and a controller. The descriptor ring is a circular queue composed of multiple descriptor entries containing pointers and information describing data buffers. Each descriptor ring is dedicated to a specific memory and mapped to specific channels within the controller. Each two-word descriptor entry within a descriptor ring is associated with one specific buffer in a system memory, such as the shared system memory between a network device (such as a controller) and the host.
The frame buffers are typically defined as blocks of memory containing frames for transmission or providing space for frame reception. Each transmit channel and each receive channel uses a dedicated descriptor ring. Whenever a frame exceeds the finite capacity of a single frame data buffer, the frame is said to “span” the buffer. An ownership bit in the first word of each descriptor indicates whether the host or the controller owns the associated frame data buffer.
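To make that layout concrete, the following is a minimal C sketch of such a two-word descriptor and its ring. The field names, widths, and bit positions (RING_SIZE, DESC_OWN, DESC_EOF, DESC_LEN_MASK) are illustrative assumptions, not taken from any particular controller:

    #include <stdint.h>

    #define RING_SIZE     64              /* number of descriptor entries (illustrative)        */
    #define DESC_OWN      (1u << 31)      /* ownership bit: set when the controller owns it     */
    #define DESC_EOF      (1u << 30)      /* end-of-frame flag: set in the last descriptor      */
    #define DESC_LEN_MASK 0x0000FFFFu     /* hypothetical byte-count field in the status word   */

    /* Two-word descriptor entry: a control/status word plus a word locating the
     * associated frame data buffer in the shared system memory.                 */
    struct descriptor {
        uint32_t status;                  /* ownership bit, EOF flag, byte count, etc.          */
        uint32_t buffer_addr;             /* address of the frame data buffer                   */
    };

    /* One dedicated ring per transmit channel and per receive channel. */
    struct descriptor_ring {
        struct descriptor entries[RING_SIZE];  /* circular queue of descriptor entries          */
        unsigned          head;                /* next entry the host will examine              */
    };
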
Ownership follows a specific protocol that must be adhered to by the controller and the host. Once ownership of a descriptor has been relinquished to the other party, whether device or host, no part of the descriptor or its associated buffer may be altered. The host gives the network device ownership of filled frame data buffers for frame transmission and of empty frame data buffers for frame reception. Conversely, the network device passes ownership back to the host for transmit frame data buffers it has used and receive frame data buffers it has filled.
For frame reception, the host is required to provide the controller or other network device with ownership of contiguous descriptors pointing to empty frame data buffers. Once a frame is fully received by the controller, ownership of its constituent descriptors is reassigned to the host, and the host is signaled of the event via an interrupt. The host is typically obligated to read a Master Interrupt Register (MIR) in order to surmise the meaning of the signal. Once this is accomplished, the frame may be dispatched in some fashion and ownership of the relevant descriptors returned to the controller.
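As an illustration of that receive hand-off, the following host-side sketch reuses the descriptor layout assumed above; read_mir(), MIR_RX_FRAME, and dispatch_frame_buffer() are hypothetical placeholders for the interrupt-register read and the upper-layer dispatch, not a documented controller interface:

    /* Assumed helpers, not part of any real controller API: */
    extern uint32_t read_mir(void);                                  /* read the Master Interrupt Register */
    extern void dispatch_frame_buffer(uint32_t addr, uint32_t len);  /* hand a received frame up the stack */
    #define MIR_RX_FRAME (1u << 0)                                   /* hypothetical "frame received" bit  */

    /* Hypothetical host-side receive service routine, invoked when the controller
     * raises an interrupt.  Builds on the descriptor layout sketched earlier.     */
    void host_rx_interrupt(struct descriptor_ring *rx)
    {
        uint32_t events = read_mir();          /* read the MIR to surmise why we were signaled */
        if (!(events & MIR_RX_FRAME))
            return;

        struct descriptor *d = &rx->entries[rx->head];

        /* The controller has cleared the ownership bit on descriptors it filled. */
        while (!(d->status & DESC_OWN)) {
            dispatch_frame_buffer(d->buffer_addr, d->status & DESC_LEN_MASK);

            /* Return the now-empty buffer to the controller: other fields are written
             * first and the ownership bit is set last, so the controller never sees a
             * partially updated descriptor.                                           */
            d->status = DESC_OWN;

            rx->head = (rx->head + 1) % RING_SIZE;
            d = &rx->entries[rx->head];
        }
    }
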
In typical operation, the host “follows” the controller or other network device around the ring, leaving “empty” descriptors in its wake for the controller to use. If the device gets too far ahead of the host, it can wrap around the descriptor ring and encounter descriptors it does not own; if this occurs, incoming frames could be lost.
For frame transmission, the device “follows” the host around a transmit descriptor ring, leaving used descriptors in its wake for the host to reclaim. The host transfers ownership of descriptors to the device when it has one or more frames ready for transmission. Once a frame is fully transmitted by the device, ownership of its constituent descriptors is transferred back to the host for reuse. The host is signaled of this event via an interrupt.
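By way of example, a host-side transmit enqueue under the same assumed layout might look like the following; kick_transmit() merely stands in for whatever mechanism notifies the device that new descriptors are ready and is purely hypothetical:

    extern void kick_transmit(void);   /* hypothetical notification to the device */

    /* Queue one frame that fits in a single buffer for transmission.  The host may
     * only touch descriptors it owns, and it relinquishes ownership last so that no
     * further edits are made once the device can see the entry.                    */
    int host_tx_enqueue(struct descriptor_ring *tx, uint32_t buf_addr, uint32_t len)
    {
        struct descriptor *d = &tx->entries[tx->head];

        if (d->status & DESC_OWN)      /* device still owns this entry: ring is full */
            return -1;

        d->buffer_addr = buf_addr;
        d->status      = DESC_EOF | (len & DESC_LEN_MASK);  /* single-buffer frame        */
        d->status     |= DESC_OWN;                          /* hand it to the device last */

        tx->head = (tx->head + 1) % RING_SIZE;
        kick_transmit();
        return 0;
    }
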
In certain applications, the host may elect to use data buffers that are smaller than the frames actually received or transmitted. A single frame may therefore be forced to span multiple buffers. This type of system allows frames to be dissected (scattered on reception) or assembled (gathered on transmission) by the controller. Multiple buffers can hold the constituent pieces of a frame by “chaining,” or grouping, the associated descriptors together. The chained descriptors are consecutive entries in a descriptor ring, with the end-of-frame flag set in the terminating descriptor of the chain. In other words, the data buffer of a descriptor entry that is owned but whose end-of-frame (EOF) flag is not set is considered to be part of a frame rather than an entire frame. Scatter/gather buffering benefits frames crossing multiple layers of a communication protocol: rather than laboriously copying large buffers from one service layer to the next, small pointers are passed. These pointers are ultimately converted into, or generated from, frame descriptors.
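To illustrate how such a chain might be traversed, the sketch below (again under the assumed layout from earlier) collects the constituent buffers of one received frame by walking consecutive descriptors until it reaches the terminating entry with the end-of-frame flag set:

    #include <stddef.h>

    /* Gather the buffers that make up one chained frame.  A descriptor the host owns
     * whose EOF flag is clear holds only part of the frame; the chain ends at the
     * entry whose end-of-frame flag is set.  Returns the number of constituent
     * buffers recorded, or 0 if the frame is not yet complete.                      */
    size_t host_collect_frame(const struct descriptor_ring *rx, unsigned start,
                              uint32_t bufs[], size_t max_bufs)
    {
        size_t n = 0;
        unsigned i = start;

        while (n < max_bufs) {
            const struct descriptor *d = &rx->entries[i];

            if (d->status & DESC_OWN)      /* controller is still filling this piece      */
                return 0;

            bufs[n++] = d->buffer_addr;    /* record this constituent buffer              */

            if (d->status & DESC_EOF)      /* terminating descriptor of the chain         */
                return n;

            i = (i + 1) % RING_SIZE;       /* chained descriptors are consecutive entries */
        }
        return n;                          /* ran out of room in bufs[]                   */
    }
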
During reception of large frames, the device “chains” or groups the descriptors together one by one as it fills each successive frame data buffer. When the end of the frame is reached, the end-of-frame flag is set in the terminating descriptor of the chain.
