Multiplex communications – Pathfinding or routing – Switching a message which includes an address header
Reexamination Certificate
1999-05-27
2003-12-23
Patel, Ajit (Department: 2664)
C370S428000
active
06667983
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to the field of communication systems including communication among computer systems that are networked together. More specifically, the present invention relates to computer controlled communication systems having improved message queuing mechanisms for use with a network interface card (NIC).
2. Related Art
Networked communication systems (“networks”) are very popular mechanisms for allowing multiple computers and peripheral systems to communicate with each other within larger computer systems. Local area networks (LANs) are one type of networked communication system and one type of LAN utilizes the Ethernet communication standard (IEEE 802.3). One Ethernet LAN standard is the 10 BaseT system which communicates at a rate of 10 Megabits per second and another Ethernet LAN standard, 100 BaseT, communicates at a rate of 100 Megabits per second. Computer systems can also communicate with coupled peripherals using different bus standards including the Peripheral Component Interconnect (PCI) bus standard and the Industry Standard Architecture (ISA) and Extended Industry Standard Architecture (EISA) bus standards. The IEEE 1394 serial communication standard is also another popular bus standard adopted by manufacturers of computer systems and peripheral components for its high speed and interconnection flexibilities.
FIG. 1A illustrates a prior art computer system 10 that can communicate data packets (messages) to and from a network of computers and peripherals 20 (a “network”). System 10 contains a processor 30 interfaced with a peripheral components interconnect (PCI) bus 25 which is also interfaced with a NIC device 12 and a volatile memory unit 40. The NIC 12 provides communication with the network 20. The NIC 12 provides a single register, called the Tx entry point 14, for queuing up data packets for transmission onto the network 20. The Tx entry point 14 contains a pointer to a linked list of data packets 45a-45n that reside in the volatile memory unit 40. Each data packet in the linked list contains a pointer 42a-42c to the next data packet for transmission. The NIC 12 reads the data packets of the linked list, in order, from the memory unit 40 and transmits them to network 20. When all the data packets in the linked list have been transmitted, or when the network 20 is down, the NIC 12 stops processing the data that is indicated by the pointer of the Tx entry point 14.
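The single-entry-point arrangement described above can be sketched in C. The structure and field names here are illustrative assumptions, not taken from the patent; the sketch only models the shape of the data: one head register pointing at a linked list of packet descriptors that the NIC walks in order.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical packet descriptor: each packet queued in host memory
 * points to the next packet for transmission (NULL terminates the
 * list, modeling packets 45a-45n and next-pointers 42a-42c). */
struct tx_packet {
    const char       *data;  /* payload in the volatile memory unit */
    size_t            len;
    struct tx_packet *next;
};

/* The NIC's single Tx entry point 14: one register holding the head
 * of the linked list of packets to transmit. */
struct tx_packet *tx_entry_point = NULL;

/* Model of the NIC's transmit loop: walk the list in order and
 * "transmit" each packet, stopping at the end of the list. */
int nic_transmit_all(void) {
    int sent = 0;
    for (struct tx_packet *p = tx_entry_point; p != NULL; p = p->next)
        sent++;  /* a real NIC would DMA p->data, p->len here */
    return sent;
}
```

The key property the sketch captures is that the NIC only ever sees the list through the one entry-point register, which is what makes appending to the list while the NIC runs hazardous, as the background goes on to explain.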
FIG. 1B illustrates a flow diagram 60 of steps performed by the processor 30 of system 10 (FIG. 1A) for queuing up a new data packet to NIC 12 for transmission over network 20. This flow diagram 60 illustrates the latencies attributed to system 10 for queuing up a new data packet. These latencies decrease the overall throughput of PCI bus 25 and degrade the performance of NIC 12, thereby decreasing the quality of service of system 10. At step 62 of FIG. 1B, to queue up a data packet for transmission, the processor 30 constructs the new data packet in a vacant memory space of memory unit 40. At step 64, the processor 30 requests access to the PCI bus 25, waits its turn in the round-robin arbitration scheme for the access grant, and then commands the NIC 12 to stall its current activity. Each of these activities of step 64 introduces unwanted latencies. At step 66, while the NIC 12 remains stalled, the processor 30 again requests PCI bus access, waits for the grant, and then sorts through the linked list of data packets 45a-45n to determine the last data packet in the list. The new data packet is then appended (e.g., linked) to the last data packet, 45n. Each of these activities of step 66 introduces more unwanted latencies. Lastly, at step 68, while the NIC remains stalled, the processor 30 again requests PCI bus access, waits for the grant, and then signals the NIC 12 to resume its activities. Again, each of these activities of step 68 introduces unwanted latencies.
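The stall-append-resume sequence of steps 64-68 can be sketched as follows. The function names, the stall flag, and the bus-request counter are illustrative assumptions standing in for the real PCI bus transactions; the sketch only shows why each queued packet costs three request/grant cycles while the NIC sits idle.

```c
#include <assert.h>
#include <stddef.h>

struct pkt { struct pkt *next; };

static struct pkt *tx_head = NULL;   /* what the Tx entry point references */
static int nic_stalled = 0;          /* models the NIC stall command */
static int bus_requests = 0;         /* counts PCI request/grant cycles */

/* Each helper models one PCI bus request followed by a wait for the
 * grant from the round-robin arbiter. */
static void pci_request_and_grant(void) { bus_requests++; }

/* Steps 64-68: stall the NIC, walk to the list tail, link the new
 * packet, then let the NIC resume -- three bus requests in total. */
void queue_packet(struct pkt *p) {
    pci_request_and_grant();         /* step 64: command NIC to stall */
    nic_stalled = 1;

    pci_request_and_grant();         /* step 66: find tail, append */
    p->next = NULL;
    if (tx_head == NULL) {
        tx_head = p;
    } else {
        struct pkt *tail = tx_head;
        while (tail->next != NULL)
            tail = tail->next;
        tail->next = p;              /* safe: the NIC cannot race past us */
    }

    pci_request_and_grant();         /* step 68: signal NIC to resume */
    nic_stalled = 0;
}
```

Every call pays for three arbitration round trips, and the NIC is stalled for the whole middle portion, which is the latency the background is criticizing.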
As shown above, the process 60 of queuing the new data packet for transmission requires at least three PCI bus requests, which introduce unwanted latency because each request is followed by a waiting period for the bus grant; to make matters worse, the first PCI bus request stalls the NIC 12. The NIC 12 is stalled because it operates independently from the processor 30, sending and receiving information based on the data's availability and the network's throughput. In other words, at the time the processor 30 wants to append the new data packet to the linked list, the processor 30 does not know which data packet in the linked list the NIC 12 is processing. Assuming the NIC is not stalled, if the processor 30 appends the new data packet to the linked list just after the NIC 12 has processed the last part of the last data packet 45n, then the newly appended data packet would never be recognized by the NIC 12 and thereby would never be transmitted to network 20. This is called a “race” condition because the processor 30 and the NIC 12 are not synchronized and the processor 30 does not know the transmission status of the NIC 12 at all times. Therefore, to eliminate the race condition, the processor 30 stalls the NIC 12, appends the new data packet to the linked list, and then allows the NIC 12 to resume its activities as shown in FIG. 1B.
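The race can be made concrete with a small model. The cursor variable and step function below are illustrative assumptions about the NIC's behavior, not the patent's actual hardware; they model a NIC that walks the list on its own and stops for good once it runs off the end.

```c
#include <assert.h>
#include <stddef.h>

struct pkt { struct pkt *next; int sent; };

/* The NIC's private cursor into the linked list: it advances on its
 * own schedule and becomes NULL after the last packet. */
static struct pkt *nic_cursor = NULL;

/* One unit of independent NIC work: transmit the current packet, if
 * any, and advance the cursor. Once the cursor is NULL, later steps
 * do nothing -- the NIC has stopped processing. */
void nic_step(void) {
    if (nic_cursor != NULL) {
        nic_cursor->sent = 1;
        nic_cursor = nic_cursor->next;
    }
}
```

If the processor links a new packet onto the old tail just after the NIC's cursor has already moved past it, the new packet is never seen, which is exactly the lost-packet outcome the stall is meant to prevent.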
Unfortunately, requesting PCI bus access and NIC stalling, in accordance with the steps 60 of FIG. 1B, heavily degrade system performance. Each PCI bus request generated by the processor 30 interrupts and degrades the communication of other components on the PCI bus 25. Furthermore, while the processor 30 waits for PCI bus access in order to link the new packet to the linked list, the NIC 12 remains stalled, again degrading communication performance.
Moreover, in many new processing environments and architectures, communication systems and computer systems need to process and communicate data packets of different data types. For instance, electronic mail (email) messages are sent and received by the system 10 (FIG. 1A). Also, voice and image data are sent and received by the system 10, as well as other multi-media content. However, live broadcasts (e.g., voice and data) need high-priority transmission without jitter to allow natural conversation and appearance, while other information, such as email messages, can be communicated successfully at lower priorities. Unfortunately, system 10 does not provide any special communication techniques for messages of different priorities.
Accordingly, what is needed is a communication system that reduces the latencies described above for queuing a new data packet for transmission by a NIC. What is needed further is a communication system that provides mechanisms for handling messages (data packets) having different priorities. The present invention provides these advantageous features. These and other advantages of the present invention not specifically mentioned above will become clear within discussions of the present invention presented herein.
SUMMARY OF THE INVENTION
A scaleable priority arbiter is described herein for arbitrating between multiple first-in-first-out (FIFO) entry points of a network interface card (NIC). The circuit provides a separate FIFO entry point circuit within the NIC for each data packet priority type. Exemplary priority types, from highest to lowest, include isochronous, priority 1, priority 2, . . . , priority n. A separate set of FIFO entry points is provided for NIC transmitting (Tx) and for NIC receiving (Rx). For each of the Tx FIFO entry points, a single Tx entry point register is seen by the processor and multiple downlist pointers are also maintained. The Tx entry point registers all feed a scaleable priority arbiter which selects the next message for transmission. The scaleable priority arbiter is made of scaleable circuit units that contain a sequential element contro
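A behavioral sketch of the arbitration policy described in the summary might look like the following. The fixed-priority scan is an assumption about the selection rule (highest-priority non-empty entry point wins), and the array of pending counts is an illustrative stand-in for the per-priority FIFO entry points, not the patent's scaleable circuit units.

```c
#include <assert.h>

#define NUM_PRIORITIES 4  /* e.g., isochronous, priority 1, 2, 3 */

/* One pending-message count per Tx FIFO entry point, indexed from
 * highest priority (0 = isochronous) down to lowest. */
int pending[NUM_PRIORITIES];

/* Pick the next entry point to service: the highest-priority FIFO
 * that has a message queued, or -1 if all FIFOs are empty. */
int arbitrate(void) {
    for (int prio = 0; prio < NUM_PRIORITIES; prio++)
        if (pending[prio] > 0)
            return prio;
    return -1;
}
```

Because each priority level is just one more entry in the scan, adding a priority type extends the arbiter without changing the others, which is the sense in which the design scales.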
Lo Burton B.
Pan Anthony L.
Uppunda Krishna
3Com Corporation
Patel Ajit
Shah Chirag
Wagner, Murabito & Hao LLP