Pooled receive and transmit queues to access a shared bus in...

Multiplex communications – Pathfinding or routing – Through a circuit switch



Details

C370S462000, C370S401000

Reexamination Certificate

active

06356548

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates to computer network switches and, in particular, to a multi-port switching device architecture of a network switch.
BACKGROUND OF THE INVENTION
A network switch of a data communications network provides a switching function for transferring information, such as data frames, among entities of the network. Typically, the switch is a computer comprising a collection of components (e.g., cards) interconnected by a backplane of wires. Each card may include a plurality of ports that couple the switch to the other network entities over various types of media, such as Ethernet, FDDI or token ring connections. A network entity may consist of any device that “sources” (i.e., transmits) or “sinks” (i.e., receives) data frames over such media.
The switching function provided by the switch typically comprises receiving a frame at a source port from a network entity, processing the frame to determine a destination port, forwarding the frame over the backplane to at least one other destination port and, thereafter, transmitting that data over at least one medium to another entity of the network. When the destination of the frame is a single port, a unicast data transfer takes place over the backplane. In many cases, however, the destination of the frame may be more than one, but less than all of the ports of the switch; this results in a multicast data transfer being employed. Moreover, a typical switching mode implemented by the switch is a store-and-forward mode wherein the entire frame is received before initiating the forwarding operation. Many switches also generally support cut-through switching wherein forward processing begins as soon as a destination address of the frame is recognized.
To facilitate the forwarding of frames within the switch, the backplane is typically implemented as a switching fabric, such as a bus. The bus is generally a multipoint data path that is shared among the switch cards to transport information contained in the frames, such as address, data and control signals, needed by the cards to perform the data switching function. Because the bus is utilized in virtually every operation performed by the switch, it is a key component whose characteristics have a significant impact on the overall performance of the switch. For example, the speed at which the signals are transported over the bus impacts the effective data rate of the switch. This data rate is also affected by the manner in which the source port provides the data frames to the data path, along with the manner in which the destination port receives the frames from that path.
In a typical network switch, each port generally receives (and transmits) only one frame at a time, primarily because the logic associated with each port can only process one frame at a time. Although this arrangement is suitable for cut-through switching, it may lead to networking problems, a classic one of which is called head-of-line blocking. Head-of-line blocking may occur when many source ports attempt to send frames to a port that can only process frames serially. Head-of-line blocking, in turn, leads to congestion in the network, thus requiring extensive buffering of the frames at the source ports. One solution to this problem may be to expand the capability of the port to simultaneously receive (and transmit) a plurality of frames from different sources by replicating the logic of each port.
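To make head-of-line blocking concrete, the toy sketch below models a single-FIFO input port in software; the queue model, sizes, and names are illustrative assumptions, not the patent's design. Because only the head of the queue may be forwarded, a busy destination for that one frame stalls every frame behind it, even those bound for idle ports.

```c
/* Toy model of head-of-line blocking at a single-FIFO input port.
 * The data structures and busy[] model are illustrative assumptions. */
#include <stdbool.h>

#define QUEUE_DEPTH 16

struct frame { int dest_port; };

struct input_port {
    struct frame fifo[QUEUE_DEPTH];
    int head, count;
};

/* Attempt to forward one frame.  Only the head of the FIFO is eligible, so a
 * busy destination for the head frame stalls the entire queue. */
bool forward_one(struct input_port *p, const bool dest_busy[])
{
    if (p->count == 0)
        return false;                        /* nothing queued */
    struct frame *f = &p->fifo[p->head];
    if (dest_busy[f->dest_port])
        return false;                        /* head blocked: whole queue stalls */
    /* ... drive the frame onto the backplane here ... */
    p->head = (p->head + 1) % QUEUE_DEPTH;
    p->count--;
    return true;
}
```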
For example, each port typically includes (i) decode logic circuitry for decoding header information of the frame to determine an intended destination for that frame and (ii) state machine logic circuitry, coupled to the decode logic circuitry, for processing the frame. The logic circuits, which are generally implemented on an application specific integrated circuit (ASIC) “chip” device, cooperate to extract a destination address from the frame. Based on the destination address, a memory is accessed (via a look-up operation) to determine (via a forwarding decision operation) the intended destination.
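As a rough software analogue of the look-up and forwarding-decision steps just described (not the patented circuit), the sketch below maps a destination address extracted by the decode logic to a bitmap of destination ports; the table layout, sizes, and names are assumptions made for illustration.

```c
/* Minimal sketch of a look-up/forwarding-decision step.  The table format
 * and linear scan are illustrative stand-ins for the memory access. */
#include <stdint.h>
#include <string.h>

#define TABLE_SIZE 256                       /* assumed table size */

struct fwd_entry {
    uint8_t  mac[6];                         /* destination address */
    uint32_t port_mask;                      /* one bit per destination port */
    int      valid;
};

static struct fwd_entry fwd_table[TABLE_SIZE];

/* Forwarding decision: return the destination-port bitmap (more than one bit
 * set means a multicast transfer), or 0 if the address is unknown. */
uint32_t forwarding_decision(const uint8_t mac[6])
{
    for (int i = 0; i < TABLE_SIZE; i++) {
        if (fwd_table[i].valid && memcmp(fwd_table[i].mac, mac, 6) == 0)
            return fwd_table[i].port_mask;
    }
    return 0;                                /* unknown: caller floods or drops */
}
```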
Replicating the logic of the port n times on an ASIC chip results in n decode/state machine logic sets, each of which requires access to the memory. In order to realize the same memory bandwidth performance as the single-port embodiment, n times the amount and width of memory is needed. Such an approach is expensive in terms of both cost and chip footprint. Accordingly, the present invention is directed to a low-cost switching device architecture that achieves performance similar to the totally replicated approach.
It is therefore an object of the present invention to provide a low-cost multi-port chip architecture that attains memory bandwidth performance equivalent to that of a single-port chip embodiment.
Another object of the present invention is to provide a multi-port switching device architecture that alleviates head-of-line blocking while improving memory bandwidth utilization.
SUMMARY OF THE INVENTION
The invention comprises a multi-port switching device architecture that decouples decode logic circuitry of each port of a network switch from its respective state machine logic circuitry and, significantly, organizes the state machine logic as pools of transmit/receive engine resources that are shared by each of the decode logic circuits. Since the pooled resources are not closely-coupled with respective ports, multiple engines may be employed to service a heavily-utilized port. Additionally, intermediate priority logic of the switching device cooperates with the decode logic and pooled resources to allocate frames among available resources in accordance with predetermined ordering and fairness policies. These policies prevent misordering of frames from a single source while ensuring that all ports in the device are serviced fairly.
In the illustrative embodiment, the architecture includes a transmit data path comprising a pool of transmit engines for driving inbound frames over a shared bus of the switch and a receive data path comprising a pool of receive engines for receiving outbound frames from the bus. The pool of receive engines is available to capture multiple frames from the shared bus that are bound for network media via “downstream” ports of the device. Similarly, the pool of transmit engines is available to a single “upstream” port of the switching device when simultaneously sending multiple frames to the shared bus. Notably, these engines are configured to optimize bandwidth into a memory when executing look-up and/or forwarding decision operations.
By decoupling the decode logic from the state machines and pooling the state machines as transmit/receive engine resources, the invention advantageously allows sharing of the resources to satisfy multiple accesses to a single upstream port, particularly in a situation where the other ports are idle. This arrangement improves utilization of those resources, including memory bandwidth utilization, for situations where they would otherwise be idle. In addition, the inventive architecture improves performance of the switch by, inter alia, alleviating head-of-line blocking at a transmitting device.
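For illustration only, the following sketch models the pooled-engine arbitration in software. The structure names, pool size, the has_frame() probe, and the sequence-number ordering scheme are assumptions; the patent describes its intermediate priority logic only at the level of ordering and fairness policies, not this particular mechanism.

```c
/* Sketch of a shared engine pool with a round-robin arbiter (fairness) and
 * per-source sequence tags (ordering).  All names and sizes are assumptions. */
#include <stdbool.h>
#include <stdint.h>

#define NUM_PORTS   8
#define NUM_ENGINES 4                        /* pooled engines shared by all ports */

struct engine {
    bool     busy;
    int      source_port;                    /* port whose frame is being serviced */
    uint32_t seq;                            /* per-source ordering tag */
};

static struct engine pool[NUM_ENGINES];
static uint32_t      next_seq[NUM_PORTS];    /* next ordering tag per source */
static int           rr_next;                /* round-robin fairness pointer */

/* Grant an idle engine to the next waiting port in round-robin order.
 * Returns the engine index, or -1 if no engine or no frame is available.
 * has_frame(port) is a hypothetical "frame pending" probe. */
int allocate_engine(bool (*has_frame)(int port))
{
    for (int n = 0; n < NUM_PORTS; n++) {
        int port = (rr_next + n) % NUM_PORTS;
        if (!has_frame(port))
            continue;
        for (int e = 0; e < NUM_ENGINES; e++) {
            if (!pool[e].busy) {
                pool[e].busy        = true;
                pool[e].source_port = port;
                pool[e].seq         = next_seq[port]++;  /* preserves per-source order */
                rr_next = (port + 1) % NUM_PORTS;        /* advance fairness pointer */
                return e;
            }
        }
        return -1;                           /* pool exhausted: frame waits */
    }
    return -1;                               /* no port has a pending frame */
}
```

Because any idle engine may be granted to any port, several engines can service one heavily used port at once while the round-robin pointer keeps the remaining ports from being starved; completed frames would be committed in sequence-tag order so that frames from a single source are never misordered.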


