Multiplex communications – Pathfinding or routing – Switching a message which includes an address header
2001-06-28
2002-08-20
Luther, William (Department: 2664)
370/400, 359/199.2
Reexamination Certificate
active
06438130
ABSTRACT:
FIELD OF THE INVENTION
The present invention relates generally to switch fabrics, and specifically to efficient switching of packets within switch fabrics.
BACKGROUND OF THE INVENTION
The computer industry is moving toward fast, packetized, serial input/output (I/O) bus architectures, in which computing hosts and peripherals are linked by a switching network, commonly referred to as a switch fabric. A number of architectures of this type have been proposed, culminating in the “InfiniBand™” (IB) architecture, which has been advanced by a consortium led by a group of industry leaders (including Intel, Sun Microsystems, Hewlett Packard, IBM, Compaq, Dell and Microsoft). The IB architecture is described in detail in the InfiniBand Architecture Specification, Release 1.0, which is available from the InfiniBand Trade Association at www.infinibandta.org and is incorporated herein by reference.
As in other packet networks, each InfiniBand packet carries a media access control (MAC) address, known in InfiniBand parlance as a Local Identifier (LID), which is used by switches in the fabric to convey the packet to its destination. Each InfiniBand switch maintains a Forwarding Database (FDB), listing the correspondence between the LIDs of incoming packets and the ports of the switch. When the switch receives a packet at one of its ports, it looks up the LID of the packet in its FDB in order to determine the destination port to which the packet should be switched for output. Since the LID field is 16 bits long, the FDB may have up to 64K (2^16) entries. The InfiniBand standard specifies that the first 48K entries in the FDB are used for unicast packet LIDs, while the final 16K entries are reserved for multicast LIDs. The need to look up every incoming packet in the 64K FDB places a strain on processing resources in the switch, making it difficult to maintain wire-speed switching operation.
SUMMARY OF THE INVENTION
It is an object of the present invention to provide improved devices and methods for switching packets in a switch fabric.
It is a further object of some aspects of the present invention to enhance the speed with which a switch in a switch fabric or other network can process a packet.
It is yet a further object of some aspects of the present invention to enhance the versatility of switch devices used in a switch fabric.
In preferred embodiments of the present invention, each port in a high-speed switch comprises a forwarding database cache, referred to hereinafter as an FDB cache, preferably comprising a two-way set-associative cache. The cache entries identify the respective output ports to which the switch is to send packets with certain MAC addresses. These port assignments are read into the cache from a much larger FDB, such as the 64K-entry FDB used in InfiniBand switches.
When a packet arrives at an input port of the switch, the port looks up the destination MAC address of the packet in its FDB cache, preferably using a few of the least significant bits of the address as the lookup index. When the MAC address matches the target stored in the cache for the given index (i.e., when there is a cache hit), the switch sends the packet to the output port indicated in the cache. The port thus saves considerable processing time, since it does not have to read the destination port from the FDB itself, and it conserves the bandwidth consumed by FDB access. Since it is common in a switch fabric for a sequence of packets to be sent along the same route, the likelihood of a cache hit is high. In the event of a cache miss, the input port looks up the MAC address in the FDB. Preferably, the input port then inserts the new MAC address and its corresponding port in the cache, most preferably replacing the least-recently-used (LRU) entry having the same index as the current MAC address.
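A minimal sketch of this lookup flow follows, assuming a two-way set-associative cache indexed by the six least significant bits of the LID, one LRU indicator per set, and the fdb_lookup() helper from the previous sketch as the fallback path; the names and sizes are illustrative assumptions, not the patented implementation.

#include <stdbool.h>
#include <stdint.h>

#define INDEX_BITS 6                        /* LSBs of the LID used as index */
#define NUM_SETS   (1u << INDEX_BITS)
#define NUM_WAYS   2                        /* two-way set-associative       */

struct fdb_cache_entry {
    uint16_t target;    /* remaining LID bits, compared to detect a hit */
    uint8_t  out_port;  /* destination port returned on a hit           */
    bool     valid;
};

struct fdb_cache {
    struct fdb_cache_entry way[NUM_SETS][NUM_WAYS];
    uint8_t                lru[NUM_SETS];   /* least-recently-used way per set */
};

uint8_t fdb_lookup(uint16_t lid);           /* fallback to the full 64K FDB */

static uint8_t cache_lookup(struct fdb_cache *c, uint16_t lid)
{
    uint16_t index  = lid & (NUM_SETS - 1); /* least significant bits */
    uint16_t target = lid >> INDEX_BITS;    /* rest of the address    */

    for (int w = 0; w < NUM_WAYS; w++) {
        struct fdb_cache_entry *e = &c->way[index][w];
        if (e->valid && e->target == target) {      /* cache hit               */
            c->lru[index] = (uint8_t)(1 - w);       /* other way is now LRU    */
            return e->out_port;
        }
    }

    /* Cache miss: consult the full FDB and replace the LRU way,
     * so the next packet on this route can hit in the cache. */
    uint8_t port   = fdb_lookup(lid);
    uint8_t victim = c->lru[index];
    c->way[index][victim] = (struct fdb_cache_entry){
        .target = target, .out_port = port, .valid = true,
    };
    c->lru[index] = (uint8_t)(1 - victim);
    return port;
}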
In some preferred embodiments of the present invention, the FDB cache also includes one or more control bits for each entry. Preferably, one of the control bits is a “force-hit” bit, which causes the input port to switch incoming packets to the output port indicated in the cache even when the MAC address of the packet does not match the cache target address. In one of these preferred embodiments, the caches at one or more of the ports are loaded so as to direct all incoming packets to one of the output ports to which a host is connected, and the force-hit bits are set. As a result, all of the incoming packets at these ports will be directed to the host for processing. This technique can be used, for example, to configure the switch and host to serve as a router, thus enhancing the versatility of switching devices using the FDB cache.
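The force-hit mechanism might be sketched as follows, with an extra control bit per entry and a configuration routine that steers every packet arriving at a port toward the output port to which the host is connected; field and function names are assumptions rather than terms taken from the patent.

#include <stdbool.h>
#include <stdint.h>

struct fh_entry {
    uint16_t target;
    uint8_t  out_port;
    bool     valid;
    bool     force_hit;   /* control bit: treat any lookup as a hit */
};

/* Program every entry of one port's table to force-hit toward the
 * output port that the host is attached to, so all traffic arriving
 * at that port is handed to the host for processing. */
static void configure_force_hit(struct fh_entry *table, unsigned n,
                                uint8_t host_port)
{
    for (unsigned i = 0; i < n; i++) {
        table[i].out_port  = host_port;
        table[i].force_hit = true;
        table[i].valid     = true;
    }
}

/* Lookup that honors the force-hit bit: the cached port is returned
 * even when the target does not match the packet's address bits. */
static int fh_lookup(const struct fh_entry *e, uint16_t target)
{
    if (e->force_hit || (e->valid && e->target == target))
        return e->out_port;
    return -1;            /* miss: caller falls back to the full FDB */
}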
There is therefore provided, in accordance with a preferred embodiment of the present invention, a device for switching packets in a network, including:
a switching core;
a plurality of ports, coupled to pass the packets from one to another through the switching core, the ports including, with respect to each packet among the packets switched by the device, a receiving port, coupled to receive the packet from a packet source, and a destination port, to which the packet is passed for conveyance to a packet destination; and
one or more cache memories, respectively associated with one or more of the ports, each of the cache memories being configured to hold a forwarding database cache for reference by the receiving port with which the cache memory is associated in determining the destination port of the packet.
Typically, the packets include respective packet addresses, such as media access control (MAC) addresses, and the forwarding database cache includes entries indicating the destination port for each of a selected plurality of the packet addresses. Preferably, the entries in the forwarding database cache are arranged in one or more tables, which are indexed by a segment of the packet addresses. Most preferably, the segment of the packet addresses includes a predetermined number of the least significant bits of the packet addresses. Additionally or alternatively, the one or more tables include at least two tables.
Further additionally or alternatively, each of the entries includes a target field, corresponding to at least a portion of one of the packet addresses with which the entry is associated, and the target field is compared to the portion of the packet addresses to determine that a cache hit has occurred, whereupon the receiving port reads the destination port from one of the tables. Preferably, when the cache hit does not occur with respect to one of the packets, the destination port is read from a forwarding database outside the cache memory. Most preferably, the destination port read from the forwarding database outside the cache memory is entered in the cache in place of a least recently used one of the entries having a given index.
Preferably, the forwarding database cache includes one or more tables including entries to which the receiving port refers the packets that it receives, each such entry including a target field and a data value indicating the destination port to which the packet should be passed when the packet matches the target field.
In a preferred embodiment, at least some of the entries further include a force-hit flag, such that when the force-hit flag is set in the entry to which the packet is referred, the packet is passed to the destination port indicated by the entry even when the packet does not match the target field. Preferably, the entries in at least one of the one or more tables are configurable so that the data value for all of the entries can be set to indicate the same destination port, and the force-hit flag of all of the entries can be set so that all of the packets received at the receiving port are passed to the same destination port. Most preferably, the one or more cache memories include a multiplicity of cache memories respectively associated with a multiplicity of the ports, and the entries in the multiplicity of the cache memories can be set so that all of the packets received at the multiplicity of the ports are passed to the same destination port.
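At the device level, the arrangement recited above combines a switching core, a plurality of ports, and a per-port cache memory backed by a single shared forwarding database; one possible (hypothetical) way to express that composition is shown below, reusing the cache structure from the earlier sketch.

#include <stdint.h>

#define NUM_PORTS 8                       /* assumed port count, for illustration */

struct fdb_cache;                         /* per-port cache from the sketch above
                                             (definition omitted here)            */

struct switch_port {
    struct fdb_cache *cache;              /* forwarding database cache referenced
                                             by this receiving port               */
};

struct switch_device {
    struct switch_port ports[NUM_PORTS];  /* receiving/destination ports          */
    uint8_t            fdb[1u << 16];     /* shared 64K forwarding database       */
    /* ...switching core that passes packets from one port to another...          */
};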
Crupnicoff Diego
Gabbay Freddy
Kagan Michael
Webman Alon
Greenblum & Bernstein P.L.C.
Luther William
Mellanox Technologies Ltd.