Reexamination Certificate
2000-08-21
2004-10-12
Cangialosi, Salvatore (Department: 2661)
Multiplex communications
Data flow congestion prevention or control
US Class 370/239
active
06804194
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
The invention relates to a method and apparatus for high performance switching in local area communications networks such as token ring, ATM, ethernet, fast ethernet, and gigabit ethernet environments, generally known as LANs. In particular, the invention relates to a new switching architecture in an integrated, modular, single chip solution, which can be implemented on a semiconductor substrate such as a silicon chip.
2. Description of the Related Art
As computer performance has increased in recent years, the demands on computer networks have significantly increased; faster computer processors and higher memory capabilities need networks with high bandwidth capabilities to enable high speed transfer of significant amounts of data. The well-known ethernet technology, which is based upon numerous IEEE ethernet standards, is one example of computer networking technology which has been able to be modified and improved to remain a viable computing technology. A more complete discussion of prior art networking systems can be found, for example, in SWITCHED AND FAST ETHERNET, by Breyer and Riley (Ziff-Davis, 1996), and numerous IEEE publications relating to IEEE 802 standards. Based upon the Open Systems Interconnect (OSI) 7-layer reference model, network capabilities have grown through the development of repeaters, bridges, routers, and, more recently, “switches”, which operate with various types of communication media. Thickwire, thinwire, twisted pair, and optical fiber are examples of media which have been used for computer networks. Switches, as they relate to computer networking and to ethernet, are hardware-based devices which control the flow of data packets or cells based upon destination address information which is available in each packet. A properly designed and implemented switch should be capable of receiving a packet and switching the packet to an appropriate output port at what is referred to as wirespeed or linespeed, which is the maximum speed capability of the particular network. Basic ethernet wirespeed is up to 10 megabits per second, and Fast Ethernet is up to 100 megabits per second. The newest ethernet is referred to as gigabit ethernet, and is capable of transmitting data over a network at a rate of up to 1,000 megabits per second. As speed has increased, design constraints and design requirements have become more and more complex with respect to following appropriate design and protocol rules and providing a low cost, commercially viable solution. For example, high speed switching requires high speed memory to provide appropriate buffering of packet data; conventional Dynamic Random Access Memory (DRAM) is relatively slow, and requires hardware-driven refresh. The limited speed of DRAMs as buffer memory in network switching therefore results in valuable time being lost, and it becomes almost impossible to operate the switch or the network at linespeed. Furthermore, external CPU involvement should be avoided, since CPU involvement also makes it almost impossible to operate the switch at linespeed. Additionally, as network switches have become more and more complicated with respect to requiring rules tables and memory control, a complex multi-chip solution is necessary which requires logic circuitry, sometimes referred to as glue logic circuitry, to enable the various chips to communicate with each other. Cost/benefit tradeoffs are also necessary with respect to expensive but fast SRAMs versus inexpensive but slow DRAMs. DRAMs, by virtue of their dynamic nature, require refreshing of the memory contents in order to prevent losses thereof. SRAMs do not suffer from the refresh requirement, and have reduced operational overhead compared to DRAMs, such as the elimination of page misses. Although DRAMs have adequate speed when accessing locations on the same page, speed is reduced when other pages must be accessed.
Referring to the OSI 7-layer reference model discussed previously, and illustrated in FIG. 7, the higher layers typically have more information. Various types of products are available for performing switching-related functions at various levels of the OSI model. Hubs or repeaters operate at layer one, and essentially copy and “broadcast” incoming data to a plurality of spokes of the hub. Layer two switching-related devices are typically referred to as multiport bridges, and are capable of bridging two separate networks. Bridges can build a table of forwarding rules based upon which MAC (media access controller) addresses exist on which ports of the bridge, and pass packets which are destined for an address which is located on an opposite side of the bridge. Bridges typically utilize what is known as the “spanning tree” algorithm to eliminate potential data loops; a data loop is a situation wherein a packet endlessly loops in a network looking for a particular address. The spanning tree algorithm defines a protocol for preventing data loops. Layer three switches, sometimes referred to as routers, can forward packets based upon the destination network address. Layer three switches are capable of learning addresses and maintaining tables thereof which correspond to port mappings. Processing speed for layer three switches can be improved by utilizing specialized high performance hardware, and offloading the host CPU so that instruction decisions do not delay packet forwarding. A minimal sketch of the layer two learning-and-forwarding behavior described here appears below.
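The following C fragment is a minimal, hypothetical sketch of the MAC learning and forwarding table behavior of a layer two bridge as described above. The table size, hash function, and structure names are illustrative assumptions and are not taken from the patent; a real bridge would also handle entry aging and hash collisions.

```c
#include <stdint.h>
#include <string.h>

#define TABLE_SIZE 1024   /* illustrative table size, not from the patent  */
#define FLOOD_PORT -1     /* unknown destination: flood to all ports       */

/* One entry in the bridge's MAC-address-to-port table. */
struct mac_entry {
    uint8_t mac[6];
    int     port;
    int     valid;
};

static struct mac_entry table[TABLE_SIZE];

/* Trivial hash over the six MAC bytes (illustrative only). */
static unsigned hash_mac(const uint8_t mac[6])
{
    unsigned h = 0;
    for (int i = 0; i < 6; i++)
        h = h * 31 + mac[i];
    return h % TABLE_SIZE;
}

/* Learn: remember which port the source address was seen on. */
void bridge_learn(const uint8_t src_mac[6], int src_port)
{
    struct mac_entry *e = &table[hash_mac(src_mac)];
    memcpy(e->mac, src_mac, 6);
    e->port  = src_port;
    e->valid = 1;
}

/* Forward: return the learned port for the destination address,
 * or FLOOD_PORT if the address has not been learned yet. */
int bridge_forward(const uint8_t dst_mac[6])
{
    const struct mac_entry *e = &table[hash_mac(dst_mac)];
    if (e->valid && memcmp(e->mac, dst_mac, 6) == 0)
        return e->port;
    return FLOOD_PORT;
}
```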
SUMMARY OF THE INVENTION
The present invention is directed to a switch-on-chip solution for a network switch, capable of use at least with ethernet, fast ethernet, and gigabit ethernet systems, wherein all of the switching hardware is disposed on a single microchip. The present invention is configured to maximize the ability to forward packets at linespeed, and to provide a modular configuration wherein a plurality of separate modules are configured on a common chip, and wherein individual design changes to a particular module do not affect the relationship of that module to other modules in the system.
The invention is therefore directed to a network switch for network communications which includes a plurality of data ports for transmitting and receiving data, and a head-of-line blocking prevention mechanism for preventing head-of-line blocking in data communication. The head-of-line blocking prevention mechanism includes a determination unit for determining when head-of-line blocking is occurring based upon cell based and packet based thresholds.
The invention also includes a method of preventing head-of-line blocking in a network switch. The method includes determining if a port on a network switch is congested by determining if one of a first cell count threshold and a first packet count threshold is exceeded. The congested port is then deactivated, and all packets destined for the congested port from source ports are dropped when the congested port has been deactivated. The congested port is reactivated when one of a cell count and a packet count goes below a corresponding one of a second cell count threshold and a second packet count threshold.
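As a rough sketch only, the deactivate/reactivate behavior described in the method above can be modeled as a two-threshold hysteresis check on per-port cell and packet counts. The threshold values, field names, and port structure below are assumptions chosen for illustration; the patent text does not specify them, and the actual mechanism is implemented in hardware on the switch chip.

```c
#include <stdbool.h>

/* Per-port congestion state; field names and threshold values are
 * illustrative assumptions, not values taken from the patent. */
struct egress_port {
    unsigned cell_count;    /* cells currently queued for this port     */
    unsigned packet_count;  /* packets currently queued for this port   */
    bool     active;        /* false once the port has been deactivated */
};

/* First (upper) thresholds: exceeding either marks the port congested. */
#define CELL_HI 512
#define PKT_HI  128
/* Second (lower) thresholds: falling below either reactivates the port. */
#define CELL_LO 256
#define PKT_LO   64

/* Deactivate the port when either count exceeds its first threshold;
 * reactivate it when either count falls below its second threshold.
 * While the port is deactivated, packets destined for it are dropped. */
bool port_accepts_traffic(struct egress_port *p)
{
    if (p->active &&
        (p->cell_count > CELL_HI || p->packet_count > PKT_HI))
        p->active = false;        /* head-of-line blocking detected */
    else if (!p->active &&
             (p->cell_count < CELL_LO || p->packet_count < PKT_LO))
        p->active = true;         /* congestion has drained         */

    return p->active;             /* false => drop incoming packets */
}
```

The gap between the first and second thresholds provides hysteresis, so a port that hovers near a single threshold does not oscillate between the active and deactivated states.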
REFERENCES:
patent: 5875338 (1999-02-01), Powell
patent: 6118761 (2000-09-01), Kalkunte et al.
patent: 6154446 (2000-11-01), Kadambi et al.
patent: 6163528 (2000-12-01), Nagamoto
patent: 6667985 (2003-12-01), Drummond-Murray
Ambe Shekhar
Kadambi Shiri
Broadcom Corporation
Cangialosi Salvatore
Squire Sanders & Dempsey L.L.P.