Reexamination Certificate
2002-03-19
2004-09-21
Nguyen, Hanh (Department: 2662)
Multiplex communications
Pathfinding or routing
Switching a message which includes an address header
C370S389000, C370S392000, C370S351000, C709S226000, C709S238000
Reexamination Certificate
active
06795447
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
The invention relates to a method and apparatus for high performance switching in local area communications networks such as token ring, ATM, ethernet, fast ethernet, and gigabit ethernet environments, generally known as LANs. In particular, the invention relates to a new switching architecture in an integrated, modular, single chip solution, which can be implemented on a semiconductor substrate such as a silicon chip.
2. Description of the Related Art
As computer performance has increased in recent years, the demands on computer networks have significantly increased; faster computer processors and higher memory capabilities need networks with high bandwidth capabilities to enable high speed transfer of significant amounts of data. The well-known ethernet technology, which is based upon numerous IEEE ethernet standards, is one example of computer networking technology which has been able to be modified and improved to remain a viable computing technology. A more complete discussion of prior art networking systems can be found, for example, in SWITCHED AND FAST ETHERNET, by Breyer and Riley (Ziff-Davis, 1996), and numerous IEEE publications relating to IEEE 802 standards. Based upon the Open Systems Interconnect (OSI) 7-layer reference model, network capabilities have grown through the development of repeaters, bridges, routers, and, more recently, “switches”, which operate with various types of communication media. Thickwire, thinwire, twisted pair, and optical fiber are examples of media which have been used for computer networks. Switches, as they relate to computer networking and to ethernet, are hardware-based devices which control the flow of data packets or cells based upon destination address information which is available in each packet. A properly designed and implemented switch should be capable of receiving a packet and switching the packet to an appropriate output port at what is referred to as wirespeed or linespeed, which is the maximum speed capability of the particular network. Basic ethernet wirespeed is up to 10 megabits per second, and Fast Ethernet is up to 100 megabits per second. The newest ethernet is referred to as gigabit ethernet, and is capable of transmitting data over a network at a rate of up to 1,000 megabits per second. As speed has increased, design constraints and design requirements have become more and more complex with respect to following appropriate design and protocol rules and providing a low cost, commercially viable solution. For example, high speed switching requires high speed memory to provide appropriate buffering of packet data; conventional Dynamic Random Access Memory (DRAM) is relatively slow, and requires hardware-driven refresh. The use of DRAM as buffer memory in network switching therefore results in valuable time being lost, and it becomes almost impossible to operate the switch or the network at linespeed. Furthermore, external CPU involvement should be avoided, since CPU involvement also makes it almost impossible to operate the switch at linespeed. Additionally, as network switches have become more and more complicated with respect to requiring rules tables and memory control, a complex multi-chip solution is necessary, requiring logic circuitry, sometimes referred to as glue logic circuitry, to enable the various chips to communicate with each other. Cost/benefit tradeoffs are also necessary with respect to expensive but fast SRAMs versus inexpensive but slow DRAMs. DRAMs, by virtue of their dynamic nature, require refreshing of the memory contents in order to prevent losses thereof. SRAMs do not suffer from the refresh requirement, and have reduced operational overhead compared to DRAMs, such as the elimination of page misses. Although DRAMs have adequate speed when accessing locations on the same page, speed is reduced when other pages must be accessed.
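To make the linespeed requirement concrete, the following is a minimal sketch in C (illustrative arithmetic only, not taken from the patent) of the per-packet time budget at gigabit ethernet rates, assuming minimum-size 64-byte frames plus the standard 8-byte preamble and 12-byte interframe gap:

/* Illustrative arithmetic only, not taken from the patent: the per-packet
 * time budget a switch must meet to operate at gigabit linespeed. */
#include <stdio.h>

int main(void) {
    const double link_bps   = 1e9;          /* gigabit ethernet                        */
    const double frame_bits = 84.0 * 8.0;   /* 64 B frame + 8 B preamble + 12 B gap    */

    double ns_per_frame   = frame_bits / link_bps * 1e9;  /* roughly 672 ns            */
    double frames_per_sec = link_bps / frame_bits;        /* roughly 1.488 million/s   */

    printf("time budget per minimum-size frame: %.0f ns\n", ns_per_frame);
    printf("worst-case packet rate: %.3f Mpps\n", frames_per_sec / 1e6);
    return 0;
}

At roughly 672 nanoseconds per minimum-size frame, the switch must complete buffering and the forwarding decision well within the access latency of its packet memory, which is why slow buffer memory or host CPU involvement makes linespeed operation impractical.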
Referring to the OSI 7-layer reference model discussed previously, and illustrated in FIG. 7, the higher layers typically have more information. Various types of products are available for performing switching-related functions at various levels of the OSI model. Hubs or repeaters operate at layer one, and essentially copy and “broadcast” incoming data to a plurality of spokes of the hub. Layer two switching-related devices are typically referred to as multiport bridges, and are capable of bridging two separate networks. Bridges can build a table of forwarding rules based upon which MAC (media access controller) addresses exist on which ports of the bridge, and pass packets which are destined for an address which is located on an opposite side of the bridge. Bridges typically utilize what is known as the “spanning tree” algorithm to eliminate potential data loops; a data loop is a situation wherein a packet endlessly loops in a network looking for a particular address. The spanning tree algorithm defines a protocol for preventing data loops. Layer three switches, sometimes referred to as routers, can forward packets based upon the destination network address. Layer three switches are capable of learning addresses and maintaining tables thereof which correspond to port mappings. Processing speed for layer three switches can be improved by utilizing specialized high performance hardware, and offloading the host CPU so that instruction decisions do not delay packet forwarding.
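As a rough illustration of the layer-two behavior described above, the following C sketch shows address learning and the forwarding decision of a multiport bridge; the table layout, hash, and names are assumptions made for illustration and are not taken from the patent:

/* Minimal sketch of layer-two address learning and forwarding; the table
 * layout, hash, and names are illustrative assumptions, not the patent's
 * implementation. */
#include <stdint.h>
#include <string.h>

#define FDB_SIZE   4096
#define PORT_FLOOD 0xFF          /* unknown destination: send to all other ports */

struct fdb_entry {
    uint8_t mac[6];              /* 48-bit MAC address               */
    uint8_t port;                /* port the address was learned on  */
    uint8_t valid;
};

static struct fdb_entry fdb[FDB_SIZE];

/* crude hash over the 48-bit MAC address */
static unsigned hash_mac(const uint8_t mac[6]) {
    unsigned h = 0;
    for (int i = 0; i < 6; i++)
        h = h * 31u + mac[i];
    return h % FDB_SIZE;
}

/* Learn the source address on the ingress port, then look up the
 * destination; returns the egress port, or PORT_FLOOD if unknown. */
uint8_t bridge_forward(const uint8_t src[6], const uint8_t dst[6], uint8_t in_port) {
    struct fdb_entry *s = &fdb[hash_mac(src)];
    memcpy(s->mac, src, 6);      /* learn: src MAC is reachable via in_port   */
    s->port  = in_port;
    s->valid = 1;

    struct fdb_entry *d = &fdb[hash_mac(dst)];
    if (d->valid && memcmp(d->mac, dst, 6) == 0)
        return d->port;          /* known unicast: switch to its learned port */
    return PORT_FLOOD;           /* unknown destination: flood                */
}

A known destination is switched directly to its learned port, while an unknown destination is flooded to the remaining ports; the spanning tree protocol, not shown here, prevents such flooding from looping.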
SUMMARY OF THE INVENTION
The present invention is directed to a communications component for network communications. The communications component comprises a first data port interface supporting a plurality of data ports transmitting and receiving data. A second data port interface supports a plurality of data ports transmitting and receiving data. An internal memory communicates with the first data port interface and the second data port interface. A memory management unit includes an external memory interface for communicating data from at least one of the first data port interface and the second data port interface and an external memory. A plurality of independent communication channels is provided. The independent communication channels communicate data and messaging information between the first data port interface, the second data port interface, the internal memory, and the memory management unit. The memory management unit directs data from one of the first data port interface and the second data port interface to one of the internal memory and the external memory interface according to a predetermined algorithm. The predetermined algorithm allocates memory locations between the internal memory and the external memory based upon an amount of the internal memory available for each of the plurality of data ports.
The invention also includes an embodiment of a communications component for network communications which comprises a first data port interface supporting a plurality of data ports transmitting and receiving data, and a second data port interface supporting a plurality of data ports transmitting and receiving data. A memory management unit communicates with the first data port interface and the second data port interface, and an internal memory communicates with the first data port interface and the second data port interface. An external memory interface is in communication with the first data port interface and the second data port interface. The external memory interface is configured to communicate with an external memory. A plurality of independent communication channels act in cooperation. The communication channels communicate data and messaging information between the first data port interface, the second data port interface, the internal memory, and the memory management unit. The memory management unit directs data from one of the first data port interface and the second data port interface to one of the internal memory and the external memory interface according to a predetermined algorithm. The predetermined algorithm allocates memory locations between the internal memory and the external memory based upon an amount of the internal memory available for each of the plurality of data ports.
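The following C sketch is one possible reading of such an allocation decision, in which each data port is given a share of the internal memory and data is directed to the external memory interface once that share is exhausted; the structures, quota policy, and names are assumptions for illustration and are not the patent's algorithm:

/* Illustrative sketch only: decide whether an incoming cell is buffered in
 * internal (on-chip) memory or spilled through the external memory
 * interface, based on how much of its internal-memory share the ingress
 * port has left.  Names and the quota policy are assumptions, not the
 * patent's algorithm. */
#include <stdbool.h>
#include <stdint.h>

#define NUM_PORTS            26
#define INTERNAL_CELLS_TOTAL 2048     /* size of the on-chip buffer pool, in cells */

struct port_state {
    uint32_t internal_cells_used;     /* cells this port currently holds on chip   */
    uint32_t internal_cells_quota;    /* this port's share of the on-chip pool     */
};

static struct port_state ports[NUM_PORTS];

void init_ports(void) {
    for (int i = 0; i < NUM_PORTS; i++) {
        ports[i].internal_cells_used  = 0;
        ports[i].internal_cells_quota = INTERNAL_CELLS_TOTAL / NUM_PORTS;
    }
}

/* true  -> buffer the cell in internal memory
 * false -> direct the cell to the external memory interface */
bool use_internal_memory(uint8_t port) {
    const struct port_state *p = &ports[port];
    return p->internal_cells_used < p->internal_cells_quota;
}

The caller would increment internal_cells_used when a cell is written to the internal memory and decrement it when the cell is read out, so the decision always reflects the amount of internal memory available to that port.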
Ambe Shekhar
Kadambi Shiri
Broadcom Corporation
Nguyen Hanh
Squire Sanders & Dempsey L.L.P.