Electrical computers and digital processing systems: memory – Storage accessing and control
Reexamination Certificate
2000-06-23
2004-09-14
Moazzami, Nasser (Department: 2187)
Electrical computers and digital processing systems: memory
Storage accessing and control
C714S710000, C369S053170
active
06792500
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
The invention relates to a method and apparatus for high performance switching in local area communications networks such as token ring, ATM, ethernet, fast ethernet, and gigabit ethernet environments, generally known as LANs. In particular, the invention relates to a new switching architecture in an integrated, modular, single chip solution, which can be implemented on a semiconductor substrate such as a silicon chip.
2. Description of the Related Art
As computer performance has increased in recent years, the demands on computer networks have significantly increased; faster computer processors and higher memory capabilities need networks with high bandwidth capabilities to enable high speed transfer of significant amounts of data. The well-known ethernet technology, which is based upon numerous IEEE ethernet standards, is one example of computer networking technology which has been modified and improved over time to remain a viable computing technology. A more complete discussion of prior art networking systems can be found, for example, in SWITCHED AND FAST ETHERNET, by Breyer and Riley (Ziff-Davis, 1996), and numerous IEEE publications relating to IEEE 802 standards. Based upon the Open Systems Interconnect (OSI) 7-layer reference model, network capabilities have grown through the development of repeaters, bridges, routers, and, more recently, “switches”, which operate with various types of communication media. Thickwire, thinwire, twisted pair, and optical fiber are examples of media which have been used for computer networks.
Switches, as they relate to computer networking and to ethernet, are hardware-based devices which control the flow of data packets or cells based upon destination address information which is available in each packet. A properly designed and implemented switch should be capable of receiving a packet and switching the packet to an appropriate output port at what is referred to as wirespeed or linespeed, which is the maximum speed capability of the particular network. Basic ethernet wirespeed is up to 10 megabits per second, and Fast Ethernet is up to 100 megabits per second. The newest ethernet is referred to as gigabit ethernet, and is capable of transmitting data over a network at a rate of up to 1,000 megabits per second. As speed has increased, design constraints and design requirements have become more and more complex with respect to following appropriate design and protocol rules and providing a low cost, commercially viable solution.
For example, high speed switching requires high speed memory to provide appropriate buffering of packet data; conventional Dynamic Random Access Memory (DRAM) is relatively slow and requires hardware-driven refresh. The limited speed of DRAMs used as buffer memory in network switching therefore results in valuable time being lost, and it becomes almost impossible to operate the switch or the network at linespeed. Furthermore, external CPU involvement should be avoided, since CPU involvement also makes it almost impossible to operate the switch at linespeed. Additionally, as network switches have become more and more complicated with respect to requiring rules tables and memory control, a complex multi-chip solution is necessary which requires logic circuitry, sometimes referred to as glue logic circuitry, to enable the various chips to communicate with each other. Cost/benefit tradeoffs are also necessary with respect to expensive but fast SRAMs versus inexpensive but slow DRAMs. DRAMs, by virtue of their dynamic nature, require refreshing of the memory contents in order to prevent losses thereof. SRAMs do not suffer from the refresh requirement and have reduced operational overhead compared to DRAMs, such as the elimination of page misses. Although DRAMs have adequate speed when accessing locations on the same page, speed is reduced when other pages must be accessed.
Referring to the OSI 7-layer reference model discussed previously, and illustrated in FIG. 7, the higher layers typically have more information. Various types of products are available for performing switching-related functions at various levels of the OSI model. Hubs or repeaters operate at layer one, and essentially copy and “broadcast” incoming data to a plurality of spokes of the hub. Layer two switching-related devices are typically referred to as multiport bridges, and are capable of bridging two separate networks. Bridges can build a table of forwarding rules based upon which MAC (media access controller) addresses exist on which ports of the bridge, and pass packets which are destined for an address which is located on an opposite side of the bridge. Bridges typically utilize what is known as the “spanning tree” algorithm to eliminate potential data loops; a data loop is a situation wherein a packet endlessly loops in a network looking for a particular address. The spanning tree algorithm defines a protocol for preventing data loops. Layer three switches, sometimes referred to as routers, can forward packets based upon the destination network address. Layer three switches are capable of learning addresses and maintaining tables thereof which correspond to port mappings. Processing speed for layer three switches can be improved by utilizing specialized high performance hardware and offloading the host CPU so that instruction decisions do not delay packet forwarding.
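Although the patent provides no code, a minimal sketch may help make the layer-two mechanism concrete: a bridge's forwarding table maps each learned source MAC address to the port on which it was seen, and a lookup on the destination address decides where to send (or whether to flood) a packet. The structure and names below (fdb_entry, fdb_learn, fdb_lookup, TABLE_SIZE) are assumptions made for illustration, not taken from the patent.

#include <stdint.h>
#include <string.h>

#define TABLE_SIZE 1024              /* assumed forwarding-table capacity */

struct fdb_entry {
    uint8_t mac[6];                  /* learned MAC address */
    int     port;                    /* port the address was seen on */
    int     valid;
};

static struct fdb_entry fdb[TABLE_SIZE];

/* Learn: record (or refresh) which port a source MAC address arrived on. */
void fdb_learn(const uint8_t mac[6], int port)
{
    int free_slot = -1;
    for (int i = 0; i < TABLE_SIZE; i++) {
        if (fdb[i].valid && memcmp(fdb[i].mac, mac, 6) == 0) {
            fdb[i].port = port;      /* address already known: update its port */
            return;
        }
        if (!fdb[i].valid && free_slot < 0)
            free_slot = i;
    }
    if (free_slot >= 0) {            /* learn a new address */
        memcpy(fdb[free_slot].mac, mac, 6);
        fdb[free_slot].port = port;
        fdb[free_slot].valid = 1;
    }
}

/* Forward: return the learned port for a destination MAC, or -1 to flood. */
int fdb_lookup(const uint8_t mac[6])
{
    for (int i = 0; i < TABLE_SIZE; i++)
        if (fdb[i].valid && memcmp(fdb[i].mac, mac, 6) == 0)
            return fdb[i].port;
    return -1;
}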
SUMMARY OF THE INVENTION
The present invention is related to a method for managing defects in a memory, wherein the method includes the steps of testing a plurality of memory locations to determine an inoperable memory location and moving a memory address corresponding to the inoperable memory location to a first position in a list of available memory addresses. The method further includes the step of incrementing an address pointer to a second position in the list of available addresses, the second position indicating a next available memory address, wherein said step of incrementing the address pointer operates to remove the memory address stored in the first position from the list of available memory addresses.
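As a rough illustration of this scheme (the patent describes it only in prose, so the structure, names, and sizes below, such as addr_list, next_avail, and NUM_LOCATIONS, are assumptions), the available addresses can be kept in a simple array with an index serving as the address pointer:

#include <stdint.h>

#define NUM_LOCATIONS 256                  /* assumed number of memory locations */

static uint32_t addr_list[NUM_LOCATIONS];  /* list of available memory addresses */
static int next_avail;                     /* address pointer: next available entry */

/* Arrange one address per memory location in the list. */
void init_addr_list(void)
{
    for (uint32_t i = 0; i < NUM_LOCATIONS; i++)
        addr_list[i] = i;
    next_avail = 0;
}

/* Move the address of an inoperable location to the first (current head)
 * position of the list, then increment the pointer to the second position,
 * so the bad address is no longer among the available addresses. */
void retire_address(uint32_t bad_addr)
{
    for (int i = next_avail; i < NUM_LOCATIONS; i++) {
        if (addr_list[i] == bad_addr) {
            uint32_t tmp = addr_list[next_avail];
            addr_list[next_avail] = addr_list[i];   /* swap into the first position */
            addr_list[i] = tmp;
            next_avail++;                           /* remove it from the pool */
            return;
        }
    }
}

/* Hand out the next available address, or 0xFFFFFFFF when the pool is exhausted. */
uint32_t alloc_address(void)
{
    return (next_avail < NUM_LOCATIONS) ? addr_list[next_avail++] : 0xFFFFFFFFu;
}

Advancing the pointer rather than compacting the list keeps the removal down to a single swap and increment, which appears to be the appeal of expressing the removal as a pointer increment.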
The present invention is further related to a method for managing a memory, wherein the method includes the steps of providing a memory, wherein the memory includes a plurality of memory locations for storing data therein, and providing an address pool having a plurality of available addresses, wherein each of the plurality of addresses corresponds to a location in the memory. A first determining step is included for determining a faulty memory location in the memory, and a second determining step is included for determining an address in the address pool that corresponds to the memory location determined to be faulty. The method further includes the step of removing the address corresponding to the faulty memory location from the address pool of available addresses.
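The patent does not specify how the first determining step detects a faulty location; a common approach, sketched below under that assumption, is a write/read pattern check over each location, with any failing address handed to the retirement routine from the previous sketch. The names memory, location_ok, and test_and_retire, and the test patterns themselves, are illustrative only.

#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>

#define NUM_LOCATIONS 256

static volatile uint32_t memory[NUM_LOCATIONS];  /* stands in for the memory under test */
void retire_address(uint32_t addr);              /* from the previous sketch */

/* Write and read back a few patterns; any mismatch marks the location faulty. */
static bool location_ok(uint32_t addr)
{
    static const uint32_t patterns[] = {
        0x00000000u, 0xFFFFFFFFu, 0xAAAAAAAAu, 0x55555555u
    };
    for (size_t p = 0; p < sizeof patterns / sizeof patterns[0]; p++) {
        memory[addr] = patterns[p];
        if (memory[addr] != patterns[p])
            return false;
    }
    return true;
}

/* Test every location and remove the addresses of faulty ones from the pool. */
void test_and_retire(void)
{
    for (uint32_t addr = 0; addr < NUM_LOCATIONS; addr++)
        if (!location_ok(addr))
            retire_address(addr);
}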
The present invention further includes a method for managing memory having the steps of arranging a plurality of memory addresses in a list, wherein each of the plurality of memory addresses corresponds to one of a plurality of memory locations in a memory, and indicating a next available address from the memory address list with an address pointer. The method further includes the steps of testing the plurality of memory locations, determining an inoperable memory location, relocating a memory address corresponding to the inoperable memory location to a first position in the address list, and incrementing the address pointer to a position adjacent the memory address corresponding to the inoperable memory location.
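Continuing the illustrative sketches above (an assumed arrangement, not the patent's own implementation), the overall flow of this variant might look like the following:

#include <stdint.h>

/* Routines from the sketches above. */
void     init_addr_list(void);
void     test_and_retire(void);
uint32_t alloc_address(void);

int main(void)
{
    init_addr_list();                  /* arrange all memory addresses in the list */
    test_and_retire();                 /* relocate bad addresses behind the pointer */

    uint32_t addr = alloc_address();   /* pointer now indicates a known-good location */
    return (addr == 0xFFFFFFFFu) ? 1 : 0;
}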
The present invention is further related to an apparatus for managing defects in a memory, wherein the apparatus includes a memory having a predetermined number of memory locations for storing data and an address pool having a predetermined number of available memory addresses therein, each of said predetermined number of available addresses corresponding to one of the predetermined
Assignee: Broadcom Corporation
Examiner: Moazzami, Nasser
Attorney, Agent, or Firm: Squire Sanders & Dempsey L.L.P.