Title: Binary search engine and method
Type: Reexamination Certificate
Status: active
Patent Number: 06813620
Filed: 2002-02-27
Issued: 2004-11-02
Examiner: Alam, Shahid (Department: 2172)
Classification: Data processing: database and file management or data structures – Database design – Data structure types
US Classes: C707S793000, C709S216000, C709S217000, C711S148000
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
The invention relates to systems and methods for searching memory in a network device. In particular, the invention relates to systems and methods of searching parallel memory banks simultaneously within a network device, such as a high performance network switch.
2. Description of the Related Art
As computer performance has increased in recent years, the demands on computer networks have increased significantly; faster computer processors and higher memory capabilities need networks with high bandwidth to enable high-speed transfer of significant amounts of data. The well-known Ethernet technology, which is based upon numerous IEEE Ethernet standards, is one example of computer networking technology that has been modified and improved to remain viable. A more complete discussion of prior art networking systems can be found, for example, in SWITCHED AND FAST ETHERNET, by Breyer and Riley (Ziff-Davis, 1996), and in numerous IEEE publications relating to the IEEE 802 standards. Based upon the Open Systems Interconnect (OSI) 7-layer reference model, network capabilities have grown through the development of repeaters, bridges, routers, and, more recently, “switches,” which operate with various types of communication media. Thickwire, thinwire, twisted pair, and optical fiber are examples of media which have been used for computer networks. Switches, as they relate to computer networking and to Ethernet, are hardware-based devices that control the flow of data packets or cells based upon destination address information available in each packet. A properly designed and implemented switch should be capable of receiving a packet and switching it to an appropriate output port at what is referred to as wirespeed or linespeed, the maximum speed capability of the particular network. Current basic Ethernet wirespeeds typically range from 10 Megabits per second (Mbps) up to 10,000 Mbps, or 10 Gigabits per second. As speed has increased, design constraints and requirements have become more and more complex with respect to following appropriate design and protocol rules and providing a low-cost, commercially viable solution.
Competition and other market pressures require the production of more capable network devices that cost less, and customers demand ever-increasing network and device speeds.
Network performance, i.e., increased device speed and decreased data packet latency, is directly related to the time it takes for devices to search memory while relaying a packet, e.g., a switch searching memory tables for destination addresses, rules, etc. Thus, to support high-performance network solutions, new and improved systems and methods are needed for searching memory banks within network devices, such as a high-performance switch.
SUMMARY OF THE INVENTION
Provided is a network device, such as a switch, including a memory, a queue management unit, a memory management unit, and a search switching unit. The memory comprises a plurality of memory banks. The queue management unit is configured to receive a plurality of search requests and to prioritize the search requests. The memory management unit is coupled to the queue management unit and is configured to initiate a plurality of binary searches based on the plurality of search requests. Each binary search is executed simultaneously in different banks of the plurality of memory banks. The search switching unit is coupled to the memory and the memory management unit and is configured to switch each binary search from one memory bank to another memory bank after a predetermined number of search steps are performed by each binary search, until a match is made in each binary search.
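The queue management unit described above receives and prioritizes search requests before the memory management unit launches the parallel binary searches. The patent describes hardware and does not specify a priority scheme; the following Python sketch only illustrates the prioritization step, with the priority values and request shape being illustrative assumptions.

```python
import heapq

def prioritize_requests(requests):
    """Order search requests by priority (lower value = more urgent).

    `requests` is a list of (priority, search_key) pairs; a sequence
    number keeps the ordering stable for equal priorities. In the
    device, the ordered requests would then be handed to the memory
    management unit to start simultaneous searches in separate banks.
    """
    heap = [(prio, seq, key) for seq, (prio, key) in enumerate(requests)]
    heapq.heapify(heap)
    ordered = []
    while heap:
        _, _, key = heapq.heappop(heap)
        ordered.append(key)
    return ordered
```

For example, a rule lookup tagged more urgent than two address lookups would be dispatched first, regardless of arrival order.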
According to another embodiment of the present invention, provided is a method of searching a memory of a network device. The method includes a step of providing a network device comprising a memory to be searched. The memory comprises a plurality of memory banks. The method also includes a step of receiving a plurality of binary search requests at the network device. The method includes the step of initiating a plurality of binary searches in the plurality of memory banks at the same time. The plurality of binary searches are based on the plurality of binary search requests. At a predetermined step in each search of the plurality of binary searches, each search is switched to a different memory bank of the plurality of memory banks. The method also includes the step of continuing to switch each binary search to a different memory bank of the plurality of memory banks until a match is made in each binary search, or until all banks have been read.
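The bank-switching scheme above can be modeled in software. The patent claims a hardware mechanism and does not state how the table is distributed across banks; this Python sketch assumes each bank holds an identical sorted copy of the table (so a search can resume in any bank with its bounds intact) and rotates searches round-robin, both of which are modeling assumptions rather than claimed details.

```python
def parallel_bank_search(banks, keys, steps_per_bank=2):
    """Run one binary search per key; after a predetermined number of
    probes (steps_per_bank), rotate every search to the next bank so
    that concurrent searches never contend for the same bank.

    Returns the matching index for each key, or None if no match.
    """
    n = len(banks)
    # Per-search state: current binary-search bounds and the result.
    states = [{"lo": 0, "hi": len(banks[0]) - 1, "hit": None} for _ in keys]
    rotation = 0
    while any(s["hit"] is None and s["lo"] <= s["hi"] for s in states):
        for i, (key, s) in enumerate(zip(keys, states)):
            if s["hit"] is not None or s["lo"] > s["hi"]:
                continue  # this search has finished (match or exhausted)
            # Round-robin assignment: each active search probes a
            # distinct bank during this phase.
            bank = banks[(i + rotation) % n]
            for _ in range(steps_per_bank):  # the predetermined step count
                if s["lo"] > s["hi"]:
                    break
                mid = (s["lo"] + s["hi"]) // 2
                if bank[mid] == key:
                    s["hit"] = mid
                    break
                elif bank[mid] < key:
                    s["lo"] = mid + 1
                else:
                    s["hi"] = mid - 1
        rotation += 1  # switch every search to a different bank
    return [s["hit"] for s in states]
```

Because each search carries its own lo/hi bounds, switching banks mid-search costs nothing in this model; the benefit in hardware is that several searches can probe memory in the same cycle without colliding on one bank.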
According to another embodiment of the present invention, provided is a network device including a memory means, a queue management means, a memory management means and a search switching means. The memory means is for maintaining data. The memory means comprises a plurality of memory banks means. The queue management means is for receiving a plurality of search request means and for prioritizing the search request means. The memory management means is for initiating a plurality of binary search means based on the plurality of search request means. Each binary search means is for simultaneously searching different banks means of the plurality of memory banks means. The search switching means is for switching each binary search means from one memory bank means to another memory bank means after a predetermined number of search steps are performed by each binary search means, until a match is made in each binary search means, or until all banks means have been covered.
REFERENCES:
patent: 4354260 (1982-10-01), Planzo
patent: 5278789 (1994-01-01), Inoue et al.
patent: 5644784 (1997-07-01), Peek
patent: 5842038 (1998-11-01), Williams et al.
patent: 5909686 (1999-06-01), Muller et al.
patent: 5920867 (1999-07-01), Van Huben et al.
patent: 5933838 (1999-08-01), Lomet
patent: 5938736 (1999-08-01), Muller et al.
patent: 5956714 (1999-09-01), Condon
patent: 6035297 (2000-03-01), Van Huben et al.
patent: 6119196 (2000-09-01), Muller et al.
patent: 6122669 (2000-09-01), Crayford
patent: 6173384 (2001-01-01), Weaver
patent: 6223175 (2001-04-01), George et al.
patent: 6460120 (2002-10-01), Bass et al.
patent: 6553000 (2003-04-01), Ganesh et al.
patent: 6631367 (2003-10-01), Teng et al.
patent: 6643641 (2003-11-01), Snyder
patent: 0752796 (1997-01-01), None
Yu-Sheng Lin and C. Bernard Shung, “Queue Management for Shared Buffer and Shared Multi-buffer ATM Switches,” XP 000621335, IEEE, Mar. 24, 1996, pp. 688–695.
Inventors: Billings, David; Jorda, Mike; Lin, Jonathan
Examiner: Alam, Shahid
Assignee: Broadcom Corporation
Law Firm: Squire, Sanders & Dempsey LLP