Apparatus and method for optimizing access to memory

Electrical computers and digital processing systems: memory – Storage accessing and control – Access timing



Details

C711S168000, C711S169000

Reexamination Certificate

active

06735679

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The invention relates to a method and apparatus for high performance switching in local area communications networks such as token ring, asynchronous transfer mode (ATM), ethernet, fast ethernet, and gigabit ethernet environments, generally known as local area networks (LAN). In particular, the invention relates to a new switching architecture in an integrated, modular, single chip solution, which can be implemented on a semiconductor substrate such as a silicon chip.
2. Description of the Related Art
As computer performance has increased in recent years, the demands on computer networks have significantly increased; faster computer processors and higher memory capabilities need networks with high bandwidth capabilities to enable high speed transfer of significant amounts of data. The well-known ethernet technology, which is based upon numerous Institute of Electrical and Electronic Engineers (IEEE) ethernet standards, is one example of computer networking technology which has been modified and improved to remain a viable computing technology. A more complete discussion of prior art networking systems can be found, for example, in SWITCHED AND FAST ETHERNET, by Breyer and Riley (Ziff-Davis, 1996), and numerous IEEE publications relating to IEEE 802 standards. Based upon the Open Systems Interconnect (OSI) 7-layer reference model, network capabilities have grown through the development of repeaters, bridges, routers, and, more recently, “switches”, which operate with various types of communication media. Thickwire, thinwire, twisted pair, and optical fiber are examples of media which have been used for computer networks. Switches, as they relate to computer networking and to ethernet, are hardware-based devices which control the flow of data packets or cells based upon destination address information which is available in each packet. A properly designed and implemented switch should be capable of receiving a packet and switching the packet to an appropriate output port at what is referred to as wirespeed or linespeed, which is the maximum speed capability of the particular network. Basic ethernet wirespeed is up to 10 megabits per second, and Fast Ethernet is up to 100 megabits per second. The newest ethernet is referred to as gigabit ethernet, and is capable of transmitting data over a network at a rate of up to 1,000 megabits per second. As speed has increased, design constraints and design requirements have become more and more complex with respect to following appropriate design and protocol rules and providing a low cost, commercially viable solution. For example, high speed switching requires high speed memory to provide appropriate buffering of packet data; conventional Dynamic Random Access Memory (DRAM) is relatively slow, and requires hardware-driven refresh. The limited speed of DRAMs, therefore, when used as buffer memory in network switching, results in valuable time being lost, and it becomes almost impossible to operate the switch or the network at linespeed. Furthermore, external central processing unit (CPU) involvement should be avoided, since CPU involvement also makes it almost impossible to operate the switch at linespeed. Additionally, as network switches have become more and more complicated with respect to requiring rules tables and memory control, a complex multi-chip solution is necessary which requires logic circuitry, sometimes referred to as glue logic circuitry, to enable the various chips to communicate with each other. Additionally, cost/benefit tradeoffs are necessary with respect to expensive but fast static random access memory (SRAM) versus inexpensive but slow DRAMs. Additionally, DRAMs, by virtue of their dynamic nature, require refreshing of the memory contents in order to prevent losses thereof. SRAMs do not suffer from the refresh requirement, and have reduced operational overhead compared to DRAMs, such as the elimination of page misses.
Although DRAMs have adequate speed when accessing locations on the same page, speed is reduced when other pages must be accessed.
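As a rough illustration of the page behavior described above, the following sketch models a single DRAM bank in which an access to the currently open page costs only a few cycles, while an access to a different page pays the full precharge and activate penalty. The DramBank type, the cycle counts, and the page size are assumptions chosen for the example, not values taken from the patent.

```cpp
// Illustrative sketch only: models the qualitative cost difference between
// accessing a DRAM location on the currently open page (a "page hit") and
// accessing a location on a different page (a "page miss").  Cycle counts
// and page size are assumed placeholder values.
#include <cstdint>
#include <iostream>

struct DramBank {
    static constexpr uint32_t kPageHitCycles  = 2;    // assumed CAS-only cost
    static constexpr uint32_t kPageMissCycles = 9;    // assumed precharge + activate + CAS
    static constexpr uint64_t kPageSizeBytes  = 4096; // assumed page (row) size

    int64_t open_page = -1;  // -1 means no page (row) is currently open

    // Returns the cycles spent servicing a read at the given byte address.
    uint32_t Access(uint64_t address) {
        const int64_t page = static_cast<int64_t>(address / kPageSizeBytes);
        const bool hit = (page == open_page);
        open_page = page;
        return hit ? kPageHitCycles : kPageMissCycles;
    }
};

int main() {
    DramBank bank;
    uint32_t total = 0;
    total += bank.Access(0x0000);  // miss: the page must first be opened
    total += bank.Access(0x0040);  // hit: same page, low overhead
    total += bank.Access(0x2000);  // miss: different page, full overhead again
    std::cout << "total cycles: " << total << "\n";
    return 0;
}
```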
Referring to the OSI 7-layer reference model discussed previously, and illustrated in FIG. 7, the higher layers typically have more information. Various types of products are available for performing switching-related functions at various levels of the OSI model. Hubs or repeaters operate at layer one, and essentially copy and “broadcast” incoming data to a plurality of spokes of the hub. Layer two switching-related devices are typically referred to as multiport bridges, and are capable of bridging two separate networks. Bridges can build a table of forwarding rules based upon which MAC (media access controller) addresses exist on which ports of the bridge, and pass packets which are destined for an address which is located on an opposite side of the bridge. Bridges typically utilize what is known as the “spanning tree” algorithm to eliminate potential data loops; a data loop is a situation wherein a packet endlessly loops in a network looking for a particular address. The spanning tree algorithm defines a protocol for preventing data loops. Layer three switches, sometimes referred to as routers, can forward packets based upon the destination network address. Layer three switches are capable of learning addresses and maintaining tables thereof which correspond to port mappings. Processing speed for layer three switches can be improved by utilizing specialized high performance hardware, and by offloading the host CPU so that instruction decisions do not delay packet forwarding.
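The layer-two forwarding behavior described above can be pictured with a small sketch: the bridge learns which MAC address was observed on which port, and later looks up a packet's destination address to choose an output port. The ForwardingTable class, its method names, and the port numbering are hypothetical and chosen only for illustration.

```cpp
// Minimal sketch of a layer-two forwarding table: learn the source port of
// each MAC address seen, and forward later packets only toward the port
// where their destination address was learned.
#include <array>
#include <cstddef>
#include <cstdint>
#include <optional>
#include <unordered_map>

using MacAddress = std::array<uint8_t, 6>;

struct MacHash {
    std::size_t operator()(const MacAddress& m) const {
        std::size_t h = 0;
        for (uint8_t b : m) h = h * 131 + b;
        return h;
    }
};

class ForwardingTable {
public:
    // Called for every received packet: remember which port a source MAC was seen on.
    void Learn(const MacAddress& source, int port) { table_[source] = port; }

    // Returns the learned port for a destination, or nothing, in which case
    // the bridge floods the packet to all ports except the ingress port.
    std::optional<int> Lookup(const MacAddress& destination) const {
        auto it = table_.find(destination);
        if (it == table_.end()) return std::nullopt;
        return it->second;
    }

private:
    std::unordered_map<MacAddress, int, MacHash> table_;
};
```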
SUMMARY OF THE INVENTION
The present invention is related to a method for optimizing access to memory, wherein the method includes the steps of receiving a first request for access to a memory, receiving at least two additional requests for access to the memory, and determining a first clock overhead associated with the first request for access to the memory. The method further includes the steps of determining an additional clock overhead associated with each of the at least two additional requests for access to the memory in conjunction with the first request, determining a combination of requests that can be processed together using an optimized overhead, and processing the combination of requests as a single request with the optimal overhead.
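One way to picture this method is with a simple cost model in which two requests that fall on the same memory page can share setup cycles, while requests on different pages each pay the full setup penalty. The sketch below, with its assumed MemRequest type, cycle constants, and CombinedOverhead/PickBestCombination helpers, is only an illustration of the request-combining idea under those assumptions; the actual overhead accounting used by the invention is not reproduced here.

```cpp
// Sketch of combining memory requests to minimize clock overhead, under an
// assumed cost model: same page -> setup cycles are shared, different page ->
// the additional request pays the full setup penalty again.
#include <cstdint>
#include <vector>

struct MemRequest {
    uint64_t address;
    uint32_t burst_words;
};

constexpr uint32_t kSetupCycles   = 5;    // assumed activate + CAS overhead
constexpr uint32_t kCyclesPerWord = 1;    // assumed transfer rate
constexpr uint64_t kPageSize      = 2048; // assumed page size

// Overhead of servicing `extra` immediately after `first`.
uint32_t CombinedOverhead(const MemRequest& first, const MemRequest& extra) {
    const bool same_page = first.address / kPageSize == extra.address / kPageSize;
    return (same_page ? 0 : kSetupCycles) + extra.burst_words * kCyclesPerWord;
}

// Given the first request and the additional pending requests (assumed
// non-empty), return the index of the pending request that can be folded
// into the first with the least added clock overhead.
std::size_t PickBestCombination(const MemRequest& first,
                                const std::vector<MemRequest>& pending) {
    std::size_t best = 0;
    uint32_t best_cost = CombinedOverhead(first, pending[0]);
    for (std::size_t i = 1; i < pending.size(); ++i) {
        const uint32_t cost = CombinedOverhead(first, pending[i]);
        if (cost < best_cost) { best_cost = cost; best = i; }
    }
    return best;
}
```

A real memory controller would also have to account for bank conflicts, refresh timing, and read/write turnaround, all of which this sketch deliberately omits.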
The present invention is further related to a method for optimizing access to synchronous dynamic random access memory (SDRAM) in a network switch, wherein the method includes the steps of receiving a plurality of requests for access to an SDRAM, and combining at least two of the plurality of requests for processing as a single request utilizing an optimal clock overhead, in accordance with a predetermined algorithm.
The present invention is further related to a method for optimizing access to SDRAM in a network switch, including the steps of receiving first, second, third, and fourth requests for access to the SDRAM, determining a clock overhead associated with processing the first request in conjunction with the second request, determining a clock overhead associated with processing the first request in conjunction with the third request, and determining a clock overhead associated with processing the first request in conjunction with the fourth request. Finally, the method includes the steps of determining an optimal request for access to the SDRAM, wherein the optimal request is calculated to yield a minimal clock overhead, and thereafter processing the optimal request.
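A self-contained, worked version of this four-request selection might look like the following. The addresses, page size, and single setup-cycle penalty are illustrative assumptions, and the pairing rule simply prefers an additional request that falls on the same SDRAM page as the first.

```cpp
// Sketch of the four-request variant: evaluate the clock overhead of pairing
// the first request with each of the second, third, and fourth requests, and
// pick the pairing with the smallest overhead.  All values are assumptions.
#include <array>
#include <cstddef>
#include <cstdint>
#include <iostream>

constexpr uint64_t kPageSize    = 2048;
constexpr uint32_t kSetupCycles = 5;  // assumed penalty for opening a new page

// Assumed cost rule: a second access to the already-open page adds no setup cycles.
uint32_t PairOverhead(uint64_t first_addr, uint64_t other_addr) {
    return (first_addr / kPageSize == other_addr / kPageSize) ? 0 : kSetupCycles;
}

int main() {
    const uint64_t first = 0x1000;                                     // first request
    const std::array<uint64_t, 3> others = {0x2400, 0x1040, 0x9000};   // second..fourth

    std::size_t best = 0;
    for (std::size_t i = 1; i < others.size(); ++i) {
        if (PairOverhead(first, others[i]) < PairOverhead(first, others[best]))
            best = i;
    }
    // Requests are numbered 2..4 here, so index 0 corresponds to the second request.
    std::cout << "combine the first request with request " << best + 2
              << " (added overhead " << PairOverhead(first, others[best])
              << " cycles)\n";
    return 0;
}
```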
The present invention is further related to an apparatus for optimizing access to memory in a network switch, wherein the apparatus includes a means for receiving a first request for access to a memory, a means for receiving at least two additional requests for access to the memory, and a means for determining a first clock overhead associated with the first request for access to the memory. Further, the apparatus includes a means for determining an additional clock overhead associated with each of the at least two additional requests for access to the memory in conjunction with the first request, a means for determining a combination of requests that can be processed together using an optimized overhead, and a means for processing the combination of requests as a single request with the optimal overhead.
