Memory system for increased bandwidth

Electrical computers and digital processing systems: memory – Storage accessing and control – Memory configuring

Reexamination Certificate

Details

Status: active

Patent number: 06668313

ABSTRACT:

BACKGROUND OF THE INVENTION
This invention relates in general to an apparatus and methodology for computer memory management yielding increased memory bandwidth. More particularly, the invention relates to an apparatus and methodology for optimizing bandwidth when processing a plurality of read and write requests. The invention has particular application to high-speed networks, although it is not limited thereto.
Effective management of memory resources is one mechanism that can be leveraged to increase bandwidth in high-speed networks. More particularly, the memory bandwidth requirements of high-speed networks cannot be met by randomly interleaving read and write requests to an external RAM controller, especially when the data units are smaller than a block of data. Common approaches to memory management must resolve bank conflicts, accommodate bus turnaround, handle varied word lengths, support a pipelined architecture, mitigate processing delays, and guarantee memory bandwidth.
A well-known approach to memory management is the use of link lists to manage multiple queues sharing a common memory buffer. A link list is commonly composed of data in which each byte has at least one pointer (forward and/or backward) attached to it, identifying the location of the next byte of data in the chain. Typical link list management schemes do not allow pipelining. Therefore, the standard prior-art link list structures for optimizing memory management are not particularly suited to handling very high-speed processes.
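As a point of reference, the following minimal C sketch (not taken from the patent) shows a link list of this kind managing two queues over one shared buffer; entry_t, enqueue(), NUM_ENTRIES, and the two-queue layout are illustrative assumptions.

```c
#include <stdint.h>

#define NUM_ENTRIES 8
#define NIL         0xFFFFu               /* sentinel: end of a chain                  */

typedef struct {
    uint8_t  data;                        /* payload byte stored in the shared buffer  */
    uint16_t next;                        /* forward pointer to the next entry         */
} entry_t;

static entry_t  buffer[NUM_ENTRIES];      /* common memory buffer shared by all queues */
static uint16_t free_head;                /* head of the chain of unused entries       */
static uint16_t q_head[2] = { NIL, NIL }; /* two queues share the one buffer           */
static uint16_t q_tail[2] = { NIL, NIL };

/* Chain every entry into the free list. */
static void init(void)
{
    free_head = 0;
    for (uint16_t i = 0; i < NUM_ENTRIES; i++)
        buffer[i].next = (uint16_t)((i + 1 < NUM_ENTRIES) ? i + 1 : NIL);
}

/* Append one byte to queue q by unlinking an entry from the free chain
 * and forward-linking it behind the queue's current tail. */
static int enqueue(int q, uint8_t byte)
{
    if (free_head == NIL)
        return -1;                        /* shared buffer exhausted                   */

    uint16_t idx = free_head;
    free_head = buffer[idx].next;

    buffer[idx].data = byte;
    buffer[idx].next = NIL;
    if (q_tail[q] == NIL)
        q_head[q] = idx;                  /* queue was empty                           */
    else
        buffer[q_tail[q]].next = idx;     /* link after the old tail                   */
    q_tail[q] = idx;
    return 0;
}
```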
Another method of processing memory allocation is described in U.S. Pat. No. 6,049,802 to Waggener and Bray, entitled “System And Method For Generating A Linked List In A Computer Memory”. This patent discloses linked lists that contain several key list parameters, and a memory manager that determines which linked list the data belongs in based on those parameters. It also discloses, for a packet processor, that the address of the next location in the linked list is determined before data is written to the current location. While this allows the next address to be written in the same cycle as the data, it is not optimized for very high-speed networks.
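For illustration only, the sketch below extends the sketch above (reusing entry_t and buffer[]) to show the idea of choosing the next location before the current one is written, so the data and its forward pointer can be committed together; write_entry() and next_free are hypothetical names, and this is not the mechanism claimed in the '802 patent.

```c
static uint16_t next_free;                /* hypothetical bump allocator over buffer[] */

/* The address of the NEXT location is chosen before the current entry is
 * written, so the data byte and the forward pointer can be committed in
 * the same write cycle (bounds checking omitted for brevity). */
static uint16_t write_entry(uint8_t byte)
{
    uint16_t cur  = next_free;
    uint16_t next = (uint16_t)(cur + 1);  /* next address determined up front          */

    buffer[cur].data = byte;              /* data ...                                  */
    buffer[cur].next = next;              /* ... and its pointer, written together     */

    next_free = next;
    return cur;
}
```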
A further memory storage technique is described in U.S. Pat. No. 5,303,302, issued to Burrows and entitled “Network Packet Receiver With Buffer Logic For Reassembling Interleaved Data Packets”. In this patent, a network controller receives encrypted data packets, and a packet directory keeps an entry for each data packet stored in a buffer. Each directory entry contains pointers to the first and last locations in the buffer where the corresponding data packet is stored, along with status information for that packet. A method is also disclosed for managing partial data packet transmission to prevent buffer overflow. This method, however, does not achieve optimized memory allocation and management for pipelined processing.
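A directory of this kind might be modeled as follows; this is a rough sketch of the structure described above, not code from the '302 patent, and all names are assumptions.

```c
#include <stdint.h>

typedef enum {
    PKT_EMPTY,                            /* directory slot unused                      */
    PKT_RECEIVING,                        /* packet only partially reassembled          */
    PKT_COMPLETE                          /* packet fully stored in the buffer          */
} pkt_status_t;

typedef struct {
    uint32_t     first;                   /* buffer address of the packet's first byte  */
    uint32_t     last;                    /* buffer address of the packet's last byte   */
    pkt_status_t status;                  /* per-packet status information              */
} pkt_dir_entry_t;

/* One directory entry per data packet held in the shared receive buffer. */
static pkt_dir_entry_t directory[64];
```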
SUMMARY OF THE INVENTION
The present invention is directed toward a system and method for memory management in a high-speed network environment. Multiple packets are interleaved in data streams and sent to a Memory Manager System. Read and write requests are queued in FIFO buffers, and subsets of these requests are grouped and ordered to optimize processing. The method employs a special arbitration scheme between read and write accesses in which each read or write request is treated as atomic. Memory bank selection is optimized for the request being processed, and the scheme alternates between memory bank sets to minimize bank conflicts. Link list updates are pipelined, and multiple independent link lists may be supported through the inclusion of a link list identifier. Arbitration between read and write requests continues until the group is exhausted; processing is then repeated for the next requests in the BRAM (buffer memories).
The disclosed process optimizes bandwidth while accessing external memory in pipeline architectures.
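As an illustration only, the following C sketch shows one way the kind of arbitration described above could be organized: read and write requests wait in two FIFO buffers, a group of requests is drained with each request treated as atomic, reads and writes are arbitrated alternately, and consecutive accesses target alternating memory bank sets to avoid bank conflicts. The names (request_t, fifo_pop, ram_issue, GROUP_SIZE) and the two-FIFO layout are assumptions, not the patented design.

```c
#include <stdbool.h>
#include <stdint.h>

/* A queued read or write request; list_id selects one of several
 * independent link lists, as the summary mentions. */
typedef struct {
    uint32_t addr;
    uint8_t  list_id;
} request_t;

#define FIFO_DEPTH 16
#define GROUP_SIZE 4                      /* requests arbitrated per group (assumed)   */

/* Two FIFO buffers: index 0 holds read requests, index 1 write requests. */
static request_t fifo[2][FIFO_DEPTH];
static int       head[2], count[2];

static bool fifo_pop(int is_write, request_t *out)
{
    if (count[is_write] == 0)
        return false;
    *out = fifo[is_write][head[is_write]];
    head[is_write] = (head[is_write] + 1) % FIFO_DEPTH;
    count[is_write]--;
    return true;
}

/* Stand-in for the external RAM controller; a real design would issue the
 * access to the selected bank set here. */
static void ram_issue(const request_t *req, int bank_set, int is_write)
{
    (void)req; (void)bank_set; (void)is_write;
}

/* Drain one group of requests.  Each request is treated as atomic, reads
 * and writes are arbitrated alternately, and consecutive accesses target
 * alternating bank sets so back-to-back requests avoid bank conflicts. */
static void arbitrate_group(void)
{
    int bank_set = 0;
    int is_write = 0;

    for (int i = 0; i < GROUP_SIZE; i++) {
        request_t req;
        if (fifo_pop(is_write, &req))
            ram_issue(&req, bank_set, is_write);

        bank_set ^= 1;                    /* switch to the other bank set              */
        is_write ^= 1;                    /* simple read/write alternation             */
    }
    /* When the group is exhausted, the next group of requests is taken
     * from the buffer memories and the process repeats. */
}
```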


REFERENCES:
patent: 5303302 (1994-04-01), Burrows
patent: 5491808 (1996-02-01), Geist, Jr.
patent: 5751951 (1998-05-01), Osborne et al.
patent: 5784698 (1998-07-01), Brady et al.
patent: 6049802 (2000-04-01), Waggener et al.
patent: 2306714 (1997-07-01), None
