Method and apparatus for maximizing memory throughput

Multiplex communications – Pathfinding or routing – Switching a message which includes an address header

Reexamination Certificate

Details

U.S. Class: 370/233
Type: Reexamination Certificate
Status: Active
Patent Number: 06487202

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates to a method and apparatus for handling information sent through a digital network and, more specifically, to a method and apparatus for managing queues of cells transmitted through a digital switch.
BACKGROUND OF THE INVENTION
Asynchronous Transfer Mode (ATM) or “cell switching” is a method of transmitting digital information wherein the information is broken into equal-sized units called “cells.” The individual cells of information are transmitted from a source node to a destination node through a “connection”. A connection is a pathway through a digital network. A digital network is constructed of digital switches coupled together by digital communication links.
Each cell originates at a source node and is transmitted across the communication links. The communication links carry the cells of information between the digital switches along the connection pathway. The digital switches route the cells from incoming communication links to outgoing communication links and finally to a destination node.
Each digital switch can be connected to several communication links. Furthermore, each communication link can carry several different connections simultaneously. Typically, a cell memory or buffer is used for temporarily holding cells prior to transmission on a communication link. The cell memory is arranged into logical queues. Several queues may be used for separating different types of services and connections. For example, cells belonging to higher priority connections may be stored in queues that have a higher priority of service. In some cases, a separate queue may be assigned to each connection. Servicing a queue entails removing a cell from the queue and sending the cell out on a communication link or to a destination node coupled to the digital switch. A service algorithm is employed to select a queue for service. To fully utilize the bandwidth of a communication link, a cell should be selected for service during each service time.
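As an illustration only (the patent text above does not mandate any particular service algorithm), the following C sketch shows one simple strict-priority policy: during each service time, the highest-priority queue that currently holds a cell is selected, so the link stays busy whenever any cell is waiting. The type and constant names (queue_desc_t, NUM_QUEUES, NIL) are assumptions made for the example.
/*
 * Illustrative strict-priority service algorithm (not taken from the patent):
 * choose the highest-priority queue that currently holds at least one cell.
 */
#define NUM_QUEUES 8          /* queues ordered from highest (0) to lowest priority */
#define NIL (-1)              /* marks an empty queue / end of a list */

typedef struct {
    int head;                 /* cell-memory block at the front of the queue, or NIL */
    int tail;                 /* cell-memory block at the back of the queue */
} queue_desc_t;

/* Return the index of the queue to service this service time, or NIL if all are empty. */
int select_queue(const queue_desc_t queues[NUM_QUEUES])
{
    for (int q = 0; q < NUM_QUEUES; q++) {
        if (queues[q].head != NIL)
            return q;         /* highest-priority non-empty queue wins */
    }
    return NIL;               /* nothing to send; the service time goes unused */
}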
The buffer of each digital switch must be able to store cells for each of the connections of the switch. Efficient management of the cell-storing function is important. For example, incoming cells must be stored at least as fast as they arrive. Furthermore, once it has been determined that cells from a particular queue should be transmitted into the digital network, the time it takes to determine where the appropriate cells are stored in the buffer, and to retrieve the cells from the buffer, should be minimal. While the sequence of cells that correspond to different connections may be altered during the buffering process, the cells within each connection must be transmitted in the same order as they are received with respect to each other.
To facilitate the required management operations, linked lists are typically used to keep track of the location and sequence of the cells stored in the cell memory. That is, a series of “next entry” pointers is established such that each sequence of entries is linked together. A queue manager typically maintains the links in the queue so that entries (i.e., pointers) in the linked list correspond to the locations of cells in the cell memory. The order of entries in each linked list in the queue indicates the order in which the cells in the corresponding blocks in the cell memory were received.
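A minimal sketch of such linked-list bookkeeping, assuming one “next entry” slot per cell-memory block, might look as follows; the names (link_table_t, NUM_BLOCKS) are illustrative rather than taken from the patent, and the queue_desc_t head/tail descriptor is the one from the earlier sketch.
/*
 * Illustrative linked-list bookkeeping: next[b] names the block received after
 * block b on the same queue, so the order of entries mirrors the arrival order
 * of the cells stored in the corresponding cell-memory blocks.
 */
#define NUM_BLOCKS 4096       /* number of cell-sized blocks in the cell memory */

typedef struct {
    int next[NUM_BLOCKS];     /* next[b] = block queued after block b, or NIL */
} link_table_t;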
The queue manager may maintain various queues to keep track of the location of the cells that are stored within the cell memory. For example, queues indicating free blocks of memory will likely be required. In order for these queues to reflect the current state of the cell memory, the queue manager must be informed every time a cell is stored in the cell memory and every time a cell is to be transmitted from the cell memory. The various maintenance procedures performed by the queue manager each typically require a number of “read” and “write” operations to the memory unit or units storing the linked lists of pointers.
For example, consider the case where a new cell is added to the cell memory (i.e., a cell arrival event). In general, receiving a cell consists of removing a memory block from the beginning or head of a free block queue (i.e., a queue of unused memory locations in the cell memory), storing the cell in the block, and adding the block to the end or tail of the queue that corresponds to the connection or class of service associated with the received cell. The queue manager may perform this process in response to a signal that indicates the arrival of the new cell and identifies the connection/class of service with which the new cell is associated. The queue manager determines which block of the cell memory is at the head of the free block queue (e.g., by inspecting a pointer of the head record associated with the free block list). Then, the incoming cell is stored in the block in the cell memory at the head of the free block queue (the “receiving block”). Once the cell has been stored, the queue manager must remove the receiving block from the free block queue and add the receiving block to the tail of the queue that corresponds to the connection associated with the cell. Thus, each cell arrival requires various memory read operations (to obtain the receiving block) and memory write operations (to update the free block list and the connection queue) to the memory unit storing the linked lists of pointers.
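The arrival bookkeeping just described could be expressed roughly as in the sketch below, reusing the illustrative link_table_t, queue_desc_t, and NIL definitions from the earlier sketches; the helper name on_cell_arrival is hypothetical, and writing the cell payload into the cell memory is assumed to happen elsewhere.
/*
 * Sketch of a cell arrival event: take the block at the head of the free block
 * queue, then append it to the tail of the queue for the arriving cell's
 * connection. Uses the link_table_t, queue_desc_t, and NIL definitions above.
 */
int on_cell_arrival(link_table_t *links, queue_desc_t *free_q, queue_desc_t *conn_q)
{
    /* Remove the receiving block from the head of the free block queue. */
    int block = free_q->head;
    if (block == NIL)
        return NIL;                        /* cell memory exhausted; cell is dropped */
    free_q->head = links->next[block];     /* read of the free list's next pointer */
    if (free_q->head == NIL)
        free_q->tail = NIL;                /* free block queue is now empty */

    /* Append the receiving block to the tail of the connection's queue. */
    links->next[block] = NIL;              /* write: new tail has no successor */
    if (conn_q->tail == NIL)
        conn_q->head = block;              /* connection queue was empty */
    else
        links->next[conn_q->tail] = block; /* write: link old tail to new tail */
    conn_q->tail = block;

    return block;                          /* caller stores the arriving cell here */
}
Even this simplified version touches the pointer memory several times per arrival (a read of the free list's next pointer plus writes to link the block onto the connection queue), which is the per-cell cost described above.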
Similarly, transmitting a cell (i.e., a cell departure event) involves several read and write operations. In general, transmitting a cell includes sending the cell that is stored in the memory block that resides at the head of a designated queue, removing the block from the head of the queue, and adding the block to the tail of the free block queue. The queue manager may initiate this process in response to a signal from a service controller that identifies the connection onto which a cell is to be transmitted (the “designated connection”). The queue manager determines which memory block of the cell memory is at the head of the queue for the designated connection (e.g., by inspecting a pointer of the head record associated with the designated connection). The cell that is stored in the block in the cell memory at the head of the queue for the designated connection (the “source block”) is then transmitted. Once the cell has been transmitted from the source block, the queue manager must perform two queue management tasks. First, the source block must be removed from the queue for the designated connection. Second, the source block must be added to the tail of the free block queue.
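A companion sketch for the departure path, again using the illustrative definitions above (on_cell_departure is likewise a hypothetical name), follows.
/*
 * Sketch of a cell departure event: remove the source block from the head of
 * the designated connection's queue and return it to the tail of the free
 * block queue. Uses the link_table_t, queue_desc_t, and NIL definitions above.
 */
int on_cell_departure(link_table_t *links, queue_desc_t *conn_q, queue_desc_t *free_q)
{
    /* Remove the source block from the head of the designated connection's queue. */
    int block = conn_q->head;
    if (block == NIL)
        return NIL;                        /* no cell queued on this connection */
    conn_q->head = links->next[block];     /* read of the connection's next pointer */
    if (conn_q->head == NIL)
        conn_q->tail = NIL;                /* connection queue is now empty */

    /* Return the source block to the tail of the free block queue. */
    links->next[block] = NIL;              /* write: new tail has no successor */
    if (free_q->tail == NIL)
        free_q->head = block;              /* free block queue was empty */
    else
        links->next[free_q->tail] = block; /* write: link old tail to new tail */
    free_q->tail = block;

    return block;                          /* caller transmits the cell stored here */
}
As with an arrival, each departure translates into several reads and writes against the memory holding the linked lists for every cell serviced.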
Because digital switches may have thousands of associated logical connections, the linked lists of pointers described above are typically very large. Accordingly, these linked lists are generally stored in a memory unit (e.g., a random access memory or RAM) external to the queue manager. A bidirectional data bus couples the queue manager to the external memory unit, and the linked list entries are passed between these units across the data bus. Unfortunately, such a system presents a bottleneck. As the speed of the switch throughput increases, the number of individual memory transactions (reads or writes) that occur across the data bus to continually update the linked lists of pointers as cells are serviced also increases. Such a system wastes the available bandwidth of the data bus coupling the memory unit to the queue manager (bandwidth that is often better utilized in burst transactions, where multiple reads or writes are pipelined together) and limits the overall switch throughput. Accordingly, what is needed is a means to accommodate increased throughput in such a digital switch.
SUMMARY OF THE INVENTION
It is an object of the invention to accommodate increased throughput in a digital switch.
It is a further object of the present invention to provide a method of managing linked lists of pointers for ATM cell queues.
In one embodiment, the present invention provides a method of executing a sequence of multiple dependent operations, each operation in
