Electrical computers and digital data processing systems: input/output – Input/output data processing – Input/output data buffering
Reexamination Certificate
1999-09-09
2003-12-23
Gaffin, Jeffrey (Department: 2182)
C710S055000, C712S228000
active
06668291
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Technical Field
The invention is related to first-in-first-out (FIFO) queues employing non-blocking atomic compare-and-swap (CAS) instructions.
2. Background Art
A FIFO queue may be used by various application or process threads that may wish to enqueue or dequeue certain data on the queue. Typically, a queue is a list of different memory locations containing particular data, and each memory location is typically referred to as a “node” of the queue. The nodes are kept in order by providing in each node a “next” pointer that points to the memory location of the next node in the queue. The head of the queue is the first node (the “head node”), while the last node is the tail node. The tail node's next pointer is set to a predetermined value, such as NULL. A node is enqueued by inserting it at the tail so that it becomes the new tail node of the queue; this requires the thread to first determine which node is the current tail node. Nodes are dequeued at the head, so that the head node is dequeued and the next node becomes the head node; this requires the thread to first determine which node is the current head node. The queue has a head pointer pointing to the head node and a tail pointer pointing to the tail node.
Maintaining the integrity of the queue while permitting its concurrent use by a number of different threads is a difficult problem. To solve this problem, the queue design must address all possible pathological conditions that the queue could experience. For example, after one thread has identified the tail node in preparation for an enqueue operation, another thread may interrupt and enqueue another node onto the queue (which obsoletes the one node's prior identification of the tail node). As another example: the head and tail nodes may be one and the same node because it is the only node on the queue; and one thread may identify the tail node in preparation for enqueueing a new node onto the queue; but, before it can, another thread may dequeue and move the tail node to another queue (for example) without changing its next pointer from NULL. In this case, the one thread may still succeed in attaching the new node to what it still believes is the tail node of the desired queue, but would actually be enqueueing the new node on the wrong queue. This latter case is typically referred to as the “ABA problem” and is described extensively in the literature. It is plausible that such an event could occur even if there were more than one node on the queue in the following example: after the one thread identifies the tail node, actions by other threads cause the tail node to be moved to the head and then dequeued and re-enqueued on another queue before the one thread completes its enqueueing operation. In any case, the ABA problem entails the risk of a thread unknowingly enqueueing a new node on the wrong queue or other location.
Initially, the ABA problem was solved by providing, whenever one thread was in the middle of an enqueue or dequeue operation, a lock which protected the queue from being changed by another contending thread. However, such blocking queues are susceptible to large unpredictable delays in process execution, since a single thread can monopolize the queue, particularly if it is a low priority thread that is interrupted by other higher priority threads.
As a result, the art has sought a non-blocking queue (i.e., a queue with no locks) permitting concurrent access to the queue by more than one thread without suffering failures due to the ABA problem. In such a concurrent non-blocking queue, the ABA problem has been solved in ways that burden the queue and impair performance. One such concurrent non-blocking queue is described by Michael et al., “Simple, Fast, and Practical Non-Blocking and Blocking Concurrent Queue Algorithms,” PODC, 1996. This publication describes a concurrent non-blocking queue in which the ABA problem is addressed by assigning an extra “count” field to the queue pointers, such as the next pointer of the tail node. Thus, for example, each time the tail node is modified by any thread, the count associated with the next pointer of the tail node is incremented. In the ABA situation, if the tail node has been dequeued and re-enqueued on another queue, a thread trying to enqueue a new node onto the first queue would recognize that the next pointer “count” field of what it believes to be the tail node has changed, even if the next pointer still has the same value as before. Therefore the thread would not complete its enqueue operation, thereby preventing an ABA problem.
Another difficulty in the implementation of a non-blocking queue is the handling of the case where the queue is empty; in other words, when there are no nodes in the queue. Support for enqueueing a node on an empty queue, or dequeueing the last node on a queue (leaving it empty), can greatly complicate the implementation, as each enqueue and dequeue operation would then need to maintain both the head and tail pointers. To simplify this case, the queue in the Michael publication keeps at least one node in the queue at all times. To implement this, the queue in the Michael publication must control the nodes, rather than letting threads enqueue or dequeue their own nodes. In the Michael publication, each node is selected from a list maintained for the queue. The data of interest is then stored in the node: such data is taken from a thread and copied into the node for an “enqueue” operation, and later copied out of the node and returned to a thread for a “dequeue” operation, while the node itself is never returned, being always preserved for use with the queue. If the dequeue operation determines that the node being dequeued is the last node in the queue, it is left there to ensure that there is always at least one node in the queue.
The requirement that the queue allocate and deallocate the individual nodes constrains queue performance and restricts the manner in which threads may use the queue. This is especially true in situations where the enqueue or dequeue operations may take place in an execution context from which memory allocation operations cannot be invoked (such as within an interrupt handler).
It is therefore desired to provide a concurrent non-blocking queue in which it is not necessary to maintain extra count fields and in which the threads themselves enqueue and dequeue any nodes they wish on the queue without any risk of emptying the queue.
SUMMARY OF THE DISCLOSURE
The design described here differs from the Michael publication in two fundamental ways:
a) The use of a “magic number” (other than NULL) to be placed into the next pointer of the last node in the list, thus avoiding the use of a count and circumventing the ABA problem
b) The use of a dummy node to ensure that the queue is never empty, while still allowing the enqueue and dequeue of nodes managed outside of the control of the queue itself.
An application or thread enqueues a new node into the queue by, first, setting the next pointer of the new node to the magic number. If the next pointer of the current tail node points to the magic number, then its next pointer is changed to point to the new node. If this operation is successful, then the queue's tail pointer is changed to point to the new node. If the foregoing conditions were not satisfied, then the tail pointer has been moved by another application or thread during the interim. This is corrected by changing the tail pointer to the next pointer of the node currently pointed to by the tail pointer. Then, the enqueue process is attempted again, and this cycle is repeated until successful.
An application or thread dequeues a node from the queue by, first, making local copies of the current version of the queue's head pointer, tail pointer and the next pointer of the head node (the node pointed to by the head pointer). A check is then made to ensure that the queue's head pointer has not changed, and then a check is made to ensure that the head and tail pointers do not point to the same node.
Forin Alessandro
Raffman Andrew
Gaffin Jeffrey
Leydig, Voit & Mayer, Ltd.
Microsoft Corporation
Peyton Tammara