Concurrent non-blocking FIFO array

Electrical computers and digital processing systems: memory – Storage accessing and control – Access timing

Details

Classification codes: C711S147000, C710S020000, C710S021000, C710S052000

Type: Reexamination Certificate

Status: active

Patent number: 06697927

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to a queue structure for a processor-based device and, more particularly, to a technique for facilitating concurrent non-blocking access to an array of queued entries in a processor-based device (e.g., a server).
2. Background of the Related Art
This section is intended to introduce the reader to various aspects of art which may be related to various aspects of the present invention which are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present invention. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
The use of computers has increased dramatically over the past few decades. In years past, computers were relatively few in number and primarily used as scientific tools. However, with the advent of standardized architectures and operating systems, computers soon became virtually indispensable tools for a wide variety of business applications. The types of computer systems similarly have evolved over time. For example, early scientific computers typically were stand-alone systems designed to carry out relatively specific tasks and required relatively knowledgeable users.
As computer systems evolved into the business arena, mainframe computers emerged. In mainframe systems, users utilized “dumb” terminals to provide input to and to receive output from the mainframe computer while all processing was done centrally by the mainframe computer. As users desired more autonomy in their choice of computing services, personal computers evolved to provide processing capability on each user's desktop. More recently, personal computers have given rise to relatively powerful computers called servers. Servers are typically multi-processor computers that couple numerous personal computers together in a network. In addition, these powerful servers are also finding applications in various other capacities, such as in the communications and Internet industries.
In many servers, multiple requesters (e.g., software threads, hardware, etc.) may contend for access to shared resources, such as memory. Each time a requester accesses memory, it is likely that the contents of a memory location will be altered. Thus, care must be taken in a system that provides for concurrent access to a shared resource to ensure that a requester is accessing valid data. In addition to problems arising from concurrent requests, a requester that has control of the resource may be interrupted, thus providing yet further opportunity for another requester to alter the contents of the shared resource. Without some sort of scheme to govern requests for access to a shared resource, data processing errors or unrecoverable faults may occur.
In many systems, multiple requests to a shared resource are governed by an arbitration scheme which grants access to the shared resource to only one requester at a time. The arbitration scheme typically results in a lock being placed on the critical region of the shared resource such that the other requesters are blocked until the current requester has completed the operation and released the lock. Such arbitration schemes become less effective as the number of requesters increases, as each requester must wait its turn to access the resource. Further, because the acts of acquiring and releasing the lock may result in communications being transmitted to each of the other requesters, bus bandwidth consumption and latency increase. Thus, these arbitration schemes may not readily scale to execution environments in which a large number of concurrent requests to a shared resource are possible.
Further, because access requests may be generated by entities which are not synchronous with each other, many such arbitration schemes typically involve operating system support to synchronize and arbitrate between requests, thus further increasing latency and affecting scalability. Still further, blocking arbitration schemes may have a detrimental effect on the reliability of the system, because such schemes may create a deadlock situation if the owner of a lock aborts or terminates before releasing the lock. Thus, availability of the server may be compromised, which may be disastrous for many types of applications.
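For illustration only, and not as a description of any patented arrangement, the sketch below shows the kind of lock-guarded FIFO that the blocking schemes above imply, written in C++ (the class name LockedFifo is invented for this example). Every requester must acquire the same mutex, so all other requesters stall while one holds it, and a holder that aborts inside the critical region can leave the queue permanently locked.

// Hypothetical illustration of a blocking (lock-based) shared FIFO; not the
// patented design. Each operation holds a single mutex, so requesters are
// serialized, and an owner that terminates without releasing the lock can
// deadlock everyone else.
#include <mutex>
#include <queue>

template <typename T>
class LockedFifo {
public:
    void enqueue(const T& value) {
        std::lock_guard<std::mutex> guard(lock_);  // blocks all other requesters
        items_.push(value);
    }

    bool dequeue(T& out) {
        std::lock_guard<std::mutex> guard(lock_);  // blocks all other requesters
        if (items_.empty())
            return false;
        out = items_.front();
        items_.pop();
        return true;
    }

private:
    std::mutex lock_;
    std::queue<T> items_;
};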
To overcome the disadvantages of the blocking arbitration schemes discussed above, algorithms have been developed which facilitate concurrent, non-blocking access to shared resources by multiple requesters and eliminate the need for bus master and operating system arbitration and synchronization. Ideally, a concurrent non-blocking algorithm provides for concurrent access to a shared resource by multiple requesters and ensures that a requester never will be blocked from access even if a previous requester abruptly terminates or aborts during the access to the resource. Because forward progress by other requesters is not hindered by a lock, such non-blocking algorithms can contribute to higher reliability and increased chance of continued availability of the system.
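As a rough sketch of this non-blocking style (not the patented algorithm itself; the function claim_slot and the shared tail index are invented for the example), the C++ fragment below uses an atomic compare-and-swap retry loop to let each requester claim the next slot index in a shared array. A requester that is preempted or aborts mid-operation never holds a lock, so it cannot prevent the others from making progress.

// Hypothetical sketch of lock-free slot claiming with compare-and-swap;
// not the patented algorithm. A failed CAS simply retries with the freshly
// observed value, so no requester ever waits on a lock held by another.
#include <atomic>
#include <cstddef>

std::atomic<std::size_t> tail{0};  // shared index of the next free slot

std::size_t claim_slot() {
    std::size_t observed = tail.load(std::memory_order_relaxed);
    // Try to advance tail from the value we observed; on failure, observed
    // is reloaded with the current value and we try again.
    while (!tail.compare_exchange_weak(observed, observed + 1,
                                       std::memory_order_acq_rel,
                                       std::memory_order_relaxed)) {
    }
    return observed;  // the slot index this requester now owns
}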
However, another type of problem may occur with many known concurrent non-blocking algorithms, one which may result in operational errors or deadlock situations. This problem involves a race condition that is referred to as the "ABA" condition. The ABA condition may occur when a first requester accesses queued data stored in a shared memory resource, but loses context before completing the operation which involved the access to memory. For instance, when accessing a shared memory location, the first requester initially may take a snapshot of the data stored in the memory location. If, for some reason, the first requester loses context before completing the memory operation, the data at the memory location may be altered by one or more other requesters. Generally, alteration of the data does not present a problem since, when the first requester regains context, the first requester can determine whether the data has been altered by comparing the present data with the snapshot taken prior to losing context.
The ABA condition arises when the same data value appears to be stored at the memory location in the queue when the first requester regains context, even though the memory location actually was written multiple times and the pointers to the current and next locations in the queue were incremented while the first requester was asleep. For instance, the first requester may take a snapshot of data "A" at the current location before losing context. A second requester then removes "A" from the current location in the queue and increments the queue pointer. Later, when the location again becomes current, a requester writes data "B" to the memory location. Then "B" is removed, "A" is written to that memory location, and the queue pointers are incremented. When the first requester regains context, it examines the memory location and sees "A." Thus, it appears that the contents of the memory location have not changed and that the first requester is accessing the queue in the correct order. In reality, however, the pointer to the current location or to the next location in the queue may have been altered, so if the first requester performs the access, errors in queue operations may result.
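One common mitigation for this race, sketched below in C++, is to pair the shared queue index with a generation counter that is incremented on every successful update; a compare-and-swap made against a stale snapshot then fails even when the index value has returned to "A." This is offered only as an illustration of the problem and one known countermeasure, not as the mechanism the patent claims; the names Tagged, head, and advance_head are invented for the example.

// Hypothetical sketch of an ABA countermeasure using a generation counter
// packed next to the queue index; not necessarily the patented mechanism.
#include <atomic>
#include <cstdint>

struct Tagged {
    std::uint32_t index;       // current queue position
    std::uint32_t generation;  // bumped on every successful update
};

std::atomic<std::uint64_t> head{0};  // index (low 32 bits) + generation (high 32 bits)

static std::uint64_t pack(Tagged t) {
    return (static_cast<std::uint64_t>(t.generation) << 32) | t.index;
}

static Tagged unpack(std::uint64_t raw) {
    return Tagged{static_cast<std::uint32_t>(raw & 0xffffffffu),
                  static_cast<std::uint32_t>(raw >> 32)};
}

// Advance the head only if our snapshot is still current. A requester that
// slept through intervening updates fails the CAS because the generation
// differs, even if the index itself has cycled back to its old value.
bool advance_head(std::uint32_t expected_index, std::uint32_t next_index) {
    std::uint64_t snapshot = head.load(std::memory_order_acquire);
    Tagged current = unpack(snapshot);
    if (current.index != expected_index)
        return false;  // the value visibly changed; caller must re-read
    Tagged updated{next_index, current.generation + 1};
    return head.compare_exchange_strong(snapshot, pack(updated),
                                        std::memory_order_acq_rel,
                                        std::memory_order_acquire);
}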
Accordingly, it would be desirable to provide a queue structure that can be accessed concurrently by multiple requesters without employing hardware or software locks or other forms of synchronization from the operating system. Still further, it would be desirable if such concurrent non-blocking access could be provided in a manner that minimizes the risk of the ABA race condition occurring.
The present invention may be directed to addressing one or more of the problems set forth above.


REFERENCES:
patent: 5812799 (1998-09-01), Zuravleff et al.
patent: 6178473 (2001-01-01), Bonola
patent: 6480918 (2002-11-01), McKenney et al.
“Simp
