Resource management system having a control structure to...

Electrical computers and digital data processing systems: input/output – Input/output data processing – Input/output data buffering

Reexamination Certificate


Details

C710S113000, C710S119000, C710S240000, C709S241000

Reexamination Certificate

active

06233630

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Technical Field of the Invention
This invention relates to a resource management system and method. More particularly, it relates to a reusable integer pool for managing access to any reusable resource, including, but not limited to, data channels, data buffers, hardware status bits and logical connections.
2. Background Art
This invention addresses a class of problems that occur in either device driver or application program design. This class of problems can be identified by the following set of characteristics:
1. A limited number of resources of some type, none of which can be used by more than one user simultaneously, are available to the program. Permission to use such a resource must be obtained before attempting to use it.
2. The permission to use a resource is a “ticket” for a specific resource only (i.e., buffer #1, or program #5, or channel #2, etc.).
3. The maximum number of possible requesters exceeds the maximum available resources (otherwise, permanent assignment of a resource to each requester could be made thereby obviating the entire problem and this solution).
The resource(s) in question can be one of a wide variety of typical system resources. This background will focus on two such areas of application: (1) buffers in main storage on an adapter, and (2) status register bits. Inasmuch as the prior art approaches are different given the specifics of the problem, there is a need in the art to provide a single common approach that is superior to each specific prior art approach.
Application developers often use data buffer areas as sources or targets of operations such as direct memory access (DMA). In many cases, buffers cannot be allocated on-the-fly. This might be for performance reasons, due to limited storage capacity, or because buffers for DMA operations must be in a specific physical address location. In such a case, some number of buffer areas must be prepared for later use. These buffers must then be tracked or managed as they are doled out, used, and then returned by a subprocess to be available for use by another. This management is essential to prevent errors that would occur if more than one thread or process attempted to utilize the same buffer simultaneously.
Referring to FIGS. 1 and 2, the prior art approach is to allocate an array of such data buffers and to chain them together in the form of a linked list 108. In this case there are four available buffers 114, 116, 118, 120. Referring to FIG. 1, the variable “free-list” 110 is a pointer to first buffer 114. Each buffer area, in turn, has a pointer 112, 115, 117, 119 to the next available buffer. The final buffer 120 in the chain has a pointer value 122 of 0 indicating the end of list. When a data buffer is requested, free-list pointer 110 is consulted. If it is non-zero, a buffer is available. The pointer to it is “given” to the requesting process (it now has access to that buffer) and the free-list variable is updated to point to the next available buffer.
Referring to FIG. 2, the state of list 108 following this operation is shown. First buffer 114 is in use by a process, and free-list pointer 110 now points 113 to next buffer area 116, which is now the first available buffer in list 108. Operation continues with processes requesting buffers (and receiving access to them) so long as there are free buffers. When a process is finished using a buffer, it returns it to the list in a process which is the inverse of that used to obtain a buffer. The “next” pointer 112 in the buffer structure 114 being returned is updated with the current free-list pointer 113, and the free-list pointer 110 is updated with the address 111 of the buffer 114 being returned.
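This scheme can be illustrated with a minimal C sketch of such a free-list buffer pool. The type and function names below (buffer_t, buf_alloc, buf_free, and the sizes) are invented for illustration and do not appear in the patent; locking against concurrent callers is omitted for brevity.

```c
#include <stddef.h>

#define NUM_BUFFERS 4
#define BUF_SIZE    2048              /* payload size; illustrative only */

/* Each buffer carries a "next" pointer purely to manage the free list --
 * the space overhead noted as drawback 1 below. */
typedef struct buffer {
    struct buffer *next;              /* next free buffer, or NULL at end of list */
    unsigned char  data[BUF_SIZE];
} buffer_t;

static buffer_t  pool[NUM_BUFFERS];
static buffer_t *free_list;           /* head of the chain of free buffers */

/* Chain all buffers together once at start-up. */
static void pool_init(void)
{
    for (int i = 0; i < NUM_BUFFERS - 1; i++)
        pool[i].next = &pool[i + 1];
    pool[NUM_BUFFERS - 1].next = NULL;    /* end-of-list marker (pointer value 0) */
    free_list = &pool[0];
}

/* Request a buffer: consult free_list; if it is non-NULL, hand out the head
 * and advance free_list to the next available buffer. */
static buffer_t *buf_alloc(void)
{
    buffer_t *b = free_list;
    if (b != NULL)
        free_list = b->next;
    return b;                         /* NULL means no buffer is available */
}

/* Return a buffer: the inverse operation -- the returned buffer's "next" is
 * set to the current head, and free_list points at the returned buffer. */
static void buf_free(buffer_t *b)
{
    b->next   = free_list;
    free_list = b;
}
```

The drawbacks discussed next follow directly from this layout: the next pointer occupies space inside every buffer, and every allocation or return must dereference those (possibly far) pointers just to maintain the chain.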
There are several drawbacks to this approach, as follows:
1. Precious buffer space is consumed by the pointer variable 112, 122 that must be allotted in each buffer area. This space exists only to manage the linked list and serves no other useful purpose with respect to the actual use of the buffer. For larger numbers of data buffers, with pointers that might consume 4 bytes each, this can add up.
2. Depending upon the underlying hardware and operating system architecture (e.g., OS/2, Windows NT), these pointers 112, 122 may be what are known as “far” pointers. Far pointers are pointers to memory that lies in another memory segment. Accessing data in a far segment consumes additional CPU cycles since segment registers must be reloaded, and when this is done, a variety of hardware checks are performed “under the covers” to validate the new segment access.
Access of data through far pointers is often a simple necessity, but the prior art method which employs a list 108 of far memory structures 114, 116, 118, 120 via a chain of pointers 111, 115, 117, 119 requires the manipulation of far memory pointers simply to manage the list. Consequently, there is a need in the art for a system and method which requires no far memory manipulations until the buffer is actually to be accessed.
A recurring problem in device driver or application software development is the need to manage or otherwise track the use of a limited number of serially re-usable resources. Serially re-usable implies that the resource is not consumed or otherwise destroyed by its use (it is ‘returned’ when its use is finished) and, most significantly, implies that the resource may not be used by more than one user at a time.
A device driver design often encounters this type of problem when controlling an adapter that provides facilities for performing a requested operation and, on completion of the operation, the adapter informs the requester by setting a status bit in a register. Assuming a 32-bit architecture, the adapter may have 32 individual status bits for reporting completion status on up to 32 individual operations at a time. In order to distinguish between an operation completed for one requester versus that performed and completed for another, the device driver must assign a unique completion status bit in a bit-mapped status register for a given requester (and must be looking for the setting of that specific status bit and no other). Finally, when the requested operation is complete, the status bit (likely distinguished by its integer bit number) must be made available for reuse by some other requester as there might be more total requesters than available bits.
Referring to FIG. 3, a 32-bit “status” register is shown. Each bit 134, 136, 138 in bit positions 00-31 in register 130 indicates the completion status of a certain operation. For example, a number of bits 134, 138, etc., are “set”, indicating the completion status of processes (or requests) corresponding to status bit positions 132 of 31, 24-26, 13-16, 11, 4-7, and 0-2. Thus, an adapter may be able to handle multiple outstanding requests to perform an operation, and it reports the completion status by setting a bit 134, 138, etc., in status register 130 corresponding to that request.
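By way of illustration only (the register address, access macro, and helper name below are assumptions, not details from the patent), a requester that has been assigned bit n might test for completion roughly as follows:

```c
#include <stdint.h>

/* Hypothetical memory-mapped 32-bit completion status register;
 * the address is purely illustrative. */
#define STATUS_REG (*(volatile uint32_t *)0xA0000000u)

/* Nonzero when the adapter has set status bit 'n' (0-31), i.e. the
 * operation tracked by that bit has completed. */
static int op_complete(unsigned n)
{
    return (STATUS_REG & (UINT32_C(1) << n)) != 0;
}
```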
For multiple requests to be tracked, there must be a mechanism for uniquely assigning bits to requests and reusing the bits when the current request is completed. Referring further to FIG. 3, the common approach to this problem is to keep a shadow copy of status register 130 having bit positions 132, setting a bit 134 in the shadow when that status bit number is passed out to a requesting process, and clearing a bit 136 when the bit is no longer needed. The inefficiency in this approach is that the shadow register must be scanned each time another process requests a status bit 134, 136 for use. This scanning is typically performed by shifting the shadow and testing the least significant bit position 00 139 to see if it is zero. If it is, then the number of shifts performed yields the status bit that can be used. In a steady state case, where bits are being “checked out” and returned at random intervals, approximately 16 such shift and test operations will be required.
