Electrical computers and digital data processing systems: input/output – Input/output data processing – Input/output data buffering
Reexamination Certificate
2000-08-31
2004-06-22
Gaffin, Jeffrey (Department: 2182)
C710S005000, C710S007000, C710S036000, C710S053000, C710S056000, C711S147000, C702S010000
active
06754739
ABSTRACT:
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
Not applicable.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention generally relates to a method and architecture for managing and allocating resources in a computer system. More particularly, the invention relates to a resource allocation and management scheme for memory buffers that ensures system progress (i.e., prevents system deadlock), maximizes utilization of available system resources, and guarantees a minimum service rate for requests.
2. Background of the Invention
In a microprocessor-based computer system, there are generally two types of messages or transactions: requests and responses. A request is issued when the processor needs information; requests place demands on resources and include actions such as reads and fetches. A request allocates, or ties up, a resource while that resource handles the request. Responses typically satisfy requests and include completion messages and write-backs. A response deallocates, or frees up, a resource by indicating that the resource is done or available. The system resources receiving these messages are typically memory or input/output (“I/O”) devices.
Computer systems often attempt to control the issuance of response and request messages in order to optimize system performance. This control is often implemented with pending transaction logic, which queues the messages and then selectively issues them based on certain predetermined criteria. Additionally, buffers may be employed in the computer memory system to hold data relating to a message until the message is completed or the data is fully consumed. These buffers facilitate data prefetching by allowing memory controllers to buffer data in advance of its need, minimizing the perceived memory latency. Buffer usage, like that of other resources in a computer system, is managed by architecture and/or logic known as a resource management or allocation scheme.
Despite the system's control over the issuance of transactions or messages, differing numbers of requests and responses are still issued at any given time. In fact, all of the pending transactions may be of one type or the other. This imbalance between requests and responses means the system cannot simply alternate between the two (nor would strict alternation necessarily be the most efficient scheme for system performance). Necessarily, then, there will be times when multiple request or response messages must be issued consecutively.
A problem can occur, however, when a continuous stream of request messages is issued. In particular, if response messages are indefinitely blocked by a continuous stream of request messages, a deadlock condition may arise. Specifically, since responses are required to deallocate, or free up, resources, blocked responses mean system resources cannot be freed. Since requests require the allocation of system resources, and since those resources cannot be freed while response messages are blocked, there is the potential for a deadlock. Obviously, any resource management scheme must avoid this condition. The key is that where responses and requests use the same physical resources, requests must not indefinitely block responses, or deadlock may occur.
Therefore, where the same memory buffers are used in a system to hold data for both responses and requests, the buffers must be managed so that they do not become full of requests with no opportunity for responses to get through. Otherwise, a deadlock could result in which the requests are waiting for resources to free up while those resources cannot be deallocated because a response message cannot be processed. Moreover, some response messages require both an allocation and a deallocation of resources, such as a read-modify-write operation in which the data must first be read, then modified, and then written back to the resource. This kind of operation can be required for parity, or where only a portion of the data is being overwritten and the remainder must be retained. Obviously, these operations will similarly deadlock when the read portion of the response message is blocked because all of the available buffer resources are taken up by pending requests waiting for responses that cannot be processed.
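The deadlock scenario above can be illustrated with a toy admission model (the names `admit`, `buffer`, and `capacity` are invented for illustration and do not appear in the patent): when no entries are reserved for responses, a stream of requests fills every buffer entry, and the very responses that would deallocate those entries are themselves refused.

```python
# Toy model: a shared buffer with no per-type reservation.
# Requests and responses compete for the same entries.

def admit(buffer, capacity, message):
    """Admit a message into the buffer if any entry is free."""
    if len(buffer) < capacity:
        buffer.append(message)
        return True
    return False  # no free entry; the message is blocked

buffer, capacity = [], 4
for _ in range(capacity):
    admit(buffer, capacity, 'request')   # requests fill every entry

# The response that would free an entry cannot itself be admitted.
blocked = not admit(buffer, capacity, 'response')
print(blocked)   # True: deadlock, since only responses deallocate entries
```

With every entry held by a pending request, no response can enter to deallocate anything, which is precisely the condition the invention's reserved entries are designed to rule out.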
To avoid this problem, many systems simply dedicate a certain amount of the buffers or other system resources to requests and a certain amount to responses. Often the available resources are simply split between requests and responses; for example, half of the available buffer entries would be dedicated to requests and the other half to responses. This can be wasteful and inefficient, however, if there are not equal numbers of requests and responses to process. Specifically, if a stream of requests (or responses) needs to be processed, only half of the available buffer can be used. The other half of the buffer remains reserved for responses (or requests) even if none are currently pending. This results in decreased system performance and under-utilization of the system's available resources.
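The capacity lost to a static half-and-half split can be quantified with a small sketch (the function names are invented for illustration): with a buffer of N entries split evenly, a burst of pure requests can occupy at most N/2 entries, whereas a scheme that hard-reserves only a single entry per message type leaves N−1 entries usable by the burst.

```python
# Worst-case usable entries for a burst consisting only of requests.

def max_request_occupancy_static(total):
    """Static split: half the entries are hard-reserved for responses."""
    return total // 2

def max_request_occupancy_dynamic(total):
    """Single reserved entry per type: only the one response-reserved
    entry is off-limits to requests."""
    return total - 1

print(max_request_occupancy_static(8))   # 4
print(max_request_occupancy_dynamic(8))  # 7
```

The gap widens with buffer size: the static split strands N/2 − 1 entries during any one-sided burst.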
The present invention is directed at a method and architecture for dynamically sharing system resources, such as buffers, between requests and responses to enhance system performance while avoiding system deadlocks.
BRIEF SUMMARY OF THE INVENTION
In accordance with the present invention, there is provided a novel method and architecture for system resource allocation and management. In particular, the method and architecture of the present invention allow for a novel resource management scheme to be utilized wherein system resources are dynamically shared resulting in enhanced system performance. This is accomplished by reserving a minimum amount of system resources for responses and a minimum amount of resources for requests, while the remainder of system resources is dynamically shared between responses and requests. This more efficiently uses the available system resources to enhance system performance, while ensuring no system deadlocks and a minimum service rate for requests.
The inventive architecture for resource management and allocation of system resources in a microprocessor-based system comprises: pending transaction logic that manages request and response messages; a buffer, having a plurality of entries, coupled to the pending transaction logic for storing data associated with the request and response messages; and status control logic coupled to the buffer for monitoring the allocation status of the entries in the buffer; wherein a first entry of the buffer is reserved for response messages only, a second entry is reserved for request messages only, and the remaining entries of the buffer are indiscriminately shared between request and response messages.
The inventive method of managing and allocating system resources, comprises: providing pending transaction logic that manages request and response messages; providing a buffer having a plurality of entries coupled to the pending transaction logic for storing data associated with the request and response messages; providing status control logic coupled to the pending transaction logic and coupled to the buffer for monitoring the allocation status of the entries in the buffer; reserving a first entry of the buffer for response messages only; reserving a second entry of the buffer for request messages only; and sharing the remaining entries of the buffer indiscriminately between the request and response messages.
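The reservation policy in the method above can be sketched as a minimal software model (the names `BufferAllocator`, `try_allocate`, and `release` are invented; the patent describes hardware status control logic, not software): allocation draws from the shared pool first, falling back to the single per-type reserved entry, and release refills that reserved entry before returning capacity to the shared pool.

```python
class BufferAllocator:
    """Toy model of a buffer with one entry reserved per message type
    and the remaining entries dynamically shared."""

    def __init__(self, total_entries):
        assert total_entries >= 2, "need one reserved entry per type"
        self.shared_free = total_entries - 2   # dynamically shared pool
        self.response_reserved_free = True     # entry reserved for responses
        self.request_reserved_free = True      # entry reserved for requests

    def try_allocate(self, kind):
        """kind is 'request' or 'response'; returns True if granted."""
        if self.shared_free > 0:               # prefer the shared pool
            self.shared_free -= 1
            return True
        if kind == 'response' and self.response_reserved_free:
            self.response_reserved_free = False
            return True
        if kind == 'request' and self.request_reserved_free:
            self.request_reserved_free = False
            return True
        return False                           # caller must retry later

    def release(self, kind):
        """Free one entry; refill this type's reserved entry first."""
        if kind == 'response' and not self.response_reserved_free:
            self.response_reserved_free = True
        elif kind == 'request' and not self.request_reserved_free:
            self.request_reserved_free = True
        else:
            self.shared_free += 1
```

In this model, even after a burst of requests exhausts the shared pool and the request-reserved entry, a response can still claim its own reserved entry and make progress, which is how the scheme guarantees both deadlock freedom and a minimum service rate for each message type.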
And finally, the inventive computer system having the novel architecture for resource management and allocation of system resources comprises: an I/O device; a microprocessor; pending transaction logic coupled to the microprocessor and to the I/O device, where the pending transaction logic manages request and response messages to and from the microprocessor; a buffer having a plurality of entries coupled to the pending transaction logic for storing data associated with the request and response messages; and status control logic coupled to the buffer for monitoring the allocation status of the entries in the buffer; wherein a first entry of the buffer is reserved for response messages only, a second entry is reserved for request messages only, and the remaining entries are indiscriminately shared between request and response messages.
Bertone Michael S.
Bouchard Gregg A.
Kessler Richard E.
Steinman Maurice B.
Farooq Mohammad O.
Gaffin Jeffrey
Hewlett-Packard Development Company