Electrical computers and digital data processing systems: input/output – Input/output data processing – Direct memory accessing
Reexamination Certificate
2001-06-01
2004-09-21
Fleming, Fritz (Department: 2182)
active
06795875
ABSTRACT:
BACKGROUND OF THE INVENTION
1. The Field of the Invention
The present invention relates to systems and methods for transferring data to and from memory in a computer system. More particularly, the present invention relates to systems and methods for servicing the data and memory requirements of system devices by arbitrating the data requests of those devices.
2. The Prior State of the Art
An important operational aspect of a computer system is the need to transfer data to and from the computer's memory. However, if the computer's processor performs the task of transferring data to and from memory, the processor is unable to perform other functions during that time. When a computer supports high speed devices with significant memory needs, the processor bears a heavy load if it must copy data word by word to and from the memory system for those devices. As a result, using the processor to transfer data in this manner consumes precious processing time.
A solution to this problem is Direct Memory Access (DMA). A DMA controller relieves the processor of having to transfer data by permitting a device to transfer data to or from the computer's memory without involving the processor. A significant advantage of DMA is that large amounts of data may be transferred before an interrupt is generated to signal that the task is complete. Because the DMA controller performs the transfer, the processor is free to perform other tasks.
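The division of labor described above can be sketched in code. The following C fragment contrasts a processor copying data word by word with a processor that merely programs a DMA transfer and is then free. It is a minimal sketch: the dma_descriptor layout, dma_start(), and the callback standing in for the completion interrupt are illustrative assumptions, not the interface of any real controller.

```c
/* Minimal sketch of the two transfer styles described above.
   The dma_descriptor layout and dma_start() are hypothetical. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* CPU-driven copy: the processor moves every word itself. */
static void cpu_copy(uint32_t *dst, const uint32_t *src, size_t words)
{
    for (size_t i = 0; i < words; i++)
        dst[i] = src[i];              /* processor is busy for the whole loop */
}

/* DMA-driven copy: the processor only fills in a descriptor and
   starts the controller; completion is signaled by an interrupt. */
struct dma_descriptor {
    const void *src;
    void       *dst;
    size_t      bytes;
    void      (*on_complete)(void);   /* stands in for the interrupt */
};

static void transfer_done(void) { puts("DMA completion interrupt"); }

static void dma_start(struct dma_descriptor *d)
{
    /* In hardware the controller moves the data; memcpy simulates it here. */
    memcpy(d->dst, d->src, d->bytes);
    d->on_complete();
}

int main(void)
{
    uint32_t src[256], dst[256];
    for (int i = 0; i < 256; i++) src[i] = (uint32_t)i;

    cpu_copy(dst, src, 256);          /* ties up the processor */

    struct dma_descriptor d = { src, dst, sizeof src, transfer_done };
    dma_start(&d);                    /* processor is free after this call */
    return 0;
}
```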
As computer systems become more sophisticated, however, it is increasingly evident that there is a fundamental mismatch between the devices that take advantage of DMA and the memory systems of those computers. More specifically, current DMA modules struggle to adequately service the growing number of high speed devices and their varying data requirements.
High performance memory systems provide high bandwidth and favor large data requests. Many devices, in direct contrast, request small amounts of data, have low bandwidth, and require low latencies. This results in system inefficiencies as traditional devices individually communicate with the memory system in an effort to bridge the gap. Many different devices may simultaneously make small data requests to a memory system that prefers to handle large requests, and the performance of the memory system decreases as a result.
This situation makes it difficult for low bandwidth devices, which may have high priority, to interact effectively with high bandwidth devices that may have lower priority. For example, an audio device may support several different channels that receive data from memory, and it typically makes a data request for those channels every few microseconds. Because an audio device may experience significant latency from the memory system before a request is serviced, it may implement an excessively large buffer to absorb that latency.
This is not an optimum solution for several reasons. For instance, many devices maintain a large buffer because they have no guarantee that their data requests will be serviced within a particular time period. Other devices maintain an excessively large buffer because the data must be delivered in a timely manner even though the devices may have low bandwidth requirements; if an audio device does not receive its data on time, for example, the result is instantly noticed by the user. Additionally, each device must implement its own DMA control logic, which can be quite complex for some devices. In other words, the DMA control logic is effectively repeated for each device.
Current devices often interact with DMA systems independently of the other system devices, and each device in the system is able to make a data request to the DMA at any time, making it difficult to determine which devices need to be serviced first. Systems employing isochronous arbitration often define fixed windows in which every device that may require servicing is given a portion. These fixed windows are large from the perspective of high bandwidth devices and small from the perspective of low bandwidth devices. Thus, high bandwidth devices are required to buffer more data than they really need, while low bandwidth devices often do not use their allocated portion of the window. This is inefficient because the available bandwidth may go partly unused and additional memory is required for the buffers of high bandwidth devices. In essence, current systems do not allow high priority devices to coexist efficiently with high bandwidth devices.
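As a rough illustration of the fixed-window scheme criticized above, the toy C model below gives every device the same slot each window. The device names, slot size, and word counts are invented for illustration; the point is only that a low bandwidth device leaves most of its slot unused while a high bandwidth device must buffer whatever does not fit.

```c
/* Toy model of fixed-window isochronous arbitration: every device
   gets the same slot size each window, whatever it actually needs.
   Device names and word counts are illustrative only. */
#include <stdio.h>

struct device { const char *name; int words_needed; };

#define SLOT_WORDS 8   /* one fixed slot per device per window */

int main(void)
{
    struct device devs[] = {
        { "audio (low bandwidth)",   2 },  /* wastes most of its slot  */
        { "video (high bandwidth)", 32 },  /* must buffer the overflow */
    };
    int wasted = 0, deferred = 0;

    for (int i = 0; i < 2; i++) {
        int served = devs[i].words_needed < SLOT_WORDS
                   ? devs[i].words_needed : SLOT_WORDS;
        wasted   += SLOT_WORDS - served;
        deferred += devs[i].words_needed - served;
        printf("%s: served %d of %d words\n",
               devs[i].name, served, devs[i].words_needed);
    }
    printf("unused slot words: %d, words pushed into device buffers: %d\n",
           wasted, deferred);
    return 0;
}
```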
SUMMARY OF THE INVENTION
The present invention provides a DMA engine that manages the data requirements and requests of system devices. The DMA engine includes a data reservoir that consolidates the separate memory buffers of the devices and provides centralized addressing; the data reservoir is divided into smaller portions that correspond to the individual devices. The DMA engine also provides scalable bandwidth and latency to the system devices. An overall feature of the present invention is the ability to guarantee that a particular device will be serviced within a programmable response time. This guarantee allows buffer sizes to be reduced, which conserves memory, and permits the available bandwidth to be utilized efficiently.
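One way to picture the data reservoir is as a single backing store carved into per-device portions with centralized addressing. The C sketch below is a minimal model under that assumption; the struct layout, the sizes, and the reservoir_assign() helper are hypothetical, not the patent's actual organization.

```c
/* Minimal sketch of the data reservoir idea: one backing store
   partitioned into per-device regions, replacing the separate
   buffers each device would otherwise maintain. Layout and sizes
   are assumptions for illustration. */
#include <stdint.h>
#include <stdio.h>

#define RESERVOIR_BYTES 4096
#define MAX_DEVICES     4

struct region { uint32_t offset, length, fill; };

struct data_reservoir {
    uint8_t       store[RESERVOIR_BYTES]; /* consolidated memory    */
    struct region region[MAX_DEVICES];    /* one portion per device */
    uint32_t      next_free;              /* centralized addressing */
};

/* Carve a portion out of the shared store for one device. */
static int reservoir_assign(struct data_reservoir *r, int dev, uint32_t len)
{
    if (r->next_free + len > RESERVOIR_BYTES)
        return -1;                        /* reservoir exhausted    */
    r->region[dev] = (struct region){ r->next_free, len, 0 };
    r->next_free += len;
    return 0;
}

int main(void)
{
    struct data_reservoir r = {0};
    reservoir_assign(&r, 0, 256);   /* e.g. audio: small portion    */
    reservoir_assign(&r, 1, 2048);  /* e.g. video: large portion    */
    printf("device 1 portion: offset %u, length %u\n",
           (unsigned)r.region[1].offset, (unsigned)r.region[1].length);
    return 0;
}
```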
Because the DMA engine maintains the data reservoir, the DMA engine is responsible for providing each device with the data that the device requests. At the same time, the DMA engine is also responsible for monitoring the remaining data in the data reservoir such that a data request can be made to the system's memory when more data is required for a particular portion of the data reservoir. To accomplish these tasks, the DMA engine provides arbitration functionality to the devices as well as to the memory.
The arbitration functionality provided to the devices determines which devices are eligible to make a data request in a particular cycle. Each device may have multiple data channels, but the device is treated as a unit from the perspective of the DMA engine. Because only some of the devices are eligible during a particular cycle, every device is assured of being serviced within a particular time period, and high bandwidth devices are not permitted to consume more bandwidth than they were allocated.
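A minimal model of this device-side arbitration might give each device a period and a phase, so that only a subset of devices is eligible in any given cycle and each device's share of cycles is bounded by construction. The period/phase scheme below is an assumption used for illustration, not the patent's specific mechanism.

```c
/* Sketch of device-side eligibility: in each cycle only a subset of
   devices may raise a request, so a high bandwidth device cannot
   exceed its allocation and every device is reached within a bounded
   number of cycles. The period/phase scheme is an assumption. */
#include <stdio.h>

struct device_slot {
    const char *name;
    int period;  /* device is eligible once every 'period' cycles */
    int phase;   /* which cycle within the period it owns         */
};

static int eligible(const struct device_slot *d, int cycle)
{
    return cycle % d->period == d->phase;
}

int main(void)
{
    struct device_slot devs[] = {
        { "video", 2, 0 },  /* high bandwidth: eligible every 2nd cycle */
        { "audio", 4, 1 },  /* low bandwidth: eligible every 4th cycle  */
        { "uart",  4, 3 },
    };

    for (int cycle = 0; cycle < 8; cycle++)
        for (int i = 0; i < 3; i++)
            if (eligible(&devs[i], cycle))
                printf("cycle %d: %s may request\n", cycle, devs[i].name);
    return 0;
}
```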
The arbitration functionality provided between the DMA engine and the memory occurs on a per channel basis rather than a per device basis. Each channel is evaluated in turn to determine whether a data request should be made to memory or whether the channel can wait until it is evaluated again in the future. Because the number of channels is known and because the time needed to service a particular channel is known, each channel is assured of being serviced within a particular time period. This guarantee ensures that the data reservoir will have the data required by the system devices.
The arbitration interface between the system memory and the DMA engine addresses the data needs of each channel in a successive fashion by using a list that contains at least one entry for each channel. The DMA engine repeatedly cycles through the entries in the list to evaluate the data or memory requirements of each channel. In addition, the order in which the channels are evaluated can be programmed such that high bandwidth devices are serviced more frequently, while low bandwidth devices are serviced within a programmable time period. Thus, data requests to or from memory are for larger blocks of data that can withstand some latency.
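The per-channel arbitration and reservoir monitoring described in the preceding paragraphs might be modeled as follows: a programmable service list with at least one entry per channel is walked in a repeating cycle, a high bandwidth channel appears in the list more than once so it is evaluated more frequently, and a channel issues a large block request to memory only when its portion of the reservoir falls below a threshold. The watermark values, refill size, and channel names in this sketch are illustrative assumptions.

```c
/* Sketch of memory-side arbitration: a programmable list with at
   least one entry per channel, walked in a fixed cycle. Repeating
   an entry makes a high bandwidth channel be evaluated more often.
   Watermarks and the refill size are illustrative assumptions. */
#include <stdio.h>

#define NUM_CHANNELS 3
#define REFILL_WORDS 64    /* large block request, as memory prefers */

struct channel { const char *name; int fill, capacity, watermark; };

/* Programmable service order: channel 0 (high bandwidth) appears twice. */
static const int service_list[] = { 0, 1, 0, 2 };
#define LIST_LEN (int)(sizeof service_list / sizeof service_list[0])

/* Evaluate one channel: request memory only if it is running low;
   otherwise it can wait until it is evaluated again. */
static void evaluate(struct channel *c)
{
    if (c->fill < c->watermark) {
        int want = c->capacity - c->fill;
        int got  = want < REFILL_WORDS ? want : REFILL_WORDS;
        c->fill += got;
        printf("%s below watermark: requested %d words\n", c->name, got);
    }
}

int main(void)
{
    struct channel ch[NUM_CHANNELS] = {
        { "video0", 10, 256, 128 },
        { "audioL", 40,  64,  16 },
        { "audioR", 40,  64,  16 },
    };

    /* The engine cycles through the list indefinitely; two passes here.
       Because the list length and per-entry service time are known,
       each channel's worst-case wait is bounded. */
    for (int pass = 0; pass < 2; pass++)
        for (int i = 0; i < LIST_LEN; i++)
            evaluate(&ch[service_list[i]]);
    return 0;
}
```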
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the invention.
Inventors: Ahsan Agha Zaigham; Gray Donald M.; Martinez David
Assignee: Microsoft Corporation
Law firm: Workman Nydegger