Scalable audio processing on a heterogeneous processor array

Classification: Electrical computers and digital processing systems: multicomputer data transferring – Computer-to-computer data routing – Least weight routing
Type: Reexamination Certificate
Filed: 1998-02-17
Issued: 2001-10-09
Examiner: Isen, Forester W. (Department: 2644)
Additional U.S. Classes: C712S034000, C712S035000, C700S094000
Status: active
Patent Number: 06301603
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to apparatus and methods for dynamically scaling audio processing tasks from a default processor to one or more additional processors which may be of a different type.
2. Description of the Prior Art
Many applications for personal computers require audio processing either for music synthesis or audio effects such as artificial reverberation and 3D localization. Audio is used for musical education, background for advertising, sound effects and musical accompaniment for computer games, and entertainment. Music synthesis offers advantages over the playback of prerecorded music. It is easier to modify the musical accompaniment in response to actions of the listener, for example by changing the tempo or the orchestration as the intensity of game play increases. Also, the control parameters for the synthesizer require a much lower bandwidth than streams of digitized audio samples. Similarly, adding audio effects during playback makes it easier to modify the effect in response to actions of the listener, for example by changing the apparent position of a sound in response to joystick manipulations.
The most common method for controlling music synthesis in a multimedia system is via MIDI (Musical Instrument Digital Interface) commands. MIDI represents music as a series of events, such as “note on,” “note off,” and “volume.” MIDI organizes the synthesis process into sixteen logical channels, where each channel is assigned a particular “patch” (musical timbre). The stream of MIDI events is normally produced by an application such as a music sequencing program or game, but it can also be provided by an external controller such as a music keyboard. The music synthesizer responds to the MIDI stream to create the desired audio output. Synthesizers are normally able to synthesize some number of voices (often 32) at the same time. The MIDI standard permits these voices to have up to 16 different timbres.
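As an illustration of how a synthesizer might map such an event stream onto a fixed pool of voices, the following C++ sketch is hypothetical: the structure layouts and the 32-voice limit stand in for whatever a real engine would use and are not drawn from the patent.

```cpp
#include <array>
#include <cstdint>

// Hypothetical sketch of MIDI-driven voice allocation; names are illustrative.
struct MidiEvent {
    uint8_t status;   // high nibble: 0x90 = note on, 0x80 = note off; low nibble: channel
    uint8_t data1;    // note number
    uint8_t data2;    // velocity
};

struct Voice {
    bool    active;
    uint8_t channel;  // one of the 16 MIDI channels
    uint8_t note;
    uint8_t velocity;
};

class Synth {
public:
    static constexpr int kMaxVoices = 32;   // typical polyphony limit

    // Returns false if the event had to be dropped (no free voice).
    bool handle(const MidiEvent& e) {
        const uint8_t type    = e.status & 0xF0;
        const uint8_t channel = e.status & 0x0F;
        if (type == 0x90 && e.data2 > 0) {                       // note on
            for (auto& v : voices_) {
                if (!v.active) {
                    v = Voice{true, channel, e.data1, e.data2};
                    return true;
                }
            }
            return false;                                        // polyphony exhausted
        }
        if (type == 0x80 || (type == 0x90 && e.data2 == 0)) {    // note off
            for (auto& v : voices_) {
                if (v.active && v.channel == channel && v.note == e.data1)
                    v.active = false;
            }
        }
        return true;
    }

private:
    std::array<Voice, kMaxVoices> voices_{};                     // all voices start inactive
};
```

In this sketch a note-on event claims a free voice and is refused once the pool is exhausted, which foreshadows the resource limits discussed below.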
The most common way to control audio effects processing is through Application Program Interfaces (APIs) provided as part of the operating system running on the PC (e.g., Microsoft Windows 95). For example, the DirectSound3D API controls an audio effect that makes it seem as if a sound is emanating from any location surrounding the listener. Audio effects processors are normally able to process some number of audio streams at the same time (often 8 for 3D positioning).
Audio processing in personal computers is accomplished either using hardware accelerator chips (supplied on add-on cards or on the mother board) or using the host CPU. Hardware accelerator chips can be based on fixed-function hardware designed specifically for audio processing or general-purpose digital signal processors that are programmed for audio processing. Hardware accelerators increase cost, particularly when they are designed to support worst-case signal processing requirements.
Using the host processor has the advantage of reducing cost and hardware complexity, but burdening the host processor with audio processing tasks slows the currently running application, such as a game.
The computational requirements for audio processing often vary depending on the requirements of the application. For example, the number of voices of music synthesis required can vary from a few to 32, 64, or more. Similarly, the number of streams of 3D positioning can vary from 1 or 2 to 8, 16, or more. The current practice for implementing algorithms on hardware accelerators or the host CPU is to place an a priori limit on the number of signal processing tasks the algorithm will perform. Such a limit is required in a hardware accelerator to determine the hardware resources that need to be provided. In a host-based implementation, the limit is required to assure that some computational resources remain for the CPU to run the application, the operating system, and any other tasks it may be required to perform concurrently. Once a processor reaches its limit, the processor either ignores requests to perform additional tasks or it finds a way to shed tasks already running to make way for the new request.
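The current practice described above can be made concrete with the following sketch; the task and processor types are assumptions introduced only to illustrate the reject-or-shed policy, not structures defined by the patent.

```cpp
#include <list>
#include <string>

// Hypothetical processor with a fixed, a-priori task budget: a new request is
// either rejected outright or satisfied by shedding a lower-priority task.
struct AudioTask {
    std::string name;
    int cost;        // abstract processing cost, e.g. one unit per voice or stream
    int priority;    // lower value = more expendable
};

class FixedCapacityProcessor {
public:
    explicit FixedCapacityProcessor(int capacity) : capacity_(capacity) {}

    // Returns true if the task was accepted, possibly after shedding.
    bool request(const AudioTask& task, bool allowShedding) {
        if (load_ + task.cost <= capacity_) {
            accept(task);
            return true;
        }
        if (allowShedding) {
            // Shed the most expendable running task if that frees enough headroom.
            auto victim = running_.end();
            for (auto it = running_.begin(); it != running_.end(); ++it)
                if (victim == running_.end() || it->priority < victim->priority)
                    victim = it;
            if (victim != running_.end() &&
                load_ - victim->cost + task.cost <= capacity_) {
                load_ -= victim->cost;
                running_.erase(victim);
                accept(task);
                return true;
            }
        }
        return false;   // ignore the request, as current practice dictates
    }

private:
    void accept(const AudioTask& task) {
        running_.push_back(task);
        load_ += task.cost;
    }

    int capacity_;
    int load_ = 0;
    std::list<AudioTask> running_;
};
```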
A need remains in the art for apparatus and methods which allow music synthesis and audio effects processing to dynamically scale from a default processor to one or more additional processors which may not be of the same type—for example from a DSP to the host CPU—in a manner which permits the audio system to support more tasks as the need arises.
SUMMARY OF THE INVENTION
It is an object of the present invention to provide apparatus and methods which allow music synthesis and audio effects processing to dynamically scale from a default processor to additional heterogeneous processors, in a manner that is transparent to the user.
The present invention dynamically allocates audio processing tasks between two or more heterogeneous processors, for example a host processor such as might be found in a PC and a hardware acceleration unit such as might be found on a sound card in the PC. The audio processing load on each processor is determined and tasks are allocated based upon this determination. Each audio processor communicates with a common audio processing parameter data set to ensure that the sound quality is the same regardless of which processor is used.
In general, the goal of this invention is to optimally load the hardware acceleration unit(s), and only invoke the processing power of the host when the accelerator resources are used up—in other words, to keep the host as idle as possible while reserving its resources for cases in which the instantaneous processing load exceeds the accelerator capabilities.
It is important musically that the sound produced by the host processor and the hardware acceleration units be of identical quality so that the user is not aware of which resources are being used for processing. This “seamless” behavior requires that the processing engines running on the various processors implement the same algorithm despite differences in the architecture of the processors. It also requires that all processing engines receive the same audio processing parameters. Delivering the same audio processing parameters to all processing engines can be achieved by duplicating the synthesis and processing controls and parameters for the host and the accelerators, but this is inefficient in storage and access bandwidth. Instead, the present invention puts audio processing parameters in the memory of the host PC and permits the hardware accelerators to access these parameters via the bus access mechanisms found in contemporary multimedia systems (e.g., the “PCI bus” found in modern PCs).
The heterogeneous nature of the processor array results in differences in the time it takes each engine to produce audio output in response to control inputs (known as “processing latency”). The latencies of the various processors must be equalized to assure synchronization of the audio outputs.
The following is a partial list of key features of the present invention:
1) Configuration supports a host processor and a plurality of coprocessors.
2) Host determines allocation of audio tasks based upon load and available resources.
3) Preferred arrangement is to overflow to the host once the hardware accelerators are fully utilized.
4) Scaling of audio processing from one processor to others is seamless.
5) Host handles synchronization of processors.
6) Resources (such as host memory and D/A conversion) can be shared.
7) Supports autonomous audio processing units (host distributes commands to acceleration units).
8) Supports slave audio processing units (host manages all resource allocation of accelerators).
Apparatus according to the present invention dynamically allocates audio processing load between at least two heterogeneous audio processors, such as the host processor and a hardware acceleration unit. It includes means for determining the current audio processing load value for each processor, and means for allocating audio processing load among the processors dynamically, based upon the determined load values.
Inventors: Barish Jeffrey; Maher Robert Crawford; Bales Jennifer L.
Assignee: EuPhonics Incorporated
Primary Examiner: Isen Forester W.
Attorney/Agent: Macheledt Bales & Johnson LLP
Assistant Examiner: Pendleton Brian