Music event timing and delivery in a non-realtime environment

Music – Instruments – Electrical musical tone generation

Reexamination Certificate


Details

Patent class: C084S645000
Type: Reexamination Certificate
Status: active
Patent number: 06353172

ABSTRACT:

TECHNICAL FIELD
This invention relates to methods of sequencing music events and passing them to hardware drivers and associated devices for playing.
BACKGROUND OF THE INVENTION
Context-sensitive musical performances have become essential components of electronic and multimedia products such as stand-alone video games, computer-based video games, computer-based slide-show presentations, computer animation, and other similar products and applications. As a result, music-generating devices and/or music-playback devices have become tightly integrated with electronic and multimedia products.
Previously, musical accompaniment for multimedia products was provided in the form of pre-recorded music that could be retrieved and performed under various circumstances. One disadvantage of this technique was that the pre-recorded music required a substantial amount of memory storage. Another disadvantage was that the variety of music that could be provided was limited by the amount of available memory.
Today, music generating devices are directly integrated into electronic and multimedia products for composing and providing context-sensitive musical performances. These musical performances can be dynamically generated and varied in response to various input parameters, real-time events, and conditions. For instance, in a graphically based adventure game the background music can change from a happy, upbeat sound to a dark, eerie sound in response to a user entering into a cave or some other mystical area. Thus, a user can experience the sensation of live musical accompaniment as he engages in a multimedia experience.
In a typical prior art music generation architecture, an application program communicates with a synthesizer or synthesizer driver using some type of dedicated communication interface, commonly referred to as an “application programming interface” (API). In a system such as this, the application program delivers notes or other music events to the synthesizer, and the synthesizer plays the notes immediately upon receiving them. The notes and music events are represented as data structures containing information about the notes and other events, such as pitch, relative volume, duration, etc.
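By way of illustration only, and not as part of the patent text, a note event passed across such an API might be represented by a structure along the following lines; the structure, its field names, and the SynthPlayEvent call are assumptions made for this sketch rather than any actual interface:

    #include <cstdint>

    // Hypothetical representation of a single music event as it might be
    // passed from an application to a synthesizer or synthesizer driver.
    // Field names are illustrative, not taken from any particular API.
    struct MusicEvent {
        uint8_t  channel;      // MIDI-style channel (0-15)
        uint8_t  pitch;        // note number, e.g. 60 = middle C
        uint8_t  velocity;     // relative volume of the attack (0-127)
        uint32_t durationMs;   // how long the note should sound, in milliseconds
    };

    // In the prior-art model described above, the application hands each event
    // to the synthesizer as it is produced, and the synthesizer plays it at
    // once. SynthPlayEvent is a placeholder for whatever delivery call the
    // API exposes.
    void SynthPlayEvent(const MusicEvent& ev);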
In the past, synthesizers have been implemented in hardware as part of a computer's internal sound card or as an external device such as a MIDI (musical instrument digital interface) keyboard or module. With the availability of more powerful computer processors, however, synthesizers are now being implemented in computer software.
Whether the synthesizer is implemented in hardware or software, the delivery of music events needs to be precisely timed—each event needs to be delivered to the synthesizer at the precise time at which the event is to be played.
Achieving such precise delivery timing can be a problem when running under multitasking operating systems such as the Microsoft Windows operating system. In systems such as this, which switch between multiple concurrently-running application programs, it is often difficult to guarantee that an application program will be “active” at any particular time.
Various mechanisms, such as interrupt-based callbacks from the operating system, can be used to simulate real-time behavior and to thus ensure that events are delivered by application programs on time. However, this type of operation is awkward and is not supported in all environments. Other systems have utilized different forms of time-stamping, in which music events are delivered ahead of time along with associated indications (timestamps) of when the events are to happen. As implemented in the past, however, time-stamping has been somewhat restrictive. One problem with prior art time-stamping schemes is that not all synthesizers or other receiving devices have dealt with timestamps in the same way. In addition, the identification of a reference clock has been problematic.
Software-based synthesizers introduce further complications related to delivery timing. Specifically, a software-based synthesizer is more likely to exhibit a noticeable latency between the time it receives an event and the time the event is actually produced or heard. In contrast to a hardware synthesizer, which processes its various voices on a sample-by-sample basis, a software synthesizer typically produces wave data in discrete periods that can range from 10 milliseconds to over 50 milliseconds. Once the synthesizer has begun computing the wave data for an upcoming period, newly received events cannot take effect until the following period. Accordingly, such a software synthesizer exhibits a variable latency that depends on how far the synthesizer has progressed through the current period. Event delivery can become especially troublesome when delivering notes concurrently to different synthesizers, each of which might have a different (and constantly varying) latency.
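To make the variable-latency behavior concrete, the following sketch (with invented names and a simplified fixed-period rendering model) estimates the earliest master-clock time at which an event submitted now could be heard by a block-based software synthesizer; it is not drawn from the patent:

    #include <cstdint>

    // Illustrative only: a block-based software synthesizer that renders audio
    // in fixed periods cannot apply a new event until the period after the one
    // it is currently computing, so its effective latency varies over time.
    struct SynthState {
        uint64_t currentPeriodStart; // master-clock time, in milliseconds, at which
                                     // the period now being rendered begins
        uint32_t periodLengthMs;     // length of each rendering period, e.g. 10-50 ms
    };

    // Earliest master-clock time at which an event submitted "now" can sound.
    uint64_t EarliestRenderTime(const SynthState& s, uint64_t now) {
        // The period currently being computed is already fixed; a new event can
        // first take effect in the period that follows it.
        uint64_t nextPeriodStart = s.currentPeriodStart + s.periodLengthMs;
        return (now < nextPeriodStart) ? nextPeriodStart : now;
    }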
Yet another problem with the prior art arises because hardware drivers and software-based synthesizers are typically implemented in the kernel portion of a computer's operating system. Because of this, calling the synthesizer or hardware driver requires a ring transition (a transition from the application address space to the operating system address space) for each event delivered to the hardware driver or synthesizer. Ring transitions such as this are very expensive in terms of processor resources.
Thus, there is a need for an improvement in the way music events are delivered from application programs to music rendering devices such as synthesizers. Such a delivery system should work with synthesizers and other hardware drivers that have different latencies, including synthesizers and hardware drivers having variable latencies. It should also ease the burden of real-time event delivery, and reduce the overhead of application-to-kernel ring transitions.
SUMMARY OF THE INVENTION
In accordance with the invention, a master clock is maintained for use by application programs and by music processing components. Applications time-stamp music events before sending them to the music processing components, which then take responsibility for playing the events at the proper times with reference to the master clock. Each music processing component exposes a latency clock interface that indicates, at any moment, the earliest time (in the same time base as the master clock) at which a new event can be rendered. This interface gives application programs the information they need to deliver music events far enough in advance to overcome the variable latencies of the music processing components.
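A minimal sketch of this arrangement, using invented interface and method names (the summary above does not prescribe particular signatures): the component exposes a latency clock reporting the earliest renderable time in the master-clock time base, and the application stamps each event no earlier than that time:

    #include <algorithm>
    #include <cstdint>

    // Hypothetical interfaces illustrating the scheme described above.
    // All names are assumptions made for this sketch.
    struct IMasterClock {
        virtual uint64_t GetTime() const = 0;   // shared time base, e.g. in milliseconds
        virtual ~IMasterClock() = default;
    };

    struct ILatencyClock {
        // Earliest time, in master-clock units, at which an event submitted now
        // can actually be rendered by this component.
        virtual uint64_t GetLatencyTime() const = 0;
        virtual ~ILatencyClock() = default;
    };

    // The application chooses a timestamp that is no earlier than the component's
    // current latency horizon, so the event can always be played on time.
    uint64_t ChooseTimestamp(uint64_t desiredTime, const ILatencyClock& latency) {
        return std::max(desiredTime, latency.GetLatencyTime());
    }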
Rather than sending events one at a time to the music processing components, an application program periodically compiles groups or buffers containing time-stamped events that are to be played in the immediate future. These groups are provided to kernel-mode music processing components, so that a plurality of music events can be provided to kernel-mode components using only a single ring transition.
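The buffering scheme might be sketched as follows, again with invented names; the point of the sketch is that many time-stamped events cross the user/kernel boundary in one call rather than costing one ring transition per event:

    #include <cstdint>
    #include <vector>

    // Illustrative time-stamped event and buffer; names are assumptions.
    struct TimedEvent {
        uint64_t timestamp;   // master-clock time at which the event should play
        uint8_t  channel;
        uint8_t  pitch;
        uint8_t  velocity;
    };

    // Placeholder for the single call that crosses into kernel mode; in a real
    // system this would be an ioctl or driver entry point.
    void SubmitBufferToKernelComponent(const TimedEvent* events, size_t count);

    // Periodically flush all events scheduled for the immediate future in one
    // call, instead of paying one ring transition per event.
    void FlushEvents(std::vector<TimedEvent>& pending) {
        if (pending.empty()) return;
        SubmitBufferToKernelComponent(pending.data(), pending.size());
        pending.clear();
    }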


