Creating music via concatenative synthesis

Music – Instruments – Electrical musical tone generation

Reexamination Certificate


Details

US Classification: C084S609000, C084S645000
Type: Reexamination Certificate
Status: active
Patent Number: 07737354

ABSTRACT:
A “Concatenative Synthesizer” applies concatenative synthesis to create a musical output from a database of musical notes and an input musical score (such as a MIDI score or other computer readable musical score format). In various embodiments, the musical output is either a music score, or an analog or digital audio file. This musical output is constructed by evaluating the database of musical notes to identify sets of candidate notes for each note of the input musical score. An “optimal path” through candidate notes is identified by minimizing an overall cost function through the candidate notes relative to the input musical score. The musical output is then constructed by concatenating the selected candidate notes. In further embodiments, the database of musical notes is generated from any desired musical genre, performer, performance, or instrument. Furthermore, notes in the database may be modified to better fit notes of the input musical score.
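The abstract describes selecting one candidate note per score note by minimizing an overall cost function along a path, which has the shape of a Viterbi-style dynamic program. The sketch below is illustrative only: the note representation (bare MIDI pitch numbers), the target and concatenation cost functions, and the weights are invented for the example and are not taken from the patent.

```python
# Illustrative sketch (not the patented implementation): choosing an
# "optimal path" of candidate notes by minimizing a total cost with
# dynamic programming, one candidate picked per input score note.

def optimal_path(score, candidates, target_cost, concat_cost):
    """score: list of desired notes; candidates: one list of candidate
    notes per score note. Returns one candidate index per score note,
    minimizing total target + concatenation cost (Viterbi-style DP)."""
    n = len(score)
    # best[i][j]: minimal cost of any path ending at candidate j of note i
    best = [[target_cost(score[0], c) for c in candidates[0]]]
    back = []  # back[i-1][j]: best predecessor index for candidate j
    for i in range(1, n):
        row, brow = [], []
        for c in candidates[i]:
            tc = target_cost(score[i], c)
            costs = [best[i - 1][k] + concat_cost(p, c)
                     for k, p in enumerate(candidates[i - 1])]
            k = min(range(len(costs)), key=costs.__getitem__)
            row.append(tc + costs[k])
            brow.append(k)
        best.append(row)
        back.append(brow)
    # Backtrack from the cheapest final candidate.
    j = min(range(len(best[-1])), key=best[-1].__getitem__)
    path = [j]
    for brow in reversed(back):
        j = brow[j]
        path.append(j)
    return list(reversed(path))

# Toy example: notes are MIDI pitches; target cost penalizes pitch
# mismatch, concatenation cost penalizes jumps between adjacent picks.
score = [60, 62, 64]
candidates = [[59, 60, 61], [61, 62, 63], [63, 64, 65]]
tc = lambda want, got: abs(want - got)
cc = lambda prev, cur: 0.1 * abs(cur - prev)
print(optimal_path(score, candidates, tc, cc))  # prints [1, 1, 1]
```

In this toy run the exact-pitch candidate wins at every position; a real system would also weigh timbre, duration, and loudness features when scoring candidates, and the abstract further notes that database notes may be modified to better fit the score before concatenation.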

REFERENCES:
patent: 4527274 (1985-07-01), Gaynor
patent: 4613985 (1986-09-01), Hashimoto
patent: 5703311 (1997-12-01), Ohta
patent: 5750912 (1998-05-01), Matsumoto
patent: 5895449 (1999-04-01), Nakajima
patent: 6304846 (2001-10-01), George
patent: 6424944 (2002-07-01), Hikawa
patent: 6576828 (2003-06-01), Aoki
patent: 7015389 (2006-03-01), Georges
patent: 7016841 (2006-03-01), Kenmochi
patent: 2004/0019485 (2004-01-01), Kobayashi
patent: 2004/0243413 (2004-12-01), Kobayashi
patent: 2005/0137880 (2005-06-01), Bellwood
Ian Simon, Sumit Basu, David Salesin, and Maneesh Agrawala. "Audio Analogies: Creating New Music from an Existing Performance by Concatenative Synthesis." In Proceedings of the International Computer Music Conference, Aug. 2005.
Diemo Schwarz. Data-Driven Concatenative Sound Synthesis. PhD Thesis in Acoustics, Computer Science, Signal Processing Applied to Music, Université Paris 6 (Pierre et Marie Curie), Jan. 20, 2004.
Sven Konig. sCrAmBlEd?HaCkZ! Website: Concept. Apr. 25, 2006. <http://web.archive.org/web/20060425220027/http://www.popmodernism.org/scrambledhackz/?c=1>.
Eliot Van Buskirk. Wired.com Commentary. Apr. 17, 2006. <http://www.wired.com/print/entertainment/music/commentary/listeningpost/2006/04/70664>.
Schwarz, D. “A system for data-driven concatenative sound synthesis”, DAFX00 Proceedings, Verona (It), Dec. 7-9, 2000.
Zils, A., F. Pachet, “Musical Mosaicing” Proceedings of DAFX 01, Limerick (Ireland), 2001.
Diemo Schwarz. “New developments in data-driven concatenative sound synthesis.” In Proc. Int. Computer Music Conference, 2003.
Diemo Schwarz. “The Caterpillar System for Data-Driven Concatenative Sound Synthesis.” DAFX03 Proceedings, London, UK, Sep. 8-11, 2003.
Diemo Schwarz. Current Research in Concatenative Sound Synthesis. International Computer Music Conference (ICMC). Barcelona, Sep. 2005.
D. Schwarz, G. Beller, B. Verbrugghe, S. Britton. "Real-Time Corpus-Based Concatenative Synthesis with CataRT," 9th International Conference on Digital Audio Effects (DAFx), Montreal, 2006.
Schwarz, Diemo. “Concatenative sound synthesis: The early years” Journal of New Music Research 35.1 (Mar. 2006).
Arcos, J.; De Mantaras, R.; and Serra, X. "SaxEx: A Case-Based Reasoning System for Generating Expressive Musical Performances," Journal of New Music Research 27(3), pp. 194-210, 1998.
Derenyi, I., and Dannenberg, R. "Synthesizing Trumpet Performances." In Proceedings of the International Computer Music Conference. San Francisco: International Computer Music Association, pp. 490-496, 1998.
Hertzmann, A.; Jacobs, C.; Oliver, N.; Curless, B.; and Salesin, D. "Image Analogies." In Eugene Fiume, editor, SIGGRAPH 2001, Computer Graphics Proceedings, pp. 327-340. ACM Press / ACM SIGGRAPH, 2001.
Jojic, N., Frey, B., and Kannan, A. "Epitomic Analysis of Appearance and Shape." In Proceedings of the International Conference on Computer Vision (ICCV), 2003.
Orio, N., and Schwarz, D. “Alignment of Monophonic and Polyphonic Music to a Score,” in Proceedings of the ICMC, Havana, Cuba, 2001.
Raphael, C., Automatic Segmentation of Acoustic Musical Signals Using Hidden Markov Models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(4):360-370, 1999.
Roucos, S., and Wilgus, A., “High Quality Time-Scale Modification for Speech.” In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, pp. 493-496. IEEE, 1985.
Jehan, T., "Creating Music by Listening," PhD Thesis, MIT, 2005. http://web.media.mit.edu/~tristan/Papers/PhD_Tristan.pdf.
Beller, G., Schwarz, D., Hueber, T., and Rodet, X., "A Hybrid Concatenative Synthesis System on the Intersection of Music and Speech," in Journées d'Informatique Musicale, Jun. 4, 2005. http://mediatheque.ircam.fr/articles/textes/Beller05c/.
The Singing Synthesis Software VOCALOID, http://www.vocaloid.com/en/introduction.html, Accessed Mar. 29, 2006.
Cantor: The vocal machine, http://www.virsyn.de/en/E_Products/E_CANTOR/e_cantor.html, Accessed Mar. 29, 2006.
