Roger Dannenberg Imagines the Future of Music at CS+X Colloquium
Dannenberg expects human-computer music performances in the future
Having first learned to play the trumpet at age 11, Roger B. Dannenberg is a lifelong musician. But when he encountered a synthesizer in high school, his understanding of the art expanded well beyond traditional instrumentation.
“The idea that a sound could be constructed from components like an oscillation or instrument controller was an incredible concept,” Dannenberg said. “It’s something that’s inspired me my whole life.”
A professor of computer science, art, and music at Carnegie Mellon University, Dannenberg discussed—and demonstrated—the capabilities that arise from combining music and computation during his talk “Automated Music Listening and Understanding.” Part of the CS+X colloquium series, the Friday, June 13 event led up to the McCormick-hosted Midwest Music Information Retrieval Gathering conference.
“Music has always been a very technological field,” Dannenberg said. “Think about the metallurgy that went into building brass instruments or the mechanical engineering that went into a cast-iron-reinforced grand piano. There’s a lot of technology in music going back at least 8,000 years, so it’s no surprise that people combine music and computation.”
From synthesizers and drum machines to Napster and iPods, computers play a major role in music recording, editing, and distribution. Services like Shazam and Pandora can even identify songs and make music recommendations for the listener. Dannenberg said these technologies make music more fun, more available, and more personal than ever.
In his own work, Dannenberg has taken computation even further. He has created programs that allow computers to interact with musicians by accompanying them, following a performer even during improvisational pieces. One of his creations is the SmartMusic System, interactive music education software used by more than 100,000 music students. He also played a central role in developing Piano Tutor and Rock Prodigy, interactive multimedia music education systems.
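At its core, computer accompaniment depends on score following: matching the stream of notes a performer plays against the written score so the computer knows where the musician is, even when notes are wrong, skipped, or added. The sketch below illustrates the idea with a simple offline dynamic-programming alignment over MIDI pitch numbers; it is an illustrative toy, not Dannenberg's actual real-time algorithm, and the costs and example data are invented for demonstration.

```python
# Toy score follower: align performed pitches (MIDI numbers) against
# a score with edit-distance-style dynamic programming, tolerating
# wrong, missed, and extra notes. Simplified and offline; the real
# accompaniment systems work incrementally in real time.

def follow_score(score, performance):
    """Return how many score notes the performance best accounts for,
    i.e., the score position the performer has most likely reached."""
    n, m = len(score), len(performance)
    INF = float("inf")
    # cost[i][j] = min cost of aligning performance[:j] ending at score[:i]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        cost[i][0] = 0  # the performance may begin anywhere in the score
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if score[i - 1] == performance[j - 1] else 1
            cost[i][j] = min(cost[i - 1][j - 1] + sub,  # played (right or wrong)
                             cost[i - 1][j] + 1,        # skipped a score note
                             cost[i][j - 1] + 1)        # played an extra note
    # best-matching ending position in the score for the whole performance
    return min(range(1, n + 1), key=lambda i: cost[i][m])

score = [60, 62, 64, 65, 67, 69, 71, 72]   # C major scale
performed = [64, 65, 68, 69]               # enters mid-scale, one wrong note
print(follow_score(score, performed))      # -> 6: next expected note is 71
```

Even with the wrong note (68 where the score has 67), the alignment places the performer six notes into the scale, so an accompanist would cue the next note accordingly.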
Dannenberg said his programs currently work best with chamber music but show little sophistication with popular genres such as jazz and rock 'n' roll. In the future, he said, computers will make great music by performing popular music with intelligence, and he predicts these programs will also be available to collaborate with groups of musicians missing a member.
“Suppose you want to get together with some friends and play music, but you’re missing the bass player,” Dannenberg suggested. “What if you could download a bass player for that afternoon? That’s the dream, but we have a lot to do to realize that dream.”
Dannenberg also shared a new technology he’s developing that can automate the process of mixing and editing recorded music. The Intelligent Audio Editor listens to the recording and fixes mistakes to smooth out the sound. He also shared a program that uses machine learning to understand the style of music being played. For example, when playing jazz, the computer can recognize if the style is lyrical, pointillistic, frantic, or syncopated.
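The style-recognition idea can be made concrete with a small sketch: extract a few simple features from a short window of notes (how dense they are, how long they last) and label the window with the nearest style prototype. The centroids and thresholds below are invented for illustration; the research systems learn such boundaries from training data rather than hand-coding them, and recognizing syncopation would additionally require beat-relative timing features.

```python
# Illustrative style classifier: hand-picked features plus a
# nearest-centroid rule. The centroid values are hypothetical,
# not taken from Dannenberg's trained models.

def extract_features(notes):
    """notes: list of (onset_sec, duration_sec) pairs for one window.
    Returns (notes_per_second, mean_duration_sec)."""
    if not notes:
        return (0.0, 0.0)
    span = max(on + dur for on, dur in notes) - min(on for on, _ in notes)
    density = len(notes) / span if span > 0 else float(len(notes))
    mean_dur = sum(dur for _, dur in notes) / len(notes)
    return (density, mean_dur)

# Hypothetical style prototypes in (density, mean duration) space.
CENTROIDS = {
    "lyrical":       (2.0, 0.50),   # few, long, connected notes
    "pointillistic": (3.0, 0.08),   # sparse, very short notes
    "frantic":       (10.0, 0.08),  # many short notes in quick succession
}

def classify(notes):
    """Label a window of notes with the nearest style centroid."""
    d, m = extract_features(notes)
    return min(CENTROIDS,
               key=lambda s: (CENTROIDS[s][0] - d) ** 2 + (CENTROIDS[s][1] - m) ** 2)

lyrical_window = [(0.0, 0.45), (0.5, 0.45), (1.0, 0.45), (1.5, 0.45)]
frantic_window = [(i * 0.1, 0.05) for i in range(20)]
print(classify(lyrical_window))   # -> lyrical
print(classify(frantic_window))   # -> frantic
```

A real system would replace the hand-set centroids with a classifier trained on labeled performances, but the pipeline shape—features in, style label out—is the same.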
Despite these advances in music computation, Dannenberg was careful to point out that technology will never replace real musicians; instead, it will help them learn instruments, practice, and have fun.
“It should enrich the musical experience,” he said. “Rock Prodigy teaches people how to play the guitar, so they can get off Guitar Hero and onto the real instrument. And enjoy that physical mastery of actually playing something.”