
Conference Brings Top Music and Computer Science Researchers to Northwestern

Midwest Music Information Retrieval Gathering (MMIRG) brought researchers, students, and industry professionals together at Northwestern.

Time and again, professor Bryan Pardo has encountered some of the Midwest’s top music researchers at international conferences in Italy, Germany, and France.

First, it struck Pardo as odd that ambitious, like-minded colleagues traveled across the Atlantic to connect. Then, he thought it unfortunate. 

In 2011, Pardo launched the inaugural Midwest Music Information Retrieval Gathering (MMIRG), a one-day gathering of music information and audio processing researchers, students, and industry professionals. After a three-year hiatus, the MMIRG returned to Northwestern’s Evanston campus on June 14, bringing its confluence of music, computation, and audio to the Ford Motor Company Engineering Design Center.

“We have the ability to build research and industry links right here in the Chicago area and the Midwest, which has so much great energy when it comes to music,” said Pardo, an associate professor of electrical engineering and computer science and of music theory and cognition.

“I wanted to assemble local people in sound, music, and academia to discover the synergies that can happen closer to our respective home bases,” Pardo said.

A focus on innovation

The daylong event featured 16 presentations from professors, PhD candidates, and industry innovators. Highlights included:

  • Zhiyao Duan (PhD ’13) presented ongoing research detailing the shortcomings of data-driven approaches to automatic music transcription, which struggle to account for octave and polyphony errors as well as deep musical structures. Duan, now an assistant professor at the University of Rochester heading that institution’s Audio Information Research Lab, is currently working to incorporate music knowledge into automatic music transcription, which he claims can have a broad impact on music education.
  • James Symons, a Northwestern PhD candidate in music theory and cognition, introduced research aiming to get computers to learn and understand musical patterns relevant to human listeners. He identified 28 patterns that can be combined “like a jigsaw puzzle.”
  • Zafar Rafii offered a glimpse at his early work creating audio fingerprints and a matching system that can identify live songs. A PhD candidate in electrical engineering and computer science, Rafii hopes to develop a system that can address the audio degradations and variations that currently plague live music identification.
  • Mark Cartwright, another Northwestern PhD student, described his ongoing efforts to design a more user-friendly, accessible audio production interface. His SynthAssist project, first-of-its-kind software, allows users to communicate audio concepts with evaluative feedback or “soft examples,” such as vocal imitation or pre-recorded sound, to overcome the barriers of complex synthesizer interfaces.
  • Soundslice co-founder Adrian Holovaty, a software developer and musician, introduced his online startup, which makes sheet music and music education more interactive. Currently syncing music notation with audio recordings, Holovaty hopes to transition from a largely manual process to an automated approach that will grow Soundslice into a powerful force in online music education.
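The audio fingerprinting work mentioned above builds on a well-known general idea: reduce each recording to a compact set of hashes computed from spectral peaks, then identify a query clip by counting hash matches against a database of known songs. The toy sketch below illustrates that general fingerprint-and-match pattern on synthetic signals; it is an illustration of the concept, not Rafii’s system, and the constants (frame size, hop, the peak-pair hash) are simplifying assumptions.

```python
# Toy illustration of spectral-peak fingerprinting and matching
# (a simplified sketch of the general idea, not Rafii's method).
import numpy as np

def fingerprint(signal, frame_size=1024, hop=512):
    """Hash the dominant frequency bins of consecutive frame pairs."""
    hashes = set()
    prev_peak = None
    for start in range(0, len(signal) - frame_size, hop):
        frame = signal[start:start + frame_size]
        spectrum = np.abs(np.fft.rfft(frame * np.hanning(frame_size)))
        peak = int(np.argmax(spectrum))        # strongest frequency bin
        if prev_peak is not None:
            hashes.add((prev_peak, peak))      # a pair of peaks = one hash
        prev_peak = peak
    return hashes

def identify(query, database):
    """Return the song whose fingerprint shares the most hashes with the query."""
    q = fingerprint(query)
    return max(database, key=lambda name: len(q & database[name]))

# Build a tiny database of two synthetic "songs" (sums of sine tones).
rate = 8000
t = np.arange(rate * 2) / rate
song_a = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)
song_b = np.sin(2 * np.pi * 330 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)
database = {"song_a": fingerprint(song_a), "song_b": fingerprint(song_b)}

# A noisy one-second excerpt of song_a should still match song_a,
# mimicking the degradations that make live identification hard.
rng = np.random.default_rng(0)
query = song_a[rate // 2: rate * 3 // 2] + 0.1 * rng.normal(size=rate)
print(identify(query, database))  # song_a
```

Real systems replace the single peak per frame with constellations of time-frequency landmarks and time-offset voting, precisely to survive the noise, reverberation, and tempo variations of live performance that Rafii’s research targets.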

The event also featured eight research posters showcasing cutting-edge work, such as crowdsourcing a reverberation concept map, piano transcription using multi-frame spectrogram factorization, and the design and added value of electric guitar and MIDI controller integration.
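Spectrogram factorization, the technique behind the piano transcription poster, rests on a simple idea: approximate a magnitude spectrogram V (frequency × time) as a product of note templates W and note activations H, so that rows of H indicate when each note sounds. The sketch below shows a generic non-negative matrix factorization (NMF) with multiplicative updates on a synthetic spectrogram; it illustrates the underlying concept only, not the multi-frame method presented at the poster session.

```python
# Generic NMF sketch: V (freq x time) ~= W (templates) @ H (activations).
# Illustrates the idea behind spectrogram-factorization transcription,
# not the specific multi-frame method from the poster.
import numpy as np

def nmf(V, n_components, n_iter=500, seed=0):
    """Multiplicative-update NMF minimizing squared reconstruction error."""
    rng = np.random.default_rng(seed)
    n_freq, n_time = V.shape
    W = rng.random((n_freq, n_components)) + 1e-3
    H = rng.random((n_components, n_time)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # update templates
    return W, H

# Synthetic "spectrogram": two notes with distinct spectra, sounding in turn.
note1 = np.array([1.0, 0.0, 0.5, 0.0])  # spectral template of note 1
note2 = np.array([0.0, 1.0, 0.0, 0.5])  # spectral template of note 2
V = np.outer(note1, [1, 1, 0, 0]) + np.outer(note2, [0, 0, 1, 1])

W, H = nmf(V, n_components=2)
print(np.round(W @ H, 2))  # closely reconstructs V
```

For real piano audio, each learned column of W tends to capture one note’s harmonic spectrum, and thresholding the corresponding row of H yields note onsets and durations, which is the transcription.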

Sparking collaboration

The MMIRG conference concluded with a town hall-style meeting on strategies for building a regional community of music researchers, which Pardo cited as one of the event’s principal objectives.

“We have a vibrant local community here and we should all be talking to one another much more,” Pardo said.

As one example, Pardo pointed out three specific attendees: one who creates musical notations by hand; another who has written software to synchronize music with notation; and still another who does optical music recognition.

“These are individuals who are not working together right now but have the potential to leverage each other’s talents to create something awesome,” Pardo said. “That’s the type of collaboration I hope this event sparks.”