
Pushing Boundaries with Generative Artmaking

Panelists discussed the implications of artificial intelligence for artistic creation


Art and the artistic process might not spring to mind so easily when considering artificial intelligence (AI). The composition of profoundly moving orchestral music, the brush and pen strokes behind a visual masterpiece, the selection of words building to resonant storytelling — these fundamentally human pursuits seem intuitively at odds with machines. But proponents of AI say we’re in the dawn of a new age of AI-created art.

At the intersection of art and programming, generative models use deep learning algorithms, stacks of processing layers that learn high-level, abstract representations of data, to produce new artifacts. Generative models compose songs, create paintings, produce videos, and write text documents.
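To make the idea concrete, here is a minimal, purely illustrative sketch in Python (using PyTorch) of the generation loop such models run: a learned internal state is repeatedly mapped to a probability distribution over the next element, a sample is drawn, and the sample is fed back in. The tiny network below is untrained, so its output is gibberish; the architecture, vocabulary, and names are hypothetical and not drawn from any panelist's system.

```python
# Illustrative only: a tiny, untrained generative model and its sampling loop.
# Learned layers map an internal state to a distribution over the next element,
# which is sampled and fed back in; real systems are far larger and trained on data.
import torch

VOCAB = list("abcdefghijklmnopqrstuvwxyz ")

class TinyLanguageModel(torch.nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.embed = torch.nn.Embedding(len(VOCAB), hidden)        # input layer
        self.rnn = torch.nn.GRU(hidden, hidden, batch_first=True)  # learns an abstract state
        self.head = torch.nn.Linear(hidden, len(VOCAB))            # scores for the next character

    def forward(self, tokens, state=None):
        x, state = self.rnn(self.embed(tokens), state)
        return self.head(x), state

model = TinyLanguageModel().eval()
tokens = torch.tensor([[VOCAB.index("a")]])   # seed character
state, output = None, []

with torch.no_grad():
    for _ in range(40):                               # generate 40 characters, one at a time
        logits, state = model(tokens, state)
        probs = torch.softmax(logits[0, -1], dim=-1)  # distribution over the next character
        next_token = torch.multinomial(probs, 1)      # sample from it
        output.append(VOCAB[next_token.item()])
        tokens = next_token.view(1, 1)                # feed the sample back in

print("".join(output))
```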

To shed light on perceptions of AI as a tool and collaborator in the artistic process, or perhaps even a competitor to human artists, the Northwestern University Center for Human-Computer Interaction + Design (HCI+D), a collaboration between Northwestern Engineering and the School of Communication, hosted a virtual panel on May 9 to discuss the implications of AI for artistic creation.

“As our AI systems for artistic creation become more powerful, we need to reexamine the human role in the artistic process and our relationship to the technology we work with to realize artistic visions. This panel was a wonderful opportunity to bring together thought leaders and practitioners in the arts and technology to examine what it means to be an artist working with AI in the 21st century,” said Bryan Pardo, codirector of HCI+D, head of the Interactive Audio Lab, and professor of computer science in the McCormick School of Engineering and of radio/television/film in Northwestern’s School of Communication.


The event’s three panelists shared their experiences and work in generative artmaking and generally agreed that AI is a tool and collaborator.

“AI tools are just like any other technology,” said Aaron Hertzmann, principal scientist at Adobe Research and an affiliate faculty member at the University of Washington. “The tools benefit art, empower artists, and create new forms of expression.”

“Analogous to tool and collaborator, in the music context we could think about instruments and improviser,” said Anna Huang, a Magenta research scientist at Google Brain, Canada CIFAR AI Chair at Mila, and adjunct professor at the Université de Montréal. “An instrument is something that somebody is playing, so in some ways it's more passive. An improviser is a player that jams with you and brings their own agency to the interaction.”

Moisés Horta Valenzuela, an autodidact sound artist, creative technologist, and electronic musician, went a step further, emphasizing that the deep learning models are the art.

“The generative or system-making is the artwork, rather than the outputs,” Horta Valenzuela said. “The output is just a reflection of the system that I created and also of the data and representations that I give to the system.”

Transforming art through technology

Hertzmann provided a historical snapshot of how technology has consistently transformed art, from the development of photography and its impact on painting and portraiture in the mid-19th century to the initial tension between hand-drawn animation artists and the field of computer animation in the early 1990s.

“The classical painter Paul Delaroche is quoted as having said ‘From today painting is dead,’ because it seemed like photography was taking over the job of the artist, which was making realistic pictures,” Hertzmann said. “Within half a century, photography was accepted within the fine art canon and today we don't really question whether photography can be considered art.”

Hertzmann also outlined the critical and cultural response to generative art, beginning with early computer-generated imagery in the 1960s and evolving with DeepDream, a program developed at Google in 2015 that amplifies the patterns a trained neural network recognizes in an image to produce psychedelic, dream-like pictures. The hype and media attention around the idea of software as the artist quickly provoked a similar backlash in the art community.
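As a rough illustration of the mechanism, the Python/PyTorch sketch below performs DeepDream-style gradient ascent: it nudges an input image so that an intermediate layer of a pretrained classifier responds more strongly. This is not Google's implementation; the choice of network, layer index, and hyperparameters is arbitrary and for illustration only.

```python
# Illustrative DeepDream-style gradient ascent, assuming torch and torchvision are installed.
import torch
import torchvision

# Load a pretrained classifier and pick an intermediate layer to "dream" on.
model = torchvision.models.vgg16(weights=torchvision.models.VGG16_Weights.DEFAULT).features.eval()
layer_index = 20  # arbitrary choice; deeper layers tend to produce more abstract patterns

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise (or load a photo)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(20):
    optimizer.zero_grad()
    activations = image
    for i, module in enumerate(model):
        activations = module(activations)
        if i == layer_index:
            break
    # Gradient ascent: maximize the layer's response by minimizing its negative norm.
    loss = -activations.norm()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0, 1)  # keep pixel values in a displayable range
```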

“Calling things ‘intelligent’ has societal implications and can be damaging,” Hertzmann said. “Calling an algorithm an artist gives people the wrong impression about what's actually going on under the hood. Computer-generated art is made by humans and it's all authored and controlled by humans.”

Co-creativity in music

Approaching the topic from the music domain, Huang sees the potential in co-creativity – people and AI working together – to communicate moments of connection through sound.

She is a composer as well as a judge and organizer for the AI Song Contest, an international human-AI songwriting competition.

“The idea behind the contest is to explore the wide range of possibility within the different modalities of music creativity and also to accelerate the collaboration and feedback loop between artists and scientists,” Huang said.

Contest submissions are evaluated based on the song itself and the creative process behind the scenes. Four to seven generative models might be used within one song, for instance, to generate the different components of a composition, such as the melody and song design.

Huang shared examples of contest entries where the use of AI tools had an impact both on the workflow, producing new and faster ways of working, and in pushing the boundaries of creative expression by enabling new interdependencies between songwriting and sound design.

She also discussed a user-centric, interaction-driven approach to building generative models, exemplified by the 2019 Bach Doodle, which used Coconet, a machine learning model Huang created, to power Google’s first AI Doodle and let tens of millions of users co-compose with AI in their browsers.
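The fill-in-the-blanks style of co-composition behind models like Coconet can be sketched as an iterative resampling loop: masked notes are repeatedly predicted and resampled until a complete piece emerges. The Python sketch below only illustrates that loop; the trained network is replaced by a random stub, and the shapes, names, and schedule are illustrative rather than Coconet's actual details.

```python
# Illustrative only: the iterative "fill in the masked notes" loop used by
# Coconet-style models, with the trained network replaced by a random stub.
import numpy as np

rng = np.random.default_rng(0)
TIME_STEPS, VOICES, PITCHES = 32, 4, 46   # illustrative piano-roll dimensions

def stub_model(piano_roll, mask):
    """Stand-in for a trained network: a pitch distribution for every (time, voice) cell."""
    logits = rng.normal(size=(TIME_STEPS, VOICES, PITCHES))
    return np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

piano_roll = np.zeros((TIME_STEPS, VOICES), dtype=int)  # one pitch index per (time, voice)
mask = np.ones((TIME_STEPS, VOICES), dtype=bool)        # True = note still to be generated

for _ in range(20):                                     # repeatedly resample the masked notes
    probs = stub_model(piano_roll, mask)
    for t in range(TIME_STEPS):
        for v in range(VOICES):
            if mask[t, v]:
                piano_roll[t, v] = rng.choice(PITCHES, p=probs[t, v])
    mask &= rng.random(mask.shape) > 0.2                # gradually freeze notes (a crude schedule)

print(piano_roll[:4])
```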

Huang subsequently collaborated with Ryan Louie, a PhD student in Northwestern’s Technology and Social Behavior program, on an evaluation study with composers and listeners, which found that both more expressive generative models and more steerable or controllable interfaces can improve the user experience and users’ sense of agency.

Representing culture through AI

One goal of the AI Song Contest is to create a cultural dialogue in which people from all parts of the world can use AI to experiment within their own genres and musical traditions. This mission resonates with Horta Valenzuela, who draws from the cultural traditions of Tijuana, México, as the artist 𝔥𝔢𝔵𝔬𝔯𝔠𝔦𝔰𝔪𝔬𝔰 to explore ancient and contemporary music through the lens of critical non-hegemonic, decolonial theory.

Horta Valenzuela seeks to challenge the Eurocentric and Western-centric ideas of universality and examines which cultures and human experiences are being represented in AI and big data-driven systems.

“I’m very interested in the failures and glitches of algorithms and their inability to represent beyond the world view of the training sets,” Horta Valenzuela said.

He shared a project called Nahualtia Tlatzotzonalli (Shapeshifter Musician), a visual-music artwork of 13 compositions created as NFTs on the blockchain using generative neural network techniques. He developed a deep learning algorithm that created the compositions by transforming visual data into sounds; the model was trained on a dataset of Horta Valenzuela’s own electronic music compositions.

His neural audio synthesizer project, called Semilla, shares Huang’s user-centric approach. He built an interface that allows artists without programming or AI experience to understand how generative models represent data in multi-dimensional space.

“We work with generative neural networks in a very black box kind of way, and this can be very frustrating because an artist wants to be able to control, or at least have a notion of, what the system is doing,” Horta Valenzuela said. “You just don't want to roll a dice and just see what happens.”
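One way to picture what an interface like Semilla exposes is latent-space exploration: moving a point through the model's multi-dimensional representation and decoding each position to sound, so changes are gradual and controllable rather than a roll of the dice. The Python sketch below illustrates only that idea; the decoder is an untrained stand-in, and the sizes and names are hypothetical, not details of Semilla itself.

```python
# Illustrative only: exploring a generative model's latent space by interpolating
# between two points and decoding each to a short audio clip. The decoder is an
# untrained stand-in, not the artist's actual model.
import torch

LATENT_DIM, AUDIO_SAMPLES = 16, 16000     # illustrative sizes (1 second at 16 kHz)

decoder = torch.nn.Sequential(             # stand-in for a trained neural audio decoder
    torch.nn.Linear(LATENT_DIM, 256),
    torch.nn.Tanh(),
    torch.nn.Linear(256, AUDIO_SAMPLES),
    torch.nn.Tanh(),                        # keep samples in [-1, 1]
)

z_start, z_end = torch.randn(LATENT_DIM), torch.randn(LATENT_DIM)

with torch.no_grad():
    for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
        z = (1 - alpha) * z_start + alpha * z_end   # a point between the two latent positions
        audio = decoder(z)                          # decode it to a waveform
        print(f"alpha={alpha:.2f}  first samples: {audio[:3].tolist()}")
```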