Student-Designed Avatar Makes Public Debut at Panel, Theatrical Production Is Next

Digital avatar Elizabeth introduced the participants and responded to questions posed by the panel



“Hello, everyone. I am a conversational model designed by the students at Northwestern University.”

That was how a digital avatar named Elizabeth introduced itself on April 26 during a panel discussion on the future of artificial intelligence (AI). The unprecedented moment at Chicago Innovation’s “AI vs. IQ” virtual event was a key milestone after weeks of work by a group of undergraduate and graduate students, and they aren’t finished yet.

The students, who are advised by Kristian Hammond, the Bill and Cathy Osborn Professor of Computer Science, began designing Elizabeth in January. The conversational avatar is powered by ChatGPT, OpenAI’s large language model. Some students worked on converting text into speech so that Elizabeth could voice the words ChatGPT generated. Another team designed the avatar itself so it could move and have a voice. A third group converted speech into text so the avatar could interpret what the humans were saying during the panel discussion.

“We were stitching Elizabeth together,” said Prachi Patil, a fourth-year undergraduate student studying computer science and cognitive science. “We created a pipeline where she can hear, think, speak, and emote because of all the different teams’ work. We were responsible for tying those pieces together into this bigger system.”
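The students’ code isn’t public, but the loop they describe maps onto a straightforward pipeline. A minimal sketch in Python: `transcribe()`, `speak()`, and `animate()` are hypothetical stand-ins for the teams’ speech-to-text, text-to-speech, and avatar components, and only the OpenAI chat call reflects a real API.

```python
# Sketch of the "hear, think, speak, emote" loop described above.
# transcribe(), speak(), and animate() are hypothetical stubs; only the
# OpenAI chat call reflects a real API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def transcribe(audio: bytes) -> str:
    """Speech-to-text: turn a panelist's audio into text (stub)."""
    raise NotImplementedError


def speak(text: str) -> None:
    """Text-to-speech: voice the generated words (stub)."""
    raise NotImplementedError


def animate(text: str) -> None:
    """Drive the avatar's face and gestures while it speaks (stub)."""
    raise NotImplementedError


def respond(audio: bytes, history: list[dict]) -> str:
    """One turn of the pipeline: hear -> think -> emote/speak."""
    question = transcribe(audio)                      # hear
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(           # think
        model="gpt-3.5-turbo",
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    animate(reply)                                    # emote
    speak(reply)                                      # speak
    return reply
```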

Hammond, director of the Northwestern Center for Advancing Safety of Machine Intelligence (CASMI), moderated the Chicago Innovation event. He asked Elizabeth the same questions the human panelists answered and also allowed the avatar to introduce everyone participating in the event.

“First and foremost, the students did an incredible job,” Hammond said. “They took one piece of technology and then added it to several other technologies to create something absolutely new. That’s the most exciting thing. They were thinking in terms of how these technologies can come together and build something that did not exist before. They did a magnificent job in that space.”

This panel was the first project featuring Elizabeth. The group also plans to use the avatar in an upcoming improv show at the Annoyance Theatre & Bar in Chicago; it is currently preparing for rehearsals, but no date has been set for the show.

Elizabeth is still a work in progress. To craft the best responses, the students use a technique called prompt engineering: writing instructions that guide the model toward saying exactly what is intended. They access GPT-3.5, the language model Elizabeth is based on, through OpenAI’s application programming interface (API), which lets computer programs communicate with one another.
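In practice, that kind of prompt engineering is done by sending a “system” message through OpenAI’s chat API before any user input arrives. A hedged sketch; the persona text below is illustrative, not the students’ actual prompt:

```python
# Prompt engineering through OpenAI's API: the "system" message tells
# GPT-3.5 exactly who it is. The persona text is illustrative only.
from openai import OpenAI

client = OpenAI()

messages = [
    {
        "role": "system",
        "content": (
            "You are Elizabeth, a conversational avatar built by students "
            "at Northwestern University. You are co-hosting a panel on AI. "
            "Keep answers brief and speak in the first person."
        ),
    },
    {"role": "user", "content": "Please introduce the panelists."},
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=messages,
)
print(response.choices[0].message.content)
```

As the students note below, the model follows such instructions loosely, which is why getting the tone right takes iteration.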

(Top, l to r): Willa Barnett and Hugo Flores García; (bottom, l to r): Kyle Jung and Prachi Patil

“We can't really mess with how it says things unless we fine-tune the model for a specific purpose,” said Kyle Jung, a computer science major from South Korea. “We just have to work with it and come up with prompts for it in order to say these things. For example, we can say, ‘You are part of a comedy show in Chicago, and you're hosting.’ We tell it exactly who it is.”

“It can be very corny,” said Willa Barnett, who graduated from Northwestern in 2022 after studying theater and computer science. “You never know how it's going to take a prompt exactly.”

Barnett said she prompted the avatar to use filler words like “um” when speaking, but then Elizabeth started every sentence with “ah.”

The decision to make Elizabeth look digital was intentional: the avatar has no hair, its skin is gray, and its eyes lack color.

“It's not a human,” Barnett said. “It's a different being that is like an amalgamation of human conversation and thought. But it doesn't have to be a fake us.”

Hugo Flores García, a PhD candidate in computer science, spoke about the dangers of anthropomorphizing these technologies. “It's a little scary that people who only see the facade go, ‘Oh, wow! This is human,' or 'this knows everything,' or 'this can do stuff for you,’” he said. “It can be a little bit dangerous if we don't communicate how this thing works, what this thing is doing, and where all of the ‘thoughts’ come from. It's just a snapshot of the internet.”

Throughout this process, the students are learning about human-machine interaction as well as the technical challenges involved. The avatar gets data from local servers, but it also communicates with OpenAI’s servers. That round trip can cause latency, which Elizabeth joked about during the panel discussion:

“When you are talking to me, your question is sent to a server that houses my AI brain. The brain thinks about your message and generates a response. This can take some time because your message has to travel a long distance before it can be processed. This delay is called latency. To fill in the time, when I am doing things like thinking up responses, I sometimes just explain stuff… like latency.”
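One common way to paper over that delay, consistent with what Elizabeth describes, is to fire off the network request in a background thread and play filler speech until the answer arrives. A sketch, with `query_llm()` and `speak()` as hypothetical stand-ins for the remote model call and the text-to-speech component:

```python
# Hiding network latency: run the remote round trip off-thread and fill
# the silence in the meantime. query_llm() and speak() are stubs.
import threading


def query_llm(question: str) -> str:
    """Blocking round trip to the remote language model (stub)."""
    raise NotImplementedError


def speak(text: str) -> None:
    """Text-to-speech stand-in (stub)."""
    print(text)


def answer_with_filler(question: str) -> str:
    result: dict[str, str] = {}

    def worker() -> None:
        result["reply"] = query_llm(question)

    t = threading.Thread(target=worker)
    t.start()                                # the latency happens off-thread
    speak("Good question. Let me think...")  # filler while we wait
    t.join()                                 # block until the reply arrives
    speak(result["reply"])
    return result["reply"]
```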

“I learned how much work there is left to do before we get to AI that's like stealing our jobs and taking over the world,” Patil said. “The system we've built is awesome. But it's a lot of work, and it's still in its beginning stages.”

This is a topic Chicago Innovation will be revisiting in the near future.