
Teaching a Computer with Natural Language

Companion Cognitive Systems are capable of human-like learning.

Someday we might be able to build software for our computers simply by talking to them.

Ken Forbus, Walter P. Murphy Professor of Electrical Engineering and Computer Science, and his team have developed a program that lets them teach computers as you would teach a small child: through natural language and sketching. Called Companion Cognitive Systems, the architecture is capable of human-like learning.

“We think software needs to evolve to be more like people,” Forbus says. “We’re trying to understand human minds by trying to build them.”

Forbus has spent his career working in artificial intelligence, creating computer programs and simulations in an attempt to understand how the human mind works. At the heart of the Companions project is the claim that much of human learning is based on analogy. When presented with a new problem or situation, the mind retrieves similar past experiences and uses them to decide how to act. This lets us build on what we already know, continually adapting and learning.
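
To make the analogy idea concrete, here is a minimal Python sketch of recalling a similar past experience. It is an illustration, not Forbus's system: the facts, predicates, and scoring function are invented for this example, and the Companion's actual matcher, the Structure-Mapping Engine, aligns relational structure rather than merely counting shared relations.

```python
# Toy sketch of analogy-based recall: experiences are sets of
# relational facts; a new situation retrieves the most similar past
# case. All predicates and names here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Experience:
    facts: frozenset   # relational facts, e.g. ("marked", "X", "c1")
    action: str        # what was done in that situation

def relations(facts: frozenset) -> set:
    """A crude stand-in for structural matching: which relations hold."""
    return {fact[0] for fact in facts}

def similarity(case: Experience, situation: frozenset) -> float:
    shared = relations(case.facts) & relations(situation)
    return len(shared) / max(len(relations(situation)), 1)

def recall(memory: list, situation: frozenset) -> Experience:
    """Retrieve the most analogous past experience."""
    return max(memory, key=lambda case: similarity(case, situation))

memory = [
    Experience(frozenset({("marked", "X", "c1"), ("marked", "X", "c2"),
                          ("empty", "c3"), ("same-line", "c1", "c2", "c3")}),
               action="mark the open square to win"),
    Experience(frozenset({("marked", "O", "c5"), ("empty", "c1")}),
               action="take a corner"),
]

situation = frozenset({("marked", "X", "c4"), ("marked", "X", "c5"),
                       ("empty", "c6"), ("same-line", "c4", "c5", "c6")})

print(recall(memory, situation).action)  # -> "mark the open square to win"
# Mapping the recalled action onto the new squares (c3 -> c6) is the
# analogical-inference step this toy omits.
```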

In one line of experiments, Forbus and his team teach the Companion how to play games, including tic-tac-toe and Freeciv, a strategy game in which players build a civilization. For tic-tac-toe, the user starts by introducing the game using natural language, letting the Companion know it’s a two-player marking game. Next the user sketches the three-by-three game board. The user introduces the idea of players by saying things like “X is a player” and drawing an X. Game play is also described in natural language, such as “X goes first,” “X and O take turns marking empty squares,” and so on. Through demonstration, the user shows the Companion how to win, teaching more of the rules along the way.
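
As a hypothetical sketch of what such teaching might produce, each sentence from the tutor can be thought of as becoming a structured assertion the game engine can act on. The regex patterns and fact names below are invented for illustration; a Companion runs a genuine natural-language understanding pipeline, not keyword matching.

```python
# Toy sketch: turning the tutor's sentences from the article into
# structured game facts. The patterns and predicates are invented;
# they only illustrate the target representation.

import re

RULES = [
    (re.compile(r"(\w+) is a player"),          lambda m: ("player", m[1])),
    (re.compile(r"(\w+) goes first"),           lambda m: ("first-mover", m[1])),
    (re.compile(r"(\w+) and (\w+) take turns"), lambda m: ("alternating-turns", m[1], m[2])),
]

def understand(utterance: str):
    """Map a sentence onto a fact, or flag it as not yet understood."""
    for pattern, build in RULES:
        match = pattern.search(utterance)
        if match:
            return build(match)
    return ("unparsed", utterance)

for sentence in ["X is a player", "O is a player", "X goes first",
                 "X and O take turns marking empty squares"]:
    print(understand(sentence))
# ('player', 'X'), ('player', 'O'), ('first-mover', 'X'),
# ('alternating-turns', 'X', 'O')
```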

“A lot of our human-to-human communication is a way of establishing common ground,” Forbus says. “The sketch provides a piece of common ground. It’s a shared medium that both we and the software can use to illustrate what’s happening.”

Not only are these interactions simpler, they also allow for faster learning. Forbus says that analogy-based learning is much more rapid than today’s machine learning techniques. To learn a game, a Companion only needs a small number of examples. The examples provide context, so the Companion can narrow down the intent behind the communication. Then it knows what to expect and what to look for.
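
The sketch below illustrates the few-examples point under a strong simplifying assumption: that the demonstrations have already been aligned so that corresponding pieces share variable names (the alignment itself is the hard part, handled in Companions by analogical matching). Whatever survives every demonstration becomes the learned rule.

```python
# Toy sketch of learning from a handful of examples: facts shared by
# every demonstration of a "win" survive; incidental details drop out.
# Variables like "?c1" stand for already-aligned board squares.

def generalize(demos: list) -> set:
    """Keep only the facts that hold in every demonstration."""
    return set.intersection(*demos)

win_demos = [
    {("marked", "X", "?c1"), ("marked", "X", "?c2"), ("marked", "X", "?c3"),
     ("same-line", "?c1", "?c2", "?c3"), ("move-number", 5)},
    {("marked", "X", "?c1"), ("marked", "X", "?c2"), ("marked", "X", "?c3"),
     ("same-line", "?c1", "?c2", "?c3"), ("move-number", 7)},
]

print(generalize(win_demos))
# -> three X marks on the same line; the incidental move number falls away
```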

“It demonstrably gets better at building up a civilization with just a little bit of natural language advice,” says Forbus, referencing the Companion’s ability to play Freeciv. “By giving it just half a dozen sentences of explanation, it performs much better.”
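
As a purely hypothetical illustration of how a sentence of advice might bias an agent's choices, the sketch below maps a recognized phrase onto a scoring preference. The phrases, weights, and feature names are invented; this is not Freeciv's interface or the Companion's actual advice-taking mechanism.

```python
# Toy sketch: a sentence of advice becomes a bonus in action scoring.
# Every phrase, weight, and feature name here is made up for the example.

ADVICE_WEIGHTS = {}

def take_advice(sentence: str):
    """Map a known advice pattern onto a feature preference."""
    if "near rivers" in sentence:
        ADVICE_WEIGHTS["near_river"] = 2.0   # prefer river tiles
    if "defend your cities" in sentence:
        ADVICE_WEIGHTS["garrisoned"] = 1.5

def score(action_features: dict) -> float:
    """Base value plus bonuses from whatever advice has been given."""
    base = action_features.get("base_value", 0.0)
    bonus = sum(w for f, w in ADVICE_WEIGHTS.items() if action_features.get(f))
    return base + bonus

take_advice("Build cities near rivers")
print(score({"base_value": 1.0, "near_river": True}))  # 3.0 with advice applied
```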

In a world where “big data” has become a ubiquitous buzzword, people sift through mountains of data and run computer programs sometimes hundreds of thousands of times. “We’re the opposite extreme of that,” says Tom Hinrichs, research associate professor in electrical engineering and computer science. “We’re trying to replicate the sorts of interactions you would have with another person.”

Forbus and Hinrichs both stress that these lines of experiments are about much more than playing games. They are about building a computer that has a rich understanding of natural language, spatial reasoning, and sketching.

“Given the tremendous amount of expertise that normally goes into writing software, if we can teach our computers to do things by talking to them, then that’s a huge win,” Hinrichs says. “The end result is a running program that we can look at and understand.”