The Lonely Hermit and ChatGPT

MSAI director Kristian Hammond talks about what one answer from the revolutionary new technology says about the future of artificial intelligence.

MSAI director Kristian Hammond was experimenting with ChatGPT, the chatbot developed by OpenAI to interact in a conversational way, when he asked it to write a story about a hermit who realized he was lonely.  

ChatGPT responded with a tale about a hermit who was sitting outside his cave when a wise old man came by, the first human being the hermit had seen in 10 years. The hermit said to the wise old man, “I am a hermit. I do not want to be around people. How do I deal with this intense loneliness I’m feeling?”  

The wise old man responded: “Join some community groups. That’s where you’ll meet people, and you won’t be lonely anymore.”  

“If you’re talking to a guy who’s living in a cave who hasn’t interacted with a human being in 10 years, that’s a bizarre suggestion,” Hammond said. “But from a statistical point of view, it’s fantastic. Joining community groups is where you can meet people and deal with loneliness.”  

ChatGPT has made headlines recently for the billions Microsoft has invested in OpenAI and the arduous exams it has passed with flying colors – including an MBA exam from the University of Pennsylvania’s Wharton School, the United States Medical Licensing Exam, and exams from four law school courses at the University of Minnesota.  

So far, it can do anything from writing inspiring music and thoughtful essays to debugging code – and a whole lot in between.  

But what about the nonsensical answer to the lonely hermit? 

That answer illustrates both the abilities and the limitations of artificial intelligence as it stands today, Hammond said.   

“It’s a really powerful technology in that you can get it started talking about a topic and it will give you the words that statistically are the most reasonable words given that topic or context,” he said. “But there are some issues with that as it sometimes confuses statistics with truth.”   

That confusion, Hammond believes, is more noteworthy than how many exams ChatGPT can pass. From his perspective, ChatGPT marks the beginning of the end of a long-standing conflict in the AI world between statistics, the data that underpins ideas, and semantics, the nuanced meaning behind words and ideas.   

Large language models such as ChatGPT thrive on information that is already written down and widely available. For example, Hammond said, the fact that it passed a bar exam isn’t necessarily revolutionary, because the information needed to do so exists in a variety of places and formats.   

“People were saying, ‘This is a threat to lawyers,’” Hammond said. “I’m pretty sure that with Google, I can pass a multistate bar exam. You look at the question, you look things up, you find the answer, you put it in. That doesn’t mean you’re a lawyer.” 

Being a lawyer, Hammond said, takes the ability to reason and to cite specific cases in support of an argument, something ChatGPT doesn’t do well — yet. As revolutionary as ChatGPT might be, it still relies on being fed enough data to develop statistical probabilities for what words should come next. That makes it vulnerable when no fact-checking is applied to its output.   
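
To make that point concrete, the sketch below shows the word-by-word statistical machinery in miniature. It uses the open-source Hugging Face transformers library with GPT-2 as a stand-in model; both are assumptions made for illustration, not the system behind ChatGPT.

```python
# A minimal sketch of next-word prediction, the statistical machinery Hammond
# describes. GPT-2 and the Hugging Face transformers library are stand-ins
# chosen for illustration, not the model behind ChatGPT.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The hermit realized he was lonely, so he decided to"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # one score per vocabulary token, per position

# Turn the scores at the final position into probabilities for the next word.
next_word_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_word_probs, k=5)

# The top suggestions are simply the statistically most reasonable continuations
# of the prompt; nothing here checks whether any of them is true.
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r:>12}  p={prob.item():.3f}")
```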

For example, Hammond asked ChatGPT to explain the MSAI program, and it did what he called “an OK job.” Its biggest mistake: because most computer science master’s programs are two years long, it said MSAI is a two-year program when it actually runs 15 months.  

“From a statistical point of view, it was right,” he said. “But in terms of the truth of the matter, it was wrong, which means you have to have something on the other end, after it’s generated, checking those facts and making sure it’s right.”  

That Hammond said “something” and not “someone” on the other end is indicative of where he believes technology such as ChatGPT could go. He believes computer programs will eventually resolve the conflict between data-driven statistics and semantics. 
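
What that checking “something” might look like is an open question. The toy sketch below is purely hypothetical: a post-generation validator that compares a model’s claim about program length against a small table of known facts. The fact table, the pattern matching, and the flag format are all illustrative assumptions.

```python
# A hypothetical sketch of the "something on the other end" Hammond describes:
# a post-generation check that compares generated text against known facts.
# The fact table, the pattern matching, and the flag format are illustrative
# assumptions, not a description of any real system.
import re

KNOWN_FACTS = {
    "msai_length_months": 15,  # ground truth: MSAI is a 15-month program
}

def check_program_length(generated_text: str) -> str:
    """Flag the statistically likely but false claim that MSAI takes two years."""
    months = KNOWN_FACTS["msai_length_months"]
    if re.search(r"two[- ]year", generated_text, re.IGNORECASE):
        return generated_text + f"\n[flagged: the program runs {months} months, not two years]"
    return generated_text

# The model's answer is statistically reasonable but factually wrong;
# the checker catches it only after the text has been generated.
print(check_program_length("MSAI is a two-year master's program at Northwestern."))
```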

“ChatGPT is a mechanism for bringing them together,” he said. “There is going to be a new kind of approach to AI that will flow from this. It won’t just be about ChatGPT. It’s going to be about how you steer, how you confirm, how you validate, and how you justify.”  
