
Fluency vs. Fact: How Industries and Researchers are Navigating Generative AI

CASMI hosted a virtual discussion focused on how businesses are using large language models

Video: Watch the entire discussion, featuring Professors Kristian Hammond and Jessica Hullman.

Generative artificial intelligence (AI) is quickly becoming big business. Companies are investing billions of dollars in technologies that can generate text, photos, and videos. However, it's important to understand these technologies' capabilities and limitations, AI experts told the Center for Advancing Safety of Machine Intelligence (CASMI).

Pictured: Kristian Hammond (left) and Jessica Hullman

On April 3, CASMI hosted the virtual panel "The Harms and Benefits of Generative AI: Exploring the Differences Between Fluency and Fact." The discussion focused on how businesses are using large language models like ChatGPT, how researchers are exploring the next generation of generative tools, and how they are investigating the potential harms these systems could create.

“One of the concerns is that people will be looking at these models as almost having human-level knowledge or even the knowledge level of a search engine,” said Kristian Hammond, Bill and Cathy Osborn Professor of Computer Science and director of CASMI. “These technologies are more fluency engines than information systems.” 

Large language models are statistical systems that predict the next most likely word in a sequence, based on the vast amounts of data they were trained on, generally sourced from the internet. They work by finding and replicating patterns in language. However, they are prone to errors and have been known to hallucinate, or confidently give incorrect information.
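As an illustration of that next-word prediction, the sketch below uses the open-source Hugging Face transformers library and the public GPT-2 checkpoint; neither was part of the panel, and they stand in here for any large language model.

    # A minimal sketch of next-word prediction, assuming the Hugging Face
    # "transformers" library and the public GPT-2 checkpoint as stand-ins
    # for any large language model.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    prompt = "The capital of France is"
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids

    with torch.no_grad():
        logits = model(input_ids).logits[0, -1]   # a score for every possible next token
    probs = torch.softmax(logits, dim=-1)         # scores become a probability distribution

    # The model does not look anything up; it ranks tokens by how likely they
    # are to follow the prompt, given the patterns in its training data.
    top = torch.topk(probs, 5)
    for p, tok in zip(top.values, top.indices):
        print(f"{tokenizer.decode(tok)!r}: {p:.3f}")

Because the output is always the statistically most plausible continuation, a fluent answer and a factually correct one are not the same thing, which is the gap the panelists describe.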

David Leake, a professor of computer science at Indiana University whose research focuses on AI and cognitive science, discussed the limitations inherent in statistical language systems. Leake recently experimented with Google’s large language model, Bard. He and a friend asked the system to analyze a poem, giving Bard both the title and the full text of the poem itself.


“Bard came back with a very nice analysis of each one of the verses but actually gave a different author for the poem,” Leake said. “The reason was there was a famous poet who had written a poem with a similar title, and so the odds were much more likely that the author would be the person in its training data rather than this unknown that we had given it. 

“This is a sort of error that a human would never make,” Leake added. 

“We know that people are over-trusting of AI in some ways,” said Jessica Hullman, Ginni Rometty Associate Professor of Computer Science at Northwestern University, whose research focuses on uncertainty in data analysis and reasoning about uncertainty. “One of the major issues is people are not questioning enough the content that they're getting.”

“ChatGPT caught the world by storm”: Varying reactions to fast-moving technologies

When OpenAI released ChatGPT in November 2022, many were surprised at its popularity. It became the fastest-growing consumer application in history. Naturally, many businesses wanted to invest.

“It seems like the utility of the technology, especially with the release of ChatGPT, just caught the world by storm,” said Ben Lorica, principal at Gradient Flow. 

Lorica advises startup companies that are interested in using the technologies. He has noticed two trends: people want to build custom language models with fewer parameters, and people understand these models need external resources.

OpenAI recently released plug-ins for ChatGPT. These are tools that allow companies to connect ChatGPT to their own websites and services, giving it access to proprietary information.


“We're still in the early days,” Lorica said. “We’re still learning a lot as we go. One thing that is clear is people want to move with these technologies, so how do you do that as safely as possible? That’s the challenge.” 

One response has been to put a temporary pause on training such powerful systems. More than 50,000 signatures have been collected on an open letter calling for a six-month moratorium on training systems more powerful than GPT-4. The letter, whose signatories include billionaire Elon Musk and Apple co-founder Steve Wozniak, argues that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

The panelists pointed out that it would be unrealistic to force people to stop using the technologies. 

“Historically, when there's a new technology, there's often a fear of it,” Hullman said. “If these models can go online and just create a website or post a new Wikipedia page, that’s the kind of thing that is more concerning. 

“I think that maybe there's some sort of regulation that needs to happen." 

Controlling Generative AI Technologies 

Governments around the world are working on or considering regulating AI. The US has no binding federal regulation yet, but some agencies have developed frameworks for responsible use, including the White House’s Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology’s AI Risk Management Framework. The European Union is attempting to pass the AI Act, and the EU already has the General Data Protection Regulation in place.

“In the European Union, there’s the right to explanation for decisions that are made by automated systems,” Leake said. “If one were to leverage language models in something like a symbolic AI system that actually was interpretable, I think that would have a lot of benefits for at least being able to assess the results and potentially trust them.” 

Hammond believes we are on the brink of being able to artificially generate factual text.

“If I have a language model, and I'm using it to articulate a set of facts that I can guarantee, that's very different than a language model that I'm using to actually find out about those facts,” Hammond said. “In one case, you've got control over the knowledge. In the other case, you're hoping for the best.”
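A minimal sketch of that distinction is below. The company, the figures, and the generate() placeholder are all hypothetical; the placeholder stands in for whatever language model API a business might call.

    # Case 1: the facts are supplied and verified upstream; the model only phrases them.
    # Case 2: the model is asked to supply the facts itself -- "hoping for the best."
    facts = {"company": "Acme Corp", "q1_revenue_usd": 12_400_000, "q1_growth_pct": 8.2}

    grounded_prompt = (
        "Write one sentence summarizing these verified figures, "
        f"using only the values provided: {facts}"
    )

    ungrounded_prompt = "What were Acme Corp's revenue and growth in the first quarter?"

    def generate(prompt: str) -> str:
        # Placeholder for a call to a large language model API.
        raise NotImplementedError

    # With grounded_prompt, the caller controls the knowledge and the model supplies
    # only the fluency; with ungrounded_prompt, the "facts" are whatever continuation
    # the model's training data makes most likely, whether or not it is true.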