
Policymakers are ‘Way Behind the Curve’ on AI Threats, Expert Warns

V.S. Subrahmanian explains how artificial intelligence is used for both good and bad

When it comes to protecting against cybersecurity threats powered by artificial intelligence, "policymakers are way behind the curve."


That is the assessment of V.S. Subrahmanian, Walter P. Murphy Professor of Computer Science at Northwestern Engineering and a faculty fellow at the Northwestern Roberta Buffett Institute for Global Affairs, who said policymakers in the US and elsewhere are woefully unprepared for the challenges posed by advances in artificial intelligence (AI), and that the current legislative framework is "not anywhere near where we need it to be."

Subrahmanian spoke Nov. 17 as part of FP’s Tech Forum, which brought together speakers from government, industry, and academia to discuss a range of issues related to tech policy.

Advancements in disinformation

Subrahmanian said that disinformation typically requires two elements: content to be generated and an ability to spread that content virally through networks. "There are going to be dramatic advances in the next few years on both of those [fronts], many of them have already happened," he said. On the content side, he added, there is now the ability to generate emotion in text, images, and video, an advance in an area known as "affective computing."

New tactics in the works

While tech companies appear to have discovered and largely dismantled the bots behind the social media influence campaigns of several years ago, Subrahmanian said that new strategies are in the works.

"They're going to pre-position assets in the network…, fake accounts and so forth, that are intended to spread such news over extended periods of time," he said. These accounts, he added, may sit dormant for years before they become active on a specific topic. Malicious actors will then deploy "cannon fodder," a first-line, expendable wave of accounts intended to be discovered by defenders, leaving those defenders and their detection technology less prepared to confront the waves that follow.

“Policymakers are way behind the curve on this, both in Washington and elsewhere,” he said. “They are aware of some of these possibilities for sure. But the state of legislation is not anywhere near where we need it to be.”

AI for good

“It’s usually not the technology that’s good or bad. It’s the intent of the actor who uses that technology in a specific use-case,” Subrahmanian said.

For example, while fakes are generally harmful, they can be harnessed in a way that slows down a cyberattack. To help combat intellectual property theft, companies can now use existing technology to create multiple fake versions of a patent, confusing thieves who gain access to the network and imposing a heavy cost on the attacker, Subrahmanian explained.

And in fighting malware, AI is close to being able to generate multiple evolved versions of a piece of malware. Cybersecurity firms can then develop far more robust protections that guard against a wider variety of attacks, he added.

About V.S. Subrahmanian

He is a leading expert in artificial intelligence, cybersecurity, predictive modeling, probabilistic inference and machine learning, social media, and counterterrorism. He has been an invited speaker at the United Nations, on Capitol Hill, and at the Mumbai Stock Exchange. The author of numerous publications, he has had his work featured in major outlets including The Economist, Scientific American, The Wall Street Journal, and Science.