Generative AI's disinformation threat is 'overblown,' top cyber expert says
Cybersecurity experts fear artificial intelligence-generated content has the potential to distort our perception of reality — a concern that is more troubling in a year filled with critical elections.
But one top expert is going against the grain, suggesting instead that the threat deepfakes pose to democracy may be "overblown."
Martin Lee, technical lead for Cisco's Talos security intelligence and research group, told CNBC he thinks that deepfakes — though a powerful technology in their own right — aren't as impactful as fake news.
However, new generative AI tools do "threaten to make the generation of fake content easier," he added.
AI-generated material often contains readily identifiable indicators suggesting it was not produced by a real person.
Visual content, in particular, has proven vulnerable to flaws. For example, AI-generated images can contain visual anomalies, such as a person with more than two hands, or a limb that's merged into the background of the image.
It can be tougher to distinguish synthetically generated voice audio from voice clips of real people. But AI is still only as good as its training data, experts say.
"Nevertheless, machine generated content can often be detected as such when viewed objectively. In any case, it is unlikely that the generation of content is limiting attackers," Lee said.
Experts have previously told CNBC that they expect AI-generated disinformation to be a key risk in upcoming elections around the world.
Matt Calkins, CEO of enterprise tech firm Appian, which helps businesses make apps more easily with software tools, said AI has a "limited usefulness."
A lot of today's generative AI tools can be "boring," he added. "Once it knows you, it can go from amazing to