Creating fake photos, videos or audio clips used to require considerable time, technical expertise and money. The advent of AI has drastically lowered those barriers, giving anyone with malicious intent and a smartphone the power to manipulate public opinion.
The sophistication of AI-generated content makes it harder to discern fact from fiction, and therefore harder to combat misinformation.
The implications of AI deepfakes for elections are multifaceted. For instance, pro-European President Maia Sandu of Moldova has found herself at the center of a disinformation storm as AI deepfakes falsely depict her endorsing a political party aligned with Russia.
Rumeen Farhana, an opposition lawmaker and vocal critic of the ruling party in Bangladesh, faced a similar attack last year, when an AI-generated deepfake video showed her wearing a bikini. In the deeply conservative, Muslim-majority country, the video provoked widespread outrage and damaged Farhana's reputation.
Meanwhile, in Slovakia, just days before the national elections, AI-generated audio recordings impersonated a liberal candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to identify them as false, but people still shared them on social media as if they were real. (Related: Google’s AI is completely fabricating fake quotes to smear truth-tellers.)
In other words, AI-generated deepfakes can be used to smear or bolster a candidate's image, manipulate voter sentiment and erode trust in democratic institutions.
Henry Ajder, an expert in generative AI based in England, stated that, as AI-generated disinformation campaigns become increasingly sophisticated and widespread, the question now is not whether AI deepfakes could influence elections but rather how influential they will become.
"You don't need to look far to see some people being clearly confused as to whether something is real or not," said Ajder.
With the U.S. presidential election coming up in November, experts warn that more AI-generated deepfakes are likely to emerge.
"I expect a tsunami of misinformation," said Oren Etzioni, an artificial intelligence expert and a professor emeritus at the University of Washington. "I can't prove that. I hope to be proven wrong. But the ingredients are there, and I am completely terrified."
Deepfakes have started making their way into experimental presidential campaign ads. Etzioni said the scope for misinformation is vast, from fabricating political statements to simulating crises such as medical emergencies or acts of violence.
"You could see a political candidate like President [Joe] Biden being rushed to a hospital," he said. "You could see a candidate saying things that he or she never actually said. You could see a run on the banks. You could see bombings and violence that never occurred."
Such tools could also target specific groups with false messages about voting: convincing text messages, inaccurate information about voting procedures spread on WhatsApp in multiple languages, or fake websites designed to look like official government sites.
"Everything that we’ve been wired to do through evolution is going to come into play to have us believe in the fabrication rather than the actual reality," said misinformation scholar Kathleen Hall Jamieson, director of the Annenberg Public Policy Center at the University of Pennsylvania.
Watch the video below, in which investigative reporter Michael Shellenberger raises concerns that the Threads app is already censoring users' accounts.
This video is from the NewsClips channel on Brighteon.com.
Google plans to develop AI systems specifically for CENSORSHIP enforcement.
Biden’s regulation of AI usage raises concerns about NARRATIVE CONTROL.
China using AI technology to IMPERSONATE U.S. voters, Microsoft confirms.