"There are multiple ads over the internet falsely using my name, likeness and voice promoting miracle cures and wonder drugs," he said in a Sept. 1 post on Instagram. "These ads have been created without my consent, fraudulently and through AI."
Hanks confirmed that he has nothing to do with these posts, products, treatments or the spokespersons touting them. While he acknowledged having Type 2 diabetes, he reiterated that he works only with his board-certified doctor on his treatment.
"Do not be fooled. Do not be swindled. Do not lose your hard-earned money," the actor of "Forrest Gump" and "Captain Phillips" fame concluded his post. (Related: Democrats trying to censor AI-created memes before election.)
Hanks stopped short of naming the companies, according to a report by the Epoch Times. Last October, the actor called out an unnamed dental plan for allegedly using his AI image to promote its service.
The actor isn't alone, however, as country singer and actress Lainey Wilson has also fallen victim to AI voice cloning. She testified in February in support of the No AI Fraud Act before the House Judiciary Subcommittee on Courts, Intellectual Property and the Internet.
"It's not just artists who need protecting. Fans need it too," Wilson told lawmakers. "It's needed for high school girls who have experienced life-altering deepfake porn using their faces and for elderly citizens convinced to hand over their life savings by a vocal clone of their grandchild in trouble. AI increasingly affects every single one of us."
Experts say there's not much Hanks or any other celebrity can do about it. GPTZero CEO Edward Tian said this is because there aren't sufficient federal and state laws regulating the use of AI to replicate the voice or likenesses of public figures.
"Laws need to catch up to AI use in the U.S. and globally," he told the Epoch Times. "People have been able to create AI-generated content of celebrities without facing legal action."
Vijay Balasubramaniyan, CEO of the Atlanta-based software technology company Pindrop, pointed to the widespread availability of both commercial and open-source AI tools as part of the problem.
"Tackling this challenge requires vigilant consumers, better social media oversight to manage misleading content and risky links, improved AI control mechanisms from commercial AI-generation tools, and regulations that increase the cost for fraudsters," Balasubramaniyan said.
Ken Miyachi, CEO of Bitmind, warned that the unauthorized use of a celebrity's voice or image through AI can reduce public trust and damage their brand and reputation.
"Celebrities invest years building their brand, with teams working tirelessly to craft their public image," Miyachi told the Epoch Times. "Deepfakes and AI voice cloning can rapidly erode this hard-earned reputation, potentially causing severe damage to their credibility and career."
This is where the No AI Fraud Act, introduced by U.S. Reps. María Elvira Salazar (R-FL) and Madeleine Dean (D-PA), comes into play. If passed, the bill would create legal mechanisms to prevent AI platforms from using Americans' likenesses and voices without authorization.
Head over to CelebrityReputation.com for similar stories.
Watch this video of Elon Musk discussing AI voice-changing technology.
This video is from the High Hopes channel on Brighteon.com.
More related stories:

Tesla plans to launch humanoid robots next year, says Elon Musk.

Free AI voice generation software successfully hacked into bank accounts using simulated voices.

STUDY: Voice analysis tech can accurately detect Type 2 diabetes through speech patterns.

AI chatbot loses bid to become mayor of Wyoming's capital city.
Sources include: