
Could the Next Campaign Scandal Be Completely Fake?


By Richard Torrenzano | Tuesday, 10 June 2025 04:28 PM EDT

We May Be Seeing the Advent of Politics in an Age of Deepfakes

When federal investigators started probing an impersonator targeting White House chief of staff Susie Wiles, it wasn’t just a Washington, D.C., oddity — it was a warning shot.

The impersonator used texts and AI-generated voice mimicry to contact senators, governors and business leaders, reportedly making requests plausible enough to get a few to engage.

The FBI and the White House are scrambling to understand how Wiles’s contacts were accessed and what the impersonator’s endgame might be.

In an Emergent Deepfake Era: The Beginnings of a Larger Crisis

As the U.S. barrels toward the 2026 midterms and the 2028 presidential race, the most dangerous threat to democracy may come not only from hostile nations or bad policies, but from perfectly believable lies.

Thanks to artificial intelligence (AI), it’s now possible to create audio, video, and images so realistic they could show public figures doing or saying things they never did — and make millions believe it.

Deepfakes aren’t just digital forgeries. They’re weapons already deployed in political warfare where truth is optional, and outrage is instant.

The Collapse of Truth vs. The Corrosion of Trust

Unlike truth, which can often be restored with facts, trust is harder to repair once lost.

Back in 1710, Jonathan Swift warned that "falsehood flies, and the truth comes limping after it" — a line that now reads like a prophecy for the deepfake era.

Deepfakes use machine learning to mimic voices, clone faces and fabricate scenes. They can be deployed for satire or art—but in politics, they are increasingly used to mislead, manipulate and destabilize.

Audio deepfakes replicate a person’s voice, often from just seconds of recording. Scammers use these to impersonate business leaders and celebrities. In politics, they can be deployed as fake phone calls, endorsements or confessions — almost impossible to verify in real time.

Video deepfakes depict public figures saying or doing things they never did. A single fabricated clip could derail a campaign, ignite violence, or swing an election—before the truth catches up.

Photographic deepfakes create or alter images so convincingly they can simulate crimes, rallies, documents, or critical evidence. In the wrong hands, they are extraordinary propaganda tools.

Deepfakes Have Thrown Their Hats in the Ring: The Outrage Is Real, and the Polls Shift Instantly

In February 2024, a robocall in New Hampshire used a deepfake version of President Biden’s voice to tell Democrats not to vote in the primary.

It was a targeted effort to suppress turnout, produced for as little as a few hundred dollars. This wasn’t a prank. It was a low-cost, high-impact political attack, and it worked. But it’s only the beginning.

What happens in 2026 if a video emerges days before Election Day showing a Senate candidate making racist comments—only for it to be proven fake after the polls close?

What if a deepfake video shows a prominent lawmaker endorsing a controversial foreign policy stance… or being bribed by donors?

Today’s voting public and political institutions are not ready.

Most major campaigns have expanded media monitoring and assembled "rapid response" teams. But these are defensive tools built for an earlier era of media cycles, not for viral synthetic lies.

Reactive strategies are far too slow and too late.

Political teams still rely on systems built to flag misleading headlines and out-of-context quotes — not fully synthetic video and audio meant to deceive.

Technology has advanced, but strategy hasn’t.

In a fight where perception outruns proof, delay is costly. Malicious actors are often the first to weaponize emerging technologies long before institutions can defend against them.

There are still no federal standards for verifying political content, no structured partnerships between campaigns and tech platforms to detect or remove deepfakes, and few campaigns have seasoned deepfake experts ready when it matters most.

And when deepfakes are caught, the legal system lags. U.S. law offers limited protection unless the content involves defamation or commercial harm.

Most political deepfakes are designed to slip through the cracks—engineered to be effective, deniable, and just legal enough to survive.

On May 19, President Trump signed the bipartisan Take It Down Act into law, criminalizing the distribution of non-consensual intimate imagery—including AI-generated deepfakes — and mandating platforms remove such content within 48 hours of notification.

It marks the first federal law directly addressing AI-driven abuse—but no law can stop the flood. Regulation can’t keep pace with technology evolving this fast.

Pope in a Puff Coat, Taylor Swift in a Scandal, Your CEO Next

In the last year, some of the most viral deepfakes have come from entertainment: Pope Francis in a fake designer puffer jacket; Taylor Swift in AI-generated pornography; fake images of Katy Perry at the 2024 Met Gala; and a deepfake of rapper Drake feuding with another artist using synthetic lyrics.

Most recently, more than 200 musicians, including Perry, Billie Eilish and Bon Jovi, signed an open letter condemning the exploitation of their voices, creativity and likenesses by AI.

In Hollywood, these incidents grabbed headlines, created synthetic media shocks, and racked up millions of views — but none altered elections, legislation, or national security.

In politics, such fabrications destabilize.

Combating Deepfakes Is a Crisis-Readiness Test

Clever Tweets, Reactive Briefings Are No Match

This isn’t just about controlling the message — it’s about safeguarding political reality itself.

Deepfakes challenge truth at its core.

Success demands foresight, planning, and skilled experts ready to shut down synthetic media before it spreads.

It’s not about managing the spin after the fact — it’s about preventing the lie from eclipsing the truth. Unfortunately, many still don't understand that difference.

Richard Torrenzano is chief executive of The Torrenzano Group. For nearly a decade, he was a member of the New York Stock Exchange (NYSE) management (policy) and executive (operations) committees. He is co-author of "Digital Assassination" and author of the recently released "Command the Conversation: Next Level Communications Techniques."

© 2025 Newsmax. All rights reserved.
