FCC bans AI-generated voices in robocalls to prevent fraud

Discover how the FCC’s recent ruling makes AI-generated voices in robocalls illegal. Learn how this decision empowers states to crack down on fraudulent calls, protecting consumers from scams.

Abdul-Rahman Oladimeji Bello
A representational image of malicious phone calls, robocalls. (Techa Tungateja/iStock)

The Federal Communications Commission (FCC) has taken decisive action to curb the rising threat of voice-cloning technology in robocalls.

The unanimous adoption of a Declaratory Ruling on February 8, 2024, represents a pivotal moment in the ongoing battle against fraudsters who exploit AI-generated voices to deceive and extort unsuspecting victims.

FCC Chairwoman Jessica Rosenworcel minced no words, declaring, “Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters. We’re putting the fraudsters behind these robocalls on notice.”

The ruling, effective immediately, arms State Attorneys General with potent tools to pursue legal action against those orchestrating voice-cloning scams.

AI robocalls and fraud prevention

The prevalence of these fraudulent calls has surged in recent years, capitalizing on AI’s ability to mimic the voices of celebrities, political figures, and even close family members.

Recognizing the urgent need for intervention, the FCC launched a Notice of Inquiry in November 2023 to delve into the role of AI in illegal robocalls. The inquiry sought input on strategies to combat these scams and explored the possibility of subjecting AI to oversight under the Telephone Consumer Protection Act (TCPA).

The TCPA, the FCC’s primary tool for regulating telemarketing calls, now explicitly encompasses AI-generated voices. The same stringent standards requiring telemarketers to obtain explicit consent before robocalling consumers now apply to voice-cloning technology.

The ruling empowers the FCC to enforce civil penalties against violators, block calls from carriers facilitating illegal robocalls, and permit consumers and organizations to file lawsuits against robocallers.

The FCC’s decision aligns with the interests of 26 State Attorneys General, who, in a display of unity, wrote to the FCC supporting more stringent measures against AI-generated robocalls.

This collaborative effort underscores the urgency of combating scams that leverage technology to deceive the public. State law enforcement agencies will now have enhanced legal avenues to pursue perpetrators, marking a significant step toward eliminating illegal robocalls nationwide.

The move comes in response to a recent incident in New Hampshire, where voters received robocalls impersonating US President Joe Biden ahead of the state’s presidential primary.

These calls, urging voters not to cast their ballots, exposed the potential for voice-cloning technology to interfere with democratic processes. An estimated 5,000 to 25,000 of these misleading calls were placed, prompting a criminal investigation into two Texas-based companies linked to the incident.

Concerns about the misuse of AI extend beyond robocalls in the broader global landscape. Deepfakes, which leverage AI to create realistic audio and video manipulations, have emerged as a significant global concern. Incidents involving audio deepfakes targeting senior politicians in the UK, Slovakia, and Argentina underscore the broader challenges posed by AI manipulation, especially in the context of elections.

Upholding the integrity of communications

As technology advances, the FCC’s decisive action serves as a critical deterrent against the malicious use of AI-generated voices in robocalls. By making the use of AI-generated voices in robocalls illegal, the FCC aims to protect consumers, uphold the integrity of communication, and thwart fraudulent schemes that exploit the vulnerabilities of unsuspecting individuals.

This significant development comes on the heels of a letter received by the FCC in mid-January, signed by attorneys general from 26 states, urging the agency to restrict the use of AI in marketing phone calls.

Pennsylvania Attorney General Michelle Henry, who led this effort, emphasized, “Technology is advancing and expanding, seemingly by the minute, and we must ensure these new developments are not used to prey upon, deceive, or manipulate consumers.”

As the world grapples with AI’s challenges, this ruling, announced in a press release, demonstrates regulatory bodies’ commitment to staying ahead of technological advancements and protecting the interests of the general public.
