Convincing synthetic videos and voices, known as "deepfakes," now circulate widely online. Because they look and sound natural, they can deceive people into believing things that are not true, and the U.S. government is stepping in with new rules to stop them.
- The FTC has proposed a rule to protect consumers from deepfake impersonation, which would bar companies from selling deepfake tools used to deceive people.
- Many Americans worry that deepfakes will make it harder to tell what is true, particularly ahead of the 2024 U.S. election.
- The FCC is moving to ban robocalls that use AI-generated human voices, and several states have made it illegal to create and share malicious deepfake videos. Together, these steps aim to protect the public from high-tech deception.
FTC Proposes Rule Update to Shield Consumers from Deepfake Impersonation
The Federal Trade Commission (FTC) is proposing new rules to protect consumers from being deceived by deepfakes and to make it harder for bad actors to use them in scams. The FTC's main goals are:
- Update an existing rule that prohibits impersonating businesses or government agencies so that it also protects individual consumers from impersonation scams.
- Consider prohibiting companies from selling deepfake technology when it can be used to deceive people.
- Address the broader harms that convincing fake videos and voices could cause to the public.
GenAI platforms could face legal prohibitions against offering services that they know or have reason to suspect may be utilized to harm consumers through impersonation.
FTC Chair Lina Khan has voiced concern about the growing use of AI to impersonate people's identities and defraud others.
The threat posed by deepfake technology does not just affect celebrities or public figures; it increasingly affects ordinary individuals and businesses through a variety of deceptive tactics:
- There is a notable rise in online romance scams and corporate fraud cases involving deepfake impersonation.
- Public opinion surveys indicate that a substantial majority of Americans are apprehensive about the influence of deepfakes on disseminating misinformation, particularly in the context of the upcoming 2024 U.S. election.
Federal initiatives in response to the deepfake phenomenon are as follows:
Without a specific federal law explicitly banning deepfakes, victims have had to rely on existing legal remedies, such as copyright law and privacy rights, which can be cumbersome and slow to enforce.
The FTC's rule update is just one of several significant federal actions: the FCC, for example, has declared robocalls that use AI-generated human voices illegal.
While only ten states have enacted laws criminalizing the creation and distribution of deepfakes, primarily targeting non-consensual pornography, these laws could be expanded to address a broader range of deepfake abuses.
The effectiveness of the proposed rule will depend on public comments and on the precise language the FTC ultimately adopts, as the agency seeks to ensure its rules continue to protect consumers as the technology advances.