The No AI FRAUD Act (H.R. 6943) wins this year’s Worst for Innovation Award. Featuring recklessly loose legislative definitions, H.R. 6943 not only threatens to chill innovation in AI technology, but in all tech products and services that feature people’s faces and voices.

The No AI FRAUD Act was written with the intent of protecting Americans from having their image or voice used by AI to create unauthorized deepfakes, a worthwhile goal.

Where the legislation falls apart is in its definition of what constitutes an AI deepfake. To prevent the practice of unauthorized deepfakes, the bill enables individuals to sue over any unauthorized “digital depiction” or “digital voice” replica created in whole or in part using digital technology.

Today, effectively all of the media we see on the internet has been created and altered by “digital technology.” Every photo on a Google image search has been resized. Every Instagram post with a filter has been digitally altered. Every video recorded on a phone and posted on TikTok has gone through multiple layers of digital alterations.

By some estimates, 3.2 billion images and 720,000 hours of video are uploaded to the Internet every day. In every one of those photos with a face, in every video with a voice, some digital alteration has occurred.

Few of these digital depictions come close to what would commonly be described as a deepfake. Regardless, the No AI FRAUD Act attaches liability to each of these images and to the services that create and host them.

Notably, the bill targets anyone who creates or distributes a “personalized cloning service” capable of producing the image and voice replicas described above. Again, because the legislation broadly defines digital cloning as any technology with the primary purpose of digitally reproducing a voice, everyday devices like the iPhone land in legal hot water.

What does the bill mean for innovation? The No AI FRAUD Act wouldn’t just discourage developers from creating new AI technology. If it were to pass, the bill would create a world of legal expenses for anyone daring enough to build an app that takes or features pictures of people.

As multiple critics have noted, the bill also includes an ineffectual “First Amendment defense” section for speech protected by the Constitution. However, the First Amendment protects speech with or without such a section. If anything, the inclusion of such preemptive language indicates the bill’s sponsors expect the No AI FRAUD Act to run into the First Amendment, like someone who begins a sentence with, “No offense, but…”

By threatening any technology that features a face or voice, the No AI FRAUD Act goes above and beyond, potentially crushing both new innovations and technology we’ve come to recognize as commonplace. While the bill faces an uphill climb in Congress, its wide-ranging, unintended impacts embody the spirit of the Tech 404 Awards, making it a winner in our books.