Minnesota Moves to Ban AI-Generated Deepfake Pornography Amid Rising Concerns

by Olawunmi Sola-Otegbade

Molly Kelly was shocked to discover in June that someone she knew had used widely available “nudification” technology to create highly realistic and sexually explicit images and videos of her, using family photos from social media.

“My initial shock turned to horror when I learned that the same person targeted about 80 to 85 other women, most of whom live in Minnesota, some of whom I know personally, and all of them had connections in some way to the offender,” Kelly said.

Backed by her testimony, Minnesota is considering a new approach to combat deepfake pornography. A bipartisan bill would target companies that operate websites and apps enabling users to upload photos and transform them into explicit content.

Various states and Congress are weighing regulatory strategies for artificial intelligence. Many have already banned the dissemination of sexually explicit deepfakes or revenge porn, whether AI-generated or not. Minnesota’s proposal would go further, aiming to prevent such material from being created in the first place rather than only punishing its spread online.

AI law experts warn the proposal may face constitutional challenges on free speech grounds.

Advocates Push for Strict AI Regulations

Democratic Sen. Erin Maye Quade, the bill’s lead author, argues that additional restrictions are necessary due to rapid AI advancements. The proposed law would require “nudification” site operators to block Minnesota users or face civil penalties of up to $500,000 per unlawful access, download, or use. Developers would be responsible for ensuring Minnesota residents cannot access their services.

“It’s not just the dissemination that harms victims,” Maye Quade said. “It’s the fact that these images exist at all.”

Kelly emphasized how quickly someone can create “hyper-realistic nude images or pornographic videos” in minutes, noting that law enforcement has mainly focused on distribution and possession.

Nationwide Efforts to Tackle AI-Generated Abuse

San Francisco recently filed a lawsuit against several “nudification” websites, accusing them of violating laws against fraudulent business practices, nonconsensual pornography, and child sexual abuse. That case remains ongoing.

Last month, the U.S. Senate unanimously approved a bill by Sen. Amy Klobuchar (D-MN) and Sen. Ted Cruz (R-TX) to make publishing nonconsensual sexual imagery, including AI-generated deepfakes, a federal crime. Social media platforms would be required to remove such content within 48 hours of a victim’s request. Former First Lady Melania Trump has urged the Republican-controlled House to pass the measure.

In Kansas, lawmakers expanded the definition of child sexual exploitation to include AI-generated images that are “indistinguishable from a real child, morphed from a real child’s image, or generated without any actual child involvement.” Florida has introduced a bill criminalizing AI-generated child sexual abuse imagery, with similar measures proposed in Illinois, Montana, New Jersey, New York, North Dakota, Oregon, Rhode Island, South Carolina, and Texas.

Maye Quade intends to share her bill with other states, stressing that few legislators realize how easily accessible the technology has become.

“If Congress won’t act, we’ll push as many states as possible to take action,” she said.

Victims Speak Out

Sandi Johnson, senior legislative policy counsel for RAINN (Rape, Abuse & Incest National Network), testified that Minnesota’s bill would hold websites accountable.

“Once these images are created, they can be posted anonymously, widely shared, and become nearly impossible to remove,” she said.

Megan Hurley, another victim, expressed distress over discovering explicit images and videos of her generated by a “nudification” site. As a massage therapist, she fears the reputational damage.

“It is far too easy for someone to use their phone or computer to create convincing, synthetic, intimate images of you, your family, and friends,” Hurley said. “I do not understand why this technology exists, and I find it abhorrent that companies profit from it.”

Legal Challenges to the Bill

AI law experts Wayne Unger of Quinnipiac University and Riana Pfefferkorn of Stanford University caution that the Minnesota bill is too broad and could face legal obstacles.

Pfefferkorn suggested limiting the law’s scope to images of real children, which courts generally do not protect under the First Amendment. However, she warned it could still conflict with federal law shielding websites from liability for user-generated content.

“If Minnesota pursues this path, lawmakers must clarify the bill and narrow its definition of ‘nudification’ and related technologies,” Unger said.

Maye Quade, however, argues that the bill is constitutionally sound because it regulates conduct, not speech.

“This cannot continue,” she said. “Tech companies cannot keep unleashing this technology into the world without consequences. It is harmful by its very nature.”

Source: Swifteradio.com
