Deepfakes, an emerging trend in AI-generated pornography, have become particularly disturbing. They use AI to replace one person’s face with another’s and often mimic their facial expressions.
It’s an unsavory twist on revenge porn, in which non-consensual photos and videos are shared online. And it isn’t only celebrities being targeted by this form of deceptive exploitation; anyone can be.
It’s a business
The deepfake AI porn industry operates undercover, and the technology behind it is being put to various uses. Some creators, for instance, use it to produce videos featuring women without their consent, a disturbing application of this powerful new technology.
Deepfake pornography is nothing new, but as machine learning tools become more widely available and AI technology develops rapidly, it has become far easier to produce. The issue is severe and could have life-altering repercussions for those affected.
One of the primary issues with these videos is that they often place women’s faces in violent, pornographic scenes. It can amount to revenge porn, and it is emotionally draining for victims to watch their images appear in a video they never made.
Though this issue of fake AI porn is mainly concentrated online, it has also affected women in offline settings. Anita, a Twitch streamer who has been the target of some deepfake videos, told CNN it can be an incredibly surreal experience to see yourself featured in such pornographic media.
She notes that the abuse has been a source of trauma, one that can require thousands of dollars in therapy and legal action to address. Compounding the harm, there is often an underlying lack of empathy from other gamers.
She laments that some in the community have begun to “blame women” for such abuse, and points out that many online gamers haven’t taken the time to understand that this is an issue women deal with daily.
A recent example is Lensa, an AI tool that enables users to generate sexualized images of themselves. Critics have accused the program of racial bias, among other issues.
One such website is MrDeepFake, a mecca for deepfake porn. It boasts plenty of celebrity-themed content, like videos featuring actresses Emma Watson and Scarlett Johansson. As such, it serves as an outlet for deepfake creators to market themselves and showcase their works.
It’s a crime
The deepfake AI porn industry operates undercover, harming people without their knowledge. The technology is now so advanced that cybercriminals need very little source material to create an image or recording designed to provoke fear, confusion, or anger.
Many of these videos target women, who are typically the most susceptible to image-based sexual abuse. But the technology is also being employed against children exploited by pedophiles.
MailOnline recently exposed how some offenders use artificial intelligence to generate graphic, realistic indecent images of children and share them online with other abusers. The revelations have prompted child abuse charities and the FBI to call for government action.
These AI-generated images of kids are causing distress to young victims who cannot defend themselves from attacks.
The videos aim to mimic reality, creating anxiety and panic that could lead to severe mental health issues.
One victim reported to NBC News that, because of the photos, she lost her job, experienced depression and anxiety, and attempted self-harm. Others have changed their names or taken drastic measures such as removing themselves from social media entirely.
Due to the increasing volume of non-consensual AI-generated porn, some states have passed laws regulating it. California, for example, recently passed a law allowing people to sue if they are injured by an AI-generated video, and Virginia is considering legislation that would make it illegal to distribute altered images of a person to influence an election within 60 days of the vote.
Such legislation would allow victims of non-consensual deepfake porn to seek redress, though establishing that the content caused their injury can be challenging. If victims can demonstrate that the material was intentionally created to cause mental suffering, they may have grounds for a lawsuit.
The problem of non-consensual AI-generated sexual content is a stark reminder of the legal gaps in the regulation of the tech sector. With proper attention, these troubling gaps could be closed.
It’s a weapon
The deepfake AI porn industry operates undercover, using readily accessible AI technology to generate and distribute pornographic content. By creating and spreading non-consensual material, it threatens public trust in digital media.
The deepfake industry can have devastating consequences for individuals of all ages, genders, and ethnicities. It creates and spreads non-consensual sexual content and acts as a tool for extortion or control. Furthermore, it equips perpetrators of sexual abuse or intimate partner violence, causing reputational damage and harms that may require legal intervention.
Deepfakes have been used in various contexts, such as revenge porn and political messaging.
However, there is also concern about their potential use in intimate partner violence situations – where they could extort and coerce victims into engaging in unwanted sexual activity.
Experts have been considering this problem for some time, and it has recently spurred the UK government to pass a law criminalizing the creation of such fake images. As with all new regulation, however, numerous questions remain about how best to tackle the problem while safeguarding individual privacy.
Experts told NBC News that while the UK law is an effective tool against fake videos, proving who created them can be tricky. “The only way to determine if someone is creating a deepfake is by testing their ability to create video footage that looks like it’s based on an actual face – which may prove challenging due to how much information about people they can use to produce something unrecognizable as real,” said Gary Broadfield, a cybercrime specialist at Barnfather Solicitors.
He emphasizes the need for legislation that addresses deepfake content and its devastating effect on women, noting that while revenge porn is already a serious problem, AI tools that produce convincing fake videos will quadruple the amount of abuse experienced.
It’s a problem
Deepfakes are an insidious type of AI-generated synthetic media. They comprise images, videos, and entire identities that have been doctored to appear authentic. They are often used to spread misinformation or disinformation, target political candidates, or manipulate social media channels.
These intrusive tools have existed for years, yet their reach has only grown. A recent Wired investigation revealed that deepfaked porn has racked up millions of views on mainstream sites like Pornhub.
With technological advances, creating offensive images and videos has become easier than ever. For example, anyone with access to a computer can take a picture of their ex’s face and edit it into an explicit image or video.
The issue with AI-generated images is that they can be deeply distressing, especially for women. Such non-consensual depictions can harm a woman’s reputation, career prospects, mental health, relationships, and self-esteem.
These issues are compounded when those who use AI-generated porn videos to harass others do so without the victim’s knowledge or consent. Streamers, in particular, are especially vulnerable, as they may not even know that sites are hosting such videos of them.
One Twitch streamer, Brandon “Atrioc” Ewing, found himself in an embarrassing predicament earlier this year when, while live-streaming, he accidentally revealed a website that allowed him to purchase and watch deepfake porn of his female colleagues, as CNN reported.
The consequences of this abuse extend far beyond those who profit from it: an increasing number of ordinary victims, including members of the gaming community, are being targeted by deepfake creators.
As the AI industry matures, governments will find it increasingly challenging to regulate the content these tools create. Lawmakers must therefore act to safeguard individuals from the harm these technologies can cause.
Congress must act now to combat the dangers of deepfake AI so that it doesn’t get out of hand and cause harm to people’s lives. To this end, the National Women’s Law Center is advocating for Congress to pass the Deepfakes Accountability Act; this bill would impose sanctions on creators of illegal AI-generated porn and provide legal recourse for victims.