NEW YORK — Artificial intelligence imaging can be used to create art, try on clothes in virtual fitting rooms or help design advertising campaigns.
But experts fear the darker side of the easily accessible tools could worsen something that primarily harms women: nonconsensual deepfake pornography.
Deepfakes are videos and images that have been digitally created or altered with artificial intelligence or machine learning. Porn created using the technology first began spreading across the internet several years ago when a Reddit user shared clips that placed the faces of female celebrities on the shoulders of porn actors.
Since then, deepfake creators have disseminated similar videos and images targeting online influencers, journalists and others with a public profile. Thousands of videos exist across a plethora of websites. And some have been offering users the opportunity to create their own images — essentially allowing anyone to turn whoever they wish into sexual fantasies without their consent, or use the technology to harm former partners.
The problem, experts say, grew as it became easier to make sophisticated and visually compelling deepfakes. And they say it could get worse with the development of generative AI tools that are trained on billions of images from the internet and spit out novel content using existing data.
“The reality is that the technology will continue to proliferate, will continue to develop and will continue to become sort of as easy as pushing the button,” said Adam Dodge, the founder of EndTAB, a group that provides trainings on technology-enabled abuse. “And as long as that happens, people will undoubtedly … continue to misuse that technology to harm others, primarily through online sexual violence, deepfake pornography and fake nude images.”

Australian Noelle Martin poses for a photo Thursday, March 9, 2023, in New York. The 28-year-old found deepfake porn of herself 10 years ago when out of curiosity one day she used Google to search an image of herself. (AP Photo/Andres Kudacki)
Noelle Martin, of Perth, Australia, has experienced that reality. The 28-year-old found deepfake porn of herself 10 years ago when out of curiosity one day she used Google to search an image of herself. To this day, Martin says she doesn’t know who created the fake images, or videos of her engaging in sexual intercourse that she would later find. She suspects someone likely took a picture posted on her social media page or elsewhere and doctored it into porn.
Horrified, Martin contacted different websites over a number of years in an effort to get the images taken down. Some didn't respond. Others took the images down, only for her to find them back up again soon after.
“You cannot win,” Martin said. “This is something that is always going to be out there. It’s just like it’s forever ruined you.”
The more she spoke out, she said, the more the problem escalated. Some people even told her the way she dressed and posted images on social media contributed to the harassment — essentially blaming her for the images instead of the creators.

Eventually, Martin turned her attention towards legislation, advocating for a national law in Australia that would fine companies 555,000 Australian dollars ($370,706) if they don’t comply with removal notices for such content from online safety regulators.
But governing the internet is next to impossible when countries have their own laws for content that’s sometimes made halfway around the world. Martin, currently an attorney and legal researcher at the University of Western Australia, says she believes the problem has to be controlled through some sort of global solution.
In the meantime, some AI companies say they're already curbing access to explicit images.
OpenAI says it removed explicit content from data used to train the image generating tool DALL-E, which limits the ability of users to create those types of images. The company also filters requests and says it blocks users from creating AI images of celebrities and prominent politicians. Midjourney, another model, blocks the use of certain keywords and encourages users to flag problematic images to moderators.
Meanwhile, the startup Stability AI rolled out an update in November that removes the ability to create explicit images using its image generator Stable Diffusion. Those changes came following reports that some users were creating celebrity-inspired nude pictures using the technology.
Stability AI spokesperson Motez Bishara said the filter uses a combination of keywords and other techniques like image recognition to detect nudity and returns a blurred image. But it’s possible for users to manipulate the software and generate what they want since the company releases its code to the public. Bishara said Stability AI’s license “extends to third-party applications built on Stable Diffusion” and strictly prohibits “any misuse for illegal or immoral purposes.”
Some social media companies have also been tightening up their rules to better protect their platforms against harmful materials.
TikTok said last month all deepfakes or manipulated content that show realistic scenes must be labeled to indicate they’re fake or altered in some way, and that deepfakes of private figures and young people are no longer allowed. Previously, the company had barred sexually explicit content and deepfakes that mislead viewers about real-world events and cause harm.
The gaming platform Twitch also recently updated its policies around explicit deepfake images after a popular streamer named Atrioc was discovered to have a deepfake porn website open on his browser during a livestream in late January. The site featured phony images of fellow Twitch streamers.
Twitch already prohibited explicit deepfakes, but now showing a glimpse of such content — even if it’s intended to express outrage — “will be removed and will result in an enforcement,” the company wrote in a blog post. And intentionally promoting, creating or sharing the material is grounds for an instant ban.
Other companies have also tried to ban deepfakes from their platforms, but keeping them off requires diligence.
Apple and Google said recently they removed an app from their app stores that was running sexually suggestive deepfake videos of actresses to market the product. Research into deepfake porn is not prevalent, but one report released in 2019 by the AI firm DeepTrace Labs found it was almost entirely weaponized against women and the most targeted individuals were western actresses, followed by South Korean K-pop singers.
The same app removed by Google and Apple had run ads on Meta’s platform, which includes Facebook, Instagram and Messenger. Meta spokesperson Dani Lever said in a statement the company’s policy restricts both AI-generated and non-AI adult content and it has restricted the app’s page from advertising on its platforms.
In February, Meta, as well as adult sites like OnlyFans and Pornhub, began participating in an online tool, called Take It Down, that allows teens to report explicit images and videos of themselves so they can be removed from the internet. The reporting site works for regular images as well as AI-generated content, which has become a growing concern for child safety groups.
“When people ask our senior leadership what are the boulders coming down the hill that we’re worried about? The first is end-to-end encryption and what that means for child protection. And then second is AI and specifically deepfakes,” said Gavin Portnoy, a spokesperson for the National Center for Missing and Exploited Children, which operates the Take It Down tool.
“We have not … been able to formulate a direct response yet to it,” Portnoy said.
___
15 things AI can — and can’t — do
PopTika // Shutterstock
Artificial intelligence is a technology built and programmed to assist computer systems in mimicking human behavior. Algorithm training informed by experience and iterative processing allows the machine to learn, improve, and ultimately use human-like thinking to solve complex problems.
Although there are several ways computers can be "taught," reinforcement learning, where AI is rewarded for desired actions and penalized for undesirable ones, is one of the most common. This method, which allows the AI to become smarter as it processes more data, has been highly effective, especially for gaming.
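The reward-and-penalty loop described above can be sketched in a few lines of tabular Q-learning. This is a minimal illustrative example, not code from any system mentioned in the article: the five-cell corridor environment, the reward values, and all hyperparameters are invented for demonstration. The agent starts at cell 0, earns +1 for reaching cell 4, and pays a small penalty for every other step, so over repeated episodes it learns to walk right.

```python
import random

random.seed(0)  # make the sketch reproducible

# Hypothetical environment: a 5-cell corridor. Reaching the last cell
# is rewarded; every other step is mildly penalized.
N_STATES = 5
ACTIONS = [-1, +1]  # move left, move right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally; otherwise exploit the best-known action.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else -0.01
        # Q-learning update: nudge the estimate toward the observed
        # reward plus the discounted value of the best next action.
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

# The learned policy should move right (+1) from every non-terminal cell.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

The key property the article describes, that the system "becomes smarter as it processes more data," shows up here as the Q-table values improving with every episode until the greedy policy is optimal.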
AI can filter email spam, categorize and classify documents based on tags or keywords, launch or defend against missile attacks, and assist in complex medical procedures. However, if people feel that AI is unpredictable and unreliable, collaboration with this technology can be undermined by an inherent distrust of it. Diversity-informed algorithms can detect nuanced communication and distinguish behavioral responses, which could inspire more faith in AI as a collaborator rather than just as a gaming opponent.
Stacker assessed the current state of AI, from predictive models to learning algorithms, and identified the capabilities and limitations of automation in various settings. Keep reading for 15 things AI can and can't do, compiled from sources at Harvard and the Lincoln Laboratory at MIT.


Ground Picture // Shutterstock
AI combines data inputs with iterative processing algorithms to analyze and identify patterns. With each round of new inputs, AI "learns" through the deep learning and natural language processes built into training algorithms.
AI rapidly analyzes, categorizes, and classifies millions of data points, and gets smarter with each iteration. Learning through feedback from the accumulation of data is different from traditional human learning, which is generally more organic. After all, AI can mimic human behavior but cannot create it.
Bas Nastassia // Shutterstock
AI cannot answer questions requiring inference, a nuanced understanding of language, or a broad understanding of multiple topics. In other words, while scientists have managed to "teach" AI to pass standardized eighth-grade and even high-school science tests, it has yet to pass a college entrance exam.
College entrance exams require greater logic and language capacity than AI is currently capable of and often include open-ended questions in addition to multiple choice.
Proxima Studio // Shutterstock
The majority of employees in the tech industry are white men. And since AI is essentially an extension of those who build it, biases can (and do) emerge in systems designed to mimic human behavior.
Only about 25% of computer jobs and 15% of engineering jobs are held by women, according to the Pew Research Center. Fewer than 10% of people employed by industry giants Google, Microsoft, and Meta are Black. This lack of diversity becomes increasingly magnified as AI "learns" through iterative processing and communicating with other tech devices or bots. With increasing incidences of chatbots repeating hate speech or failing to recognize people with darker skin tones, diversity training is necessary.
Zephyr_p // Shutterstock
Unstructured data like images, sounds, and handwriting comprise around 90% of the information companies receive. And AI's ability to recognize it has almost unlimited applications, from medical imaging to autonomous vehicles to digital/video facial recognition and security. With the potential for this kind of autonomous power, diversity training is an imperative inclusion in university-level STEM pedagogy (where more than 80% of instructors are white men) to enhance diversity in hiring practices and, in turn, in AI.
Gorodenkoff // Shutterstock
AI can be described as brittle, meaning it can break down easily when encountering unexpected events. During the isolation of COVID-19, one Scottish soccer team used an automatic camera system to broadcast its match. But the AI camera confused the soccer ball with another round, shiny object — a linesman's bald head.
Claudia Herran // Shutterstock
Flippy is an AI assistant that is flipping burgers at fast food chains in California. The AI relies on sensors to track temperature and cooking time. However, Flippy is designed to work with humans rather than replace them. Eventually, AI assistants like Flippy will be able to perform more complicated tasks—but they won't be able to replace a chef's culinary palate and finesse.
Sharomka // Shutterstock
In 2017, a Dallas six-year-old ordered a $170 dollhouse with one simple command to Amazon's AI device, Alexa. When a TV news journalist reported the story and repeated the girl's statement, "...Alexa ordered me a dollhouse," hundreds of devices in other people's homes responded to it as if it were a command.
As smart as this AI technology is, Alexa and similar devices still require human involvement to set preferences to prevent voice commands for automatic purchases and to enable other safeguards.
Roman Strebkov // Shutterstock
China's pharmaceutical companies rely on AI to create and maintain optimal conditions for their largest cockroach breeding facility. Cockroaches are bred by the billions and then crushed to make a "healing potion" believed to treat respiratory and gastric issues, as well as other diseases.
Miriam Doerr Martin Frommherz // Shutterstock
People fear that a fully automated economy would eliminate jobs, and this is true to some degree: AI isn't coming, it's already here. But millions of algorithms programmed with a specific task based on a specific data point can never be confused with actual consciousness.
In a TED Talk, brain scientist Henning Beck asserts that new ideas and new thoughts are unique to the human brain. People can take breaks, make mistakes, and get tired or distracted: all characteristics that Beck believes are necessary for creativity. Machines simply work harder, faster, and longer, and it is that kind of repetitive work that algorithms will take over. Trying and failing, stepping back and taking a break, and learning from new and alternative opinions are the key ingredients to creativity and innovation. Humans will always be creative because we are not computers.
vfhnb12 // Shutterstock
Learning from sensors, brush patterns, and teeth shape, AI-enabled toothbrushes also measure time, pressure, and position to maximize dental hygiene. More like electric brushes than robots, these expensive dental instruments connect to apps that rely on a smartphone's front-facing camera.