AUSTIN (KXAN) — In March 2025, Austin Police allegedly found more than 365,000 images of child sexual abuse material on the devices of Carl Innmon, then a fifth-grade teacher at Baranoff Elementary in the Austin Independent School District.
According to police affidavits, pictures of two Baranoff students were found among the evidence. Police allege that Innmon altered the images with artificial intelligence to make them explicit. KXAN reached out to Innmon’s attorneys, but they declined to comment on the case.
Kathryn Rifenbark serves as the Director of the CyberTipline in the Exploited Children Division at the National Center for Missing and Exploited Children (NCMEC).
She said AI-generated child exploitation is accompanying the already “huge increase” in the online enticement and sextortion of children in recent years. The NCMEC received more than 500,000 reports of online enticement in the first six months of 2025.
“That is almost double the number of reports we received in the first six months of 2024,” she said.
In that same 2025 timeframe, the NCMEC was inundated with more than 440,000 reports involving generative AI.
As technologies — and criminal enterprises — continue to evolve, experts said continued vigilance, education and legislation are crucial to protecting children.
Sextortion: Forms and advancements
Rifenbark said many cases of online child sexual abuse involve sextortion, a hybrid word combining “sex” and “extortion.”
A perpetrator will make contact with a child, often through social media or gaming platforms, and convince them to send explicit imagery. Once obtained, the offender then threatens to disseminate the sexual images or videos unless the victim meets their demands.
Common demands include more sexual content, sexual favors or money. Rifenbark said some offenders also demand that a child send photos or videos of self-harm or harm to animals, a practice known as sadistic sextortion.
Even if a child refuses to send compromising content, AI provides a way for perpetrators to create the blackmail material themselves.
“If the child refuses to send the offender an image, the offender can use AI technology to create a fake nude of that child… saying that they’re going to send that fake picture to the child’s family and friends if they do not comply with the threats,” Rifenbark said.
Generative AI can create explicit images of children where no real person is depicted, but Rifenbark also sees cases like that of Carl Innmon, who allegedly used AI to turn clothed photos of children into explicit images.
AI can also allow offenders to pose as a real person, often someone of a similar age to the victim, in an attempt to form a connection before asking for explicit content.
Natalie Andreas, a professor of AI ethics at the University of Texas at Austin, warns that deepfakes can also take the form of audio and video, and that the technology is growing “more sophisticated every day.”
Who is most vulnerable?
“The most vulnerable victims of these crimes are children, especially teenagers,” said William Costello, a detective with the Austin Police Department.
Costello said exact data is difficult to pinpoint, given that sextortion schemes are “highly unreported,” but a study of 1,200 Americans aged 13 to 20 found that one in five participants had been personally victimized by sextortion as a minor.
Conducted by Thorn, a child safety nonprofit, the study also found that another one in five participants knew someone else who had been victimized.
While the majority of adolescent sextortion victims are girls, Rifenbark said teenage boys are more often the victims of financial sextortion, which Costello reports is the most common demand in such crimes. The NCMEC received over 23,500 financial sextortion reports from January through June 2025.
How should victims seek help?
Costello urges victims of sextortion to stop all contact with the offender, even if threatened with the release of sexual content.
“If you keep responding, or if you do start sending money, [the demands are] only going to increase,” he said.
Victims are also encouraged to bring the situation to trusted adults or authorities for help.
“The most important thing for people to know who are victimized through sextortion is that they’re not alone, and that there is hope out there for them,” said Rifenbark.
The NCMEC’s CyberTipline serves as a national reporting hub for online child exploitation. Reports are directed to NCMEC staff who can relay a victim’s case to the appropriate law enforcement agency. The center also has several methods to remove explicit photos and videos from the internet and prevent them from being shared.
Take It Down is a free online service that identifies exact copies of an explicit image or video for removal through a hash value, or “unique digital footprint.” If users have the original content on their device, they can upload it to Take It Down, where a hash value will be assigned and shared with participating platforms.
Platforms such as Facebook, Instagram, TikTok, YouTube, PornHub, OnlyFans and Snap Inc., the parent company of Snapchat, have agreed to scan their apps and sites for posted content matching submitted hash values and to remove that content when it is identified.
Only the hash values, not the actual content, are shared with the NCMEC and Take It Down platforms, so the content never leaves a user’s device, and photos and videos are not viewed.
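To illustrate the idea, here is a minimal Python sketch of how a hash value can be computed locally. It uses an ordinary cryptographic hash (SHA-256) purely for illustration; Take It Down has not published its exact method, and matching services of this kind often rely on perceptual hashes (such as PDQ or PhotoDNA) so that resized or re-encoded copies still match, which a plain cryptographic hash cannot do. The filename below is a placeholder.

```python
import hashlib

def file_hash(path: str) -> str:
    """Return a SHA-256 hash of the file at `path`.

    Only this short hexadecimal string would leave the device;
    the image or video itself is never uploaded or viewed.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large video files don't have to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# "photo.jpg" is a placeholder filename for illustration.
print(file_hash("photo.jpg"))
```

An identical file always produces the same hash, which is how participating platforms can recognize an exact copy without ever seeing the content itself.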
While Take It Down removes explicit content depicting victims under the age of 18, adult victims of sextortion or non-consensual intimate image (NCII) abuse can submit a report to StopNCII.org, which uses the same hash-value removal strategy.
Both Rifenbark and Costello also advise victims to contact local law enforcement.
“Despite its embarrassment and its shame level, definitely report it. We need hard numbers and people to identify as victims of this problem,” said Costello.
Costello acknowledged that arrests are often difficult in sextortion cases, in part because offenders frequently operate from overseas. However, local police departments can leverage federal partnerships to aid victims.
“We do have officers embedded in the federal agencies here as task force officers, so there is a very close relationship. The task force officers physically work in the FBI office, so there’s a daily contact there,” he said.
The legislative landscape: New laws to protect kids, regulate AI
“Trying to use regulation as a way to prevent misuse of AI is challenging when AI development happens at a very rapid pace and policy development happens at a much more glacial pace,” said Dr. Ken Fleischmann, a professor at UT Austin’s School of Information and a founder of Good Systems, a public-impact research initiative focused on ethical AI.
Nonetheless, Gov. Greg Abbott signed the Texas Responsible AI Governance Act (TRAIGA) into law last legislative session.
Set to take effect Jan. 1, 2026, the law imposes a series of restrictions on AI developers, including a prohibition on AI systems used to produce or distribute child pornography and other unlawful deepfakes. Developing or distributing AI that impersonates a child under 18 in explicit text-based conversations is also prohibited.
Fleischmann called the new law “a good first step toward trying to prevent misuse of AI and protecting users’ privacy.”
Already in effect as of Sept. 1 are Texas Senate Bills 20 and 1621. Both address the creation of child sexual abuse material (CSAM) by AI.
SB 20, the “Stopping AI-generated Child Pornography Act,” creates a new felony offense for the creation, possession or distribution of sexually explicit visual material that appears to depict a minor, including real, animated, digital or AI-generated images.
SB 1621 updates existing state child pornography statutes to include AI-generated and computer-generated imagery, allowing for prosecution even if the identity of a child cannot be confirmed.
Prevention: How to stay proactive as tech advances
Rifenbark emphasizes that “prevention is key, and early communication is very critical in preventing child sexual exploitation of all types.”
She recommends that parents have consistent talks with their children about online risks in an age-appropriate way, creating a “safe space” for them to feel confident enough to confide in a trusted adult and seek further help if an incident occurs.
On the user end, Andreas encourages the continual development of “AI literacy”: strategies and approaches for identifying whether content is AI-generated. She identified dragging speech and sudden, jerky movements as common signs of deepfake videos.
“Any time you’re experiencing a piece of media that’s causing a big reaction in you — fear, anger, even happiness to some extent… vet it somewhere else,” she said.
Fleischmann calls on AI developers to “consider potential misuse of the technology and try to design ways to make that harder, or ideally impossible, to misuse,” but he also reminds consumers of their role in prevention: using the technology ethically.
“Given how fast the technology advances and how hard it is [for regulation] to keep up, I think it is [incumbent] on all of us to be very conscious and thoughtful users of AI,” Fleischmann said.
