Instagram content recommendations, hashtags helped boost network of child porn


FILE - The Instagram logo is seen on a cell phone, Oct. 14, 2022, in Boston. (AP Photo/Michael Dwyer, file)

Instagram is the most popular platform for advertising the creation and sale of underage-sex content, according to a report published Wednesday morning by researchers at Stanford.

The Wall Street Journal was first to report on Instagram’s issues with underage-sex content and tipped off researchers at Stanford and the University of Massachusetts Amherst. According to the Stanford Internet Observatory report, large networks of accounts that appear to have been operated by minors were advertising the sale of self-generated child sexual abuse content.

Instagram’s recommendation algorithm, which suggests content to users based on what they interact with, combined with the widespread use of hashtags directing people to the explicit material, helped connect buyers and sellers. That combination made Meta’s popular photo-sharing app the preferred option for these networks.

“Due to the widespread use of hashtags, relatively long life of seller accounts and, especially, the effective recommendation algorithm, Instagram serves as the key discovery mechanism for this specific community of buyers and sellers,” the researchers wrote.

Meta said it was starting a task force to further investigate how Instagram assists in the spread of child sexual abuse material.

“Child exploitation is a horrific crime. We work aggressively to fight it on and off our platforms, and to support law enforcement in its efforts to arrest and prosecute the criminals behind it,” a Meta spokesperson said in a statement. “Predators constantly change their tactics in their pursuit to harm children, and that’s why we have strict policies and technology to prevent them from finding or interacting with teens on our apps, and hire specialist teams who focus on understanding their evolving behaviors so we can eliminate abusive networks.”

The company said its specialist teams dismantled 27 abusive networks from 2020 to 2022 and disabled 490,000 accounts for violating its child safety policies.

But questions remain as to how some of the hashtags were allowed to proliferate in the first place. Some included graphic terms like #preteensex and #pedobait. Researchers also noted a prompt warning users that some search results may contain sexual imagery of children, but the prompt still allowed them to click through.

“Clearly, they're doing a terrible job of updating their system or (those) should have been hashtags that were banned from the start, not something that they should have gotten around to in 2023,” said Andrew Selepak, a social media professor at the University of Florida. “But this is a common problem with Facebook that they do a terrible job with moderation across lots of different content.”

Meta has come under fire in the past for failing to protect children, as well as for failing to curb the spread of harmful content and Instagram’s harmful mental health effects on teens.

“It's kind of a whack-a-mole thing where one problem pops up, goes away and a new problem pops up,” Selepak said.

Other platforms are also part of the problem with the online spread of sexual content involving minors, according to the researchers. Some seller accounts were found on Twitter, and services like Telegram and Discord are also leveraged, they wrote.

Twitter appeared to take the accounts down more aggressively, according to the researchers. They said the content does not appear to proliferate on TikTok because of its stricter and faster content enforcement.

Social media companies have struggled with content moderation for years and, despite advances in technology and huge cash investments, have not been able to stop the spread of harmful or offensive content.

“The sheer volume of posts that are made, the videos that are posted, the captions that are posted in every language known to have moderators to be able to examine all of that and all of those different languages,” Selepak said. “It's cost-prohibitive for a social media platform to be able to do all that with human moderators, yet AI can't do it either.”

The report said an industry-wide approach is needed to address sexually explicit content involving minors.

“An industry-wide initiative is needed to limit production, discovery, advertisement and distribution of SG-CSAM; more resources should be devoted to proactively identifying and stopping abuse,” the Stanford researchers wrote.
