Concerns regarding the prevalence of child sexual abuse material (CSAM) on Reddit have been highlighted by a recent social media post, in which a user identified as "Reddit Lies" stated, "Reddit must be such a wonderful website if you're a p*dophile." The sentiment underscores persistent criticism of the platform's content moderation efforts despite its stated zero-tolerance policy, and reflects broader public and organizational scrutiny of social media's role in combating online child exploitation.
Organizations like the National Center on Sexual Exploitation (NCOSE) have consistently criticized Reddit, placing it on their "Dirty Dozen List" for four consecutive years. NCOSE asserts that despite Reddit's public attempts to enhance its image and introduce new policies, these measures have not translated into effective prevention and removal of exploitative content in practice. They claim that the platform remains a hub for CSAM and image-based sexual abuse, easily accessible to users.
In response to such criticisms, Reddit maintains a strict policy against the sexual exploitation of minors, emphasizing its commitment to combating this illegal activity. The company utilizes a combination of automated technologies, including hash-matching techniques like PhotoDNA and YouTube CSAI Match, human review, and community reports to detect and remove violative content. Reddit states that confirmed CSAM is immediately removed, and offending accounts are permanently banned.
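The hash-matching approach mentioned above can be illustrated with a minimal sketch. Note the assumptions: PhotoDNA and CSAI Match are proprietary *perceptual* hashing systems that match content even after resizing or re-encoding; this sketch substitutes a plain cryptographic hash (SHA-256) as a simplified stand-in, and the blocklist digest shown is purely hypothetical.

```python
import hashlib

# Hypothetical blocklist of hex digests of known violative files.
# (Real platforms match against hash lists maintained by organizations
# such as NCMEC; the entry below is just the SHA-256 of b"test".)
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of an uploaded file's bytes."""
    return hashlib.sha256(data).hexdigest()

def is_known_match(data: bytes) -> bool:
    """Check an upload against the blocklist; a hit would trigger
    removal, an account ban, and a report to the relevant authority."""
    return sha256_digest(data) in KNOWN_HASHES

print(is_known_match(b"test"))   # exact bytes match the blocklist
print(is_known_match(b"test2"))  # any byte change defeats exact hashing
```

The second call illustrates why platforms rely on perceptual rather than cryptographic hashes: a single changed byte defeats exact matching, whereas perceptual hashes are designed to survive common image transformations.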
According to Reddit's Transparency Report for July to December 2023, the platform submitted 133,588 CyberTipline reports to the National Center for Missing and Exploited Children (NCMEC). This figure represents the company's official referrals of suspected child sexual exploitation, indicating the scale of detected material. Reddit's report also noted a 216% increase in user reports for violations of its Rule 4, which prohibits inappropriate and predatory behaviors involving minors.
The challenge of moderating vast amounts of user-generated content is complex, as evidenced by similar issues faced by other major social media platforms. Reports indicate that AI-driven moderation systems, while crucial for scale, can sometimes lead to false positives, resulting in wrongful account suspensions and significant distress for users. This highlights the ongoing difficulty in balancing robust enforcement with accurate identification in the fight against online child exploitation.