Pornhub has published its first-ever transparency report that sheds light on its moderation practices and on the reports it received from January to December 2020.
Pornhub removed a large amount of content and made major changes last December after the New York Times reported that its lax policy enforcement allowed it to monetize rape and child exploitation videos.
Pornhub removed 653,465 pieces of content that violated its guidelines. These include videos depicting minors and non-consensual content, such as revenge pornography and doxing attempts. It also removed videos containing animal harm, violence, and prohibited bodily fluids.
The company explained that it addresses Child Sexual Abuse Material (CSAM) through its own moderation efforts and through reports submitted by the National Center for Missing and Exploited Children.
The center submitted over 13,000 potential CSAM reports last year, of which 4,171 were unique and the rest were duplicates.
Pornhub said it uses several detection technologies to moderate content before publishing. In 2020, it scanned all previously uploaded videos against YouTube’s CSAI Match, the video platform’s proprietary technology for identifying child abuse imagery.
It also scanned all previously submitted photos against Microsoft’s PhotoDNA, which was designed for the same purpose.
Pornhub says it will continue using both technologies to scan all videos submitted to its platform. In addition, the website uses Google’s Content Safety API, MediaWise cyber-fingerprinting software, and Safeguard, its own image-recognition technology meant to combat both CSAM and non-consensual videos.
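The scanning tools named above are proprietary, but they share a common principle: compute a fingerprint of each upload and check it against a database of fingerprints of known abusive material. The sketch below illustrates that idea in simplified form, using an exact SHA-256 digest in place of the robust perceptual hashes tools like PhotoDNA and CSAI Match actually use; all names and sample data here are illustrative, not from any real system.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Compute a digest standing in for a media fingerprint.

    Real matching systems use perceptual hashes that tolerate
    re-encoding, resizing, and cropping; an exact cryptographic
    hash is used here only to keep the example self-contained.
    """
    return hashlib.sha256(data).hexdigest()

# Hypothetical database of fingerprints of known prohibited material.
known_hashes = {fingerprint(b"known-bad-sample")}

def should_block(upload: bytes) -> bool:
    """Flag an upload if its fingerprint matches the known-hash set."""
    return fingerprint(upload) in known_hashes

print(should_block(b"known-bad-sample"))    # matches the database
print(should_block(b"new-original-upload")) # no match
```

In practice the set lookup is replaced by a nearest-neighbor search over perceptual hashes, so that slightly altered copies of known material still match.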