Content and trigger warning: This article has references to child predators and how they operate on YouTube. The video linked, while being informative, is incredibly disturbing and shows real videos with predatory comments attached to them.
On Thursday morning, September 19, 2019, numerous verified YouTube content creators woke up to an email from the platform informing them of new verification standards that would go into effect in October. Previously, a YouTube channel could become verified once 100,000 accounts had subscribed to it. Many creators took the de-verification as a slap in the face. Some pointed out that their ability to network among verified channels had been impacted. Others were disappointed that what seemed like validation from the platform they had helped to build was being stripped away. Not being a YouTube content creator myself, I was simply curious: why would the most popular free video hosting platform in the world openly insult its largest network of contributors?
Many creators speculated that YouTube was pandering to corporate content such as celebrities, television networks, and movie production companies. The speculation was correct, in part: YouTube made the change to appeal to corporate interests, but not only those mentioned.
Back in 2017, stories came to light about child predator rings operating in the comments sections of YouTube videos. Predators would watch a video of a child posted to YouTube and timestamp the moments where the child was in “compromising” positions. These videos would then be shared among the ring, whose members would openly comment on how attracted they were to the children.
Advertisers became angry when they found that their lead-in and banner ads had been playing on videos that child predator rings were using to communicate and share information. In response, YouTube began deactivating the comments sections of the affected videos. This did not stop the behavior in any way; the predators could simply repost the videos from a new account and/or move on to different videos whose comments had not been disabled. The next step YouTube took, in trying to reassure its advertisers, was to guarantee where their ads would play. How is it doing that? That is what led to the de-verifications: by making sure that the only verified accounts belong to individuals or companies with a following outside YouTube.
The new verification criteria are vaguer than the previous numerical standard. A Google search provides the following via YouTube:
“We’ve updated the eligibility criteria for verification badges on YouTube. This change is to help viewers distinguish the official channel of a creator, celebrity, or brand. In the next weeks, verified channels will also get a new look.
Verified channel eligibility
Channel verification is proactively provided to creators, artists, companies or public figures to help distinguish their official channel. There’s no process to request channel verification.
Channels are typically verified if they:
-Have built a large audience and community on YouTube.
-Are widely recognized outside of YouTube and have a strong presence online.
-Or, have a channel name that could be confused with other channels on YouTube.
If your channel is verified, it’ll stay verified unless you change your channel name. If you change your channel’s name, the renamed channel won’t be verified. YouTube reserves the right to revoke verification or terminate your channel if you violate our Community Guidelines or the YouTube Terms of Service.”
If we were only examining the wording of the statement, it would indicate that the now-unverified content creators were correct in believing they had been demoted by the platform. But when you dig deeper into the dark reality, the move appears to be a dodge. Instead of actually putting money behind eliminating child predators from YouTube, the company is creating a false sense of security for advertisers and alienating the talented content creators who have worked hard to build their communities.
This feels a lot like YouTube saying, “Hey, look over there!” The real issue is that child predators operate freely on YouTube, sharing videos and commenting, without YouTube taking any direct action to protect its users. In a world where advertiser revenue is necessary to keep a free video-sharing platform up and running, some form of effective action needs to be taken to protect both users and advertisers. De-verifying content creators doesn’t provide any real safety for responsible YouTube users or investors; it just disrespects the very people who helped build the platform in the first place.