The recent changes in Google's brand safety policies mean stricter guidelines and more controls for advertisers on YouTube.
Google said Tuesday that it would change the default settings for ads so they run only against content that meets higher brand-safety standards and only against "legitimate creators" in its Partner Program.
The plan includes new account-level controls that make it easier for advertisers to exclude specific sites and channels from all of their AdWords for Video and Google Display Network campaigns. The changes will also make it simpler for brands to exclude high-risk content and give them more control over where their ads appear.
Although a handful of controls are already in place, Philipp Schindler, Google's chief business officer, wrote in a blog post that the company would also revisit its community guidelines to redefine what counts as "appropriate" content.
But Google's move reflects only one aspect of a changing industry that most believe needs stricter policies to protect brands.
Brand safety took on a new meaning the day The Times uncovered a flaw in Google's ad-serving platform that led Volkswagen, Toyota, Tesco and others to pull advertising from YouTube, but the internet giant isn't the only company that needs to rid its network of derogatory or horrifying content.
Facebook has had to deal with live videos of suicides and killings on Facebook Live. Early in March, Facebook launched a live chat support test through Messenger that gives users watching a live video a way to reach out to the person streaming it.
In one instance last weekend, a man streaming on Facebook Live was fatally shot by police after attempting to strike an officer's car, according to one report.
Earlier this year, a fourteen-year-old girl hanged herself while streaming on Facebook Live from the bathroom of her foster parents' home in Miami, according to a media outlet.