YouTube says it has put in place new safety measures aimed at stopping people going down a “rabbit hole” of extremist content, while also educating brands on how to avoid showing up alongside it.
This year, the Google-owned video platform found itself at the centre of a growing scandal after an investigation by The Times revealed ads from big brands were appearing next to unsavoury or illegal content posted by groups including terrorists, white supremacists and pornographers. Some 250 brands have since decided to pull their advertising from the platform – although a number have now returned.
Speaking at a YouTube press breakfast event this morning (20 June) at the Cannes Lions festival, Google’s president of EMEA business and operations, Matt Brittin, admitted the brand “doesn’t always get it right”, but said it has been looking at different ways it can improve brand safety.
Its first area of focus is the type of content that should be on the platform at all, and what is deemed “not acceptable”. Secondly, since not all of its content is monetised, YouTube is taking a closer look at what “subset of content” is appropriate for advertising. Lastly, within that subset, it looks to distinguish which material is appropriate for brands to advertise alongside.
Using machine learning to tackle terrorist content
Tackling terrorist and extremist content is high on its list of priorities. Brittin said YouTube is now “using the latest technology” to help avoid the spread of such views.
“We’re using machine learning in the fight against this. If someone uploads a video that we identify as unacceptable, we can fingerprint it using machine learning to stop it being reposted. We also work with industry colleagues so that we can share it with others, so it won’t be uploaded on Facebook, Twitter or elsewhere,” he said.
“What we’re doing now is using tech more proactively to identify patterns. We now have better detection [by] using machine learning.”
YouTube is also doubling the number of experts it is working with to identify what content is deemed acceptable or unacceptable – which it admits can be “really hard” to do.
“In many cases, it’s a man talking to a camera about politics. [Inappropriate content] is quite hard to identify – it’s a nuanced decision in some cases. Others are more clear-cut,” he explained.
“We have worked for a long time with experts – NGOs, experts in particular regions, Europol, the Home Office and government security departments around the world – to help us identify this, and we’re doubling the number of experts to advance this as well.”
Brittin said the brand is also setting “tougher” standards when it comes to potentially offensive and controversial content, which means it cannot be monetised, be commented on or show up in suggested lists. “It’s so that no one wakes up in front of YouTube [and finds] they’re looking at something they didn’t choose to watch. We don’t want people going down a rabbit hole of this type of content,” he added.
Besides focusing on brand safety, YouTube says it is also shifting its attention to the field of early intervention. It is working with its sister company Jigsaw to use ad targeting and partnerships to fight radicalisation.
“[It helps] put content in front of people who are exploring some of these topics, which helps counteract their shift towards radicalisation. This is at an early stage, but we’re looking at how we can extend these insights working with partners,” he said.
YouTube is also working with brands more closely to ensure they know how to use the controls that decide where their ads show up, as he admitted they were previously “too complicated”.
He concluded: “We want to run through with them to think more clearly about what is a suitable environment for their brand, and whether they are implementing the controls in the right way.”