Disinformation and hate: they have been with us since time immemorial, but social media is now providing a rich breeding ground for their perniciousness. They infect our lives and create panic and distrust. They destabilise democracies, put people’s lives at risk and force people, particularly women, from public life.
Every day we are exposed to articles that spew lies and inflame intolerance. Over the last few months alone we have seen the Intelligence and Security Committee confirm what many have long suspected about the UK being a clear target of Russian disinformation. Elsewhere, white supremacists exploit the killing of George Floyd to spread hate; and some extraordinary crackpot theories about the origins of Covid-19 are spreading faster than the virus itself.
And we can expect it to get worse. The US presidential election is set to generate millions of pages of mendacity and nastiness. Within minutes of Democratic candidate Joe Biden’s announcement of Kamala Harris as his vice-presidential pick, deeply unpleasant lies about her were already being spread. It is going to be a dirty campaign. And as we near the production of a Covid-19 vaccine (keeping everything crossed here) we can expect the anti-vaxxers to flood the internet with their dangerous nonsense.
It is easy to dismiss these as the rantings of a few mad fools. But that is far from the case. Without wishing to be overly dramatic, or indeed to sound like a conspiracy nut myself, it is clear that a significant amount of disinformation is organised.
An Oxford University report published last year found evidence of organised social media manipulation campaigns in 70 countries, with at least one political party or government agency using social media to shape public attitudes domestically or in other states. Whether to gain power, hold on to it, destabilise a government or a political movement in their country or another, they are actively, deliberately spreading disinformation and division. We need to be worried.
For a long time, the focus has been on the platforms. We have looked to Facebook and Google to get their houses in order, to take responsibility for brand safety. I absolutely have placed a huge onus on the platforms and am not going to apologise for it.
But surely the twin evils of lies and hate are as problematic – actually, more so – for brands? As we saw when the whole brand safety furore broke a few years ago, no one wants to have their ads next to unsavoury content.
I know some brands decided to boycott Facebook in July in an effort to encourage the platform to address online hate. But all brands need to look at their own actions. The Global Disinformation Index (GDI) estimates that brands unwittingly provide some £250m every year to disinformation sites through the online ads served on them. So, if we are to make the internet a more salubrious space, now is the time for brands to step up.
Brands need to take a more discriminating approach to their brand safety controls. I would encourage them to stop using blanket bans on strings of words such as ‘Black Lives Matter’: there is no real evidence that a brand is damaged by appearing next to a hard news story – indeed, it may even benefit, because the reader is likely to stay on the page longer.
But the same is not true if your brand appears next to dodgy content. There is plenty of it out there, so I understand it might seem easier to create a list of unacceptable words to avoid trouble. But that approach can mean you miss out on audiences you really want to reach – legitimate reporting of Black Lives Matter, for example, was consumed in large numbers by people often regarded as ‘hard to reach’.
So what do you do? Well, the GDI has made the task easier. It produces an index providing disinformation risk ratings for news sites in media markets across the world. The ratings are neutral, independent and transparent, and are done at site level. The process has identified more than 20,000 monetised disinformation sites spreading deliberate, often malicious, deception.
So one thing all brands could do is put these sites on their blocked lists. You really don’t want to give any money to climate change deniers, white supremacists or conspiracy theorists. And you really don’t want to lend your brand, which you have invested so much in building, to those who would undermine our democracy.
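For the technically minded, the difference between a blanket keyword ban and a site-level blocklist can be sketched in a few lines. This is a minimal illustration only, with made-up domain names and a hypothetical `can_serve_ad` check – not part of any real ad-tech platform’s API: before an ad is served, the page’s domain is tested against a blocklist built from site-level risk ratings, rather than scanning the article for forbidden words.

```python
# Minimal sketch of site-level blocking (all names hypothetical):
# an ad request is allowed unless the page's domain appears on a
# blocklist derived from site-level risk ratings such as the GDI's.

from urllib.parse import urlparse

# Hypothetical set of domains flagged as monetised disinformation sites.
BLOCKED_DOMAINS = {"example-disinfo.com", "fake-news-site.net"}

def can_serve_ad(page_url: str) -> bool:
    """Allow the ad unless the page's domain is on the blocklist."""
    domain = urlparse(page_url).netloc.lower().removeprefix("www.")
    return domain not in BLOCKED_DOMAINS

# A hard news story on a legitimate site is fine to monetise...
print(can_serve_ad("https://news.example.org/black-lives-matter-report"))  # True
# ...while anything on a flagged domain is blocked, whatever its keywords.
print(can_serve_ad("https://www.example-disinfo.com/harmless-sounding-story"))  # False
```

The point of the sketch is that the decision keys on *where* the content lives, not *what words* it contains – so coverage of Black Lives Matter on a reputable news site is monetised, while a flagged site is blocked even when its headlines look innocuous.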