As we learned during the general election, political campaigns now routinely involve paid social advertising utilising a variety of data to identify likely supporters or swing voters.
Such ‘social scoring’, whether done manually or by an algorithm, is concerning to some. Professor John Rust of Cambridge University’s Psychometric Centre told the Guardian: “The danger of not having regulation around the sort of data you can get from Facebook and elsewhere is clear. With this, a computer can actually do psychology; it can predict and potentially control human behaviour.”
He finds it “incredibly dangerous” that people’s “attitudes are being changed behind their backs”.
User profiling is nothing novel. Dynamic pricing and credit rating, for example, may seem similarly unfair to many – using your mobile device or computer as a proxy for wealth, or your credit history as an indicator of financial risk.
Social scoring using public social media posts has been fair game for a while too, with employers and landlords checking for anything that may set off alarm bells. Some companies will even check private posts and messages on landlords’ behalf, exploiting the competitive property market by asking prospective tenants to sign up and hand over access.
Such software cannot legally use factors such as age and pregnancy to determine suitability, but, as an article in Gawker explained, it can determine estimates of a tenant’s “extroversion, neuroticism, openness, agreeableness, and conscientiousness”.
Informed consent and the ‘black box’
The use of machine learning is ramping up quickly. IBM Watson offers a suite of off-the-shelf functionality and Google provides a whole range of APIs. Martech vendors are adding machine learning features and client-side companies are looking to employ data scientists to take advantage of their wealth of customer data.
Now is the perfect time for marketers to increase their understanding, not least because the General Data Protection Regulation (GDPR) is due to come into force in May 2018, stipulating that processing of personal data must be “lawful, fair and transparent”.
There are many examples where this arguably isn’t the case. Harvard research suggests internet searches for “black-identifying” names generate advertisements associated with arrest records far more often than those for “white-identifying” names. Discrimination needs to be actively guarded against.
The Information Commissioner’s Office’s (ICO) advice on getting consumers’ consent for data processing argues that “informed trust” will give a competitive advantage and reduce the risk of backlash.
This might be tricky for AI. The ICO acknowledges that analysing big data for unknown patterns involves “unpredictability by design”, and that deep learning brings an “inevitable opacity that makes it very difficult to understand the reasons for decisions made as a result”.
This is summed up nicely in a paper by Christopher Kuner et al: “In practice, how can informed consent be obtained in relation to a process that may be inherently non-transparent (a ‘black box’)?”
What’s clear is that more thought is needed in this area. Guidance on the GDPR’s ‘profiling’ provisions is due from the ICO later in the year, alongside commissioned research into social scoring as it applies to employment, housing and finance.
A new world of data
These questions of transparent and fair data usage will of course only multiply. One only has to look at the impact of internet of things technology, such as tracking sensors in out-of-home advertising, to see what the eventual data landscape could look like.
Consent is becoming a thorny issue and marketers everywhere need to understand what measures their data protection officers are putting into place for the GDPR, and how that affects marketing workflow.
But we should not let the talk of profiling demonise the technology of machine learning. It is only as useful and as fair as data scientists enable it to be. What’s more, we shouldn’t forget that human decision making is fallible and often prejudiced.
In fact, Kuner et al propose that algorithmic processes have a role to play in a fairer society, suggesting “it may in future be feasible to use an algorithmic process to demonstrate the lawfulness, fairness, and transparency of a decision made by either a human or a machine to a greater extent than is possible via any human review of the decision in question”.
Robots as arbiters. An interesting thought.
Marketing Week will be hosting a conference focused on chatbots and machine learning on 4 July. You can book tickets for Supercharged here.
Ben Davis is senior writer at Marketing Week’s sister title Econsultancy