The future of AI, whichever industry you focus on, is a strange discussion for the layman, because the technology in its current state of maturity looks like a mix of stunning triumph and unmistakable work-in-progress.
I can speak to my phone, say “Okay, Google, show me photos of Ted” and my Pixel will quickly display images of my oldest son that it has accurately categorised in my Google Photos app.
There are cars that can drive themselves, to a large extent.
And yet, current AI systems have difficulty with causality and don’t seem to demonstrate reasoning.
Gary Marcus is a professor in the Department of Psychology at New York University and was previously founder and CEO of Geometric Intelligence, a machine learning company later acquired by Uber. In a paper titled The Next Decade in AI, he contrasts robust intelligence with what he calls “pointillistic intelligence, intelligence that works in many cases but fails in many other cases, ostensibly quite similar, in somewhat unpredictable fashion.”
Memorably, Marcus illustrates his point with the limitations of GPT-2, the text-generating neural network developed by OpenAI. He shares a series of tests in which the system is given sentence fragments and asked to generate a continuation. The results are enjoyably absurd. For example:
If you break a glass bottle that holds toy soldiers, the toy soldiers will probably… follow you in there.
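Tests of this kind are easy to reproduce. Below is a minimal sketch of the experiment, assuming the Hugging Face transformers library and its published `gpt2` checkpoint (Marcus's exact setup is not specified in this article, so the library, model name, and generation parameters here are illustrative assumptions):

```python
# Sketch: feed GPT-2 a sentence fragment and ask it to continue.
# Assumes the Hugging Face transformers library and the public
# "gpt2" checkpoint; Marcus's original tooling may have differed.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = ("If you break a glass bottle that holds toy soldiers, "
          "the toy soldiers will probably")

# Greedy decoding (do_sample=False) makes the continuation deterministic,
# though the specific output will vary across model versions.
result = generator(prompt, max_new_tokens=20, do_sample=False)
print(result[0]["generated_text"])
```

Because the model only continues text by predicting likely next tokens, nothing in this setup requires it to reason about bottles, breakage, or soldiers, which is exactly the gap Marcus is probing.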
“Even with massive amounts of data, and new architectures,” Marcus argues, “the knowledge gathered by contemporary neural networks remains spotty and pointillistic”.
As Brian Bergstein puts it in an article for MIT Technology Review titled What AI still can’t do: “It’s as if you knew that the presence of clouds made rain likelier, but you didn’t know clouds caused rain.”