
Computer says no: The need for explainable AI

Posted by Alex Warren on 3rd December 2019

I’ve always been a bit of an AI cynic. For all the talk of robots taking over the world, the reality is that most artificial intelligence has been fundamentally crap.

Half of the AI efforts we’ve seen have turned out to be either thinly veiled If-This-Then-That algorithms or chatbots no more sophisticated than Clippy the paperclip.

Despite this personal cynicism, even I have to admit that in 2019, “genuine” AI has seen an explosion in real-world applications.

Artificial intelligence is everywhere. It’s being used to make investment decisions, to decide whether employees should be hired or fired, and even to pre-emptively diagnose diseases. In fact, AI-powered trading algorithms are already responsible for 50% of all stock market transactions, while 14 of the UK’s 43 police forces are adopting AI-powered ‘crime-prediction’ software to profile potential criminals.

For the Luddites among us, all of this may sound terrifying. But the reality is that there’s nothing wrong with AI being used to make these decisions — as long as it can justify its reasoning.

This, however, is where the problem lies. Currently, the vast majority of AI systems operate as a ‘black box’, with data being input and seemingly unrelated decisions being output. Unlike a traditional computer model, AI-based systems don’t follow a clear logic path; their decisions are — for want of a better phrase — their own.

This problem is only set to get worse with the rise of genetic programming. Here, computer algorithms ‘design themselves’ through a process of Darwinian natural selection. Computer code is initially generated randomly and then repeatedly shuffled to emulate reproduction. Every so often, a random mutation is thrown in to liven things up.
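To make the idea concrete, here is a deliberately simplified toy sketch (my own illustration, not any production system) of the evolutionary loop described above: candidates are generated at random, the fittest are selected, their code is “shuffled” together to emulate reproduction, and the occasional mutation is thrown in. Real genetic programming evolves program structures rather than strings, but the cycle is the same.

```python
import random
import string

TARGET = "explainable ai"
ALPHABET = string.ascii_lowercase + " "

def fitness(candidate):
    # Higher is better: how many characters already match the target.
    return sum(c == t for c, t in zip(candidate, TARGET))

def crossover(a, b):
    # "Reproduction": splice two parents at a random cut point.
    cut = random.randrange(len(TARGET))
    return a[:cut] + b[cut:]

def mutate(candidate, rate=0.05):
    # The occasional random mutation "to liven things up".
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def evolve(pop_size=200, generations=500):
    # Start from entirely random candidates.
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(pop_size)]
    for gen in range(generations):
        population.sort(key=fitness, reverse=True)
        if population[0] == TARGET:
            return gen, population[0]
        parents = population[: pop_size // 5]  # natural selection of the fittest
        population = [mutate(crossover(random.choice(parents),
                                       random.choice(parents)))
                      for _ in range(pop_size)]
    return generations, population[0]

if __name__ == "__main__":
    gen, best = evolve()
    print(f"Best candidate after {gen} generations: {best!r}")
```

Nobody “wrote” the winning candidate in any meaningful sense, which is precisely why the resulting behaviour is so hard to explain after the fact.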

This natural selection of code, combined with the opaque decision making of AI, is making it harder than ever to understand what is going on under the hood of our machines.

To address this concern, Google has recently thrown its weight behind the push for ‘explainable AI’.

Along with a series of new visual AI development tools, Google’s push also includes the adoption of counterfactuals within AI. In short, this means that AI systems are asked to justify their decisions by testing whether they would come to the same conclusions if particular pieces of input data were removed. By running this process many times, a clearer picture can emerge of how the AI is reaching its decisions.
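As a rough illustration of that process (a hand-rolled sketch, not Google’s actual tooling), the idea is to knock out one input at a time, re-run the model, and see how far the decision moves. The model and feature names below are hypothetical placeholders.

```python
from typing import Callable, Dict

def counterfactual_report(predict: Callable[[Dict[str, float]], float],
                          example: Dict[str, float],
                          baselines: Dict[str, float]) -> Dict[str, float]:
    """For each feature, swap in a neutral baseline value and record how far
    the model's output shifts from the original prediction."""
    original = predict(example)
    shifts = {}
    for name, baseline in baselines.items():
        counterfactual = dict(example)
        counterfactual[name] = baseline  # "remove" this piece of data
        shifts[name] = predict(counterfactual) - original
    return shifts

# Toy usage with a made-up loan-scoring model:
def toy_model(x: Dict[str, float]) -> float:
    return 0.6 * x["income"] - 0.3 * x["debt"] + 0.1 * x["age"]

applicant = {"income": 0.8, "debt": 0.5, "age": 0.4}
neutral = {"income": 0.0, "debt": 0.0, "age": 0.0}
print(counterfactual_report(toy_model, applicant, neutral))
```

Features whose removal barely moves the output clearly weren’t driving the decision; the ones that swing it are where the explanation lies.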

Sadly, even Google admits that this system isn’t perfect. Speaking on a recent Tech Tent podcast, Google’s Dr Andrew Moore suggested that counterfactuals cannot truly explain AI’s decisions. Instead, they simply provide a better “diagnosis” of how the decision came about.

While this sounds concerning, in reality, such a broad diagnosis may be more than enough to go on.

Black box thinking has always been an element of decision making — regardless of whether it’s a decision by a human or an AI. For most professionals, whether a lawyer, doctor, stockbroker or judge, there is an element of decision making that simply can’t be explained, if only because, as a species, we’re terrible at knowing our own minds.

The economist Daniel Susskind has explained this idea with the example of Tiger Woods:

“If you asked Tiger Woods to explain how he hits a golf ball so far, he might be able to offer you insight into a few of the thoughts that pass through his mind as he swings the club. He might also perhaps pass on a few hints. But he would struggle to articulate the complex network of accumulated heuristics, intuitions, and hand-to-eye interactions that have contributed to his supremacy as a golfer. Many of these will be unconscious, inculcated through repeated practice and use, and some so deeply embedded that even Tiger himself would be unaware of them. Yet none of this precludes us from building a mechanical swinging arm that could hit the golf ball further and straighter than Tiger.”

In short, most of us cannot explain every aspect of our own thinking. Instead, we can only give a rough diagnosis of how we arrived at our conclusions.

The question is, if such a broad diagnosis is considered good enough for humans, should it also be considered good enough for AI?

Alex Warren

Alex Warren is an expert in AI and marketing technologies. He has published two books, Spin Machines and Technoutopia, and is regularly quoted in PR, marketing and technology media. In his role as a Senior Account Director at Wildfire he helps tech brands build creative strategies that deliver results and cut through the marketing BS.