Why The Interaction Element Is Crucial In Explainable AI

AI has been weathering more than a few storms of late. As exciting as the prospects are for AI, there are enough glitches in its supposedly perfect potential to scare users off. From facial recognition with a racial bias to GPS algorithms sending huge trucks down the fastest route, which also happens to be completely unsafe for large vehicles, there are certainly more than a few problems with your average algorithm-based platform.

This has heralded a new age of Explainable AI: a scenario in which humans can understand how an AI program arrived at a specific answer or solution. It makes sense in light of the trucking and race issues highlighted above, right? It’s good to hold tech solutions to account.

Cue the data experts at their machines, tapping away to delve into the secrets of a specific AI problem. But is this a problem best left to data science? Or is it something that needs to be considered at the design stage?

Can you trust your AI?

Turn to PwC, and you’ll find an in-depth article talking up the benefits of Explainable AI. The question they pose is, “Can you trust your AI?” A well-placed question, to be sure, because this is about building trust. If people don’t feel confident in AI, it’s not going to work out.

In a PwC study, 67% of business leaders said they believe AI and automation will negatively impact stakeholder trust. As a business leader, it makes absolute sense to analyse AI programs, or “lift the lid” on the “black box” making the decisions. Because it’s true: sometimes the answers generated don’t seem to follow a logical path to their conclusions. As PwC points out, although in most cases the outcome may be benign, in others it can be extremely damaging to trust and perception.

Can we rely on black box Explainable AI?

So that’s the problem all wrapped up, right? We employ the right safeguards to sense-check AI programs. After all, AI is data driven, which is exactly why it’s all too easy to lump the problem into the data-science realm. But it also raises questions about transparency: where the data comes from, and how accessible that data is should a person request it.

Now we’re entering tricky waters, because consent around personal data is an issue that has sparked fierce debate in recent years. And, as this article in TechCrunch outlines, asking a company to reveal the intricacies of its AI program is similar to asking it to “disclose its source code.” That would finish off a business in seconds.

This is really about the realities of black box Explainable AI compared to truly transparent AI. If we want the industry to grow, neither simplified model seems to hold the perfect answer.
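To make that contrast a little more concrete, here’s a minimal sketch (a hypothetical toy example in Python with scikit-learn, not a reference to any of the programs discussed above) of what the “transparent” end of the spectrum looks like: an inherently interpretable model lets you read its decision logic straight from its learned weights, whereas a black box has to be probed after the fact.

```python
# A minimal sketch of "lifting the lid": an inherently interpretable
# model whose reasoning can be read straight from its coefficients.
# (Toy illustration only; dataset and model choice are assumptions.)
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(data.data, data.target)

# Each coefficient shows how strongly a feature pushes the prediction,
# which is exactly the visibility an opaque black-box model can't offer.
coefs = model.named_steps["logisticregression"].coef_[0]
top_features = sorted(zip(data.feature_names, coefs),
                      key=lambda pair: abs(pair[1]), reverse=True)[:5]
for name, weight in top_features:
    print(f"{name}: {weight:+.2f}")
```

The catch, and the reason the debate exists at all, is that simple, readable models like this often can’t match the accuracy of their opaque counterparts.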

The responsibility of UX designers

Which leads us to the idea that if there’s a social responsibility for Explainable AI, surely the design process of said AI system needs to be accountable too. Rather than retrospectively picking it apart, thought needs to be put into the design phase to ensure it’s fit for purpose.

As UX designers, this means that thinking about Explainable AI as a concept should be integral to the end-to-end process. If there is such an “end” to ongoing product development, of course. This comes down to considering how users will interpret an AI’s decisions early in the design process, and that intrinsically falls under the remit of designers as they research and perfect their products.

Safeguarding your AI products

As a business, all the noise and negative press around AI could certainly have placed you firmly in the 67% bracket cited by PwC. It could seem too risky to even entertain. Or, you could be furiously researching black box Explainable AI to keep your program on track post-launch.

But what about that earlier thought on considering interpretation as part of the design process? Could a combination of the two provide a more robust standard of AI program? Over at Forbes, tech expert David Talby tells us that it’s about “elevating your own AI projects.” His breakdown goes a little like this:

  1. Consider users’ experiences to build trust and confidence from the start
  2. Focus on building the most accurate AI models you can
  3. Understand what your users will value and apply it

The difficulty, of course, is that the technology is still very much in its infancy. To a point, we’re just feeling our way, seeing what works, and attempting to learn from the issues that arise along the way. This makes it even more vital to build in measures like these, and to keep questioning not just after the event, but at each step of a program’s conception.

In conclusion …

The debate around so-called black box Explainable AI and its effectiveness is as contentious as the argument around AI’s unreliability. In short, the most obvious answer for keeping such a new technology in check raises as many queries as AI itself does.

This is exactly why leaders in the field are talking about Explainable AI as a design issue as much as a data science one. Although it deals with data and its role in AI as a technology, it also boils down to the inner workings of the programs themselves: the things developers put into the design to make it tick along the way it does. Only by taking a well-rounded approach to Explainable AI can we hope to keep it heading in the right direction. In some ways, it’s rather like those huge trucks taking the fastest route to their destination, running headlong into danger. With Explainable AI, it seems the most obvious route may not give you the best answer.

Andrew Machin

With over 25 years’ experience in UX and digital strategy, Andrew has helped many national and global brands such as John Lewis, Harley Davidson, Johnson & Johnson, and Interflora create exceptional digital product experiences.

Through the success of such projects Andrew has received high-profile accolades that span innovation, strategy, and design, such as the Dadi Grand Prix Award and the Digital Impact Award for Innovation.

This experience has led to Andrew judging digital design awards, being featured in .net magazine, lecturing at Leeds University, and speaking at seminars and conferences across the UK.

