Humankind has a deep-rooted discomfort with artificial intelligence. There’s a reason why films like 2001: A Space Odyssey, and more recently, Ex Machina, captivate our imaginations. And while we’re nowhere near creating artificial intelligence as sophisticated as HAL 9000 or Ava, we still have a strong distrust of the technologies that already permeate our lives, even if they make our choices easier by predicting our preferences and tendencies, and in many cases, doing our work for us.
At ProductTank MTL this past Wednesday night, three of Montreal’s AI product leaders — Jean-François Gagné, CEO of Element AI; Alexis Smirnov, co-founder of Dialogue; and Jason MacDonald, product manager at Acquisio — spoke to a packed audience about developing AI technologies, how artificial intelligence can be integrated into existing products, and what they see as the future of artificial intelligence.
While they likely didn’t intend to, they all spoke to a simple but pivotal problem: in order for artificial intelligence to succeed, we need to trust it, and in order for people to trust AI technologies, we need to feel that we can actually understand them.
“There isn’t any single person in this room who doesn’t ignore algorithms and the advice that they give us every day,” said MacDonald. “It happens when you log onto Netflix; when your GPS tells you to go this way and not that way. We ignore these things because we ultimately want control.”
“You can be against it, or you can try to be part of the conversation.” – Jean-François Gagné, CEO of Element AI
Gagné, who is CEO at Montreal’s AI-first incubator and accelerator, lingered on the same point: “Our brains have a really hard time grasping what goes on with these systems,” he said. “It’s about trust; if your users trust the algorithm and they feel like they’re in control of it, ultimately they’ll engage it.” On the other hand, “if you minimize [the UI] too much, and take away too much control, people won’t push the button.”
Gagné dug into an example where artificial intelligence is used to plan transportation logistics for a company like PepsiCo. The system performs 10 percent better than their current operations, and at first everyone is excited. But then they ask why. One question leads to another, and the clients grow less convinced by the system because they want to be able to understand it, but they can’t.
“This is the fundamental problem that I live every single day,” said MacDonald, echoing Gagné’s point. “People trust human justifications over algorithms. They naturally want to be able to justify how things transpired and got there.”
He described the typical process that takes place with all of his clients when they first start using Acquisio’s product.
“I’d wake up on Monday, I’d have a conversation with somebody: they loved it! Then day two: I’m getting huge concerns. Nervousness. They’re trying to understand how it works. They’re trying to interpret the algorithms.”
For Gagné, the solution to this problem is transparency. “As you design the systems, you need to plan for the questions that people are going to ask you in order to build trust.”
He noted that at JDA, a software company specializing in supply-chain solutions, his team ended up putting almost as much work into building interpretation and explanations for customers as they did into actually creating their technologies.
Fortunately, people can be convinced to trust algorithms. We already rely on them every day in a huge number of the applications we use on our phones, computers and even while watching television. And given that many of the new models being built can replicate human behaviours to simplify tasks across industries, it won’t be a matter of if, but a matter of when.
“There are all sorts of things that can be done through AI, through deep learning, natural language understanding, and integrated into a human experience,” said Alexis Smirnov, as he explained the differences between AI-first and AI-second technologies.
Deciding how to use artificial intelligence, whether to base a product on AI or to use AI to improve an existing product, then becomes the important question.
“I like to compare the kind of tools we’re starting to get access to, to when people discovered fire,” said Gagné. “Fire can do really bad things, but it also changed everything.”
“We should start thinking about what we want out of this, as well as what we don’t want in order to shape that future,” he concluded. “Then you can do two things: you can be against it, or you can try to be part of the conversation.”