Laila Goubran


A history of human-machine trust

How we got here and what we need to do to move forward

Lack of trust in artificial intelligence is often considered one of the strongest barriers to the widespread adoption of AI technology and autonomous systems. Looking at popular media and technology journalism, it’s easy to assume that humans have trouble trusting machines. Digging deeper, however, it becomes obvious that our relationship with machines is complicated. Research suggests that people actually trust machines a lot more than they lead each other to believe. One study even shows that people will follow a guide robot in an emergency even after describing it as a “bad guide”.

As AI practitioners, we face the challenge of navigating this complex relationship between people and technology: we build AI systems and hope that people will use them, grow to trust them, and adopt them in their lives. In this article, I try to break down some of the factors that affect this relationship.

illustration by David Plunkert

What exactly is trust?

Trust is defined as the firm belief in the reliability, truth, ability, or strength of someone or something.

Even between people, trust is a complex and often fragile thing, and there is no reason it would be any less complex between people and machines. In her talk “What we don’t understand about trust”, the British philosopher Onora O’Neill points to three components of trust: competence, honesty, and reliability. The context of trust adds another layer of complexity: trust “to do what?” I can trust Netflix to recommend my next binge show, but I’ll be a lot more careful with an AI recommendation about my investments or medical treatment.

There are efforts to quantify and measure human trust in machines. One study evaluates how factors such as the perception of a system and a person’s attachment to it affect overall perceived trust. Other research focuses on the ethical standards, testing processes, and overall competency of the solution provider as the basis for a machine-trust index.
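To make the idea of a machine-trust index concrete, here is a minimal sketch of how such a score could be composed from O’Neill’s three components. The component scores, weights, and scale below are hypothetical and purely illustrative; they are not taken from any of the studies mentioned above.

```python
# Illustrative sketch of a weighted machine-trust index built from
# O'Neill's three components: competence, honesty, reliability.
# All numbers are hypothetical assumptions, not values from the research.

# Hypothetical 0-1 scores for a system, e.g. gathered from user surveys.
scores = {"competence": 0.82, "honesty": 0.60, "reliability": 0.91}

# Hypothetical weights reflecting how much each component matters in context.
weights = {"competence": 0.4, "honesty": 0.3, "reliability": 0.3}

# Weighted sum gives a single 0-1 trust index for this context.
trust_index = sum(scores[k] * weights[k] for k in scores)
print(f"Trust index: {trust_index:.2f}")  # -> Trust index: 0.78
```

The point of such an index is less the exact number than the reminder that trust is contextual: the same system would get different weights (and a different score) for recommending a show than for recommending a medical treatment.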

Historically, a number of concerns have affected people’s motivation to trust machines and artificially intelligent systems. The following are the concerns that I think have the biggest impact on the current state of human-machine trust:

1. Machines will take over

Popular culture is full of examples of how creative we get with this fear, and it is not a fear that appeared only with artificial intelligence. Ever since the first job was done by a machine instead of a person, or even since the earliest automata of ancient Greece, people have expressed concerns about being replaced by machines.

Digging deeper into the research shows that even this fear is not so simple. A recent survey by the Pew Research Center found that “65% of Americans expect that, by the year 2065, robots and computers will do much of the work currently done by humans”, yet the same survey showed that 80% believe their own job will still exist in 50 years.

“[…] they acknowledge robots are likely to replace human jobs on a massive scale — and yet they don’t personally feel threatened or endangered by them.”

The fear of “machines taking over” also includes the fear of a dehumanized world, the loss of control over the robots, and, in extreme scenarios, humans becoming slaves to the machines. While the increase in automation around us might be contributing to the growth of this fear, it doesn’t take a deep understanding of current AI technology to see how far we are from a singularity or a truly uncontrollable intelligent agent.

2. Everyone makes mistakes

According to studies at the Wharton School, we seem to be extra harsh when it comes to forgiving the mistakes an algorithm might make. We are much harder on systems than we are on other people, and even more so than we are on ourselves.

“But once people had seen an algorithm make a mistake in our experiments, they were very, very unlikely to use it and didn’t like it anymore.”

One of the reasons that might drive this harsh judgement is our dependency on machines’ consistency. Consistency is one of the reasons we love machines in the first place: we can rely on them to perform the same way and produce the same results every time, unaffected by the external circumstances that sway people.

In the case of mistakes, however, our love of machine consistency works to their disadvantage. We assume that the algorithm will keep making the same mistake in the future, and we don’t believe in its ability to learn and improve the way we do with other people and with ourselves.

This phenomenon of failing to use or trust a system or algorithm after learning about its imperfections is what Dietvorst et al. call “algorithm aversion.”

And of course, the amplification of those mistakes and imperfections by the media, especially amid the current hype storm around artificial intelligence, doesn’t help us overcome this aversion.

One concern is that people are less aware of instances of successful and useful algorithms and AI, while “unfortunate examples have received a disproportionate amount of media attention, emphasizing the message that we cannot rely on technology.”

3. The black box

Another problem facing AI in winning people’s trust is the increasing complexity of the algorithms. More and more people have a hard time understanding the technology behind these systems, let alone the details of how an algorithm makes a decision or a recommendation.

The inner workings of an AI system are often referred to as the “black box” of AI, and it is no secret that interacting with things we don’t understand can cause anxiety and fear and make us feel like we are losing control.

This was the biggest lesson from IBM’s Watson Oncology story: the system’s failure to explain to doctors the reasons behind its recommendations was a huge barrier to its adoption in their practice.

The black box can also create legal and liability issues. To avoid the black-box effect, explainable AI (XAI) is becoming a major research topic alongside the advancement of machine learning, and organizations like IBM are investing heavily in defining practices and regulatory compliance for AI explainability.

XAI Concept from DARPA XAI
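To give a feel for what an explanation can look like in practice, here is a minimal sketch of one common, model-agnostic XAI technique: permutation feature importance in scikit-learn. This is a generic illustration, not the method used by Watson Oncology, IBM’s explainability tooling, or DARPA’s XAI program; the dataset and model are stand-ins chosen only to keep the example self-contained.

```python
# Permutation feature importance: shuffle one input feature at a time and
# measure how much the model's test accuracy drops. Features whose shuffling
# hurts the most are the ones the model leans on -- a rough, human-readable
# answer to "what drove this system's decisions?"
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Repeat the shuffling several times per feature to average out noise.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five features whose shuffling hurts the model the most.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```

Even a simple listing like this changes the conversation: instead of “the system said so”, a doctor or analyst can at least see which inputs carried the most weight.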

4. No ghosts in the machine

Research still suggests that people trust machines more than they say they do. The previously mentioned Georgia Tech study showed that participants followed a robot leading them out of a building in a fire emergency, even when it was guiding them in a direction other than a clearly marked emergency exit.

Other studies show that people are more likely to hand over their credit card information to computerized agents than to human agents. And we can’t forget the famous ELIZA chatbot, which showed that even in the 1960s people were quicker to share personal information and intimate details of their lives with a bot than with another human.

At first glance, these findings may appear to contradict the general mistrust of AI and the fear of the rapid spread of AI algorithms into our everyday decisions. But it’s important to realize that the mistrust is not of the machine itself. After all, machines don’t gossip and don’t have ulterior or evil motives, despite what some popular culture would have you believe. It is the people behind those machines that we mistrust.

“Built into the systems are the human values of their developers, the commercial needs of their creators.”

Bias in algorithmic outcomes is the result of biased data fed into the system. Organizations can take advantage of people’s trust in machines for their own motives. And the misuse of personal information has to be programmed into the computer by a person. It is at the people behind the algorithms that our fear is directed, not the machine itself.

This fear can only be addressed by regulations, laws and compliance requirements that protect us from misuse of AI.

Now what?

The good news is that the research aimed at understanding human-machine trust also offers solutions for how this trust can be built and maintained. As mentioned, people are inclined to trust machines more than they say they do, so the challenge is to find ways to calibrate people’s trust in a system.

As with human relationships, it takes time and interaction to build trust. Honesty and transparency, especially about the limits of the system, are key to setting the right expectations and avoiding people’s harsh judgement of a system’s potential mistakes.

Furthermore, Dietvorst et al. found that people are likely to be more forgiving of an imperfect algorithm if they can modify it — even slightly. This finding speaks to people’s need to regain a sense of control over the decisions and recommendations of a system.

“This suggests that one can reduce algorithm aversion by giving people some control — even a slight amount — over an imperfect algorithm’s forecast.” — Dietvorst
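Here is a minimal sketch of what that “slight amount of control” can look like in practice: the system produces a forecast, and the user is allowed to nudge it within a small, fixed band. The 10% band and the example numbers are illustrative assumptions, not values from the Dietvorst et al. study.

```python
# Bounded human adjustment of an algorithm's forecast: the user can shift
# the model's output, but only within a small band, so the final number
# stays anchored to the algorithm while giving the person a sense of control.

def combined_forecast(model_forecast: float, user_adjustment: float,
                      max_adjust: float = 0.10) -> float:
    """Return the model's forecast shifted by a user adjustment that is
    clamped to at most +/- max_adjust (as a fraction of the forecast)."""
    limit = abs(model_forecast) * max_adjust
    clamped = max(-limit, min(limit, user_adjustment))
    return model_forecast + clamped

# Example: the model predicts 200 units of demand; the user asks for +50,
# but only +/-20 (10%) is allowed, so the final forecast becomes 220.
print(combined_forecast(200.0, 50.0))   # -> 220.0
print(combined_forecast(200.0, -5.0))   # -> 195.0
```

The design choice here mirrors the research finding: the adjustment band is deliberately narrow, so the algorithm still does most of the work, but the person never feels locked out of the decision.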

This strategy follows the alternative conceptualization of AI referred to as “augmented intelligence”, which emphasizes that cognitive technology is meant to enhance human intelligence rather than replace it. Augmented intelligence focuses on the idea that instead of fearing the technology, we should learn to work with it in order to reach our full potential.

The concept of augmented intelligence, combined with sound ethics, regulation, and explainable AI practices, seems to be the most promising key to overcoming the historical mistrust that people have built up against machines.

In closing, I leave you with Garry Kasparov’s thoughts on the war between humans and machines and his reflections on losing to Deep Blue in 1997.
