People + AI Research @ Google

Generative AI is reshaping our mental models of how products work. Product teams must adjust.

By Reena Jana, Mahima Pushkarna, and Soojin Jeong

“Our mental models shape our understanding of the world around us, and our ability to navigate it.” Illustration by Thoka Maer for the People + AI Guidebook

As we’ve been updating the People + AI Guidebook for the age of generative AI — and updating Google’s products and product policies accordingly — we’re also revisiting the history of mental models in the context of this rapidly evolving technology. Focusing on users’ mental models can help UX (user experience) designers adjust and shape the generative AI user experience in products.

First, for context: a brief history of mental models. The concept of “mental models” is widely believed to have originated 80 years ago with a 1943 publication by Scottish psychologist Kenneth Craik, who theorized that people construct microcosmic representations of the world in their minds to help them understand and process their experiences. Donald Norman later applied the concept to interaction design in his book The Design of Everyday Things, shifting the focus of mental models to the minds of product designers. Designers, he wrote, communicate their visions — their mental models — of how a product should work within the product’s design itself.

The first edition of Google’s People + AI Guidebook, published in 2019, explored emerging mental models for AI. In that edition, we discussed balancing exciting marketing messages about AI’s potential with the reality of AI’s technical capabilities and limitations at the time. “Many products set users up for disappointment by promising that ‘AI magic’ will help them accomplish their tasks,” we said. “This kind of messaging can establish mental models that overestimate what the product can actually do.”

Mental models are evolving with AI

Fast forward to this year: Google and Ipsos conducted one of the largest global AI surveys to date. More than 17,000 people across 17 countries participated, resulting in a new report: “Our Life With AI.” While popular headlines seem to focus on healthy skepticism and caution, the research shows the public has optimism about the future impacts of AI: “Looking ahead 25 years, respondents around the world believe AI will be a force for good in every sphere surveyed, from healthcare to education to quality of life to addressing poverty and discrimination.”

  • 51% of workers across the 17 countries say AI will have a positive impact on their job when looking out five years
  • 39% of workers who feel their job or industry will be impacted by AI anticipate that tasks will take less time with AI
  • 63% of all workers surveyed believe that in 25 years, AI will have a positive impact on work-life balance
  • 58% of all respondents (workers and non-workers) believe AI will help them have more fulfilling ways to spend time

This study offers updated insights into current mental models of AI’s promise.

But how have mental models evolved as generative AI has emerged and dominated headlines? To summarize what we’ve learned during our research & development stages, and to preview content you’ll find in our forthcoming update of the Guidebook, Reena Jana, PAIR’s editor, chatted with Mahima Pushkarna, PAIR’s design lead, and Soojin Jeong, Research Team Lead for Google’s AIUX team, about what they’ve learned from their research and from leading workshops with product teams across Google and the broader AI ecosystem.

So, as UX design leads working on generative AI products, how do you each define the term “mental model”?

Soojin: Mental models are internal representations that people draw on to explain how something works. Mental models can be taught formally, as a driving instructor teaches students the basics of operating a car. More often, mental models are the result of both shared wisdom (the steering wheel controls direction) and individual experience (braking too hard is unpleasant).

Mahima: People form mental models for everything they interact with, including products, places, and people. These are a person’s belief systems that help set expectations for what a product can and can’t do, how they can interact with it, the degree to which they can trust it, and what kind of value they can expect to get from it. People constantly evolve their mental models — such as when they use a well-loved product in a new context or for a new use case, or when they discover a new feature. These need not be accurate descriptions of how products work — but a working mental model helps people predict and control their experiences.

Illustration by Thoka Maer for the People + AI Guidebook

Why is it important for UX designers to consider mental models when designing AI products?

Mahima: Mismatched mental models can look like a person expecting too much of a product that is still being improved, or expecting too little and not getting as much value as they should. These mismatches can lead to unmet expectations, frustration, lack of transparency, misuse, and product abandonment, as well as harms that specific groups of users may experience unfairly — and they can ultimately erode user trust.

This can happen when a product focuses on a feature’s net benefits without explaining what the product can or cannot do, and how the product works. It can also happen when teams ignore affordances or do not consider the user experience of earlier or similar versions of the feature.

When people over-trust the accuracy of an AI model’s outcome, they may make decisions with adverse and unintended effects, causing them to lose trust in other, unrelated features of the product.

Soojin: Computing has gone through its own long history of mental models; the first consumer-facing mental models were introduced when graphical user interfaces (GUIs) borrowed “desktop” metaphors to guide users on file management. With the introduction of AI, a new set of challenges has emerged: providing users with the right frameworks to get the most out of AI. In a sense, mental models have the potential to both narrow and expand how people use new technologies. And AI innovators will need to carefully chart a course for the mental models that best enable users to benefit from AI and reduce any harm.

How are people’s emerging mental models of generative AI products different from non-generative AI mental models?

Mahima: People interact with generative AI models directly. The model now plays a central role in the user interface (UI), rather than the smaller, behind-the-scenes role non-generative AI typically plays, such as organizing the UI for the person using an app (e.g., spam filters or spelling correction). With generative AI, people can ask a model to perform a range of tasks directly. So people react to content generated on the fly by AI, rather than content that is curated, cataloged, and served.

With generative AI products, people need to express their intent through natural language prompts, and increasingly through multimodal inputs. This generative AI experience can be ambiguous, abstract, and even culturally insensitive, unlike experiences built from carefully pre-scripted components in the product interface.

People are also interacting with increasingly human-like visual or conversational styles, so it can be easy to ascribe qualities of self-awareness to the AI and form a mental model that anthropomorphizes generative AI, which can lead to misunderstandings if it’s unclear what content is AI-generated and what isn’t.

Soojin: Generative AI represents a paradigm shift in technological progress — and people’s social experiences. AI was invented, but it’s really more like a discovery. Even research scientists working on it are perpetually surprised by what it can do.

People are currently hearing competing narratives about the nature of AI and its potential impact on society: that AI is a familiar and gradual improvement, and that generative AI is an unfamiliar and sudden revolution. Creating a more transparent and informed public discourse about AI can combat misinformation and mitigate confusion.

We need to help people move beyond thinking of AI as a “tool” that one learns to operate and control and toward thinking of it as a “collaborator,” and in the case of generative AI, as a “co-creator.”

A tool is narrowly defined by what it is designed to do; if it doesn’t suit their purpose, people easily walk away or see it as having no use. You don’t nurture and teach a tool. But learning, of course, is precisely what AI does — and product leaders need to elevate and expand how people use AI. If we don’t shift people’s mindset away from AI as a purpose-built tool, many people who don’t see an immediate benefit from AI will likely abandon it before realizing its full potential.

Animation by Ari Alberich for the People + AI Guidebook

What are some practical ways to do so?

Soojin: To change the way people think about AI — to elevate their mental model of it from that of a tool to that of a collaborator — AI innovators must do two things: Promote examples of novel use cases, and imbue the AI with the language and characteristics of a partner, not a tool.

A tool asks, “What can I do for you?” A partner asks, “What are you trying to accomplish, and what are your goals?” It thinks big picture and asks big questions. Collaborative AI (as a partner) steps into the creative process early on, during the inspiration & ideation stage, and helps the user better understand and conceptualize what they want and what they are willing to share, not just how to get it.

Mahima: Because mental models are dynamic, product teams need to identify opportunities and moments at which they can help users calibrate their mental models, especially when they interact with human-like forms that don’t look or sound like them, or that are unnatural in their contexts.

For example, generative AI can support code generation, translating code between languages, creating new unit tests, and much more. Let’s think about building a generative AI product for developers, such as Google’s Codey, and how you would consider their mental models of AI while designing the product for them. Understanding users’ expectations, in this case developers, can help you reinforce mental models for appropriate contexts — whether it’s about how the product works as a whole, how the AI feature works, or why developers are shown a certain prediction.
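
To make this concrete, here is a minimal, hypothetical sketch of one way such a product could reinforce developers’ mental models: pairing each generated suggestion with a short “why you’re seeing this” rationale and a confidence signal. The Suggestion fields and present function are illustrative assumptions of ours, not Codey’s actual API.

```python
# Hypothetical sketch: surface a rationale and a confidence signal with each
# AI-generated code suggestion, so developers can calibrate their mental model
# of why a prediction is shown. Illustrative only; not a real product API.
from dataclasses import dataclass

@dataclass
class Suggestion:
    code: str          # the generated completion
    rationale: str     # short, contextual "why you're seeing this" note
    confidence: float  # assumed model confidence in [0, 1]

def present(suggestion: Suggestion) -> str:
    """Render a suggestion, hedging the framing when confidence is low."""
    hedge = "Suggested" if suggestion.confidence >= 0.8 else "Possible (low-confidence)"
    return (
        f"{hedge} completion:\n{suggestion.code}\n"
        f"Why you're seeing this: {suggestion.rationale}"
    )

print(present(Suggestion(
    code="def mean(xs): return sum(xs) / len(xs)",
    rationale="The cursor follows a docstring describing an average.",
    confidence=0.72,
)))
```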

When people encounter AI in new contexts, their mental models are reshaped. It’s crucial to help people update their mental models when introducing AI-powered automation into manual tasks. For example, using AI systems to automate code writing can shift developer tasks from writing code to reviewing code. Such a shift is a good signal that an existing mental model will need to be updated.

Consider the scope of the change in mental models and understand which user decisions are influenced by existing mental models. When interacting with AI-powered features, what decisions do users expect to make? How does this influence their expectations of AI performance? Inversely, how does introducing AI change how people would otherwise make these decisions? When using AI to automate functions, how do users expect their tasks to change?

For example, developers perform tasks within a single workflow that vary in time-sensitivity, and make decisions of varying complexity, as they use the AI product to fulfill their goals.

Use contextually appropriate explanations and interactions that factor in people’s existing knowledge and mental models of your AI product. How do expert or novice users utilize different product functionalities? For example, expert AI developers already have a good mental model of how AI systems work, so high-level explanations may be redundant or disruptive to their workflows. However, less experienced developers may need guidance that can help them build comprehensive and/or accurate mental models of how AI works. In your user research, find indicators of the levels of explanation that specific users need.
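
As one hedged illustration of this idea, explanation depth could vary with a user’s declared or inferred experience level. The sketch below uses hypothetical names (Expertise, annotate) and is an assumption of ours, not a prescribed implementation:

```python
# Hypothetical sketch: tailor explanation depth to user expertise, so experts
# aren't interrupted while novices get mental-model-building guidance.
from enum import Enum

class Expertise(Enum):
    NOVICE = "novice"
    EXPERT = "expert"

EXPLANATIONS = {
    Expertise.NOVICE: (
        "This suggestion was generated by an AI model. "
        "It may be incorrect; review it before accepting."
    ),
    Expertise.EXPERT: "",  # experts see the suggestion with no interruption
}

def annotate(suggestion: str, level: Expertise) -> str:
    """Attach a level-appropriate explanation to a generated suggestion."""
    note = EXPLANATIONS[level]
    return f"{suggestion}  # {note}" if note else suggestion

print(annotate("def add(a, b): return a + b", Expertise.NOVICE))
print(annotate("def add(a, b): return a + b", Expertise.EXPERT))
```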

In other words, it’s necessary to help users build trust in generative AI as a partner.

Soojin: As we continue to develop AI, we hold the mental model that AI will help us in more meaningful ways. Enabling the impact that AI might have depends on how we develop a trust-building process with AI and AI products.

Our team’s global user research suggests that the two pillars of trust building are “knowledge of users” and the ability to “act on behalf of users.” By creating moments where AI can seek permission “to know” (user intent) and permission “to act” with those goals in mind, we see a clear pathway for AI innovators to earn trust incrementally, over time.
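
As a concrete sketch of that pattern (a toy scenario with assumed names, not any actual Google product flow), an assistant might gate both context-reading and action-taking behind explicit permission prompts:

```python
# Hypothetical sketch of incremental trust building: the assistant asks for
# "permission to know" before reading user context, and separately for
# "permission to act" before doing anything on the user's behalf.

def ask(prompt: str) -> bool:
    """Request explicit user permission; defaults to 'no'."""
    return input(f"{prompt} [y/N] ").strip().lower() == "y"

def trip_assistant():
    dates = None
    # Permission to know: learn the user's intent before reading any context.
    if ask("May I look at your calendar to find free travel dates?"):
        dates = "2025-06-10 to 2025-06-14"  # stand-in for a real calendar lookup
    # Permission to act: act on the user's behalf only with those goals in mind.
    if dates and ask(f"Shall I draft a booking request for {dates}?"):
        print(f"Drafting a booking request for {dates}...")
    else:
        print("No action taken on your behalf.")

if __name__ == "__main__":
    trip_assistant()
```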

In the short term, permission seeking is the key trust-building step toward more consequential uses of AI. Over time, as users grow more accustomed to AI, the focus will shift toward creating AI that collaborates with users (co-shaping and refining the results together), rather than merely executing tasks.

We’d like to thank Ursula Lauriston of Google’s AIUX team, who edited portions of this post.
