People + AI Research @ Google

Updating the People + AI Guidebook in the age of generative AI

By Reena Jana and Mahima Pushkarna

Motion Illustration for Google by Ari Alberich.

Half a decade ago, in 2018, a hundred or so Googlers across product and research teams, geographies, and backgrounds convened in person with a shared intent: to co-write a guidebook offering practical guidance and best practices for designing human-centered AI. We were two of the enthusiastic contributors to that initial brainstorm. The 2018 exercise, led by two Google alumni, PAIR (People + AI Research) co-founder Jess Holbrook and User Experience Researcher Kristie Fisher, with the guidance of PAIR co-founders Fernanda Viégas and Martin Wattenberg, resulted in the 2019 external launch of the People + AI Guidebook.

Flash forward to 2023: as generative AI (genAI) products and experimental services came to market this year across the AI ecosystem, we noticed a spike in international traffic and unique viewers of the Guidebook, up 560% between February and August 2023. This data suggested that people around the world had a clear need and were searching for guidance on building people-centric AI.

In addition, since the last update, numerous new international policy directives and frameworks have emerged on building safe, secure, and trustworthy AI, which is to say human-centered, responsible AI. These include the draft legislation of the AI Act in the European Union; the voluntary industry commitments on AI announced by the US White House, which paved the way for President Biden’s October 30, 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI; the G7’s Code of Conduct for organizations developing advanced AI systems; and the NIST (US National Institute of Standards and Technology) AI Risk Management Framework. We realized that while these directives and frameworks state what needs to be done to build responsible genAI from a public policy standpoint, few resources yet exist for the teams designing and building genAI products on how to do so. We seek to offer some of that how as we update the Guidebook.

Over the next few weeks and months, here on Medium, we’ll offer drafts and specific takeaways from the forthcoming genAI update. We encourage you to comment on these posts and help us refine this work.

The Guidebook is a living, breathing document; collaborating on the content and testing it are necessary to offer the most useful and timely shared advice.

Before we dive into previews of the Guidebook content, we want to share a basic starting point: how we’re framing our edit of the foundational guidance with fresh suggestions for designing with generative AI. We started by clarifying the benefits AI could offer products before the recent generative advances, and what those advances can now bring.

Novel models, novel challenges

Our intent for the original Guidebook (updated in 2021 in an effort led by Googlers Rebecca Salois and Gabe Clapper, which added new AI design patterns that had emerged since the initial launch, product case studies, and other refinements) was always to provide content that is essential, foundational, and evergreen. And much of that core content still is.

The People + AI Guidebook has six chapters, designed to cover different aspects of the product lifecycle.

The six chapters, after all, cover a mix of user experience (UX) and technical concerns, representing the balance between human-centered design for AI and the capabilities of AI as a technology. Roughly half of the chapters could even be considered basic UX best practices, rooted in understanding your AI product’s audience and earning its trust (see our original chapters on User Needs + Defining Success, Mental Models, and Explainability + Trust). The other three chapters address enduring AI practitioner concerns rooted more in the technical aspects of AI products, from training data to feedback mechanisms and failures (Data Collection + Evaluation, Feedback + Control, and Errors + Graceful Failure). Our internal debates and questions on how to design generative AI products in a human-centered way inform our amendments to the guidance in these chapters.

Because generative AI has been in development at Google Research for years, we had been thinking about the topic in the context of experimentation long before it was first integrated into consumer-facing services, products, and features. Given how nascent generative AI is as a form of consumer-facing AI, we started drafting the Guidebook updates with questions that have open-ended answers.

The editorial questions we started asking didn’t necessarily have answers when we first asked them:

  • How could we address the fact that by its very nature, generative AI creates net-new outputs all the time?
  • How do we design safe experiences when generative AI can produce “hallucinations,” now a known risk across the generative AI industry?
  • How do we discuss what user interfaces and in-product content are appropriate to explain how generative AI works?
  • How do we help people make informed decisions on whether they want to rewrite a generated text? Or nudge them to fact-check it, serving as a human-in-the-loop?
  • How could we make sure we were proactively considering the user needs of the diversity of people who might use a generative AI product?
  • How do we include people from communities that have been historically marginalized or underrepresented and underserved in the technology industry, to take an equitable approach?
  • How do we know which guidance and recommendations are not useful or even counterproductive for designing with generative AI?

So, to address these questions and more as new ones arise, we’ve been drawing on several new sources of insight from people working on genAI, to grapple with the complexity and risks of a technology in its infancy. Looking for answers to the questions on this list has often led us to ask more questions. As such, we’ll be rolling out Guidebook updates over the next several months as we vet and test the content.

Inputs that informed our updates

We’ll start from the beginning of our journey: how we updated our approach to sourcing our material. We’ve taken a broader, three-pronged approach to researching emerging best practices for generative AI:

  1. Internal AI Principles Reviews. We’ve identified patterns of responsible design in Google’s central responsible innovation pre-launch ethics reviews of generative AI models, conducted with research teams that are integrating generative AI into services, features, and products. We’re also incorporating common interventions and mitigations for genAI risks, based on trends in the outcomes of these reviews, which analyze AI-powered products for alignment with Google’s AI Principles (e.g., whether a genAI application has clear social benefit, avoids unfair bias, is built and tested for safety, is accountable to people, and incorporates privacy design principles).
  2. Internal Speculative & Critical Design Workshops. We’re aggregating insights from cross-functional workshops focused on discussing the futures that generative media models can enable or disable. In addition, we’re drawing from trends observed in multi-day Moral Imagination sessions that engage product teams to consider ethics and philosophy in their everyday decision-making by discussing thought-provoking scenarios and science fiction stories.
  3. External AI ecosystem workshops. We know we don’t have all the answers ourselves. So we’re collaborating with a diversity of stakeholders in the broader AI ecosystem, applying a proactive lens to gathering best practices for avoiding unfair bias and other potential sociotechnical harms, and supporting generative AI entrepreneurs as they, too, identify the risks and mitigations for these harms. Our workshop partnerships include the Equitable AI Research Roundtable (EARR), which engages experts in law, education, social justice, and community engagement, and Google for Startups, with whom we’ve hosted numerous sessions with global founders.

What’s next and how you can be involved

In our next post, we’ll dive deeper into how we’ve applied what we learned, offering takeaways based on the patterns and trends in risk mitigation, internal design workshops, and discussions with external genAI startups.

Most of all, we want to hear from you. Please share your thoughts in the comments, or email us at [email protected] with suggestions or feedback. Follow People + AI Research here on Medium for our most current updates on the Guidebook and more.

Acknowledgements

As the current editor-in-chief and design lead of the People + AI Guidebook, we thank PAIR’s co-leads Lucas Dixon and Michael Terry, and the Responsible AI-Human Centered Technology UX lead Ayça Çakmakli. We also deeply thank PAIR’s co-founders and current advisors Fernanda Viégas and Martin Wattenberg, and Hal Abelson, Jen Gennai, and Marian Croak for their ongoing input and support over many years. We also deeply thank Googlers Maysam Moussalem and Roxanne Pinto, both editors emeritae of the Guidebook. There are hundreds of Googlers and external collaborators who have shared their best practices since 2018, including those who continue to play an important role in contributing to our forthcoming genAI update. Thank you! Special thanks to Googlers Rida Qadri, Renee Shelby, Carmen Villalobos, Nithum Thain, Ian Tenney, Shayegan Omidshaffei, James Wexler, Jessica Hoffman, Adam Pearce, Josh Lee, Marisa Ferrara Boston, Vinita Tibdewal, Crystal Qian, Quinn Madison, Sharoda Paul, Liz Merritt, Josh Lovejoy, David Akers, Adam Boulanger, Ovetta Sampson, Jamila Smith-Loud, Victoria Wirtala, Tymon Kokoszka, Monica Mora, Kylan Kester, Karen Feister, André Barrence, Ben Zevenbergen, Gia Soles, Amanda McCroskery, Elizabeth Churchill, Merrie Morris, Patrick Gage Kelley, Allison Woodruff, Kevin Lozandier, and Devki Trivedi. And finally, a huge shout out to the many Googlers, partnering teams, and founders who participate so thoughtfully in our workshops — your contributions help shape ongoing edits to the Guidebook.
