Designing for AI: Panel Notes

December 12, 2023

At the Designing for AI panel discussion hosted by Notion, Ryo Lu (Notion), Amelia Wattenberger (Adept), Omar Lee (Slack), and Adam Storr (Hex) discussed how they approach integrating AI models into the software they design. Here are my notes from the conversation.

  • AI integrations can take the form of a separate feature or be interwoven across a product.
  • The right metaphor to use when integrating AI into a product depends on how much the user wants to be involved in the process. How much control do they want to hand over? It's a spectrum of how much human supervision is needed.
  • On one end of the spectrum, AI is an assistant that takes orders and does things on your behalf. On the other, the human is in control, managing the details of what the AI does. In the middle is a copilot.
  • Slack has tried to avoid using the word "assistant" and instead focus on things that feel useful.
  • Two key areas of AI for Slack are summarization and search. Summarization is important for helping people catch up when they come back to Slack, but the team found that more context is often needed to make summaries useful at all. They're thinking about personalization to tune the level of detail in summaries (see the first sketch after these notes).
  • To find the right level of human involvement, understand people's roles and needs. People closer to the problem probably require more control; others a bit further away likely need less.
  • How much to involve people may also depend on how familiar they are with AI capabilities. This could change over time: at first they need an onramp, and later, once they trust the system, they can rely on it to operate more independently.
  • Consider starting with smaller actions, then expanding the scope of AI-powered functionality as people get more comfortable.
  • When deciding how to integrate AI features, consider what the AI model can do and the user's context.
  • AI is good at some things and bad at others. AI integrations that are too open-ended can steer people toward bad outcomes, which creates poor first experiences.
  • Chat is a very flexible interface: people can define how and when they want to use it. But it is a very direct interaction with the model itself, and there are few affordances to help people understand the capabilities and limitations of what they are interacting with.
  • Text is a very imprecise medium. It's good for general direction, but more controls are needed for specific use cases. In the future, we'll have much more powerful interfaces to AI models.
  • Ideally, people don't need to navigate AI capabilities themselves; instead, the machine takes care of it. Example: the sliders in GitHub Copilot for Documentation modified the prompt on users' behalf without exposing the full prompt to them (see the first sketch after these notes).
  • You can only learn so much about designing for LLMs with mockups; you need to actually interact with AI models to experience their issues and opportunities. Collect feedback internally. Pay attention to surprises.
  • Lots of odd edge cases will show up when you use your products; you won't find them unless you use them.
  • AI content generation currently takes a long time, and surfacing loading animations isn't really what people need; they need better performance.
  • One possible opportunity to improve perceived performance is to show what's happening step by step while the generation is in progress (see the streaming sketch after these notes). Another is to pre-compute content so people don't have to wait for it to generate.
  • New AI capabilities that would improve user experience: speed (making models faster); references to source material, since hallucinations may never be fully solved (see the last sketch after these notes); and multi-modal abilities.
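
A minimal sketch of the prompt-tuning idea behind the Slack summary and GitHub Copilot for Documentation bullets above: a UI control like a slider adjusts the prompt sent to the model while the assembled prompt stays hidden. The types and prompt template here are hypothetical illustrations, not either product's actual implementation.

```typescript
// Hypothetical sketch: a "detail" slider tunes the prompt sent to the model
// without ever exposing the assembled prompt to the user.
type SummarySettings = {
  detail: number; // slider value, 0 = headline only, 1 = exhaustive
};

function buildSummaryPrompt(messages: string[], settings: SummarySettings): string {
  const length =
    settings.detail < 0.33 ? "one sentence"
    : settings.detail < 0.66 ? "a short paragraph"
    : "a detailed, point-by-point recap";

  // The user only moves a slider; the prompt assembly stays internal.
  return [
    `Summarize the following conversation as ${length}:`,
    ...messages.map((m) => `- ${m}`),
  ].join("\n");
}

// Example: a mid-range slider value yields a short-paragraph summary prompt.
console.log(buildSummaryPrompt(["Standup at 10", "Launch moved to Friday"], { detail: 0.5 }));
```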
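For the perceived-performance point, here's a small sketch of showing generation progress step by step, assuming a streaming model API that yields partial text. The `fakeModelStream` generator below is a stand-in for a real API, not any specific one.

```typescript
// Hypothetical sketch: render partial output as it streams in, instead of a
// single loading animation, so people see progress immediately.
async function* fakeModelStream(_prompt: string): AsyncGenerator<string> {
  // Stand-in for a real streaming API; yields chunks with simulated latency.
  for (const chunk of ["Gathering context... ", "Drafting... ", "Polishing... ", "Done."]) {
    await new Promise((resolve) => setTimeout(resolve, 300));
    yield chunk;
  }
}

async function renderWithProgress(prompt: string, onUpdate: (text: string) => void) {
  let shown = "";
  for await (const chunk of fakeModelStream(prompt)) {
    shown += chunk;
    onUpdate(shown); // the UI updates with each chunk as it arrives
  }
}

// Example: log each intermediate state as if updating a UI element.
renderWithProgress("Write release notes", (text) => console.log(text));
```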
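And for the hallucination point, a sketch of carrying source references alongside a generated answer so readers can verify it against the underlying material. The passage shape and citation format are assumptions for illustration, not a specific product's approach.

```typescript
// Hypothetical sketch: pair the model's answer with the sources it drew from,
// so claims can be checked rather than taken on faith.
type Source = { title: string; url: string };
type Passage = { text: string; source: Source };

function buildGroundedPrompt(question: string, passages: Passage[]): string {
  // Number each passage so the model can cite them inline, e.g. "[1]".
  const context = passages
    .map((p, i) => `[${i + 1}] (${p.source.title}) ${p.text}`)
    .join("\n");
  return `Answer using only the numbered passages, citing them like [1].\n\n${context}\n\nQuestion: ${question}`;
}

// The UI would render the generated answer plus this source list for verification.
function sourcesFor(passages: Passage[]): Source[] {
  return passages.map((p) => p.source);
}
```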