Breaking Development: Adaptive Input

by Luke Wroblewski July 22, 2013

In his presentation at Breaking Development in San Diego, CA, Jason Grigsby explored the impact of many kinds of input (speech, touch, mouse, keyboard) on Web application design. Here are my notes from his talk on Adaptive Input.

  • Web browsers on TVs are actually better than you might think, but the input is terrible, so no one uses them.
  • Creating interfaces for TV is different: you need bigger fonts, up/down/left/right interactions, etc. But how do you know what a TV is?
  • The tools we've used for responsive design don't work on TV. Screen resolution does not define the optimal user experience: you don't want to serve a TV interface to a laptop even though the two can have the same resolution. This has big implications for adaptive interface design.
  • In our industry we tend to look at desktop design and mobile design as requiring different interfaces. Because of this, it is sometimes hard to envision how a complex application can work responsively across a wide range of screen sizes. But we've built up tools to make this possible.
  • We've had a consensual hallucination of the Web: that it was a fixed canvas size of 640x480, then 1024x768. In the past few years we've made a lot of progress in moving past this thinking. We've recognized the Web does not have a fixed width.
  • Now, though, we have another hallucination: that desktops mean mouse and keyboard and smartphones mean just touch. Once again this isn't true. Smartphones and tablets have keyboards, cursors, and styluses. Touch screen laptop sales jumped 52% last quarter.
  • Input is much more important to interface design than screen size. The input defines what a design needs to do in order to accomplish a task. At every form factor we see touch, cursors, and keyboards available for input.
  • Input represents a bigger challenge than screen sizes, and we're not ready, as an industry, to adapt our designs to it yet.
  • More input types are coming, like speech. Microsoft's Xbox One uses voice commands to control the user interface. The barriers to using these capabilities on your TV are coming down. This addresses the biggest issue on TVs: input from d-pads and remotes.
  • How would voice control impact the design of our Web pages? Do we need short names for our links displayed when in voice input mode?
  • In Chrome on the desktop, you can use the Web Speech API to interact with speech recognition in the Web browser. Speech is coming to browsers.
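A minimal sketch of what that looks like in Chrome, which exposes the API under the `webkitSpeechRecognition` prefix. The `bestTranscript` helper is my own illustration of reading the event's results; it is not from the talk.

```javascript
// Pull a single transcript string out of a SpeechRecognitionEvent-style
// results list: a list of result groups, each with ranked alternatives.
function bestTranscript(results) {
  let text = '';
  for (const group of results) {
    text += group[0].transcript; // take the top-ranked alternative
  }
  return text.trim();
}

// Browser wiring, guarded so this sketch is a no-op where the prefixed
// API (Chrome on the desktop, at the time of the talk) isn't available.
if (typeof webkitSpeechRecognition !== 'undefined') {
  const recognition = new webkitSpeechRecognition();
  recognition.continuous = false;
  recognition.onresult = function (event) {
    console.log('Heard:', bestTranscript(event.results));
  };
  recognition.start();
}
```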
  • It is currently impossible to reliably detect whether a touch screen is connected. Google wants to turn on touch events by default even if there is no touch screen.
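To see why, consider the common detection signals. The `looksLikeTouch` helper below is a hypothetical composite of those signals, shown only to illustrate the failure mode; it is not a recommended technique.

```javascript
// Illustrative only: why touch detection misleads. Both signals below
// can be present on machines with no touch screen (e.g. a browser that
// enables touch events by default) and tell you nothing about which
// input the person is using right now.
function looksLikeTouch(signals) {
  // `signals` is a hypothetical bag of feature-detection results.
  return signals.hasTouchEvents || signals.maxTouchPoints > 0;
}

// A touch-capable laptop being driven entirely by a mouse still
// reports touch support, so "has a touch screen" is the wrong question.
looksLikeTouch({ hasTouchEvents: true, maxTouchPoints: 10 }); // true
```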
  • Input is dynamic: people can change input types within a single flow or process.
  • You can respond to different input types as they are used, like making buttons bigger when someone touches the screen or smaller when someone moves a mouse. But we probably need to be more zen about input and embrace the fact that it comes in different forms at different times.
  • Keyboard and mouse targets require different sizes to be used comfortably. Touch targets need to be bigger. If you make an interface that is usable for touch, it will be usable with mice.
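The last two notes, reacting to the input actually in use and sizing targets for it, could be sketched like this. The event names are standard DOM events, but the mode names, class names, and pixel values are my assumptions, not Grigsby's code.

```javascript
// Map the most recent input event to a rough "input mode".
function modeForEvent(eventType) {
  if (eventType === 'touchstart') return 'touch';
  if (eventType === 'mousemove' || eventType === 'mousedown') return 'cursor';
  if (eventType === 'keydown') return 'keyboard';
  return null;
}

// Rule-of-thumb minimum target sizes: touch needs roomier targets,
// and a touch-friendly size also stays comfortable with a mouse.
function minTargetPx(mode) {
  return mode === 'touch' ? 44 : 28;
}

// Browser wiring, guarded so the sketch is inert outside a browser:
// tag the root element so CSS can restyle targets per input mode.
if (typeof document !== 'undefined') {
  ['touchstart', 'mousemove', 'keydown'].forEach(function (type) {
    document.addEventListener(type, function () {
      const mode = modeForEvent(type);
      if (mode) document.documentElement.className = 'input-' + mode;
    });
  });
}
```

Note this approach embraces input changing mid-flow: the interface follows whichever input was used last rather than guessing once at load time.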
  • What about those who won't give up on the power users? Those who feel they're more productive with lots of things on the screen? Larger touch targets mean fewer things on the screen. So power-user scenarios and complex user interfaces suffer, right?
  • First, touch will be on people's work machines in the immediate future. Five years from now, you won't be able to buy a Windows computer without a touch screen.
  • Second, the real issue here is display density, not touch vs. mouse. Some people want to see more information on a display. That can be accomplished with a display setting. The same kind of setting can be used to switch between lean-back (10-foot) and lean-in (18-24 inch) interface designs.
  • This kind of approach lets us design for people's needs, not for a specific form factor or display. This is form-factor- and input-agnostic design.
  • Adapting to input is not responsive design; it might be more like progressive input: how do we adapt to the input people have on their devices? But we don't need to make a value judgment: no one input type is necessarily better than another.
  • Instead we need to learn to adapt to whatever input is there. This is a new area; let's work on it together.