Recently I've been highlighting examples of applications with first person user interfaces. First person interfaces (FPIs) allow people to interact with the real world as they are currently experiencing it. These applications layer information on top of people's immediate view of the world and turn the objects and people around them into interactive elements.
First person interfaces enable people to interact with the real world through a set of always-on sensors. Simply place a computing device in a specific location, near a specific object or person, and automatically get relevant output based on who you are, where you are, and who or what is near you. Examples include:
- First Person User Interfaces from Google
- Augmented Reality Apps
- First Person UI: Nearest Tube
- First Person UIs on Android
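The interaction model behind these examples can be sketched as a simple mapping from sensed context to relevant output. The sketch below is a minimal illustration, not any real app's code; the sensor fields and the `relevant_output` function are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class SensorContext:
    """Snapshot of always-on sensor readings (all fields hypothetical)."""
    user_id: str                          # who you are
    location: str                         # where you are (e.g. from GPS)
    nearby: list = field(default_factory=list)  # what or who is near you

def relevant_output(ctx: SensorContext) -> str:
    """Choose output from context alone, with no explicit query typed by the user."""
    if "tube_station" in ctx.nearby:
        return f"Showing departures for the station nearest {ctx.location}"
    if ctx.nearby:
        return f"Showing info about {ctx.nearby[0]} for {ctx.user_id}"
    return f"Showing a map of {ctx.location}"

# The user issues no command: output follows directly from sensed context.
print(relevant_output(SensorContext("alice", "London", ["tube_station"])))
```

The point of the sketch is the absence of explicit input: the "query" is assembled entirely from who, where, and what the sensors report.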
As these examples illustrate, first person interfaces are still in their infancy and many challenges need to be resolved. However, the trend underlying these applications is compelling.
As interface design paradigms have progressed over time, they have consistently reduced the amount of abstraction between input and output. From punched cards to the always-on sensors that power FPIs, the amount of overhead required to access information and perform actions has decreased exponentially.
This trend is enabling a new class of applications to thrive: applications that allow people to access and manage information with minimal effort, where it is most relevant. Google Vice President of Engineering Vic Gundotra said it well: "these are early examples of what's possible when you pair sensor-rich devices with resources in the cloud. [...] But something has changed. Computing has changed. And the possibilities inspire us."