Removing Abstraction

September 29, 2009

A few weeks ago I had the pleasure of sitting down for an interview with Larry Tesler (early desktop interface pioneer) and Robert X. Cringely (tech writer and documentarian) at the Computer History Museum for a conversation on interface design: its past and future.

One of the more interesting topics that came up during our discussion was the incremental reduction of "abstraction" in computer interfaces over the years.

In the earliest computers, you literally had to get into the mechanics of the machine in order to program it. The introduction of punched cards gave people a way to send instructions to a computer (without requiring mechanical engineering work) but at the cost of several new layers of abstraction. Punched cards lived outside the computer, had specific formats that needed to be learned, had to be punched by hand on a key punch machine, and then fed into a card reader before any output could come from the computer. That's a lot of distance between computer input and output.

When command line interfaces (CLIs) appeared on the scene, some of this abstraction was removed. Input could be typed directly into a computer (via keyboard) without needing to learn and manipulate punched cards and key punch machines. But a decent amount of abstraction still remained: CLI users needed to know a specific set of commands and syntax to get the output they wanted from a computer. The objects and applications (services) people used in CLIs were essentially invisible to them without typing cryptic text strings.
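
To make that distance concrete, here's a toy sketch (in Python, with made-up commands and a fake file list) of the CLI model: nothing in the system is visible or actionable until you type the exact string it expects.

```python
# A toy model of CLI-era abstraction. The commands ("ls", "cat") and
# the file list below are hypothetical illustrations, not a real shell.
files = {"report.txt": "Q3 numbers", "notes.txt": "meeting notes"}

def run(command: str) -> str:
    """Interpret one typed command; anything unrecognized simply fails."""
    parts = command.split()
    if parts[0] == "ls":                       # user must already know "ls"
        return "  ".join(files)
    if parts[0] == "cat" and len(parts) == 2:  # and the exact syntax of "cat"
        return files.get(parts[1], parts[1] + ": no such file")
    return parts[0] + ": command not found"

print(run("ls"))              # report.txt  notes.txt
print(run("cat report.txt"))  # Q3 numbers
print(run("show files"))      # guessing natural phrasing gets you nowhere
```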

Graphical user interfaces (GUIs) made many of these elements visible to end users. In a GUI, people could see and interact with representations of documents, applications, and more. Yet there was still abstraction. When using a GUI, you manipulated a set of icons, tool panels, and interface components in order to change content. How these components worked needed to be understood, since they were the only way to provide input into the system. As a result, people spent a lot of time interacting with and mastering windows, scroll bars, buttons, and more in order to get things done.
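
Here's an equally small sketch of the GUI model, using Python's built-in tkinter (the photo viewer itself is hypothetical): the content on screen never responds directly; every change has to be routed through a visible component like a button.

```python
# A toy GUI: the user changes content only by operating widgets.
import tkinter as tk

root = tk.Tk()
label = tk.Label(root, text="Photo 1 of 3")
label.pack()

def next_photo():
    # Input arrives via the Button component, never via the photo itself.
    label.config(text="Photo 2 of 3")

tk.Button(root, text="Next", command=next_photo).pack()
root.mainloop()
```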

Natural user interfaces (NUIs) enabled content itself to serve as the interface. Want to see the next photo? Simply slide the current one over. Want to make a photo bigger? Use a quick gesture to expand the image you are looking at. NUIs turned content into something you could manipulate and act on directly. They attempted to reduce the distance between users and content as much as possible through guessable, physical, and realistic interactions.
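
As a sketch of that shift (the gesture format and thresholds here are made up), raw touch input now maps straight onto actions on the content itself, with no intermediate widget to learn:

```python
# A toy NUI interpreter: gesture deltas act on the content directly.
def interpret_touch(dx: float, dy: float, scale: float) -> str:
    """Map raw gesture data to a content action (thresholds are arbitrary)."""
    if scale > 1.2:
        return "zoom into the current photo"
    if dx < -50:
        return "slide to the next photo"      # the photo, not a "Next" button
    if dx > 50:
        return "slide to the previous photo"
    return "no action"

print(interpret_touch(dx=-80, dy=0, scale=1.0))  # slide to the next photo
print(interpret_touch(dx=0, dy=0, scale=1.5))    # zoom into the current photo
```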

First person user interfaces (FPIs) go a level further by enabling people to interact with objects in the real world through a set of always-on sensors. Simply place a computing device in a specific location, near a specific object or person, and you automatically get relevant output based on who you are, where you are, and who or what is near you. No explicit input required.
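
One last sketch (the coordinates and points of interest are invented): in this model the "input" is a continuous sensor reading, and relevant output is selected from it automatically.

```python
# A toy first person interface: location sensing replaces explicit input.
NEARBY = {  # hypothetical points of interest keyed by (lat, lon)
    (37.414, -122.077): "Computer History Museum: exhibits, hours, events",
    (37.422, -122.084): "Cafe: menu, current wait time",
}

def sensed_output(lat: float, lon: float) -> str:
    """Return info for whatever the sensors say is closest to the user."""
    closest = min(NEARBY, key=lambda p: (p[0] - lat) ** 2 + (p[1] - lon) ** 2)
    return NEARBY[closest]

# The user never types or taps; where they are standing drives the output.
print(sensed_output(37.4141, -122.0771))  # Computer History Museum: ...
```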

As this quick overview highlights, each progression in interface design paradigms has reduced the amount of abstraction between input and output. From punched cards to always-on sensors, the overhead required to access information and perform actions has dropped dramatically. Hopefully, this trend continues...