Thoughts After a Week with Google Glass

by Luke Wroblewski on May 3, 2013

A year ago I signed up to get early access to Google Glass. This past week I put the device into real-world use and learned a lot about the potential and limitations of this technology.

“Google Glass is the new Apple Newton.” -Jared Spool

If by that Jared means it’s an early prototype of the future, then I agree. Let me explain why...

Almost a week ago I picked up my Glass Explorer Edition on Google’s campus in Mountain View. Since then I’ve put it into real-world use in a variety of places. I wore the device in three different airports, busy city streets, several restaurants, a secure federal building, and even a casino floor in Las Vegas. My goal was to try out Glass in as many different situations as possible to see how I would or could use the device.

[Photo: picking up Google Glass at Google’s Mountain View campus]

During that time, Scott Jenson’s concise mandate of user experience came to mind a lot. As Scott puts it, “value must be greater than pain.” That is, in order for someone to use a product, it must be more valuable to them than the effort required to use it. Create enough value and the pain can be high. But if you don’t create a lot of value, the pain of using something has to be really low. It’s through this lens that I can best describe Google Glass in its current state.

The Value of Glass

Here’s some of the ways I found Glass to be valuable.

  • Instant access to a camera: no need to reach into a pocket and turn on a phone. The camera is as accessible to you as your own face. Even more so with interactions through wink commands.
  • First person perspective: I found myself enjoying filming events as I saw them, through my own lens on the world.
  • Sharing that perspective: the ability to instantly share what I am seeing with other people in real time is awesome.
  • Alternate head space: being able to step into and view a digital world for a few moments throughout the day was both intriguing and useful.
  • My own audio: Glass has a bone-conduction transducer that delivers audio only you can hear. In practice, it’s imperfect. But the potential is clear.
  • Voice control: as usual, Google’s voice recognition is best of breed. I had no problems being understood and transcribed clearly.
  • Heads up directions: I found myself walking down the streets of Chicago with two bags in my arms and enjoying the ability to get directions to where I was going right in front of me.

The Pain of Glass

Here’s how I found Glass to be painful.

  • Limited visual plane: the current Glass interface is limited to a small floating rectangle above your eye. It doesn’t span your field of view and, as a result, there are no spatial cues or depth in the interface.
  • Constant tweaking: getting the floating screen to display just right requires a lot of adjustment: to the Glass frame, how it sits on your nose, how you position your head, and more. If any one of these factors is even a bit off, parts of the already small screen are blurred and out of focus. I found myself constantly tinkering to get the display right.
  • Glass is reflective: and light is everywhere. Indoors or out, day or night, there are always lights around you, and they reflect off the glass now sitting above your eye.
  • There’s a computer on your face: all the time, a small piece of glass is just visible above your eye (and it reflects light) and a processor is sitting on top of your ear. It’s not heavy but quite noticeable when it gets warm or you’ve had it on for a while. In other words, Glass is noticeably on you - I never really got “used” to it.
  • Where to use voice control: while the voice recognition is great, there are few places where you can actually make use of it. At the office and in public, talking out loud to your glasses is not an option.
  • Social interactions: I forced myself to wear Glass even when I felt uneasy about it, which was in a lot of places. I was downright nervous to have them on in airport security and on the casino floor. But even when ordering a coffee at Starbucks, I felt like I was doing something wrong.
  • No image control: while instant access to the camera is amazing, you have little control over the composition of a photo or video. So while I’m capturing first-person views of my life, they’re of pretty poor quality.
  • Not really private audio: audio that only you can hear could be useful in many ways, but sadly Glass’s audio today can pretty much be heard by others and is often hard to hear yourself unless you cup your hand over your ear. Plus, the bone transducer feels like a worm burrowing into your ear. You can even feel it after the glasses are off.

[Photo: LukeW wearing Google Glass, Terminator-style]

All that said, I’m still optimistic about the future of a device like Google Glass. Consider the impact of what it has the potential to enable:

  • Instant access to a camera that captures the world as your eyes currently see it
  • The ability to share that perspective with others in real time
  • An alternate digital plane of information you can view at any time
  • A private audio channel only you can hear
  • The combination of this digital plane and audio to amplify (augment) what you are doing and seeing in the real world

Any of these features alone could be considered magical, but together they’re a vision of the future. Google Glass today is an imperfect prototype of the future because its value does not yet outweigh its pain. But I have to assume that’s why it’s called the Explorer Edition.