VizThink 09: Why Visualizations Work

by Luke Wroblewski February 23, 2009

At VizThink 2009, Tom Wujec outlined Why Visualizations Work with an overview of how the human brain manages visual information and how pictures can help drive deeper understanding of complex issues and information.

  • Cognitive neurologists tell us that the brain sees multiple images of the world at once. Our eyes continually dart around in a process of “visual interrogation” to capture information and create “ah-ha moments” of understanding.
  • Eyes (our retinal structure): we construct moment-to-moment models of the world. As a result, our image of the world isn’t nearly as complete as we think it is; our eyes need to continually dart around to make sense of it.
  • Visual interrogation: eye moves around an image to make sense of it.
  • Vision begins with light entering the eyes. Nerve impulses create a crisscrossing pattern, and the majority of these nerves travel to the back of the brain, to the primary visual cortex. It helps us detect simple structures and shapes: horizons, lines, etc. This is the earliest stage of visual processing.
  • The primary visual cortex is the starting point for visual processing. Signals are then sent to other parts of the brain, potentially creating around 30 ah-ha moments. Different parts of the brain are responsible for different kinds of visual processing.
  • Ventral stream: the part of the brain that tells us what something is. There seems to be a language inside the brain that serves as a signature for making sense of things conceptually. It takes simple raw signals and begins to identify and give meaning to things.
  • Dorsal stream: the part of the brain that makes sense of space and place. If we imagine walking around, we are using the neurons in this part of the brain. Big murals work because the dorsal stream can store large amounts of information.
  • Spatializing information can help us store more information than visualizing alone. People remember where things are in space.
  • Other parts of the brain process color, motion, large objects, numeracy, etc.
  • Face stream: the part of the brain that takes abstract images and produces facial recognition. We are hard-wired to recognize faces, places, and the structure of objects.
  • The more we understand about how ah-ha moments work, the more we can work them into our designs.
  • A tenth of a percent of visual signals go to the limbic system, where we feel.
  • When we identify the right (archetypal) image, our processing can go straight to the limbic system and trigger deep emotions.
  • Visual interrogation boils down to a handful of primary questions that certain parts of the brain naturally want to answer: where, how, location, color, what, shape, size, number, etc. All of these form mental models of the world.
  • Vision is very selective. We hold mental models of the world which shape what we see.
  • Persistent framework: takes advantage of the dorsal stream. Panoramic displays engage the map-making part of the brain.
  • But light coming into the eyes and us making sense of it is only half the story. The other half is that there are “more nerve signals traveling toward the eyes than away from them.” We reinforce our models of the world with whatever information we get from it.
  • Mental models exist in the imagination but are central to our view of the world. Visualization enables us to enhance mental models, changing and framing how people believe the world to be.
  • Make meaning: elevating, enhancing, and transforming mental models. This is the role of design.
  • Design: use images to support 360-degree thinking; create a culture of prototyping and exploration; elevate collaboration using images (effective design needs effective collaboration); execute impeccably (do the work together and do it well).
  • Tom showed visualizations from Autodesk's work at this year's TED conference.