Episode 105 of The Informed Life podcast features a conversation with David Rose. David is an entrepreneur, MIT lecturer, author, and pioneer in ambient computing. His work has been featured in MoMA, The New York Times, WIRED, The Economist, The Daily Show, and other esteemed cultural institutions. He’s the author of two books: Enchanted Objects and his latest, SuperSight, which is the subject of our conversation.1
What is SuperSight? As David put it,
[it] is the next evolution of how we will see. If you look at the animal kingdom, there’s a lot of specialization for seeing in the dark or seeing a long distance away, or having super wide-angle vision for understanding where predators are. And I see this next generation of wearable computing will evolve human sight and perception in a way that will give us this kind of information prosthetic to be able to see anything that’s contextually relevant, superimposed over the world.
This manifests in what’s commonly known as ‘augmented reality’ — i.e., overlaying information on top of what our senses convey to us so it appears to be a part of the scene. This is helpful for annotating, explaining, amplifying, clarifying, and otherwise helping us make sense of what we’re seeing and hearing. David:
A simple example would be a light switch today doesn’t have any information except for whether it’s on or off, but you could put information about when was the last time it was turned off, or how much energy have you consumed through this room over the last year, or, you know, you can kind of sprinkle data lightly in an atomized way throughout your environment or work environment to help show guidance or risk or more context or more metadata, over anything in the world.
David’s earlier work on ambient computing was grounded in the idea that making information easily available to people as part of their environment helps them act in a more informed way. AR/XR is the obvious next step from IoT devices: instead of individual objects serving as information conveyors, information becomes overlaid onto the entirety of our sensory perception.
This can be done by various means, but in the conversation we delved into two: wearing devices (e.g., headsets) that overlay information over the scene in an individual, personalized way, and using projectors to overlay information onto real-world spaces. The obvious advantage of the latter is that it can be experienced by more than one person at a time.
After discussing the technologies’ methods and merits, the conversation turned more critical, looking at some of the inherent hazards. David documents six such hazards in SuperSight, but we discussed one in the interview: cognitive crutches. This is the idea that if we have a technology doing something for us, our ability to do it ourselves might be impaired. For example, widespread adoption of GPS has made it so many people don’t need to learn to read maps for directions.
Finally, we talked about the promise of SuperSight technologies. David mentioned the ability for people to design and prototype solutions at a much larger scale:
if you can give people a view of the shelves in your study, or intersections or front yards or even things regarding – I think there’s a huge opportunity to help people project forward and say, how do the decisions I’m making today affect how I will feel and look like in ten years. We’re all so myopic and we’re living in the moment. I think being able to use this technology to see further and see the consequence will really be helpful.
If you’re interested in AR/XR or design in general, you’ll want to check out this conversation. And read David’s book — it’ll expand your thinking about what reality augmentation is and can be.
Amazon links on this page are affiliate links. I get a small commission if you make a purchase after following these links. ↩