Last week I had an interesting conversation with a product management researcher. I told him why I think “product” is the wrong framing for many digital things and we discussed the concept of information environments. He asked a good question: Why call them environments instead of platforms? After all, “platform” is a well-understood concept in the context of UX design.
While the two terms share a similar intent (getting designers and stakeholders to think more systemically about the work to be done), there is an important difference between them: “platform” implies a technology-centric view of the system while “environment” implies a people-centric view. A platform is something you build upon. An environment is where you have experiences. This is a key distinction as we move to make user-centered design more systemically aware.
I realize the word “environment” brings with it connotations that may court controversy. This is not unintentional. We exist within environments. They host our activities. Our long-term survival hinges on the viability of our environments. It behooves us to develop an attitude of responsible stewardship towards them — whether they are made of stuff or of information.
The object of music is a sequence of sound tones and pauses that stirs feelings in us.
The object of industrial design is a composition of materials that is useful and mass-reproducible.
The object of architecture is a relationship between spaces and forms that gives us shelter and the ability to perform certain functions.
The object of graphic design is a semiotic composition that moves us to action.
What is the object of experience design?
Nobody comes to your information environment with the goal of “using” anything. They come because they want to buy an airline ticket, or transfer money from one account to another, or understand their medical bill.
The more specific you can be when referring to the people who will use the things you design, the easier it’ll be for you to empathize with them.
I’ve heard designers use these words as though they’re interchangeable. They’re not.
Consistency (n): 1) conformity in the application of something, typically that which is necessary for the sake of logic, accuracy, or fairness, 2) the way in which a substance, typically a liquid, holds together; thickness or viscosity
Coherence (n): 1) the quality of being logical and consistent, 2) the quality of forming a unified whole
Consistency can create coherence. But thoughtless, overly rigid consistency can create incoherence. For people to successfully derive meaning from an information environment, the environment must be coherent above all — even if that calls for some inconsistency.
Aim for coherence.
(Definitions from the macOS Dictionary app.)
In systems design there is a rule of thumb known as Gall’s law. It states:
“A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.”
— John Gall
Information environments (websites, apps, etc.) are systems, so this law applies. (I’ve seen it in action.) We acknowledge this when we talk about starting with a minimum viable product (MVP).
One of the main challenges teams face in such projects is reaching agreement on what constitutes the “minimal” feature set. Designers can — and should — help teams clarify the product vision upfront. This helps make the process less painful.
Once a clear vision is agreed upon, the designer’s role shifts to defender of the vision. This is necessary because there will always be forces pulling things away from the minimal feature set — often for valid reasons.
When the product is real and can be tested, it can (and should) evolve towards something more complex. But baking complexity into the first release is a costly mistake. (Note I didn’t say it “can be”. It’s guaranteed.)
- Clarify vision,
- define minimal,
- defend minimal.
If you design software, you need to know about placemaking. Why? Because the websites and apps you design will create the contexts in which people shop, bank, learn, gossip with their friends, store their photos, etc. While people will experience these things primarily through screens on phones, tablets, and computers, they actually perceive them as places they go to do particular things.
Your users need to be able to make sense of these information environments so they can get around in them and find and do the things they came for, just as they do with physical environments such as towns and buildings. People need to form accurate mental models of these environments if they are to use them skillfully.
As a discipline, software user interface design has only been around for about sixty years. However, we’ve been designing places for much longer. There’s much we can learn from architecture and urban design to help us create more effective apps and websites. This article is a short case study in the design of a particular physical environment that has valuable lessons for those of us who design information environments: Disneyland.
This post is based on a speech I wrote for two back-to-back keynotes delivered in November 2016 at Interaction South America (Santiago, Chile) and the Italian IA Summit (Rome). The U.S. election was decided the night before I flew to Rome.
When architects tour Rome, one of the things they learn is that buildings can last a long time. When I was younger, I had the privilege of studying architecture for two semesters in the city, and one building stood out for me: the Basilica of San Clemente al Laterano, a beautiful church built during the Middle Ages. I was struck by the fact that the basilica had been built on top of an earlier building: a 4th-century church, which you can visit by descending to a basement under the main structure. That church, in turn, was also built atop an earlier building which was used by followers of the cult of Mithra, and which you can also visit today. The Basilica of San Clemente has been used as a place of worship for almost twenty centuries. When you visit it, you have a tangible experience of the evolution of Western spiritual practice over that span of time.
Buildings serve more than mere utilitarian purposes, such as keeping us dry from the rain or giving us a safe place to rest. They’re also physical manifestations of the political, social, and cultural environments that produced them. Buildings tell stories about who we are — and who we were — as a people. As an architect, I often think about the longevity and cultural import of buildings and how it contrasts with what I currently design: software. If buildings are among our longest-lived cultural artifacts, apps and websites are among our most ephemeral. Software is changing all the time, sometimes in big ways. For example, iOS 7 introduced a completely new visual design to the iPhone’s operating system. From one day to the next, the feeling of the entire information environment changed. Perfectly functional applications that didn’t immediately implement the new style suddenly looked old and out-of-place. This change was experienced by millions of people, literally overnight.
Update 2017-01-12: I’ve published a post based on this presentation.
I delivered this presentation at the 2016 IA Summit in Atlanta, Georgia.
Last night Futuredraft hosted a great conversation about video games and user experience design as part of our Designer’s Studio series. The speakers were Adaptive Path’s Jesse James Garrett and Bungie’s Patrick O’Kelley, executive producer of Destiny, a major first-person console game. I was ostensibly moderating the discussion, but really I was just marveling as these two smart guys geeked out on games.
The entire conversation was lively and inspiring, but one thought that has stayed with me is that first-person games, which aim to present rich simulated worlds, are effective entertainment in part because of the psychological distance afforded by the fact that the simulation is taking place on a screen, with indirect input devices. As realistic as the sensory signals may be (and Destiny is amazingly realistic), the experience is still framed by the “real” world. Your psyche can engage in (and enjoy) the game because at a deep level it knows that the experience isn’t real; it’s happening “out there”, within the clear boundaries of the screen and the controllers.
Bringing such an experience closer to the psyche by removing that distance (for example, using seamless VR, where the psyche is in the simulation) could be more disturbing than entertaining. For example, think of the action of quickly switching the camera from first-person perspective to the third-person “behind the character” view, as often happens in these games when you board a vehicle. While this seems like an acceptable transition on-screen, I wonder what it would feel like to unexpectedly jump out of “me” after the simulation has convinced me that “I” am there.
The history of digital user interfaces is a path from abstraction (and therefore detachment) towards a narrowing of the distance between the user, the information environment, and the “real” world: first we were flipping binary bits with switches, then entering characters in text-based terminals, then pointing-and-clicking in metaphor-heavy GUIs, then manipulating information by touching it directly on glass surfaces. It’s a progression from indirect interaction with abstractions towards direct interaction with the actual information being manipulated. VR-based experiences could be the next milestone in this trajectory, and for the most part this is unexplored territory vis-à-vis its impact on our consciousness.
Given simulated experiences that are close to seamless and completely engrossing – and therefore, potentially deeply terrifying – could users suffer mental harm? If so, will these experiences be regulated? What are the ethical implications of convincingly replacing the user’s reality, albeit temporarily?