Developing a Mental Model of a System

To develop proficiency in a system, you must build a mental model of how it works. That model must map to how the system is actually structured, and you build it by interacting with the system. First impressions matter, but your understanding becomes more nuanced over time as you encounter the system under different conditions. You also bring expectations to these interactions that influence your understanding. How accurate your model becomes over time depends on how transparent the system is.

The Apple Watch serves as a good illustration. I’d never owned a smartwatch before buying mine, but I came to the experience of wearing a wrist-worn computer with expectations set by two devices that provided similar functionality: analog wristwatches and smartphones. From the former I brought assumptions about the Apple Watch’s timekeeping abilities and fit on the wrist, and from the latter expectations about a host of other features such as communication abilities, battery life, legibility under various lighting conditions, how to access apps in the system, the fact that there are apps at all, and so on.

In the first days after buying the Watch, I realized I had to adjust my model of how the device works. It wasn’t like my previous analog watch or my iPhone; some aspects of this system were very different from those other systems. For example, I had to learn a new way of launching apps. The starting point for most iPhone interactions is a “home” screen that lists your apps. While the Watch also has a screen that lists your apps, that’s not where most interactions start; on the Watch, the starting point is your watch face. Watch faces can have “complications,” small widgets that show snippets of critical information. Tapping a complication launches its related app. It therefore makes sense to configure your favorite watch face with complications for the apps you use most frequently. This is a different conceptual model than the one offered by either the analog watch or the smartphone.

After some time using the Apple Watch, I now understand how it is structured and how it works — at least when it comes to telling time and using applications. One aspect of the system still eludes me: which activities consume the most energy. For a small battery-powered computer like the Apple Watch, power management is crucial. Having your watch run out of power before the day is over is annoying, and it happens to me often, even after a few years of using this device. I’ve tried many things, but I still don’t know why some days end with 20% of battery left on the watch while others end with a dead watch before 5 pm. If the Apple Watch were more transparent about how it uses power, I’d be better at managing its energy usage.

The tradeoff with making the system more transparent is that doing so can increase complexity for end users. I’m not sure I’d get more enjoyment from my Apple Watch if I knew how much energy each app was consuming. Designers abstract these things so that users don’t have to worry about them. As users, the best we can do is deduce causal relationships by trying different things. However, after three years of Apple Watch ownership, I still don’t understand how it manages power. The system is inscrutable to me. While this frustrates me, it’s not a deal breaker in the same way not grokking the system’s navigation would be. Not all parts of the system need to be understandable to the same degree.