Machine Intelligence and the Design of Complex Systems

Adobe’s Patrick Hebron, in an interview for Noema (from September 2020):

If you’re building a tool that gets used in exactly the ways that you wrote out on paper, you shot very low. You did something literal and obvious.

The relationship between top-down direction and bottom-up emergence is a central tension in the design of complex systems. Without some top-down direction, the system won’t fulfill its purposes. However, if it doesn’t allow for bottom-up adjustments, the system won’t adapt to conditions on the ground — i.e., it won’t be actualized as a real thing in the world. What’s needed is a healthy balance between bottom-up and top-down.

Continue reading

The Object-Oriented UX Podcast, Ep. 15

Sophia Prater invited me to be a guest on her Object-Oriented UX podcast. Sophia is a UX design consultant and evangelist for OOUX, a methodology that helps teams tackle complex design challenges. She was my guest on episode 63 of The Informed Life, where she explained OOUX in more detail.

This time, our conversation focused on my book, Living in Information. Among other things, we discussed information environments, the tension between top-down and bottom-up structural directions, and greedy environments. Also, UFOs.

Listen in Apple Podcasts

Listen in Spotify

Listen in Google Podcasts

How to Work with Tension in Design

The ultimate purpose of a design project is to change something. It might be kickstarting sales, making stuff more findable, or addressing a competitive challenge. Whatever it is, the project exists because someone wants something to be different.

Changes reveal tensions. Often, teams are invested in the status quo. For example, sales may want product to introduce new features, while product wants a simpler experience. More capabilities increase complexity, so the two are in tension.

Projects are rife with such tensions — and they often go unacknowledged. Not surprising, since dealing with tensions can be uncomfortable. If you’ve ever been in a meeting with a surly stakeholder, you know how awkward these situations can be.

Continue reading

The Informed Life with Sophia Prater

Episode 63 of The Informed Life podcast features an interview with UX design consultant Sophia Prater. Sophia evangelizes object-oriented UX, a methodology that helps teams tackle complex design challenges. In this conversation, we discuss OOUX and how it differs from other methodologies.

Sophia described OOUX as a philosophy “that respects and acknowledges the fact that people think in objects.” What are objects?

An object is a thing that has value to the user. So, when I say objects, I’m not talking about your navbar or your calendar picker or your dropdown. All those things are valuable, but they are a means to an end. And I often say no user is coming to your site for your calendar picker. It could be the best calendar picker in the whole world, but that’s not what they’re coming for. They’re coming for the event, or they’re coming for the people that they can invite to the event.

Continue reading

The Last Illusion

Brian Eno, in an interview from 1995:

[INTERVIEWER:] On the [Nerve Net] album jacket you have a number of terms describing the music. One of those terms is “Godless.”

[ENO:] I’m an atheist, and the concept of god for me is all part of what I call the last illusion. The last illusion is that someone knows what is going on. That’s the last illusion. Nearly everyone has that illusion somewhere, and it manifests not only in the idea that there is a god that knows what’s going on, but that the planets know what’s going on. Astrology is part of the last illusion. The obsession with health is part of the last illusion: the idea that if only we could spend time on it and sit down and stop being unreasonable with each other, we’d all find that there was a structure and a solution, an underlying plan to it all. For most people, the short answer to that is God.

Well, what I want to indicate by that word godless is not only god in the religious sense, but I am trying to accept and enjoy the idea that we never will reach that condition of agreement or certainty: that actually we’re unanchored, we’re floating around, and we’re actually guessing. That’s what we’re doing. Everyone is making guesses, and trying to make the best of it, watching what happens and being empirical about it. There won’t be a plan, so godless, like most of those words, has a lot of resonance for me.

The last illusion has a lot of resonance for me too. (Although I don’t use the word “godless” — there’s irony in saying that it’s illusory to believe someone knows what’s going on immediately after declaring yourself an atheist.)

The last illusion is alluring. It’s scary to live unmoored, doing our best with what little information we have on hand. It’s scary to accept responsibility for failures — and successes. It’s scary to be uncertain. So much more comforting to buy into an exculpatory narrative — especially when everyone else is buying too.

And yet, the ultimate cost for such comfort is agency. The more we blame the stars, the gods, the Man, the system, [pick your favorite -ism], or whatever external abstract force for how things turn out, the less compelled we are to plumb our personal role in the matter. By surrendering to pre-packaged explanations, we risk atrophying the one thing we can control: our ability to sense and respond, to evolve.

Which isn’t to say those external forces aren’t real. Some are significant factors in how things turn out. Ideologies have a track record of creating horrible suffering in the world. But they’re not real in the same sense that the damned table you stubbed your toe on is real. They’re abstractions — models for interpreting reality.

You can choose how to frame your experiences. Few things have a greater impact on the quality of your life (and the lives of people around you) than the models you adopt. And anyone who claims to have the ultimate model is peddling an illusion. A molder, not a dancer. Caveat emptor.

Godless: An Unpublished Interview With Brian Eno

Book Notes: ‘Framers’

Framers: Human Advantage in an Age of Technology and Turmoil
By Kenneth Cukier, Viktor Mayer-Schönberger, and Francis de Véricourt
Dutton, 2021

Isaac Asimov said, “The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.” We’ve created incredible technologies yet seem unable to coordinate their use towards the common good. Typical responses to this conundrum range from naïve tech utopianism to emotional anti-rationalism.

Framers argues for a third way, one that leverages a human capability unmatched by any algorithm: the ability to think differently about situations by reframing them. By changing how we understand ourselves and our situations, we can approach problems from different perspectives. Given the role of design in making new technologies accessible to humans, designers must understand framing.

The key to approaching the subject is a concept familiar to designers: mental models…

Continue reading

Dark IA?

Tyler Sonnemaker, reporting for Insider:

Newly unredacted documents in a lawsuit against Google reveal that the company’s own executives and engineers knew just how difficult the company had made it for smartphone users to keep their location data private.

Google continued collecting location data even when users turned off various location-sharing settings, made popular privacy settings harder to find, and even pressured LG and other phone makers into hiding settings precisely because users liked them, according to the documents.

The report alleges internal stakeholders weren’t clear on the system’s structure:

Jen Chai, a Google senior product manager in charge of location services, didn’t know how the company’s complex web of privacy settings interacted with each other, according to the documents.

Sounds like a concept map would help. But perhaps these issues could be due to more than a lack of understanding:

When Google tested versions of its Android operating system that made privacy settings easier to find, users took advantage of them, which Google viewed as a “problem,” according to the documents. To solve that problem, Google then sought to bury those settings deeper within the settings menu.

Google also tried to convince smartphone makers to hide location settings “through active misrepresentations and/or concealment, suppression, or omission of facts” — that is, data Google had showing that users were using those settings — “in order to assuage manufacturers’ privacy concerns.”

I don’t know anything about this case other than what is in the media, nor do I have firsthand experience with Android’s privacy settings. That said, these allegations bring to mind “Dark IA” — the opposite of information architecture.

Information architecture aims to make stuff easier to find and understand — implicitly, in service of empowering users. The antithesis of IA isn’t an unwittingly disorganized system, but one organized to inhibit understanding and deprive users of control.

Unredacted Google Lawsuit Docs Detail Efforts to Collect User Location

How to Use Social Media Without Losing Equanimity

I love this tweet from David Perell:

The phrase ‘intellectual grace’ resonates with me. It’s rare these days. Casually perusing social media soon reveals a high level of one-upmanship and unkindness. The most extreme expressions get more engagement, which is the coin of the realm.

That said, I’ve gotten a lot of value from social media — especially during the pandemic. My secret is twofold:

  1. I don’t take it too seriously, and
  2. I apply Postel’s law to my interactions online.
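For the technically curious: Postel’s law (also called the robustness principle) comes from internet protocol design, where it’s usually stated as “be conservative in what you do, be liberal in what you accept from others.” A minimal sketch of the idea in code — the function name and formats here are my own illustration, not from any particular protocol — might accept dates in several loose forms but always emit one canonical form:

```python
from datetime import datetime

def parse_date(text: str) -> str:
    """Liberal in what we accept: tolerate several common date formats."""
    cleaned = text.strip()
    for fmt in ("%Y-%m-%d", "%d/%m/%Y", "%B %d, %Y"):
        try:
            parsed = datetime.strptime(cleaned, fmt)
            # Conservative in what we send: always emit ISO 8601.
            return parsed.strftime("%Y-%m-%d")
        except ValueError:
            continue
    raise ValueError(f"Unrecognized date: {text!r}")
```

Applied to conversations rather than protocols, the analogy is: interpret what others send you charitably, and be careful and clear in what you put out yourself.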
Continue reading

Clarity vs. Confidence: Starting Conceptual Models Right

Few things are as powerful as a good model of a complex domain. A clear representation of the domain’s key elements and their relationships creates alignment. The model becomes a shared point of reference and shorthand for decision-making.

Good models eschew some complexity. But complex domains aren’t simple: a model that aims to encompass the domain’s full complexity will likely fail at building shared understanding, while one that over-simplifies won’t be useful.

Continue reading