Adobe’s Patrick Hebron, in an interview for Noema (from September 2020):

If you’re building a tool that gets used in exactly the ways that you wrote out on paper, you shot very low. You did something literal and obvious.

The relationship between top-down direction and bottom-up emergence is a central tension in the design of complex systems. Without some top-down direction, the system won’t fulfill its purposes. However, if it doesn’t allow for bottom-up adjustments, the system won’t adapt to conditions on the ground — i.e., it won’t be actualized as a real thing in the world. What’s needed is a healthy balance between bottom-up and top-down.

In a well-designed modular system (e.g., LEGO), top-down structural decisions inform possibility spaces in ways that allow for bottom-up emergence. The LEGO system’s designers can’t predict exactly what users will do with the system, but they can predict how they might do it. The system provides enough structure to relieve agents from the despair wrought by infinite possibilities. Constraints are a generative tool.
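To make the "constraints as a generative tool" idea concrete, here is a minimal sketch (not from the essay; the part names and the single connection rule are hypothetical) of how a small, fixed, top-down vocabulary can still open a large bottom-up possibility space:

```python
# Hypothetical illustration: a few top-down rules, many bottom-up outcomes.
from itertools import product

# Top-down decisions: a small vocabulary of parts and one connection constraint.
PARTS = ["2x4 brick", "2x2 brick", "plate", "slope"]

def compatible(above: str, below: str) -> bool:
    """Toy constraint: a slope may not sit directly on top of another slope."""
    return not (above == "slope" and below == "slope")

def valid_stacks(height: int) -> list[tuple[str, ...]]:
    """Enumerate every stack of `height` parts that respects the constraint."""
    stacks = []
    for combo in product(PARTS, repeat=height):
        if all(compatible(top, bottom) for top, bottom in zip(combo, combo[1:])):
            stacks.append(combo)
    return stacks

if __name__ == "__main__":
    # Four part types and one rule already yield hundreds of distinct stacks.
    for h in range(1, 5):
        print(f"height {h}: {len(valid_stacks(h))} valid stacks")
```

The designer never enumerates the stacks themselves; they only choose the parts and the rule, and the space of buildable things follows from those choices.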

The relationship between top-down constraints and bottom-up adaptability is a key area of focus as we move to create smarter, more dynamic systems.

The structure of LEGO pieces is static: the system doesn't change from one moment to the next. Bricks don't self-assemble. LEGO structures must respect the laws of physics. These constraints don't exist in digital systems. With the addition of machine "intelligence" (i.e., smarter data-informed algorithms) to the mix, digital systems can self-assemble in unpredictable ways.

Top-down structural decisions play a key role in determining the degree to which these systems generate useful constructs, i.e., ones that serve human and ecological needs. Hebron makes an important point about the relationship between these "smart" digital systems and their users: we shouldn't think of them so much as artificial agents coming for our jobs, but as tools that augment our capabilities:

Machine intelligence can provide a counterpoint to human intelligence. We should see this as something akin to the search for extraterrestrial life or the effort to decode dolphin language. We can better understand our own intelligence by contrasting it with other meanings that intelligence might have.

Currently, we have such a poor understanding of what intelligence means outside of ourselves that, given the capability to design a new intelligence, the only thing we can think to create is something like ourselves. This is a limitation in our thinking about tools.

I think a similar design process will unfold with the creation of AI as with any design process. You thought you were going here, but you ended up a bit further over there. You zig, and you zag. The properties of AI are going to come out different than what we thought. This will be far more illuminating to who we are than landing on what we thought we wanted.

Kevin Kelly writes along similar lines: rather than a single generalized superhuman AI, consider the possibility of myriad "alien" intelligences optimized for various task domains, some broader than others. The key concern: are those domains relevant to human needs? A noble, and ultimately top-down, question.

Against Prediction: Designing Uncertain Tools