A Principle for Self-organizing Systems?

From Wired’s profile of British neuroscientist Karl Friston:

For the past decade or so, Friston has devoted much of his time and effort to developing an idea he calls the free energy principle… With this idea, Friston believes he has identified nothing less than the organizing principle of all life, and all intelligence as well. “If you are alive,” he sets out to answer, “what sorts of behaviors must you show?”

The article elaborates:

The second law of thermodynamics tells us that the universe tends toward entropy, toward dissolution; but living things fiercely resist it. We wake up every morning nearly the same person we were the day before, with clear separations between our cells and organs, and between us and the world without. How? Friston’s free energy principle says that all life, at every scale of organization—from single cells to the human brain, with its billions of neurons—is driven by the same universal imperative, which can be reduced to a mathematical function. To be alive, he says, is to act in ways that reduce the gulf between your expectations and your sensory inputs. Or, in Fristonian terms, it is to minimize free energy.

[Emphasis in the original.]

To the degree that I understand the idea (and the Wired piece acknowledges it’s “maddeningly difficult”), the free energy principle sounds fascinating, deep, and potentially useful. It could help explain the behavior (and therefore the design) of self-organizing systems.
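
To make the “reduce the gulf between your expectations and your sensory inputs” idea slightly more tangible, here is a toy sketch in Python. It is my own crude illustration, not Friston’s variational formulation: an agent shrinks a prediction error both by revising its beliefs (perception) and by acting on the world (action).

```python
import random

# Toy illustration only (not Friston's actual math): an organism keeps a belief
# about some sensed quantity. Perception nudges the belief toward observations;
# action nudges the world toward the belief. Both moves shrink the prediction
# error, which stands in, very loosely, for free energy.

belief = 20.0  # what the organism expects to sense
world = 35.0   # the actual state of its environment

for step in range(100):
    observation = world + random.gauss(0, 0.5)  # noisy sensory input
    error = observation - belief
    belief += 0.1 * error  # perception: update expectations to fit the data
    world -= 0.1 * error   # action: change the world to fit expectations

print(f"remaining gap: {abs(world - belief):.2f}")  # small compared to the initial 15.0
```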

The Genius Neuroscientist Who Might Hold the Key to True AI

Project Focus Mode

For most of my career, I’ve worked on several projects at any given time. This means lots of information flowing to and from different people, keeping track of documents and commitments, scheduling meetings, etc. Most of it happens on my computer, which for almost twenty years has been a laptop. (Meaning: it comes with me.) In the past few years, more mobile devices (e.g., iPhone, iPad) have also joined my toolkit. There are a lot of things going on in these information environments. Keeping everything organized impacts my effectiveness; the time I spend looking for stuff isn’t valuable to my clients. Early on, I realized that the only way I’d manage this would be to develop organization systems and stick to them over time.

For example, I always have a “projects” folder on my computer. Each project I take on gets an individual subfolder in there. These folders use consistent naming schemes. These days it’s usually the client name, followed by a dash, followed by a (brief!) unique project name. Why not per-client folders? At one point I realized I had to strike a balance between depth and breadth. Going n folders deep often meant not finding things as quickly. Of course, over time this folder can get crowded. Eventually, I determined that the projects folder should contain only active projects; I set up a separate “archive” folder to which I move completed project folders.
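
For what it’s worth, the convention is simple enough to automate. Here’s a rough sketch in Python; the paths and function names are placeholders rather than a script I actually use:

```python
from pathlib import Path
import shutil

# Placeholder locations; adjust to taste.
PROJECTS = Path.home() / "projects"  # active projects only
ARCHIVE = Path.home() / "archive"    # completed projects move here

def new_project(client: str, name: str) -> Path:
    """Create a flat, consistently named folder: '<client> - <name>'."""
    folder = PROJECTS / f"{client} - {name}"
    folder.mkdir(parents=True, exist_ok=True)
    return folder

def archive_project(folder_name: str) -> None:
    """Move a completed project out of the active area to keep it uncluttered."""
    ARCHIVE.mkdir(parents=True, exist_ok=True)
    shutil.move(str(PROJECTS / folder_name), str(ARCHIVE / folder_name))

# Example: new_project("Acme", "intranet redesign")
```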


Blogging and Social Media

“Sooner or later, everything old is new again.”
— Stephen King

A little over a year ago, I completed the bulk of Living in Information. I’d found my voice, and wasn’t ready to put the microphone down. So I started blogging again. While it may seem old-fashioned (and perhaps a bit quixotic), I’m loving it.

I’m in service to ideas. Most aren’t original to me; I just give them a voice. Blogging helps me make them a thing in the world. It compels me to dig deeper than I could if I were writing exclusively in environments designed for other ends.

Facebook is great for finding out what your acquaintances are up to. It’s given the Web enough structure for your high school friends to share photos of their pets. Twitter is great for pithy, context-free hot takes. A disaster for discourse.

These environments prioritize novelty and engagement, not coherence and continuity. They’re designed to hold your attention, not to help you reason. I can’t point you to any viable ideas I’ve posted on Facebook or Twitter. They’re there but are now indistinguishable from the detritus.

Don’t get me wrong. I love these social networks and have no plans to leave them in the near term. But that’s because I know their role in my life. (Hopefully, they helped bring you here.)

It’s a cliché, but a true one: I’m often unsure of what I think until I’ve written about it. I’m thrilled to have a venue where I can “think out loud”; where I can give ideas life. And I’m thrilled that you’re here. This place isn’t bustling like the social networks, but that’s a good thing. Hopefully, you’ll find stuff of value without feeling nudged.

A Bit of Structure Goes a Long Way

One of the most important lessons I learned in architecture school was the power of constraints. I’d always assumed that in creative work, complete freedom leads to better, more interesting results. After all, given more latitude you’re likely to try more things. But this turns out to be wrong.

The problem is twofold. For one, there’s the paralysis that sets in when facing a completely blank canvas. What to do? Where to start? Etc. For another, you never really have total freedom in the first place. All creative endeavors must grapple with constraints. There are time limits, budgets, the physical properties of paper, the force of gravity, the limits of your knowledge, the limits of what your society deems acceptable, and more. All of them narrow the scope of what you can do at any given time. Understanding the constraints that influence the project — and learning how to work creatively with them, rather than against them — is an essential part of learning to be a good designer.

But it goes further than this. Sometimes doing good work calls for us to introduce constraints of our own. Think of the difference between great jazz playing and mere noodling around. The interesting improvisations happen against the constraints of rhythm and chord (or mode) changes. The musicians don’t have to respect these framing devices, but doing so makes the work come alive. The band’s rhythm section provides enough structure for the soloists to fly “free” — but always in dialog with the underlying structure, either pro or con.

I often think of the success of Facebook in this light. In some ways, Facebook is the apotheosis of the promise of the original World Wide Web. I remember thinking in the mid-1990s that one day everybody would have a web page of their own. However, the hurdles for doing so were too high at the time: you needed space on a web server to host your site, to learn HTML and web design, to understand all of these concepts. More importantly, you needed to have something to tell the world — a compelling reason to get you to overcome the inertia of not doing anything at all.

Over a third of humanity now has a presence online thanks to Facebook. No doubt this is because Facebook abstracted out the hosting and sharing bits. There’s no need for you to learn HTML or design, or to find a web host! But of course, many other companies had been doing this before Facebook. What Facebook added to the mix was structure: you’re not just sharing arbitrary free-form stuff; you’re sharing the minutiae of everyday life, in the form of photos and short text updates. (Think of how limited text editing and presentation are on Facebook. You can’t even bold or italicize text!) And you’re not just sharing them with anybody who cares to read them; you’re sharing them with the one audience that may care: your friends and family.

This structure underlies the entire system. It provides rails to the experience that make onboarding and day-to-day usage easier. The structure you fill out when you join is the same one you expect to see on my profile; you don’t need to re-learn it when you visit my profile page. Instead, you focus on the differences: the content I’ve posted. Text, photos, metadata about who I am — at least the ones I choose to share using the system’s structural constraints. I can tell you about where I live, or where I went to school, for example.

Tell me more…

We come to expect these structural constructs in the environment, in much the same way jazz players expect rhythm. At scale, these structural constraints become normative. We play with them and around them but never break or transcend them. For a system such as Facebook (which is financed by advertising), the configuration of these structures is the result of a delicate balance between the things people would be enticed to do with such a system (e.g., share their lives with their friends) and how advertisers want to categorize us. Studying these structures reveals a complex picture of who we are, not just as individuals, but as participants in a market economy.

Leaning towards overly prescribed structure, Facebook has successfully gotten lots of people online — using the infrastructure of the Web, but not its open-ended ethos. Given the importance of structure to bootstrapping creative endeavors, I wonder if it could’ve been otherwise.

The Complexity Gap

Timo Hämäläinen:

The historical evolution of civilisations has been characterised by growing specialisation and the division of physical and intellectual labour. Every now and then, this evolution has been interrupted by a governance crisis when the established organisational and institutional arrangements have become insufficient to deal with the ever-increasing complexity of human interactions.

Some complexity scientists use the term “complexity gap” for this situation. Today’s societies are, again, experiencing a complexity gap. There are serious governance problems at all levels of our societies: individuals suffer from growing life-management problems, corporations struggle to adapt their rigid hierarchies, governments run from one crisis to another and multinational institutions make very little progress in solving global problems. A transition to the next phase of societal development requires closing the complexity gap with new governance innovations. Or else societies may face disintegration and chaos.

According to Mr. Hämäläinen, one way to overcome this complexity gap is by practicing second order science.

Second order science comes to the rescue in a complex world

Towards More Adaptive Information Environments

Atul Gawande has published a great piece in The New Yorker on why doctors hate their computers. The reason? Poorly designed software. Specifically, several of the examples in the story point to information architecture issues in the system. These include ambiguous distinctions between parts of the information environment and taxonomies that can be edited globally:

Each patient has a “problem list” with his or her active medical issues, such as difficult-to-control diabetes, early signs of dementia, a chronic heart-valve problem. The list is intended to tell clinicians at a glance what they have to consider when seeing a patient. [Dr. Susan Sadoughi] used to keep the list carefully updated—deleting problems that were no longer relevant, adding details about ones that were. But now everyone across the organization can modify the list, and, she said, “it has become utterly useless.” Three people will list the same diagnosis three different ways. Or an orthopedist will list the same generic symptom for every patient (“pain in leg”), which is sufficient for billing purposes but not useful to colleagues who need to know the specific diagnosis (e.g., “osteoarthritis in the right knee”). Or someone will add “anemia” to the problem list but not have the expertise to record the relevant details; Sadoughi needs to know that it’s “anemia due to iron deficiency, last colonoscopy 2017.” The problem lists have become a hoarder’s stash.

The bottom line? Software is too rigid, too inflexible; it reifies structures (and power dynamics) in ways that slow down already overburdened clinicians. Some problem domains are so complex that trying to design a comprehensive system from the top down is likely to result in an overly complex, overly rigid system that misses important things and doesn’t meet anybody’s needs well.

In the case of medicine (not an atypical one), the users of the system have a degree of expertise and nuance that can’t easily be articulated as a design program. Creating effective information environments to serve these domains calls for more of a bottom-up approach, one that allows the system’s structure to evolve and adapt to fit the needs of its users:

Medicine is a complex adaptive system: it is made up of many interconnected, multilayered parts, and it is meant to evolve with time and changing conditions. Software is not. It is complex, but it does not adapt. That is the heart of the problem for its users, us humans.

Adaptation requires two things: mutation and selection. Mutation produces variety and deviation; selection kills off the least functional mutations. Our old, craft-based, pre-computer system of professional practice—in medicine and in other fields—was all mutation and no selection. There was plenty of room for individuals to do things differently from the norm; everyone could be an innovator. But there was no real mechanism for weeding out bad ideas or practices.

Computerization, by contrast, is all selection and no mutation. Leaders install a monolith, and the smallest changes require a committee decision, plus weeks of testing and debugging to make sure that fixing the daylight-saving-time problem, say, doesn’t wreck some other, distant part of the system.

My take is there’s nothing inherent in software that would keep it from being more adaptive. (The notion of information architectures that are more adaptive and emergent is one of the core ideas in Living in Information.) It’s a problem of design — and information architecture in particular — rather than technology. This article points to the need for designers to think about the object of their work as systems that continuously evolve towards better fitness-to-purpose, and not as monolithic constructs that aim to “get it right” from the start.
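
As a thought experiment (my sketch, not anything from Gawande’s article or the vendor’s system), here’s roughly what a more adaptive problem-list entry might look like: a term drawn from a shared vocabulary so colleagues record the same diagnosis the same way, free-text detail so clinical nuance isn’t lost, and lightweight validation playing the “selection” role of weeding out low-information entries.

```python
from dataclasses import dataclass

# Hypothetical sketch: structured enough to be consistent across clinicians,
# open enough to capture expertise the vocabulary can't anticipate.

SHARED_VOCABULARY = {
    "osteoarthritis, right knee",
    "anemia, iron deficiency",
    "diabetes, type 2, difficult to control",
}

@dataclass
class ProblemEntry:
    term: str         # should come from the shared vocabulary
    detail: str = ""  # free text, e.g., "due to iron deficiency, last colonoscopy 2017"

def validate(entry: ProblemEntry) -> list[str]:
    """Selection: flag low-information entries rather than silently accepting them."""
    issues = []
    if entry.term not in SHARED_VOCABULARY:
        issues.append(f"'{entry.term}' is not a shared term; pick or propose one")
    if entry.term.startswith("anemia") and not entry.detail:
        issues.append("anemia entries need supporting detail")
    return issues

# A generic entry like the orthopedist's "pain in leg" gets flagged instead of
# quietly cluttering the list.
print(validate(ProblemEntry("pain in leg")))
```

The particular schema matters less than the process around it: if rules like these could be proposed, tested, and revised by the clinicians who live with them, the system would have both the mutation and the selection that adaptation requires.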

Why Doctors Hate Their Computers

The Mother of All Demos at 50

On December 9, 1968, Doug Engelbart put a ding in the universe. Over 90 minutes, he and his colleagues at Stanford Research Institute demonstrated an innovative collaborative computing environment to an audience at the Fall Joint Computer Conference in San Francisco. This visionary system pioneered many of the critical conceptual models and interaction mechanisms we take for granted in today’s personal computers: interactive manipulation of onscreen text, sharing files remotely, hypermedia, the mouse, windows, and more. It blew everybody’s mind.

Apple’s Macintosh — introduced in 1984 — was the first computing system to bring the innovations pioneered by Mr. Engelbart and his team to the masses. Macs were initially dismissed as “toys” — everybody who was a serious computer user knew that terminal commands were the way to go. Until they weren’t, and windows-based UIs became the norm. It took about a decade after the Mac’s introduction for the paradigm to take over. Roughly a quarter of a century after The Demo, it’d become clear that’s how computers were to be used.

We’re now in the midst of another paradigm shift in how we interact with computers. Most computer users today don’t work in WIMP environments. Instead of the indirect mouse-pointer interaction mechanism, people now interact with information directly through touchscreens. Instead of tethered devices propped atop tables, most computers today are small glass rectangles we use in all sorts of contexts.

Still, fifty years on, The Demo resonates. The underlying idea of computing as something that creates a collaborative information environment (instead of happening as a transactional user-machine interaction) is still very much at the core of today’s paradigm. Every time you meet with a friend over FaceTime or write a Google Doc with a colleague, you’re experiencing this incredibly powerful vision that was first tangibly articulated half a century ago.

A website — The Demo @ 50 — is celebrating Mr. Engelbart’s pioneering work on this milestone anniversary. The site is highlighting events in Silicon Valley and Japan to commemorate The Mother of All Demos. If you aren’t in either location, there are several online activities you can participate in at your leisure. If you join online, you’ll be able to commemorate The Demo in a most meta way: by doing so in the type of interactive information environments presaged by The Demo itself.

Developing a Mental Model of a System

To become proficient with a system, you must develop a mental model of how it works. This model must map to how the system is structured; you build it by interacting with the system. First impressions matter, but your understanding becomes more nuanced over time as you encounter different situations and conditions in the system. You also bring expectations to these interactions that influence your understanding. The degree to which your understanding becomes more accurate over time depends on how transparent the system is.

The Apple Watch serves as a good illustration. I’d never owned a smartwatch before buying mine, but I came to the experience of wearing a wrist-worn computer with expectations that were set by two devices that provided similar functionality: analog wristwatches and smartphones. From the former I brought assumptions about the Apple Watch’s timekeeping abilities and fit on the wrist, and from the latter expectations about a host of other features such as communication abilities, battery duration, legibility under various lighting conditions, how to access apps in the system, the fact there are apps at all, and so on.

In the first days after buying the Watch, I realized I had to adjust my model of how the device works. It wasn’t like my previous analog watch or my iPhone; some things were particular to this system that were very different from those other systems. For example, I had to learn a new way of launching apps. The starting point for most iPhone interactions is in a “home” screen that lists your apps. While the Watch also has a screen that lists your apps, that’s not where most interactions start; on the Watch, the starting point is your watch face. Watch faces can have “complications,” small widgets that show snippets of critical information. Tapping on a complication launches its related app. Thus, it makes sense to configure your favorite watch face with complications for the apps you use most frequently. This is a different conceptual model than the one offered by the analog watch or the smartphone.

After some time of using the Apple Watch, I now understand how it is structured, and how it works — at least when it comes to telling time and using applications. There’s an aspect of the system that still eludes me: which activities consume the most energy. For a small battery-powered computer like the Apple Watch, power management is crucial. Having your watch run out of power before your day is over can be annoying. This often happens to me, even after a few years of using this device. I’ve tried many things, but I still don’t know why some days end with 20% of battery left on the watch while others end with a dead watch before 5 pm. If the Apple Watch were more transparent in showing how it’s using power, I’d be better at managing its energy usage.

The tradeoff with making the system more transparent is that doing so can increase complexity for end users. I’m not sure I’d get more enjoyment from my Apple Watch if I knew how much energy each app was consuming. Designers abstract these things so that users don’t have to worry about them. As users, the best we can do is deduce causal relationships by trying different things. However, after three years of Apple Watch ownership, I still don’t understand how it manages power. The system is inscrutable to me. While this frustrates me, it’s not a deal breaker in the same way not grokking the system’s navigation would be. Not all parts of the system need to be understandable to the same degree.

The Possible and the Practical

Rodney Brooks on why your promised flying cars (or hyperloops or autonomous vehicles) aren’t likely to happen any time soon:

The difference between the possible and the practical can only be discovered by trying things out. Therefore, even though the physics suggests that a thing will work, if it has not even been demonstrated in the lab you can consider that thing to be a long way off. If it has been demonstrated in prototypes only, then it is still distant. If versions have been deployed at scale, and most of the necessary refinements are of an evolutionary character, then perhaps it may become available fairly soon. Even then, if no one wants to use the thing, it will languish in the warehouse, no matter how much enthusiasm there is among the technologists who developed it.

The Rodney Brooks Rules for Predicting a Technology’s Commercial Success