Climate change has become a defining factor in companies’ long-term prospects. Last September, when millions of people took to the streets to demand action on climate change, many of them emphasized the significant and lasting impact that it will have on economic growth and prosperity – a risk that markets to date have been slower to reflect. But awareness is rapidly changing, and I believe we are on the edge of a fundamental reshaping of finance.
As reported in The Financial Times, BlackRock is backing up this position by changing its investment strategies towards more sustainable opportunities. The company will consider environmental, social, and governance factors along with financial factors when analyzing risk. (A report in Ars Technica explains in more detail the changes BlackRock is implementing.)
The long-term viability of our civilization rests on the sustainability of our ecosystems. For too long our organizations have operated using business models that don’t account for the full impact of their decisions. Finance underlies those decisions, so it gives me hope to see powerful financial actors adopting a more systemic accounting for their investments.
By this time twenty years ago, many of us were feeling relieved. We’d been hearing for months about the near-certain fallout from the “Y2K bug”: widespread computer system failures caused by the practice of shortening years to two digits instead of four (e.g., 99 rather than 1999). But by mid-January 2000, it was clear that all would be OK. Or so it seemed.
Some context, in case you weren’t around then. By the mid-1990s, computer systems were already essential parts of our infrastructure. Nobody knew how many of these computers had the bug or what would happen after 11:59 pm on December 31, 1999, when these systems would roll over to a year of “00” and assume it was January 1, 1900. Would there be blackouts? Urban transport cancellations? Airplane collisions? The complexity of such infrastructure-level systems made the consequences impossible to predict. Governments and companies undertook massive and expensive projects to “fix” the problem. COBOL programmers suddenly found their skills in demand.
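The bug itself is easy to sketch. Here’s a minimal, hypothetical Python illustration (the function names are mine, not from any real Y2K system) of how two-digit years break date arithmetic, and of the “windowing” shortcut many remediation projects used:

```python
def years_between(yy_start, yy_end):
    """Naive elapsed-years calculation on two-digit years,
    as many pre-2000 systems did."""
    return yy_end - yy_start

# A record created in 1999 ("99") and checked in 2000 ("00"):
print(years_between(99, 0))  # -99: the system thinks time ran backwards

def expand_year(yy, pivot=50):
    """A common remediation shortcut: interpret two-digit years
    via a pivot window (here, 00-49 -> 2000s, 50-99 -> 1900s)."""
    return 2000 + yy if yy < pivot else 1900 + yy

print(expand_year(99))  # 1999
print(expand_year(0))   # 2000
```

Windowing bought time without widening every stored field to four digits, at the cost of simply deferring the ambiguity to a later pivot date.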
Then nothing happened. By the end of the first week of January 2000, it was clear that either the fixes had been successful or the potential downsides overblown. Those of us who’d been stressing out about the Y2K bug felt relieved and quickly forgot about it.
I took Christmas Day off: no client work, no podcast editing, no writing. Instead, I spent the day playing with my kids. Mostly, we built LEGO sets.
Although I am not an AFOL, LEGO is an important part of my life. I use it in my systems class and have written about some lessons it holds for systems thinkers. More importantly, I love playing with LEGO. It’s my favorite toy — and has been since I was a child.
Yesterday, as I helped my daughter build set #10260, I reflected on why I love the bricks so much. It boils down to the following:
In his book Where the Action Is, Paul Dourish surfaces a key distinction in software: that of the user interface as an abstraction of the implementation details that underlie it:
The essence of abstraction in software is that it hides implementation. The implementation is in some ways the opposite of the abstraction; where the abstraction is the gloss that describes how something can be used and what it will do, the implementation is the part under the covers that describes how it will work. If the gas pedal and the steering wheel are the abstraction, then the engine, power train, and steering assembly are the implementation.
Designers often focus on this abstraction of the system — the stuff users deal with. As a result, we spend a lot of cycles understanding users. But for the interface to be any good, designers must also understand the implementation: the system’s key elements, how they interact with each other, its processes, its regulation mechanisms, and so on.
Sometimes, as with a new (and perhaps unprecedented) system, this implementation itself is in flux, evolving subject to the system’s goals and the needs of the people who will interact with the system. That is, it’s not all front-end: the implementation is part of the design remit; both the implementation and its abstraction are the object of design.
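In code, Dourish’s distinction shows up as a public interface hiding internal machinery. A minimal sketch, extending his car analogy (the class and names are hypothetical, chosen only to illustrate the point):

```python
class Car:
    """The abstraction: the few things callers can do."""

    def __init__(self):
        self._rpm = 0  # implementation detail, hidden from callers

    def press_gas(self, amount):
        """The 'gas pedal': callers say what they want,
        not how the engine achieves it."""
        self._rpm = min(self._rpm + amount * 100, 6000)  # redline cap

    def speed(self):
        # The 'engine': how speed derives from rpm is invisible outside.
        return self._rpm / 60

car = Car()
car.press_gas(5)
print(car.speed())  # callers never touch _rpm directly
```

Designing only the `press_gas`/`speed` surface without understanding the rpm model underneath is exactly the trap the passage warns about.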
AI can help identify and reduce the impact of human biases, but it can also make the problem worse by baking in and deploying biases at scale in sensitive application areas.
The phrase “artificial intelligence” is leading us astray. For some folks, it’s become a type of magical incantation that promises to solve all sorts of problems. Much of what goes by AI today isn’t magic — or intelligence, really; it’s dynamic applied statistics. As such, “AI” is highly subject to the data being analyzed and the structure of that data. Garbage in, garbage out.
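“Garbage in, garbage out” is easy to demonstrate with even the simplest statistical model. A toy sketch (the data and group names are invented for illustration): a “model” that merely learns historical averages faithfully reproduces whatever skew the history contains.

```python
# Biased historical decisions, encoded as 1 (approved) / 0 (denied).
historical_approvals = {
    "group_a": [1, 1, 1, 0, 1],  # 80% approved historically
    "group_b": [0, 0, 1, 0, 0],  # 20% approved historically
}

def train(data):
    """'Training' here is just memorizing per-group approval rates."""
    return {group: sum(vals) / len(vals) for group, vals in data.items()}

model = train(historical_approvals)
print(model)  # {'group_a': 0.8, 'group_b': 0.2}: the past skew is now policy
```

Real systems are far more sophisticated, but the principle scales: a model optimized to fit biased data will deploy that bias, now with the authority of automation.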
It’s important for business leaders to learn about how AI works. The HBR post offers a good summary of the issues and practical recommendations for leaders looking to make better decisions when implementing AI-informed systems — which we all should be:
Bias is all of our responsibility. It hurts those discriminated against, of course, and it also hurts everyone by reducing people’s ability to participate in the economy and society. It reduces the potential of AI for business and society by encouraging mistrust and producing distorted results. Business and organizational leaders need to ensure that the AI systems they use improve on human decision-making, and they have a responsibility to encourage progress on research and standards that will reduce bias in AI.
Organizations never exist on their own; they’re part of an ecosystem, a web of relationships that make it possible for things to get done. Your decisions affect the ecosystem, and the decisions of others affect you.
This has always been so, of course, but the internet has made ecosystems more visible and susceptible to disruption. Transacting has become easier and faster. Changes are often immediate, have more impact, and lead to greater network effects. The balance of power shifts: organizations can leverage connections to go straight to consumers. Alternatively, intermediaries can create new roles for themselves, becoming purveyors of information as much as goods.
There are great opportunities for organizations that can affect system dynamics. But there are also risks — to themselves and to the ecosystem. For example, in a recent interview with economist Tyler Cowen, music critic Ted Gioia talked about the impact internet streaming has had on the music industry:
I’m currently reading Brad Stone’s The Everything Store, a history of Amazon.com. One of the early chapters is about the very early days of the company, which at that point was only selling books. In addition to showing information about products, founder Jeff Bezos wanted the site to include customer reviews of individual books.
Of course, some customer reviews were negative. Mr. Bezos received an angry letter from a book publishing executive, arguing that Amazon was in the business of selling books, not trashing them. But that was not the Amazon way. Per Mr. Bezos,
When I read that letter, I thought, we don’t make money when we sell things. We make money when we help customers make purchase decisions.
These two sentences struck me as a key insight: the particular sale isn’t the ultimate goal of the interaction; building the overall relationship with the customer is.
Long-term thinking is rare in business — especially in a fast-paced environment such as the early web. Nascent Amazon was under a great deal of pressure to prove itself, to grow. Driving more immediate sales would’ve seemed the more prudent approach. And yet, the team chose the long-term relationship. That’s values in action.
In your work, you may sometimes be called to choose between a feature that “drives the needle” in the short term versus one that builds an ongoing relationship. How do you choose? How do you measure the cost either way?
Our Sycamore processor takes about 200 seconds to sample one instance of a quantum circuit a million times—our benchmarks currently indicate that the equivalent task for a state-of-the-art classical supercomputer would take approximately 10,000 years. This dramatic increase in speed compared to all known classical algorithms is an experimental realization of quantum supremacy for this specific computational task, heralding a much-anticipated computing paradigm.
Quantum supremacy heralds an era of not merely faster computing, but one in which computers can solve new types of problems. There was a time when I’d expect such breakthroughs from “information technology” companies such as IBM. But Google’s tech is ultimately in service to another business: advertising.
Likes are one of the most important concepts of the Facebook experience. Giving users the ability to cast their approval (or disapproval) on a post or comment — and to see how others have “voted” — is one of the most engaging aspects of the system, both for users and content authors. Facebook even uses the Like icon as a symbol of the company as a whole:
On [September 26], the social network said it was starting a test in Australia, where people’s Likes, video view counts and other measurements of posts would become private to other users. It is the first time the company has announced plans to hide the numbers on its platform.
Why would they do this? Because seeing these metrics may have an impact on users’ self-esteem. According to a Facebook spokesperson quoted in the article, the company will be testing the change to see if it helps improve people’s experiences. A noble pursuit. But, I wonder: How would this impact user engagement? If it benefits users but hurts advertising revenue, will Facebook discontinue the experiment?