Reid Hoffman on Language, Big Tech, and Society

Tyler Cowen posted a great podcast interview with Reid Hoffman. Mr. Hoffman studied philosophy, and at one point in the conversation they discussed the influence of Wittgenstein on LinkedIn:

COWEN: How did your interest in the late Wittgenstein influence the construction and design of LinkedIn? I’m sure they ask you this all the time in interviews.

HOFFMAN: [laughs] All the time. The question I’ve always been expecting. I would say that the notion of thinking about — a central part of later Wittgenstein is to think that we play language games, that the way that we form identity and community, both of ourselves and as individuals, is the way that we discourse and the way that we see each other and the way that we elaborate language.

That pattern of which ways we communicate with each other, what’s the channel we do, and what’s the environment that we’re in comes from insights from — including later Wittgenstein, who I think was one of the best modern philosophers in thinking about how language is core to the people that we are and that we become.

Later, Mr. Hoffman addresses the relationship between large tech companies and society as a whole:

I agree that the technological companies are a very important Archimedean lever. What I disagree with is that, as those levers become more important to society, I think that we can’t actually, in fact, say, “We should not be in discourse with society about what we’re doing.” Because actually, in fact, once you begin to have society-level impact, not just a, “Oh, look, here’s a new social game or something else, here is a new Zynga game that’s fun to play.”

Okay, who cares? I mean, it’s fun. Maybe hundreds of millions of people do it, but that doesn’t actually impact the way that society operates.

The fact is, as we get to the way that society operates, we need to add in interaction with society, accountability on the design principles and the philosophy by which we are affecting society. We have to integrate in ways by which society can give us feedback. That doesn’t necessarily always mean direction. Maybe sometimes there’s regulation or other things, but an ability to be in conversation with them.

If you look at it, my view would be that, as you have more successful tech companies that have an impact on society as a whole, part of what you should start doing is deliberately engaging in discourse with the various institutions of society in a public way, to say, “Here is the kinds of things that not only are we doing now, but what we’re trying to create in the future.”

He doesn’t explicitly connect the two ideas in the interview, but I will: these places made of language increasingly impact the ways societies work. As a result, the organizations that operate them must engage in active dialog with other social institutions, including governments.

(Mr. Cowen’s podcast, Conversations With Tyler, features insightful interviews with people who are shaping our world. It’s worth your attention.)

Reid Hoffman | Conversations With Tyler

The Pull of the Present

By this time twenty years ago, many of us were feeling relieved. We’d been hearing for months about the near-certain fallout from the “Y2K bug”: widespread computer system failures caused by the practice of shortening years to two digits instead of four (e.g., 99 rather than 1999). But by mid-January 2000, it was clear that all would be OK. Or so it seemed.

Some context, in case you weren’t around then. By the mid-1990s, computer systems were already essential parts of our infrastructure. Nobody knew how many of these computers had the bug or what would happen after 11:59 pm on December 31, 1999, when these systems would roll over to a year of “00,” which many would interpret as 1900. Would there be blackouts? Urban transport cancellations? Airplane collisions? The complexity of such infrastructure-level systems made the consequences impossible to predict. Governments and companies undertook massive and expensive projects to “fix” the problem. COBOL programmers suddenly found their skills in demand.
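To make the mechanism concrete, here’s a minimal sketch (my own illustration, not from the original reporting) of how two-digit years break date comparisons at the century rollover:

```python
# A toy illustration of the Y2K bug: many legacy systems stored only the last
# two digits of the year, so date comparisons failed at the century rollover.

def is_later(two_digit_year_a: int, two_digit_year_b: int) -> bool:
    """Compare years the way a legacy system might: as bare two-digit numbers."""
    return two_digit_year_a > two_digit_year_b

print(is_later(99, 98))  # True: within the same century, comparisons still work
print(is_later(0, 99))   # False: the system concludes that 2000 comes *before* 1999
```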

Then nothing happened. By the end of the first week of January 2000, it was clear that either the fixes had been successful or the potential downsides overblown. Those of us who’d been stressing out about the Y2K bug felt relieved and quickly forgot about it.

Continue reading

How Social Media Warps Democracy

Jonathan Haidt and Tobias Rose-Stockwell writing in The Atlantic:

Madison’s design has proved durable. But what would happen to American democracy if, one day in the early 21st century, a technology appeared that—over the course of a decade—changed several fundamental parameters of social and political life? What if this technology greatly increased the amount of “mutual animosity” and the speed at which outrage spread? Might we witness the political equivalent of buildings collapsing, birds falling from the sky, and the Earth moving closer to the sun?

I find the authors’ argument compelling: social media has changed the nature of discourse in democratic societies. It’s not a content problem but a structural one, driven by our intrinsic craving for attention.

Can we recover? Perhaps — but change will require major structural interventions. The article suggests three that seem worthwhile. (Alas, no mention of the nefarious business model driving the major social networks: selling the attention of their users.)

The Dark Psychology of Social Networks – The Atlantic

The Golden Age of Learning Anything

Last weekend I did something I’d never done before: I reupholstered a chair. Here’s a photo of the final result:

Unfortunately, I don’t have a “before” photo to share. But take my word for it: my efforts improved this chair’s condition significantly. Before this weekend, it was unpresentable. Little fingers love to tug on tiny tears in vinyl until they become large and unsightly. Alas, it’s cheaper to buy new reproductions such as this one than to have them professionally reupholstered. But my conscience doesn’t let me throw away an otherwise good piece of furniture because of a fixable imperfection.

I’m sharing my weekend project here not to seek your approbation. Instead, I want to highlight that we live in a time when we can learn almost any skill on the internet. I learned to reupholster in a couple of hours through a combination of websites and YouTube videos. I researched and ordered the required materials on Amazon. It took some effort on my part, but it was worth it. I’m surprised at how well the chair turned out, given it was my first time.

As we head into a new year, I keep seeing pundits on Twitter claiming “big tech” is ruining everything. Of course, the real world isn’t as simple as these folks render it. Sure, there are negative aspects to our current large tech platforms — but there are positive ones too. The ability to pick up new knowledge and skills at any time, at our own pace, and at little cost is among the positives.

You Are Being Tracked

A few months ago, The New York Times obtained a dataset of people’s movements gathered from their smartphones. The initial report on their findings aims to make tangible the risks we’re running by permitting the private mass surveillance industry to continue tracking us as it has:

One search turned up more than a dozen people visiting the Playboy Mansion, some overnight. Without much effort we spotted visitors to the estates of Johnny Depp, Tiger Woods and Arnold Schwarzenegger, connecting the devices’ owners to the residences indefinitely.

If you lived in one of the cities the dataset covers and use apps that share your location – anything from weather apps to local news apps to coupon savers – you could be in there, too.

If you could see the full trove, you might never use your phone the same way again.

The report makes the implications obvious by identifying specific individuals. For example, the data revealed a Microsoft engineer visiting Amazon’s campus, where he now works. Another — who remains anonymous in the report — is a senior Defense Department official (and his wife), shown walking through the Women’s March in Washington. It’s not difficult to imagine the implications of such information being available to employers, governments, and other powerful organizations.

When we talk about regulating tech, we usually think of large data-driven companies. But data themselves are neutral. Large data sets can help us identify and cure medical conditions, for example. It’s where we apply data — and to what ends — that can get us in trouble. Surreptitiously gathering data about citizens and residents to better persuade us is about the most pernicious thing we can do to a free society. Yet here we are.

Twelve Million Phones, One Dataset, Zero Privacy

Mind Your Manners Online

In case you haven’t seen it, The New York Times has a new-ish section called Smarter Living that offers pointers on how to be effective in our hybrid physical-digital world. A recent article by Victoria Turk is representative; it highlights the importance of good manners online:

As more of our lives moves online, good digital etiquette is critical. Just as we judge people by their behavior IRL — in real life — so we take note when a person’s manners in the digital sphere leave something to be desired.

The article addresses some of the challenges of operating in contexts made of (written) language:

Both the content of your message and its tone will live or die based on what you type on your keyboard, so the gap between, say, landing a joke and causing mortal offense can be perilously fine.

It goes on to suggest ways in which you can be mindful about your online etiquette; all good reminders.

I’m glad to see major publications like the Times acknowledging the contextual nature of our digital environments. Being effective in today’s world requires that we become adept at operating in places made of text. Minding our manners in these places is perhaps more important than in physical environments since written language is so easy to misinterpret. It also sticks around: spoken words are evanescent, but your online posts will be there for a long time.

While the NYT article doesn’t mention it, for me, an important part of minding my manners online is reminding myself that I’m dealing with other people, not just collections of pixels and metadata. These people — different though their positions may be from mine — also experience joy and suffering and all the tribulations of being human. I’m often reminded of this beautiful admonition from Kurt Vonnegut:

Hello, babies. Welcome to Earth. It’s hot in the summer and cold in the winter. It’s round and wet and crowded. At the outside, babies, you’ve got about a hundred years here. There’s only one rule that I know of, babies — ‘God damn it, you’ve got to be kind.’

You Live Your Life Online. Don’t Forget Your Manners

Preserving Open Source Software for the Long Term

We take much of our digital infrastructure for granted; stuff “just works,” day after day. But things can change fast. What would happen if contextual conditions — social, natural, or whatever — radically devolved into chaos? It’d be good to be able to reboot things in case of a digital dark age. Looking to address this contingency, the Long Now Foundation has partnered with GitHub on its new GitHub Archive Program:

Taking its lessons from past examples when crucial cultural knowledge was lost, such as the Great Library of Alexandria (which was burned multiple times between 48 BCE and 640 CE) and the Roman recipe for concrete, the GitHub Archive is employing a LOCKSS (“Lots Of Copies Keep Stuff Safe”) approach to preserving open source code for the future.

The Archive will take periodic snapshots of all public GitHub repositories, and store them in various formats and locations, including the Arctic World Archive in Norway — a code-archiving strategy inspired by Long Now co-founder Stewart Brand’s pace layers framework.

Is it likely that we’ll lapse into a new dark age? I don’t know. Would the GitHub Archive help in such a circumstance? I don’t know, and I hope I never have to find out. But I do know it’s important to think about possible futures and plan accordingly. The GitHub Archive is a tangible example of such thinking.

In our sped-up age, it’s more important than ever that we make decisions with longer time perspectives in mind. By partnering with the GitHub Archive project, the Long Now Foundation (of which I’m a proud member) is carrying out its mission to foster long-term thinking. If you’re not a member yet, check out their seminar series about long-term thinking and consider joining.

Long Now Partners with GitHub on its Long-term Archive Program for Open Source Code

Causes of (and Remedies for) Bias in AI

James Manyika, Jake Silberg, and Brittany Presten writing for the Harvard Business Review:

AI can help identify and reduce the impact of human biases, but it can also make the problem worse by baking in and deploying biases at scale in sensitive application areas.

The phrase “artificial intelligence” is leading us astray. For some folks, it’s become a type of magical incantation that promises to solve all sorts of problems. Much of what goes by AI today isn’t magic — or intelligence, really; it’s dynamic applied statistics. As such, “AI” is highly subject to the data being analyzed and the structure of that data. Garbage in, garbage out.
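As a toy illustration of that point (mine, not from the HBR article), consider a “model” that simply learns to mimic skewed historical decisions; it reproduces the skew at scale:

```python
# A toy illustration of "garbage in, garbage out": a "model" that mimics
# skewed historical decisions reproduces the skew in its own predictions.

historical_decisions = (
    # (group, approved): group "A" was historically approved far more often
    [("A", True)] * 80 + [("A", False)] * 20 +
    [("B", True)] * 30 + [("B", False)] * 70
)

def learned_approval_rate(group: str) -> float:
    """The 'model': approve at whatever rate the historical data shows."""
    outcomes = [approved for g, approved in historical_decisions if g == group]
    return sum(outcomes) / len(outcomes)

print(learned_approval_rate("A"))  # 0.8: the bias in the data becomes the bias in the system
print(learned_approval_rate("B"))  # 0.3
```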

It’s important for business leaders to learn about how AI works. The HBR post offers a good summary of the issues and practical recommendations for leaders looking to make better decisions when implementing AI-informed systems — something all of us should be doing:

Bias is all of our responsibility. It hurts those discriminated against, of course, and it also hurts everyone by reducing people’s ability to participate in the economy and society. It reduces the potential of AI for business and society by encouraging mistrust and producing distorted results. Business and organizational leaders need to ensure that the AI systems they use improve on human decision-making, and they have a responsibility to encourage progress on research and standards that will reduce bias in AI.

What Do We Do About the Biases in AI?

Being Open to Unsettling Changes

My post about watching movies at double speed elicited strong reactions. Some folks seem convinced that giving people the ability to watch movies faster will diminish the viewing experience, and not just for them — for everyone. Why? Because such changes inevitably influence the medium itself.

Consider the way sound changed movies. “Talking” pictures did away with title cards. That was a significant change to the medium, wrought by advances in technology. Once it became possible to synchronize sound and pictures, irreversible changes to the medium were inevitable.

Are movies with sound better or worse than what came before? That’s a judgment call; it depends on your point of view. You and I grew up in a world of talking pictures; the silent ones with their title cards seem old and clunky. But they have merits too. Silent films had literary value. Many featured live musical accompaniment, which made seeing them more of an event than today’s fully pre-recorded movies. I can imagine that somebody who grew up with silent movies could become attached to the way they were.

Continue reading