David Pierce, writing in Protocol yesterday morning:

Starting on Wednesday, any Slack user will be able to direct message any other Slack user. The new system is called Connect DMs, and works a bit like the messaging apps and buddy lists of old: Users send an invite to anyone via their work email address, and if the recipient accepts (everything is opt-in), their new contact is added to their Slack sidebar. The conversations are tied to the users’ organizations, but exist in a separate section of the Slack app itself.

Lorenzo Franceschi-Bicchierai, writing in Vice a few hours later:

On Wednesday, Slack launched a new feature that allows users to message anyone else via direct messages, even if the receiver is outside of the sender’s organization. In other words, the feature allows anyone to connect with you privately on Slack. Critically, even if the feature is turned off on your Slack, you’ll still get an email notification and message from anyone trying to connect with you—including people who don’t work with you and can use this feature to sneak harassment into your inbox.

After experts in content moderation, and several other people, complained about this risk, Slack is already backtracking and limiting the feature, admitting it “made a mistake.”

Good on Slack for quickly pumping the brakes on this new ‘feature.’ But why was it released in this form to begin with? My sense is that Slack’s teams think of themselves as adding ‘features’ to a ‘product,’ instead of as stewards of a place where people work. As Andrew Hinton put it:

Few things are as sensitive in a social system as direct access to users and their contact details. Anyone who agrees to use a system does so under the conditions they understand when they sign up. To the degree that it’s clear, the system’s conceptual model establishes a compact with users.

In Slack’s case, the model includes scope of access, which users perceive to be limited to the organization that ‘owns’ their Slack account, plus a few others they’ve intentionally added. Changes to the model must take into account many scenarios, including (especially!) unsavory ones. Direct and unmoderated third-party access breaks the model, violating the compact.

This isn’t the first time we’ve seen something like this. Ten years ago, Google discontinued Buzz, an attempt to build a social network atop Gmail. A footnote in Google’s history, Buzz is today mostly remembered for its poorly conceived privacy model. Wikipedia has a good summary of what happened:

At launch, Google’s decision to opt-in its user base with weak privacy settings caused a breach of user information and garnered significant criticism. One feature in particular that was widely criticized as a severe privacy flaw was that by default Google Buzz publicly disclosed (on the user’s Google profile) a list of the names of Gmail contacts that the user has most frequently emailed or chatted with. Users who failed to disable this feature (or did not realize that they had to) could have sensitive information about themselves and their contacts revealed. This was later adjusted so that users had to explicitly add information that they want public.

As with Slack’s new feature, Buzz tried to leverage an existing social ‘space’ — Gmail — to kickstart its social graph. Alas, email is a different environment from a more public space like Buzz. People who sign up for an email application don’t expect their contact information to be used more broadly. A list of your most frequently emailed contacts can reveal an awful lot!

The upshot is that adding a new ‘feature’ can transform the nature of the place, so we must approach such changes with great care. In cases like these, conceptual modeling and service design matrices are important tools for helping us avoid unintended consequences. And we should always heed George Santayana’s counsel: “Those who cannot remember the past are condemned to repeat it.”