AI-driven “Design”?

Via Kenny Chen’s newsletter, I learned about Tricycle, a set of tools “that help you design products powered by AI.” I remember seeing tweets last year from Jordan Singer (Tricycle’s creator) that highlighted some of this functionality. Now it looks like Singer is productizing a bundle of GPT-3-powered Figma automation tools.

Crowdfunding as Market Validation

Sam Byford, writing on The Verge:

Amazon is launching the next line of products for its Build It crowdfunding platform. The company has collaborated with fashion designer Diane von Furstenberg on a range of Echo Dot smart speakers that’ll only go on sale if enough people pre-order them within 30 days.

Two thoughts. First, I wasn’t aware that Amazon had launched its own crowdfunding platform. Here’s more about Build It, “a new Amazon program [that] lets you weigh in on which devices we build next”:

Tweaking Users’ Mental Models

Allison Johnson, writing in The Verge:

… the original Apple II version [of the video game Karateka] included a delightful little easter egg from the early days of PC gaming — putting in the floppy disk upside down would boot up the game upside down.

According to [Karateka’s creator Jordan] Mechner, the game’s developers hoped that a few people would discover it by accident, and think their game was defective. “When that person called tech support, that tech support rep would once in a blue moon have the sublime joy of saying, ‘Well sir, you put the disk in upside-down,’” Mechner was quoted as saying in a recent profile, “and that person would think for the rest of their life that’s how software works.”

It may seem far-fetched to suggest users would expect that flipping the software media would cause the software itself to flip. But I’ve been surprised at the many ways people misunderstand how computers work.

Becoming Better Users of Online Information

Shira Ovide, reporting for The New York Times:

This week, Amazon acknowledged reality: It has a problem with bogus reviews.

The trouble is that Amazon pointed blame at almost everyone involved in untrustworthy ratings, and not nearly enough at the company itself. Amazon criticized Facebook, but it didn’t recognize that the two companies share an underlying problem that risks eroding people’s confidence in their services: an inability to effectively police their sprawling websites.

Learning from the masses is a promise of the digital age that hasn’t panned out. It can be wonderful to evaluate others’ feedback before we buy a product, book a hotel or see a doctor. But it’s so common and lucrative for companies and services to pay for or otherwise manipulate ratings on all sorts of websites that it’s hard to trust anything we see.

It’s a gross exaggeration to say learning from the masses hasn’t panned out. Overall, online product reviews — uneven as they are — have made us much more informed shoppers. They’re certainly better than the alternatives that existed before the internet. (Mostly: nothing.)

But as with so much of what we read online, we must develop some skepticism. If a system can be gamed — and if humans have incentives to do so — then the system will be gamed. As a result, we must take what we read as one of several inputs when making decisions.

Can platforms improve the accuracy of the information in their systems? Of course they can. As the article notes, Amazon is taking steps to curtail manipulated reviews and ratings. But we, too, can become better users of such information by honing our online B.S. detection skills.

Amazon’s Open Secret – The New York Times

Machine Intelligence and the Design of Complex Systems

Adobe’s Patrick Hebron, in an interview for Noema (from September 2020):

If you’re building a tool that gets used in exactly the ways that you wrote out on paper, you shot very low. You did something literal and obvious.

The relationship between top-down direction and bottom-up emergence is a central tension in the design of complex systems. Without some top-down direction, the system won’t fulfill its purposes. However, if it doesn’t allow for bottom-up adjustments, the system won’t adapt to conditions on the ground — i.e., it won’t be actualized as a real thing in the world. What’s needed is a healthy balance between bottom-up and top-down.

The Last Illusion

Brian Eno, in an interview from 1995:

[INTERVIEWER:] On the [Nerve Net] album jacket you have a number of terms describing the music. One of those terms is “Godless.”

[ENO:] I’m an atheist, and the concept of god for me is all part of what I call the last illusion. The last illusion is that someone knows what is going on. That’s the last illusion. Nearly everyone has that illusion somewhere, and it manifests not only in the idea that there is a god who knows what’s going on, but that the planets know what’s going on. Astrology is part of the last illusion. The obsession with health is part of the last illusion: the idea that if only we could spend time on it, and sit down and stop being unreasonable with each other, we’d all find that there was a structure and a solution, an underlying plan to it all. For most people the short answer to that is God.

Well, what I want to indicate by that word godless is not only god in the religious sense, but I am trying to accept and enjoy the idea that we never will reach that condition of agreement, of certainty; that actually we’re unanchored, we’re floating around, and we’re actually guessing. That’s what we’re doing. Everyone is making guesses, and trying to make the best of it, watching what happens and being empirical about it. There won’t be a plan, so godless, like most of those words, has a lot of resonance for me.

The last illusion has a lot of resonance for me too. (Although I don’t use the word “godless” — there’s irony in saying that it’s illusory to believe someone knows what’s going on immediately after declaring yourself an atheist.)

The last illusion is alluring. It’s scary to live unmoored, doing our best with what little information we have on hand. It’s scary to accept responsibility for failures — and successes. It’s scary to be uncertain. So much more comforting to buy into an exculpatory narrative — especially when everyone else is buying too.

And yet, the ultimate cost for such comfort is agency. The more we blame the stars, the gods, the Man, the system, [pick your favorite -ism], or whatever external abstract force for how things turn out, the less compelled we are to plumb our personal role in the matter. By surrendering to pre-packaged explanations, we risk atrophying the one thing we can control: our ability to sense and respond, to evolve.

Which isn’t to say those external forces aren’t real. Some are significant factors in how things turn out. Ideologies have a track record of creating horrible suffering in the world. But they’re not real in the same sense that the damned table you stubbed your toe on is real. They’re abstractions — models for interpreting reality.

You can choose how to frame your experiences. Few things have a greater impact on the quality of your life (and the lives of people around you) than the models you adopt. And anyone who claims to have the ultimate model is peddling an illusion. A molder, not a dancer. Caveat emptor.

Godless: An Unpublished Interview With Brian Eno

Dark IA?

Tyler Sonnemaker, reporting for Insider:

Newly unredacted documents in a lawsuit against Google reveal that the company’s own executives and engineers knew just how difficult the company had made it for smartphone users to keep their location data private.

Google continued collecting location data even when users turned off various location-sharing settings, made popular privacy settings harder to find, and even pressured LG and other phone makers into hiding settings precisely because users liked them, according to the documents.

The report alleges internal stakeholders weren’t clear on the system’s structure:

Jen Chai, a Google senior product manager in charge of location services, didn’t know how the company’s complex web of privacy settings interacted with each other, according to the documents.

Sounds like a concept map would help. But perhaps these issues could be due to more than a lack of understanding:

When Google tested versions of its Android operating system that made privacy settings easier to find, users took advantage of them, which Google viewed as a “problem,” according to the documents. To solve that problem, Google then sought to bury those settings deeper within the settings menu.

Google also tried to convince smartphone makers to hide location settings “through active misrepresentations and/or concealment, suppression, or omission of facts” — that is, data Google had showing that users were using those settings — “in order to assuage manufacturers’ privacy concerns.”

I don’t know anything about this case other than what is in the media, nor do I have firsthand experience with Android’s privacy settings. That said, these allegations bring to mind “Dark IA” — the opposite of information architecture.

Information architecture aims to make stuff easier to find and understand — implicitly, in service of empowering users. The antithesis of IA isn’t an unwittingly disorganized system, but one organized to inhibit understanding and deprive users of control.

Unredacted Google Lawsuit Docs Detail Efforts to Collect User Location

Modeling for Automated Organization

Zach Winn, reporting in MIT News:

MIT alumnus-founded Netra is using artificial intelligence to improve video analysis at scale. The company’s system can identify activities, objects, emotions, locations, and more to organize and provide context to videos in new ways.

Netra’s solution analyzes video content to identify meaningful constructs in service of more accurate organization. This improves searchability and the pairing of video content with relevant ads. How does this work?

Netra can quickly analyze videos and organize the content based on what’s going on in different clips, including scenes where people are doing similar things, expressing similar emotions, using similar products, and more. Netra’s analysis generates metadata for different scenes, but [Netra CTO Shashi Kant] says Netra’s system provides much more than keyword tagging.

“What we work with are embeddings,” Kant explains, referring to how his system classifies content. “If there’s a scene of someone hitting a home run, there’s a certain signature to that, and we generate an embedding for that. An embedding is a sequence of numbers, or a ‘vector,’ that captures the essence of a piece of content. Tags are just human readable representations of that. So, we’ll train a model that detects all the home runs, but underneath the cover there’s a neural network, and it’s creating an embedding of that video, and that differentiates the scene in other ways from an out or a walk.”

This notion of ‘vectors’ is intriguing — and it sounds like an approach that might be applicable beyond videos. I imagine analyzing the evolution of such vectors over time is essential to deriving relevant contextual information from timeline-based media like video and audio. But I expect such meaningful relationships could also be derived from text.
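To make that concrete, here is a minimal sketch in Python of the embedding-to-tag relationship Kant describes. It is not Netra’s system: the three-dimensional vectors, the labels, and the nearest-prototype lookup are all illustrative assumptions. The point is simply that the vector is the underlying representation, and the tag is just a human-readable label for the region of embedding space where a piece of content lands.

    # Illustrative sketch only: made-up 3-dimensional embeddings stand in for the
    # vectors a trained neural network would produce for each scene or passage.
    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Similarity between two embeddings; values near 1.0 mean 'same kind of content.'"""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Hypothetical labeled prototypes, e.g. averaged embeddings of known examples.
    prototypes = {
        "home run": np.array([0.9, 0.1, 0.0]),
        "strikeout": np.array([0.1, 0.8, 0.2]),
        "crowd celebration": np.array([0.3, 0.0, 0.9]),
    }

    def tag(embedding: np.ndarray) -> str:
        """Return the human-readable tag whose prototype best matches this embedding."""
        return max(prototypes, key=lambda label: cosine_similarity(embedding, prototypes[label]))

    # Embedding of a new scene (in practice, the output of the trained model).
    scene = np.array([0.85, 0.15, 0.05])
    print(tag(scene))  # -> home run

The same mechanism works for text: embed paragraphs instead of scenes and compare them the same way. And for timeline-based media, tracking how successive embeddings drift or cluster over time is one way to recover the kind of contextual information mentioned above.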

Systems that do this type of analysis could supplement (or eventually replace) the more granular aspects of IA work. Given the pace of progress in ML modeling, I expect “big” IA (especially high-level conceptual modeling) to represent the future of the discipline: as machines take over granular tagging and classification, the human work shifts toward defining the concepts and distinctions that matter.

Improving the way videos are organized | MIT News