Via Kenny Chen’s newsletter, I learned about Tricycle, a set of tools “that help you design products powered by AI.” I remember seeing tweets last year from Jordan Singer (Tricycle’s creator) that highlighted some of this functionality. Now it looks like Singer is productizing a bundle of GPT-3-powered Figma automation tools.
Tricycle seems like an intriguing use of machine learning for design work. I'm excited to try it and have signed up for the waiting list. I'll report back if/when I'm granted access. For now, I'll say I don't believe this is an example of AI designing. Instead, a closer analogy for what's going on here is high-level programming.
At their most basic (no pun intended), computers don't understand English commands. To "talk" to a computer directly, you must learn machine code: instructions encoded in binary, the ones and zeros at the core of how computers work.
Binary is hard for humans to grok, so software engineers have created programming languages and compilers/interpreters that translate intent from human-friendly terms to commands the computer understands.
Programming languages range from low-level (i.e., close to the machine), such as assembly language, to high-level (i.e., close to English), such as Python. Higher-level languages are built atop lower-level ones; they translate English-like intent into commands closer to what the machine executes.
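You can watch this translation happen. A minimal sketch in Python: the `dis` module in the standard library shows the lower-level bytecode the interpreter produces from a high-level, near-English statement.

```python
import dis

# A high-level statement: intent expressed in near-English terms.
def add(a, b):
    return a + b

# The interpreter translates it into lower-level bytecode instructions,
# one rung closer to what the machine actually executes.
dis.dis(add)
```

Running this prints a listing of bytecode operations; the single `a + b` becomes a handful of load, add, and return instructions. Keep descending through rungs like this and you eventually reach the ones and zeros.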
Tricycle seems to be bumping the user interface of design tools (Figma, in this case) up to a higher level of abstraction. Rather than learning Figma’s “native” UI, users can type English phrases such as “blue box” and “navigation bar with a camera icon.”
Like a programming language interpreter, GPT-3 translates the designer's intent from a language they're already familiar with (English) to one they'd otherwise need to learn (Figma's information architecture, as manifested in its UI). This can be easier for a new/busy designer, much like Python is easier and faster to work with than assembly language.
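To make the interpreter analogy concrete, here's a toy sketch in Python. The phrase table and command schema are hypothetical stand-ins: a real system like Tricycle presumably uses GPT-3 rather than a lookup table, but the shape of the job is the same, English in, tool-level commands out.

```python
# Hypothetical mapping from a designer's English phrase to a structured
# command a design tool could execute. In Tricycle, GPT-3 presumably
# plays this role with far more flexibility than a fixed table.
PHRASE_TO_COMMAND = {
    "blue box": {"action": "create", "type": "rectangle", "fill": "#0000FF"},
    "navigation bar with a camera icon": {
        "action": "create",
        "type": "frame",
        "children": ["nav_bar", "camera_icon"],
    },
}

def translate(phrase: str) -> dict:
    """Translate a designer's English intent into a tool-level command."""
    try:
        return PHRASE_TO_COMMAND[phrase.lower()]
    except KeyError:
        raise ValueError(f"Don't know how to interpret: {phrase!r}")

print(translate("blue box"))
```

Notice what the translator does not do: it never decides *whether* a blue box is the right design choice. It only lowers an already-formed intent to a more machine-friendly level of abstraction.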
But that’s not “designing” — at least not any more than compiling Python code is “programming.” In both cases, all the system does is translate human intent into a lower level of abstraction. Sure, the process saves time — but the key is getting the intent part right. I’ll be convinced the system is “designing” when it can produce a meaningful output to a directive like “change the product page’s layout to increase conversions.”
Although I don’t discount the possibility of such systems existing, I expect they’re far in the future. Currently, AI-driven design tools seem like a more convenient UI for expressing intent — but not much more than that.