<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
  <generator uri="https://jekyllrb.com/" version="4.2.2">Jekyll</generator>
  <link href="https://jarango.com/feed.xml" rel="self" type="application/atom+xml" />
  <link href="https://jarango.com/" rel="alternate" type="text/html" />
  <updated>2026-04-09T09:08:08-07:00</updated>
  <id>https://jarango.com/feed.xml</id>
  <title type="html">Jorge Arango</title>
  <subtitle>Information Architecture Consulting &amp; Training Services </subtitle>
  <author>
    <name>Jorge Arango</name>
  </author>

  
  
  <entry>
    <title type="html">Traction Heroes Ep. 33: Perceptions</title>
    <link href="https://jarango.com/2026/04/06/traction-heroes-ep-33-perceptions/" rel="alternate" type="text/html" title="Traction Heroes Ep. 33: Perceptions" />
    <published>2026-04-06T00:00:00-07:00</published>
    <updated>2026-04-06T00:00:00-07:00</updated>
    <id>https://jarango.com/2026/04/06/traction-heroes-ep-33-perceptions</id>
    <content type="html" xml:base="https://jarango.com/2026/04/06/traction-heroes-ep-33-perceptions/"><![CDATA[<div class="embed-container youtube-wrapper">
  <iframe src="https://www.youtube.com/embed/qwp0kiLlDTM" allowfullscreen=""></iframe>
</div>

<p>Here’s a tricky situation: you start reading someone through a negative lens, which changes how you interact with them. They respond in kind, which seems to confirm your negative views. Cue vicious cycle.</p>

<p>In any situation, you are both observer <em>and</em> participant, whether you realize it or not. And often, you’re responding not just to the person in front of you, but to your story about them.</p>

<p>This mind-bending topic was the subject of <a href="https://www.tractionheroes.com/2439976/episodes/18958536-perceptions">episode 33</a> of <a href="https://www.tractionheroes.com"><em>Traction Heroes</em></a>. Harry brought a reading from Nir Eyal’s <a href="https://www.amazon.com/Beyond-Belief-Science-Backed-Limiting-Breakthrough-ebook/dp/B0FW9VBQP9/"><em>Beyond Belief</em></a> to set up the conversation. Here’s one of the key bits:</p>

<blockquote>
  <p>More often, it’s our brains creating problems because none exist. Since perception follows belief, we perceive the problems we look to find and if we can’t find them, our brain skews the data to fit the brief. If you believe your partner is constantly criticizing you, innocent comments transform into attacks. If you believe your boss doesn’t value you, any feedback becomes proof of your perceived inadequacy. This cycle becomes dangerous when it reinforces our negative beliefs, locking us into a belief-driven feedback loop that distorts reality and quietly builds a prison of our own making.</p>
</blockquote>

<p>I’ve been there, and I’m sure you have too. You may have even unwittingly flipped someone’s “bozo bit,” leading to a strain in the relationship that can be hard to undo.</p>

<p>The question is: what can you do about it? As with so many other topics we’ve discussed in the podcast, it comes down to self-awareness: having the wherewithal to step back and realize you’re layering meaning onto situations.</p>

<p>Easier said than done! You want to perceive clearly to avoid misreadings, but you don’t want to lapse into paranoia, which casts a negative valence of its own.</p>

<p>Often, our misperceptions become obstacles to gaining traction. Surfacing them is a start, but we also explored practical suggestions in the podcast. Check it out:</p>

<p><a href="https://www.tractionheroes.com/2439976/episodes/18958536-perceptions"><em>Traction Heroes episode 33: Perceptions</em></a></p>]]></content>
    <author>
      <name>Jorge Arango</name>
    </author>
    
    <category term="Podcast" /><category term="Values" /><category term="Leadership" />
    <summary type="html"><![CDATA[How our beliefs about others shape our relationship with them, for better or worse — and what to do about it.]]></summary>
    
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://jarango.com/assets/images/jarango-title-card-ph.jpg" />
    <media:content medium="image" url="https://jarango.com/assets/images/jarango-title-card-ph.jpg" xmlns:media="http://search.yahoo.com/mrss/" />
  </entry>
  
  <entry>
    <title type="html">Finding Our Way Podcast, Ep. 69</title>
    <link href="https://jarango.com/2026/03/30/finding-our-way-podcast-ep-69/" rel="alternate" type="text/html" title="Finding Our Way Podcast, Ep. 69" />
    <published>2026-03-30T00:00:00-07:00</published>
    <updated>2026-03-30T00:00:00-07:00</updated>
    <id>https://jarango.com/2026/03/30/finding-our-way-podcast-ep-69</id>
    <content type="html" xml:base="https://jarango.com/2026/03/30/finding-our-way-podcast-ep-69/"><![CDATA[<audio controls="" src="https://findingourway.design/wp-content/uploads/2026/03/FOW-e69-Jorge-Arango-II.mp3"></audio>

<p>Sure, AI can help you move faster. But are you moving in the right direction? How do you know? These are the key questions <a href="https://jessejamesgarrett.com">Jesse James Garrett</a>, <a href="https://www.petermerholz.com">Peter Merholz</a>, and I explored in <a href="https://findingourway.design/2026/03/27/69-in-a-world-of-ai-what-is-the-work-really-about-ft-jorge-arango/">episode 69</a> of their <a href="https://findingourway.design"><em>Finding Our Way</em> podcast</a>.</p>

<p>Leadership entails acting intelligently — i.e., moving in the right direction for the right reasons. This requires seeing clearly. Tools can help… or they can make it harder while <em>seeming</em> to help.</p>

<p>The question is, how do you do it? I’m a big fan of understanding the technology firsthand. But we must also understand how the technology changes the nature of the work.</p>

<p>AI calls for moving up the abstraction stack. It’s similar to what happened with computer programming, which went from flipping bits to assembly language to higher-level languages and now coding agents. The question before design and product leaders isn’t whether this shift will happen to design: it’s whether they’re ready to lead at the right level.</p>

<p>A bifurcation is coming. Organizations that figure out the role the technology plays in this shift will thrive. Those that don’t will crank out work faster — but the work will be increasingly misaligned with the business’s needs.</p>

<p>As I said near the end of the episode, if you come out of any of these conversations feeling like you’ve got the answer, you’re probably wrong. The technology is changing too fast. What you can get is a clearer read on the context. Hopefully, this conversation helps.</p>

<p><a href="https://findingourway.design/2026/03/27/69-in-a-world-of-ai-what-is-the-work-really-about-ft-jorge-arango/"><em>Finding Our Way, Ep. 69: In a World of AI, What is the Work Really About? (ft. Jorge Arango)</em></a></p>]]></content>
    <author>
      <name>Jorge Arango</name>
    </author>
    
    <category term="Artificial Intelligence" /><category term="Business &amp; Leadership" />
    <summary type="html"><![CDATA[A conversation about what AI really demands of design and product leaders.]]></summary>
    
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://jarango.com/assets/images/jarango-title-card-ph.jpg" />
    <media:content medium="image" url="https://jarango.com/assets/images/jarango-title-card-ph.jpg" xmlns:media="http://search.yahoo.com/mrss/" />
  </entry>
  
  <entry>
    <title type="html">Robots in the Garden</title>
    <link href="https://jarango.com/2026/03/27/robots-in-the-garden/" rel="alternate" type="text/html" title="Robots in the Garden" />
    <published>2026-03-27T00:00:00-07:00</published>
    <updated>2026-03-27T00:00:00-07:00</updated>
    <id>https://jarango.com/2026/03/27/robots-in-the-garden</id>
    <content type="html" xml:base="https://jarango.com/2026/03/27/robots-in-the-garden/"><![CDATA[<p><em>This post is based on a talk I delivered at the third PKM Summit in Utrecht on March 20, 2026.</em></p>

<p>I’m pleased to talk PKM in a city where Erasmus of Rotterdam — one of my intellectual heroes — spent some of his early years. This presentation has two purposes. First, I’ll give you a different frame to think about personal knowledge management. Then, I’ll share ways to use AI effectively in that context. I expect Erasmus would’ve been tickled!</p>

<p>Last year, Lou Rosenfeld sent me an email asking if I’d seen a post by Joan Westenberg titled <a href="https://www.joanwestenberg.com/i-deleted-my-second-brain-692aa40d59d5f06dd5131e43/"><em>I Deleted My Second Brain</em></a>. Back in 2024, Lou published my book <a href="https://dulynoted.fyi"><em>Duly Noted</em></a>, which shows you why and how to build a PKM system. Westenberg’s post argued against doing just that. Naturally, Lou wondered what I thought.</p>

<p>I won’t recap the whole post. The TL;DR: the author built an elaborate PKM system. After some time, the system wasn’t producing the expected results, so they got rid of it. Rather than summarize further, I’ll cite a couple of representative passages:</p>

<blockquote>
  <p>In trying to remember everything, I outsourced the act of reflection. I didn’t revisit ideas. I didn’t interrogate them. I filed them away and trusted the structure. But a structure is not thinking. A tag is not an insight. And an idea not re-encountered might as well have never been had.</p>
</blockquote>

<p>And:</p>

<blockquote>
  <p>When I first started using PKM tools, I believed I was solving a problem of forgetting. Later, I believed I was solving a problem of integration. Eventually, I realized I had created a new problem: deferral. The more my system grew, the more I deferred the work of thought to some future self who would sort, tag, distill, and extract the gold.</p>

  <p>That self never arrived.</p>
</blockquote>

<p>It’s a good and nuanced post, and you should read it. That said, it doesn’t cover new ground. Even when I was writing the book in 2022–23, there were already posts with titles like <em>Note-taking Became a Full-time Job, so I Stopped</em>, <em>Personal Knowledge Management Is Exhausting</em>, and — my favorite! — <em>Personal Knowledge Management Is Bullshit</em>.</p>

<p>They all trace a similar arc: lured by visions of increased productivity, the author builds an elaborate PKM system. They spend lots of time capturing notes, tagging, linking, organizing, etc. But the expected results never come. Eventually, the author decides the effort is futile, and gives up. Deleting the system brings a sense of relief and renewed agency, which they feel compelled to share.</p>

<p>I’m not here to diss these people. PKM systems aren’t for everyone. If they’re not for you, the sooner you stop, the better — perhaps. But I also believe mindset influences the value you get from these systems. And unfortunately, the most common framing for PKM sets the wrong mindset. It’s the metaphor in the title of Westenberg’s post: <em>second brain</em>.</p>

<h2 id="problems-with-the-second-brain-metaphor">Problems with the “second brain” metaphor</h2>

<p>In <em>Metaphors We Live By</em>, Lakoff and Johnson explain that metaphors don’t just reflect how we <em>talk</em> about things; they also shape how we <em>think</em> about things, deep down. Which is to say, metaphors matter. And I’ve come to believe the “second brain” metaphor leads to bad thinking about PKMs.</p>

<p>Before proceeding, I’ll say upfront that I admire Tiago Forte’s work. His PARA taxonomy has influenced me. And on the upside, thinking of PKMs as a “second brain” has brought lots of people into the fold.</p>

<p>That said, I think the “second brain” metaphor has three problems:</p>

<ol>
  <li><strong>It implies delegating cognition.</strong> The promised outcome is a prosthetic mind. That is, the system will relieve you of thinking and (especially!) long-term recall. (Westenberg: “I believed I was solving a problem of forgetting.”)</li>
  <li><strong>It sets expectations PKMs can’t meet.</strong> This isn’t a promise current PKMs — even with AI — can deliver. The system won’t “extract the gold,” at least not for a long time and after a lot of work on your part.</li>
  <li><strong>These are bad expectations to begin with.</strong> Even if PKMs could do this, you shouldn’t want it. If you want to think better, your goal shouldn’t be to delegate your thinking: it should be to enable your <em>first brain</em> to work better.</li>
</ol>

<h2 id="enter-the-knowledge-garden">Enter the knowledge garden</h2>

<p>A more fruitful metaphor for PKMs is that of a <em>garden</em>. Many of us already talk about our PKM systems as “places” where we do focused work. This is why I like the garden metaphor: it’s about building a <em>context for you to think in</em> rather than <em>a thing that thinks for you</em>. But it goes beyond that.</p>

<figure class="image">
  <img src="/assets/images/2026/03/garden-arches.jpg" width="100%" alt="Pathway lined with colorful flowers leads under arched green trellises toward a house, against a backdrop of lush trees and bright blue sky." />
  <figcaption><p>Photo by <a href="https://unsplash.com/@vereverse?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Veronica Reverse</a> on <a href="https://unsplash.com/photos/single-perspective-of-pathway-leading-to-house-qYwyRF9u-uo?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></p>
</figcaption>
</figure>

<p>There are different kinds of gardens for different purposes. Some are for pleasure, while others are for growing food. Some are industrial; others artisanal. What they all have in common: things grow there. And it doesn’t happen overnight, but after much toil in the soil. For a garden to fulfill its purpose — whatever it might be — it must be stewarded over a long time.</p>

<p>Also, a garden’s structure can’t be rigidly top-down. While <em>some</em> structure is needed, the place’s form emerges over time as it meets real-world needs. Thinking about PKM as a productivity hack leads to overemphasizing upfront structures and workflows at the expense of the more patient approach required by organic processes.</p>

<p>Finally, for many gardeners, the fruit is only part of their garden’s value. Gardening is pleasurable <em>per se</em>. It’s not something they do just because they want to eat. After all, it’s cheaper and easier to go to the supermarket. Instead, they garden because they find it fulfilling.</p>

<p>Many a garden’s deeper purpose is providing the kind of groundedness that comes from putting your hands in the soil and nurturing living things. Fruit that comes from such a place tastes better than what you buy at the store — even if (or perhaps <em>because</em>) you’ve put a lot of work into it.</p>

<p>A garden provides solace and recreation — the opposite of the anxiety that overhangs systems built as productivity hacks. My PKM system does the same. So I call it my “knowledge garden,” riffing on the popular digital garden metaphor and Andy Matuschak’s evergreen notes, among others.</p>

<p>I approach my knowledge garden with Field Notes’s tagline in mind: “I’m not writing it down to remember it later, I’m writing it down to remember it now.” I don’t keep a PKM to remember things later, but because writing, structuring, and connecting ideas is how I <em>think</em>. That the words are there for recall later is a bonus, not the main attraction. <em>Clearer thinking</em> is the “gold”; the notes merely record that it happened.</p>

<p>But if the point is creating a place for your first brain to work better, that raises an increasingly pressing question: what role should AI — which is being explicitly framed as a prosthetic mind — play there?</p>

<h2 id="ai-as-amanuensis">AI as amanuensis</h2>

<p>To explain how to use AI in a knowledge garden, I’ll refer back to what I wrote in <em>Duly Noted</em>. While I wrote the bulk of the book before ChatGPT came out, the final chapter covers this topic. I’ll offer a quick summary here, but I’ve shared a <a href="https://jarango.com/2022/12/18/three-roles-for-robots/">longer post</a> should you want to dive deeper.</p>

<p>When thinking about your relationship with AI in general, it helps to consider a spectrum. On one end, you reject the technology completely: you don’t want it anywhere near your notes. On the other end, the AI completely replaces you.</p>

<figure class="image">
  <img src="/assets/images/2026/03/rig-spectrum-0.png" width="100%" alt="Horizontal arrow graphic with two directional points. Left end labeled 'Total rejection,' right end labeled 'Total replacement.' " />
  <figcaption>
</figcaption>
</figure>

<p>Neither extreme is desirable, so most approaches fall somewhere on the spectrum. Toward the “rejection” end, you use AI merely as a <em>copy editor</em>, correcting your spelling and grammar. Millions of people already use tools like Grammarly, and likely won’t object to using AI in this capacity.</p>

<figure class="image">
  <img src="/assets/images/2026/03/rig-spectrum-1.png" width="100%" alt="Horizontal line diagram with 'Total rejection' on the left, 'Total replacement' on the right, and a red dot labeled 'Copy editor' near the left side. " />
  <figcaption>
</figcaption>
</figure>

<p>Toward the other end, you use AI to write for you. I call this role the <em>ghost writer</em>, although for many people it’s become a <em>ghost thinker</em>.</p>

<figure class="image">
  <img src="/assets/images/2026/03/rig-spectrum-2.png" width="100%" alt="A horizontal line with arrows at both ends labeled 'Total rejection' on the left, 'Total replacement' on the right, and 'Ghost writer' marked in red near the right. " />
  <figcaption>
</figcaption>
</figure>

<p>I get the most value around the middle of the spectrum.</p>

<figure class="image">
  <img src="/assets/images/2026/03/rig-spectrum-3.png" width="100%" alt="Scale with a pointer labeled 'Amanuensis' positioned in the middle between the two extremes of 'Total replacement' and 'Total rejection'. " />
  <figcaption>
</figcaption>
</figure>

<p>There’s a historical precedent here. Some early modern scholars employed live-in secretaries to do various tasks for them: researching, indexing, archiving, retrieving, organizing, translating, summarizing, and running errands. While not as famous as their employers, these people were often seen more as collaborators than anonymous servants. They were called <em>amanuenses</em>.</p>

<figure class="image">
  <img src="/assets/images/2026/03/erasmus-cousin.jpg" width="100%" alt="Engraving of two men in scholarly attire seated at a table, writing. The text 'COGNATVS' and 'ERASMVS' appears above them, surrounded by ornate windows and decorations." />
  <figcaption><p>Erasmus of Rotterdam working alongside his amanuensis Gilbert Cousin, via <a href="https://de.wikipedia.org/wiki/Gilbert_Cousin#/media/Datei:Cognatus-erasmus.tiff">Wikimedia</a></p>
</figcaption>
</figure>

<p>I consider amanuensis to be the ideal role for AIs in your knowledge garden.</p>

<h2 id="robots-in-the-garden">Robots in the garden</h2>

<p>This sounds nice in theory, but how does it work in practice? To find out, I undertook a major personal project last year. It’s something I’d always wanted to do: read through the humanities — the major texts that have shaped civilization: Homer to Faulkner, Plato to Freud, the <em>Book of Job</em> to the <em>Communist Manifesto</em>.</p>

<p>Daunting, right? Of course, it’s impossible to do comprehensively in a year. I followed Ted Gioia’s <a href="https://www.honest-broker.com/p/a-12-month-immersive-course-in-humanities">excellent syllabus</a>, which curates key texts, artworks, and musical masterpieces into 52 weeks with reasonable limits. (As you’ll see below, I added cinema to the mix.)</p>

<p>There were two goals to the project. Primarily, I aimed to learn about the main ideas that have shaped our world. But this was also an opportunity to explore how AI might help with such an undertaking. I blogged what I learned every week, including how I used AI.</p>

<p>It was a messy process. That’s what you do in a garden! And the outcome wasn’t an enthusiastic endorsement of AI. Instead, I landed on a map of roles and modalities for how AI can help at different points on the spectrum. Let’s look at nine of these roles.</p>

<h3 id="1-tutor">1. Tutor</h3>

<p><img src="/assets/images/2026/03/rig-role-1.png" width="240" style="float: right; margin: 0 0 20px 20px" alt="Cartoon drawing of a green robot in a tweed jacket pointing at a book. Image by Nano Banana 2." /></p>

<p>The simplest role for AI is as a tutor. You ask it to explain a difficult concept, clarify a confusing passage, translate jargon, etc. I mostly did this via the standard chat UI (although I created a ChatGPT project to preserve context for the course).</p>

<p><em>Example:</em></p>

<p>While reading Freud’s <em>The Interpretation of Dreams</em>, I came across three unfamiliar German terms: <em>es</em>, <em>ich</em>, and <em>über-ich</em>. ChatGPT helpfully explained these are more commonly known as <em>id</em>, <em>ego</em>, and <em>superego</em> — three terms I already understood.</p>

<p><em>Suggested prompt:</em></p>

<blockquote>
  <p>I just read [PASSAGE]. I understand [X] but I’m confused about [Y]. Can you explain [Y] in plain terms, without assuming I have background in [FIELD]?</p>
</blockquote>

<h3 id="2-validator">2. Validator</h3>

<p><img src="/assets/images/2026/03/rig-role-2.png" width="240" style="float: right; margin: 0 0 20px 20px" alt="Cartoon drawing of a green robot wearing an accountant’s green visor and holding a checklist. Image by Nano Banana 2." /></p>

<p>Another basic role for AI is validating your understanding. To do this, you ask it to review your notes for errors or gaps, do basic fact checking, or critique your reasoning. Again, you can do this via the chat interface, but I also experimented with passing my notes to the LLM in Obsidian using the Copilot plugin and in Emacs using gptel.</p>

<p><em>Example:</em></p>

<p>After reading <em>The Epic of Gilgamesh</em>, I wrote a note in Obsidian summarizing its plot. When I asked ChatGPT to critique my summary, it pointed out that I’d given the central character a redemption arc that isn’t present in the text. I’m so accustomed to the standard hero’s journey that I projected it onto the book — and an LLM helped me correct this ‘hallucination.’</p>

<p><em>Suggested prompt:</em></p>

<blockquote>
  <p>Here are my notes on [WORK]. What important ideas did I miss or underemphasize? Don’t rewrite my notes — just flag the gaps.</p>
</blockquote>

<h3 id="3-connector">3. Connector</h3>

<p><img src="/assets/images/2026/03/rig-role-3.png" width="240" style="float: right; margin: 0 0 20px 20px" alt="Cartoon drawing of a green robot in a long tan coat holding a looking glass in one hand and a book showing connections between shapes on the other. Image by Nano Banana 2." /></p>

<p>Here’s yet another role you can easily do via chat: identifying thematic, philosophical, or narrative parallels between works. Note I wrote “works” — it’s fun and illuminating to ask for connections across media, genre, time, etc.</p>

<p><em>Example:</em></p>

<p>I watched Francis Ford Coppola’s <em>The Conversation</em> on the same week I read <em>Oedipus Rex</em>. For fun, I asked ChatGPT for possible parallels between the two works. Its reply was enlightening: it pointed out how the protagonists of both stories undertook an obsessive investigation that uncovered terrible knowledge.</p>

<p><em>Suggested prompt:</em></p>

<blockquote>
  <p>I’ve been reading [WORK A] and [WORK B]. What philosophical or thematic threads connect them? I’m looking for non-obvious resonances, not surface similarities.</p>
</blockquote>

<h3 id="4-orienter">4. Orienter</h3>

<p><img src="/assets/images/2026/03/rig-role-4.png" width="240" style="float: right; margin: 0 0 20px 20px" alt="Cartoon drawing of a green robot in an explorer jacket holding a compass in one hand and a wayfinding sign on the other. Image by Nano Banana 2." /></p>

<p>This role is something of an inversion of the <em>validator</em>. Instead of asking for feedback on your notes after reading a text, here you ask the AI for guidance before reading. You’re looking for framing, historical context, high-level outlines, etc. — ideally, without spoilers.</p>

<p><em>Example:</em></p>

<p>Before reading Nietzsche’s <em>Beyond Good and Evil</em> and Tolstoy’s <em>The Death of Ivan Ilyich</em>, I uploaded both books to NotebookLM, which created a podcast for me that explained their thematic contexts. Listening to this podcast on my daily walk helped me better understand the readings.</p>

<p><em>Suggested prompt:</em></p>

<blockquote>
  <p>I’m about to read [WORK] for the first time. Give me enough context to make sense of it — historical background, key arguments, things to watch for — but don’t spoil the experience of discovering it myself.</p>
</blockquote>

<h3 id="5-recommender">5. Recommender</h3>

<p><img src="/assets/images/2026/03/rig-role-5.png" width="240" style="float: right; margin: 0 0 20px 20px" alt="Cartoon drawing of a green robot holding a pile of books in one hand and a document that says ‘recommendations’ on the other. Image by Nano Banana 2." /></p>

<p>This is a useful role for deepening your understanding of a subject: asking for related works that reflect similar themes. It’s also a use case where I noticed considerable improvements in LLM performance over 2025.</p>

<p><em>Example:</em></p>

<p>Early in 2025, I read Confucius’s <em>Analects</em>. Perplexity was ahead in web-backed interactions at the time, so I asked it for a list of classic Chinese movies that reflected Confucian values. It responded with five suggestions, some of which it hallucinated. But one of them, <em>Spring in a Small Town</em>, was a bona fide classic — and I likely wouldn’t have learned of it without an LLM. (Later in the year, other chatbots gained this ability and hallucinations dropped across the board.)</p>

<p><em>Suggested prompt:</em></p>

<blockquote>
  <p>I just finished [WORK]. Recommend three films that explore similar themes or ideas. Prioritize films with strong critical reputations — I’d rather have one great recommendation than five mediocre ones.</p>
</blockquote>

<h3 id="6-adversary">6. Adversary</h3>

<p><img src="/assets/images/2026/03/rig-role-6.png" width="240" style="float: right; margin: 0 0 20px 20px" alt="Cartoon drawing of a green robot with an evil smirk, wearing a purple cape and holding a document that says ‘objections.’ Image by Nano Banana 2." /></p>

<p>Here’s a fun role: asking an LLM to push back on your position or steelman the opposing point of view. The idea is to expand your understanding by bringing your assumptions to the surface and challenging them.</p>

<p><em>Example:</em></p>

<p>After watching <em>Modern Times</em>, I asked ChatGPT to challenge my reading of the movie as a work of Marxist propaganda. The LLM convinced me that the film is in fact more of a humanist statement than a political one. As a result of this interaction, I changed my mind on Chaplin’s work.</p>

<p><em>Suggested prompt:</em></p>

<blockquote>
  <p>Here are my notes on [TOPIC]. Please help me see it through the lens of someone who might be sympathetic to [OPPOSING POSITION] without fully realizing it. What could I improve? Where is my argument weakest? [paste notes]</p>
</blockquote>

<h3 id="7-analyst">7. Analyst</h3>

<p><img src="/assets/images/2026/03/rig-role-7.png" width="240" style="float: right; margin: 0 0 20px 20px" alt="Cartoon drawing of a green robot wearing a lab coat and holding a notebook and a pipe. Image by Nano Banana 2." /></p>

<p>This role will also help you appreciate a work from a different perspective. It’s easy: you ask the LLM to apply a specific critical lens to a reading. Common lenses include Freudian, Marxist, feminist, Girardian, etc.</p>

<p><em>Example:</em></p>

<p>The same week I read Freud, my son and I watched <em>Predator</em>, the 1980s sci-fi film starring Arnold Schwarzenegger. For fun, I asked ChatGPT to analyze the film through a Freudian lens. The result was both enlightening and hilarious.</p>

<p><em>Suggested prompt:</em></p>

<blockquote>
  <p>Apply a [Marxist / feminist / postcolonial / Jungian] reading to [WORK]. What does this lens reveal that a neutral summary would miss?</p>
</blockquote>

<h3 id="8-mapper">8. Mapper</h3>

<p><img src="/assets/images/2026/03/rig-role-8.png" width="240" style="float: right; margin: 0 0 20px 20px" alt="Cartoon drawing of a green robot wearing a beret and a military jacket and holding up a map. Image by Nano Banana 2." /></p>

<p>This one’s a bit more esoteric. Some people — me included — are primarily visual: diagrams and drawings aid our understanding. Concept maps can be especially helpful. I’ve built an Agent Skill that lets LLMs like Claude draw concept maps. (<a href="https://github.com/jorgearango/llmapper-skill">Download it from GitHub</a>.)</p>

<p><em>Example:</em></p>

<p>I used this mapping skill to generate a concept map of Virginia Woolf’s <em>To the Lighthouse</em>. It’s not especially insightful; it’s more a proof of concept for using LLMs in a more visual modality.</p>

<p><em>Suggested prompt:</em></p>

<p>(Note: install my LLMapper Skill before issuing this prompt)</p>

<blockquote>
  <p>Generate a concept map for [WORK] centered on the question: “How does the novel’s treatment of [THEME] illuminate [BROADER QUESTION]?”</p>
</blockquote>

<h3 id="9-reflector">9. Reflector</h3>

<p><img src="/assets/images/2026/03/rig-role-9.png" width="240" style="float: right; margin: 0 0 20px 20px" alt="Cartoon drawing of a green robot dressed as an ancient philosopher staring at a mirror. Image by Nano Banana 2." /></p>

<p>This final role is different. Whereas the others took as the object of inquiry a particular work — e.g., a novel or a movie — this last one takes as the object <em>your knowledge garden itself</em>. That is, you point the LLM to a series of notes to analyze patterns over time and suggest improvements.</p>

<p><em>Example:</em></p>

<p>I fed all 52 weekly posts from my humanities crash course to Claude Code, and asked it to identify the various roles in which I used AI for learning throughout the year. Its answers — with some curation from me — are the roles you just read.</p>

<p><em>Suggested prompt:</em></p>

<blockquote>
  <p>Here are my notes from [X weeks/months] of reading on [TOPIC]. What patterns do you notice in what I pay attention to? What do I seem to find most interesting, and what do I seem to avoid or underweight?</p>
</blockquote>

<h2 id="takeaways">Takeaways</h2>

<p>This list isn’t comprehensive. I’m still experimenting and would love to learn from your experiments as well.</p>

<p>To wind down, I’ll summarize with three key points:</p>

<ul>
  <li><strong>Don’t try to build a brain. Instead, grow a garden.</strong> Metaphors matter. Stop thinking of your PKM system as a prosthetic mind. Instead, think of it as a place you can go to think.</li>
  <li><strong>Use AI to help you think and learn better.</strong> But use it intentionally: aim to land somewhere between “replacement” and “rejection.”</li>
  <li><strong>Think calm… and long-term.</strong> Learning to think better is a lifelong project. Your knowledge garden is where it happens.</li>
</ul>

<p>This isn’t a productivity hack. Results won’t come in a year. Results might not come after seven years. Thinking in terms of “results” might be wrong altogether.</p>

<p>You’re building a place to think. And not just any place: a living place that changes and grows over time. It’ll be messy. Good. But if you work on it, it’ll grow more beautiful and fruitful over time. You’ll get peace and satisfaction, even in its imperfection. And the sooner you start, the more material the AIs will have to work with.</p>

<p>I’ll close with this passage from Montaigne, which I hope captures the spirit of what I’ve told you today:</p>

<blockquote>
  <p>When I dance, I dance; when I sleep, I sleep; yes, and when I walk alone in a beautiful orchard, if my thoughts have been concerned with extraneous incidents for some part of the time, for some other part I lead them back again to the walk, to the orchard, to the sweetness of this solitude, and to myself. Nature has in motherly fashion observed this principle, that the actions she has enjoined on us for our need should also give us pleasure; and she invites us to them not only through reason, but also through appetite. It is wrong to infringe her laws.</p>
</blockquote>]]></content>
    <author>
      <name>Jorge Arango</name>
    </author>
    
    <category term="Arts &amp; Humanities" /><category term="Personal Knowledge Management" />
    <summary type="html"><![CDATA[A different way to think about your PKM system — and roles AI can play in it.]]></summary>
    
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="/assets/images/2026/03/brady-robots.png" />
    <media:content medium="image" url="/assets/images/2026/03/brady-robots.png" xmlns:media="http://search.yahoo.com/mrss/" />
  </entry>
  
  <entry>
    <title type="html">Traction Heroes Ep. 32: MacGuffins</title>
    <link href="https://jarango.com/2026/03/23/traction-heroes-ep-32-macguffins/" rel="alternate" type="text/html" title="Traction Heroes Ep. 32: MacGuffins" />
    <published>2026-03-23T00:00:00-07:00</published>
    <updated>2026-03-23T00:00:00-07:00</updated>
    <id>https://jarango.com/2026/03/23/traction-heroes-ep-32-macguffins</id>
    <content type="html" xml:base="https://jarango.com/2026/03/23/traction-heroes-ep-32-macguffins/"><![CDATA[<div class="embed-container youtube-wrapper">
  <iframe src="https://www.youtube.com/embed/ecM3cpuye_A" allowfullscreen=""></iframe>
</div>

<p>When Harry and I first discussed the possibility of starting a podcast, I started an outline of traction-generating ideas I wanted to discuss with him. <a href="https://www.tractionheroes.com/2439976/episodes/18870344-macguffins">Episode 32</a> covers one of my favorites: the <a href="https://en.wikipedia.org/wiki/MacGuffin">MacGuffin</a>.</p>

<p>What is a MacGuffin? I quoted a couple of passages from Dan Hill’s book <a href="https://www.amazon.com/Dark-Matter-Trojan-Horses-Vocabulary/dp/0992914639"><em>Dark Matter and Trojan Horses</em></a>, where I first learned about MacGuffins:</p>

<blockquote>
  <p>The MacGuffin comes with a particular provenance. The phrase is attributed to Alfred Hitchcock, and has become associated with him ever since. The dictionary defines it as “an object, event, or character in a film or story that serves to set and keep the plot in motion despite usually lacking intrinsic importance.”</p>

  <p>And in Hitchcock’s words:</p>

  <p>“A MacGuffin you see in most films about spies. It’s the thing that the spies are after. In the days of Rudyard Kipling, it would be the plans of the fort on the Khyber Pass. It would be the plans of an airplane engine, and the plans of an atom bomb, anything you like. It’s always called the thing that the characters on the screen worry about but the audience don’t care… It is the mechanical element that usually crops up in any story.”</p>
</blockquote>

<p>I’ve <a href="https://jarango.com/2018/05/24/information-architecture-as-macguffin/">written about MacGuffins before</a>, and won’t restate that here. The TL;DR: in business, they are projects or artifacts that further both strategic and tactical goals simultaneously. While they have tactical value, their real worth comes from the interactions that produce the MacGuffin.</p>

<p>For example, I cited a project where different teams — some of which were in tension with each other — collaborated to produce a user experience journey map. While the final artifact (the map) had value per se (i.e., it informed product design), the gold came from alignment and better relations between the teams.</p>

<p>Both Harry and I admitted that we’ve participated in more MacGuffin projects unwittingly than by design. But as Hill describes in his book, they can be used as a strategic “play” to precipitate change. As such, a MacGuffin can be a valuable means of gaining traction.</p>

<p><a href="https://www.tractionheroes.com/2439976/episodes/18870344-macguffins"><em>Traction Heroes episode 32: MacGuffins</em></a></p>]]></content>
    <author>
      <name>Jorge Arango</name>
    </author>
    
    <category term="Podcast" /><category term="Values" /><category term="Leadership" />
    <summary type="html"><![CDATA[Considering a strategic “play” for gaining traction in complex projects.]]></summary>
    
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://jarango.com/assets/images/jarango-title-card-ph.jpg" />
    <media:content medium="image" url="https://jarango.com/assets/images/jarango-title-card-ph.jpg" xmlns:media="http://search.yahoo.com/mrss/" />
  </entry>
  
  <entry>
    <title type="html">Traction Heroes Ep. 31: Mindfulness</title>
    <link href="https://jarango.com/2026/03/09/traction-heroes-ep-31-mindfulness/" rel="alternate" type="text/html" title="Traction Heroes Ep. 31: Mindfulness" />
    <published>2026-03-09T00:00:00-07:00</published>
    <updated>2026-03-09T00:00:00-07:00</updated>
    <id>https://jarango.com/2026/03/09/traction-heroes-ep-31-mindfulness</id>
    <content type="html" xml:base="https://jarango.com/2026/03/09/traction-heroes-ep-31-mindfulness/"><![CDATA[<div class="embed-container youtube-wrapper">
  <iframe src="https://www.youtube.com/embed/2UvDITJg_Lg" allowfullscreen=""></iframe>
</div>

<p>There are episodes in my career I’m not proud of. Most have something in common: I lost my cool. More accurately, I let myself be driven by an emotional response to a stressful situation.</p>

<p>I’m not alone. Harry kicked off <a href="https://www.tractionheroes.com/2439976/episodes/18805811-mindfulness">episode 31</a> of <em>Traction Heroes</em> with a story from his career where he did something “graceless” (his word!) that — if given the chance — he wouldn’t do again.</p>

<p>We’ve all been there. But I believe Harry and I have become better at managing these situations. How? That was the focus of our conversation. Harry brought a short reading from Henry Shukman’s <a href="https://www.amazon.com/Original-Love-Four-Inns-Awakening-ebook/dp/B0CKTB3228"><em>Original Love</em></a>:</p>

<blockquote>
  <p>We have become more adept at grounding ourselves in the here and now. If emotions come up, then we disentangle the threads of inner experience more deftly. Thoughts and feelings can be overwhelming when they come braided together, especially when they proliferate.</p>

  <p>It’s easier to bring attention to body sensation, to contractions in the torso, and to sight, sound, and breath, in order to return to the here and now, rather than being lost in stories and emotions. To do this requires a lowering of defenses: a small but significant opening of the heart.</p>
</blockquote>

<p>If the reading doesn’t give it away, the secret is meditation. Shukman is an American Zen master who teaches mindfulness through various channels, including books and an app called <a href="https://www.thewayapp.com">The Way</a>.</p>

<p>As I mentioned during the episode, meditation is one of the most important skills I’ve learned. My longstanding meditation practice has changed how I attend to whatever is happening — and that’s fundamental to gaining traction.</p>

<p>Check out our conversation for more.</p>

<p><a href="https://www.tractionheroes.com/2439976/episodes/18805811-mindfulness"><em>Traction Heroes episode 31: Mindfulness</em></a></p>]]></content>
    <author>
      <name>Jorge Arango</name>
    </author>
    
    <category term="Podcast" /><category term="Values" /><category term="Leadership" />
    <summary type="html"><![CDATA[How to manage impulsive emotional responses that can sabotage your efforts.]]></summary>
    
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://jarango.com/assets/images/jarango-title-card-ph.jpg" />
    <media:content medium="image" url="https://jarango.com/assets/images/jarango-title-card-ph.jpg" xmlns:media="http://search.yahoo.com/mrss/" />
  </entry>
  
  <entry>
    <title type="html">Look for the Adults</title>
    <link href="https://jarango.com/2026/03/01/look-for-the-adults/" rel="alternate" type="text/html" title="Look for the Adults" />
    <published>2026-03-01T00:00:00-08:00</published>
    <updated>2026-03-01T00:00:00-08:00</updated>
    <id>https://jarango.com/2026/03/01/look-for-the-adults</id>
    <content type="html" xml:base="https://jarango.com/2026/03/01/look-for-the-adults/"><![CDATA[<blockquote>
  <p>In terms of sheer volume of words, factoids, and data of all kinds, this is surely an information age. But in terms of understanding, wisdom, spiritual clarity, and civility, we have entered a darker age.</p>

  <p>— David Orr, <em>Verbicide</em> (1999)</p>
</blockquote>

<p>You know how Mr. Rogers said that in a disaster, you should look for the helpers? I have a similar hack. In complex, challenging, ambiguous, momentous situations – and there are many these days! – I look for the adults.</p>

<p>The adults are people who:</p>

<ul>
  <li>
    <p>make clear-headed assessments using relevant, timely, accurate information — including (but not solely) what their guts tell them.</p>
  </li>
  <li>
    <p>respect people’s worth, dignity, and agency.</p>
  </li>
  <li>
    <p>acknowledge nuance and are open to compromise and win-win scenarios.</p>
  </li>
  <li>
    <p>understand the context and history of the situation.</p>
  </li>
  <li>
    <p>grok the difference between urgency and importance.</p>
  </li>
  <li>
    <p>acknowledge that perfect is the enemy of good.</p>
  </li>
  <li>
    <p>have enough relevant life experience to stand behind their words.</p>
  </li>
  <li>
    <p>value taste — and have the confidence to express theirs.</p>
  </li>
  <li>
    <p>move pragmatically, without drama or posturing.</p>
  </li>
  <li>
    <p>act with integrity.</p>
  </li>
  <li>
    <p>don’t avoid hard truths.</p>
  </li>
  <li>
    <p>deal with reality (rather than Platonic ideals).</p>
  </li>
  <li>
    <p>demonstrate (rather than boast) competence.</p>
  </li>
  <li>
    <p>have no agenda higher than steering the group skillfully through the mess.</p>
  </li>
  <li>
    <p>accept responsibility.</p>
  </li>
  <li>
    <p>don’t take themselves <em>too</em> seriously.</p>
  </li>
</ul>

<p>Meeting all these criteria seems like a big ask. But they often go together.</p>

<p>Adulthood isn’t a given. There are plenty of “grown-ups” — some in influential positions — who don’t live up to the title.</p>

<p>I try to show up as an adult. I don’t always succeed — but I’m clear on the goal.</p>

<p>If anyone else is “more adult” in the situation, I follow them.</p>

<p>And if I can’t identify other adults, bringing them forth is the goal.</p>]]></content>
    <author>
      <name>Jorge Arango</name>
    </author>
    
    <category term="Ethics &amp; Values" /><category term="Systems Thinking" />
    <summary type="html"><![CDATA[A hack for navigating challenging, ambiguous, momentous situations — of which there are many now.]]></summary>
    
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="/assets/images/2026/03/earth.jpg" />
    <media:content medium="image" url="/assets/images/2026/03/earth.jpg" xmlns:media="http://search.yahoo.com/mrss/" />
  </entry>
  
  <entry>
    <title type="html">Open-Ended Sessions: How Are You Feeling?</title>
    <link href="https://jarango.com/2026/02/27/open-ended-session-how-are-you-feeling/" rel="alternate" type="text/html" title="Open-Ended Sessions: How Are You Feeling?" />
    <published>2026-02-27T00:00:00-08:00</published>
    <updated>2026-02-27T00:00:00-08:00</updated>
    <id>https://jarango.com/2026/02/27/open-ended-session-how-are-you-feeling</id>
    <content type="html" xml:base="https://jarango.com/2026/02/27/open-ended-session-how-are-you-feeling/"><![CDATA[<div class="embed-container youtube-wrapper">
  <iframe src="https://www.youtube.com/embed/4FYXZEkE5ag" allowfullscreen=""></iframe>
</div>

<p>In the second of our <a href="https://www.youtube.com/playlist?list=PLZeu-R3TlcIxKLsNGXtzAF-k4SQj-As4h">“Open-Ended” livestreams</a>, Greg and I discussed the anxiety many design and product leaders are feeling from AI-driven changes. The intent wasn’t to offer suggestions, but to think out loud about what we’re observing.</p>

<p>That said, we surfaced a couple of important insights:</p>

<ul>
  <li>
    <p>Organizational structures must change. How? Greg suggested empowering smaller (e.g., two-pizza) teams with enough agency to move and learn quickly.</p>
  </li>
  <li>
    <p>In response to the “AI will replace SaaS” narrative, I countered that for many products, information architecture is the moat.</p>
  </li>
</ul>

<p>We’d love to know what you think; please leave comments in the <a href="https://www.youtube.com/live/4FYXZEkE5ag">YouTube video</a>.</p>

<h2 id="links">Links</h2>

<p>We referenced several articles and one book during the conversation:</p>

<ul>
  <li>
    <p><a href="https://aboutexperiences.substack.com/p/the-cognitive-cost-of-ai"><strong>The cognitive cost of AI</strong></a> by Giu Vicente</p>
  </li>
  <li>
    <p><a href="https://hbr.org/2026/02/why-ai-adoption-stalls-according-to-industry-data?ab=HP-hero-latest-1&amp;__readwiseLocation=&amp;giftToken=12050438461771433707670"><strong>Why AI Adoption Stalls</strong></a> by Keith Ferrazzi, Wendy Smith, and Shonna Waters</p>
  </li>
  <li>
    <p><a href="https://www.nytimes.com/2026/02/18/opinion/ai-software.html"><strong>The A.I. Disruption We’ve Been Waiting for Has Arrived</strong></a> by Paul Ford</p>
  </li>
  <li>
    <p><a href="https://craighepburn.substack.com/p/welcome-to-the-intelligence-era?r=1uelnl&amp;utm_campaign=post&amp;utm_medium=web&amp;triedRedirect=true&amp;__readwiseLocation="><strong>Welcome to the Intelligence Era</strong></a> by Craig Hepburn</p>
  </li>
  <li>
    <p><a href="https://rosenfeldmedia.com/books/managing-priorities/"><strong>Managing Priorities</strong></a> by Harry Max</p>
  </li>
</ul>

<h2 id="transcript">Transcript</h2>

<p><em>(AI generated.)</em></p>

<p><strong>Jorge</strong>: Well, hello Greg. I think we are—let me refresh—yep, so we are live, sir. It’s good to see you.</p>

<p><strong>Greg</strong>: Nice to see you, Jorge. Happy Thursday!</p>

<p><strong>Jorge</strong>: Happy Thursday to you as well. I’m having a weird echo.</p>

<p><strong>Greg</strong>: Well, anyway, we’re here today to talk a little bit about what’s going on with sort of this zeitgeist moment. It feels like there are a bunch of messages kind of moving through our communities. Jorge and I have been talking a lot about this stuff, and we thought we would get together and run one of our Unfinishe sessions—Open-Ended is what we call them—but I thought maybe we could start and talk a little bit about Unfinishe, and then we’ll get into today’s topic, which is really about the psychological tax of AI initiatives and how all of us are feeling. But before we do that, Jorge, what is Unfinishe?</p>

<p><strong>Jorge</strong>: Unfinishe is an emergent practice that you and I have taken on to develop to help teams navigate this new era. I think that that’s kind of like the highest level description that I can offer. What teams might mean might be up for grabs; it’s emergent, right? But we are trying to be responsive to what we are hearing in our various communities and contexts. It’s very clear that everyone is cognizant at this point of the fact that we are in a different space. This technology is massively disruptive, and it requires new approaches and new thinking, so that’s clear. The other thing that’s become increasingly clear is that many of us—and I’ll put you and I in this—are trying to come to grips with how to navigate this time skillfully. You and I bring particular perspectives and life experiences to bear on this problem that we believe are helpful to folks. So that’s my kind of 10,000-foot view on what Unfinishe is. What would you answer? How would you answer that question?</p>

<p><strong>Greg</strong>: Yeah, I mean, I echo what you’re talking about. I think part of it is the opportunity to disrupt ourselves and explore the meaning of these new tools and do it in a way where we can sort of be all in, but at the same time be intentional and try to understand what it might mean, and then share what we learn with folks. I think we named this endeavor Unfinishe with the ‘D’ missing on purpose because I think one of the things that we’re all experiencing is that the moment you feel like you’re on solid ground, the ground shifts, and we need to find an understanding of what to do next. I think the journey we’re on is to help organizations and teams navigate that. We’re taking the experience we’ve had in our careers, but we’re also super willing to experiment and adapt. We’re trying to be curious and mindful in the practice. So that’s how I might answer that. Maybe that’s a good segue for today’s conversation, which is, you know, there’s a lot of anxiety around what’s going on with these tools. We’re starting to experience it in our own work, but we’re also seeing it in the teams that we help. There seems to be a conversation bubbling up in the zeitgeist around AI right now about what it might mean. I think there are also some seminal moments that have happened recently that have demonstrated that we’re actually in a new place. You know, this isn’t the announcement of ChatGPT two and a half years ago. This is the arrival of coding tools, the rapid improvement of the models, and the fact that we’re now starting to see teams use these things. There have been some really salient conversations around that. So that’s what we’re starting for here today, and we want to help and have a conversation around it. Also, folks online, you’re welcome to come and ask questions. We’re going to try to be vulnerable and transparent, if possible, about our own insecurities and feelings. This is an experiment, and we’re glad that folks are here with us today.</p>

<p><strong>Jorge</strong>: And for a bit of context for folks who, for whom this might be the first live stream of ours that they join, this is only the second one that we’ve done. Right. And in the Unfinishe spirit, this is a very open-ended conversation. It is very loosely structured. I would say there are not going to be any decks. There are no pitches. That’s not what we’re doing here. What we’re doing is we’re trying to think through the moment that we’re in, and we’re trying to think out loud. Because the time does require kind of fast responses, I think that we can’t be too precious about what we’re doing right now. So with that in mind, you said that we want to be vulnerable and that we’re both feeling a bit of anxiety. I’m going to kind of pinch and zoom on that. The live stream you titled it “How Are You Feeling?” How are you feeling, Greg?</p>

<p><strong>Greg</strong>: Yeah, I mean, there have been a couple of articles that have encapsulated my experience lately. I would say I’m both super intrigued and excited and super freaked out at the same time. And what do I mean by that? I mean, I’m enamored by the capabilities that I have at my fingertips and blown away by the things I’m able to accomplish with the tools that I’m using. I’m also recognizing that I don’t have good boundaries with how I operate with Claude, which is the tool that I use, Anthropic’s AI. At the end of the day, my brain is like, I’ve gone through a lot of work, and I’m wondering if it’s sustainable. I’m mixed about all this stuff; it’s exciting, and I’m enabled to do some really incredible things. But at the same time, I’m trying to track if I’m being changed by this experience.</p>

<p><strong>Jorge</strong>: It might be worth calling this out because some folks tuning in, this might be the first time they hear from you. I think that we have slightly different backgrounds. I would say that your background, your trajectory, and your career has been mostly around design leadership, very senior roles, managing teams and organizations, whereas my background is more as an individual contributor for hire. I’ve been a consultant for the bulk of my career, and I’ve been brought in to do very specific things. I’m just calling that out because I hear you talk about this being torn between excitement and apprehension, and I’m feeling like that too. But I think I’m feeling like that for maybe different reasons than you are. How does this tension show up in your work as a design leader?</p>

<p><strong>Greg</strong>: Yeah, I mean, I think there are a couple of things. Paul Ford wrote something recently about feeling obsolete at some level and at the same time superpowered. Right? I have some of those feelings. I’m able to help a couple of companies right now from a design leadership perspective, and I can help them in really fast ways that would have taken weeks to accomplish, and I can do it in like days. That’s really great, but at the same time, it feels like the flattening of my expertise. It’s an interesting moment to see how we show up. I think there’s some anxiety around that. I might flip the bit for you, and you’ve been spending decades thinking about how humans navigate information. Does AI feel like an extension of that work or a threat to that work? How does that fit into how you operate in this moment?</p>

<p><strong>Jorge</strong>: Well, the first thing that I’ll say here is that anything I say today, I say with more interest than conviction, meaning my mind is still exploring this, and I’m trying to develop my positions. What is very clear to me is that large language models in particular change our relationship to information considerably. I realized this; I’ve been working with AI—in general, what we call AI—for a long time with client projects. But when ChatGPT was released, I kind of went all in and said, “Okay, let’s see how this can help me do the work of an information architect.” It became very clear to me very quickly that the work I was doing needed to change and was going to change. You talked about acceleration as one of the things you’re experiencing in your design leadership role. I also felt that this is going to greatly accelerate certain processes. It’s also going to change how we interact with information. The object of the things that we design is likely going to change, but that might take a bit longer. I don’t know that I felt as threatened; I’ve felt more excited. I’ve been more excited than I’ve been threatened, I think, by all this stuff. There’s a flip side to it, which is the fact that there’s a lot more information being generated. Not all of it is useful, perhaps. These tools have the potential to generate a lot of misinformation. But this is the kind of upside bit. I might sound like I’m taking a very kind of positive perspective here. The more I worked with these tools, the more evident it became to me that their effectiveness is highly reliant on the information that you are giving the tool. Initially, there was this idea of prompt engineering, and then people realized it’s not just a prompt; there’s more stuff that you’re feeding the AI. 
The phrase became “context engineering.” To me, the upshot of all that is that language models are as useful as the information that they’re given to work with, and I suspect that people who do information architecture work have a big role to play in creating and structuring the information that gets fed to the LLMs. That’s going to have a very important effect on the degree to which the systems produce good results. So I’m excited. It is a time of great change, and great changes always produce anxiety, so I’m feeling anxious too, but I think I’m also feeling like, my gosh, there’s so much potential here—unexplored potential, right?</p>

<p><strong>Greg</strong>: Yeah, and I think you and I did a consulting arrangement last fall where we helped an organization sort of organize their business information. I think you’re right that there’s this notion of understanding how work gets done and what content exists in an organization. Most organizations can articulate that very well; they just kind of tacitly know this is how they operate. These systems work better if you can be clear and crisp about the terminology. I’ll use a fancy word: the ontology or the model of information in it. I think for folks like you—who I love the fact that you called yourself an architect of information now versus an information architect…</p>

<p><strong>Jorge</strong>: An architect of intelligence.</p>

<p><strong>Greg</strong>: That’s right, architect of intelligence. I think there’s some truth to that because I think one of the things that we need to talk about—one thread that needs to be in this conversation—is to be intentional about how you use these tools. One way to alleviate anxiety is to understand the structure of the entity that you work for, the organization that you work for, or the thing that you’re trying to accomplish—so that you can make conscious decisions when you interact with these tools, and then you know your intent. That’s where these tools are actually really valuable. If your intent is clear, the quality of the answers that they generate or collaborate with you on improves, and that’s where you can start to have a conversation that leads you to new insights or new outcomes. That’s the part that I think is super fascinating. Every day I’m surprised by something. There’s something I’ve done, and I’m just sort of like, “Oh my gosh, how did I do that? Wow, how did it do that?” That’s part of it. Is there something that you’ve noticed about yourself, though? Have you changed at all in terms of how you’re operating with these things?</p>

<p><strong>Jorge</strong>: I’ve always been very hands-on with the tools that I use, and one of my directions early on with this stuff was I did not want to learn about it or just learn about it in the abstract; I wanted to have hands-on experience. I think that I’ve been maybe more hands-on with code than I have been more recently in my career just because I’ve been really trying to lift the hood on this stuff to get a sense of how it works. You referenced the Paul Ford op-ed piece in The New York Times earlier. We have been having conversations with other folks and also reading stuff that people have been publishing. One of the things that I read in one of the articles that you and I were discussing on Slack over the last week or so is something that resonated with me, which is the idea that all of a sudden you have this tool that lets you do so much stuff that you tend to fill your day with stuff. It’s the kid in a candy store thing where, left unchecked, you end up with a really bad bellyache. I don’t remember which one of the articles it was. I think you shared this one where this person was saying, “You know, it’s taken over. Now I’m thinking about it during my lunch break and thinking about how I can prompt this thing.” Or, you know, “Before I go to sleep, I want to leave it doing something overnight.” There’s so much potential. All of a sudden, there’s an unlocking of so much potential that we want to—well, and then there’s the incentive to move very fast, to take advantage of that potential. We run the risk of not leaving enough space to be mindful about what we’re doing, to prioritize what we’re doing. I’m saying this because I am feeling that. I’m feeling like there’s so much that I can do. Let’s do it all! Now that we have these things that can do it for me, I’m feeling a little burned out by that. I’m suspecting that other people are as well based on what I’m reading.</p>

<p><strong>Greg</strong>: Yeah, I think that, first of all, we’re hitting a cognitive barrier. I mean, humans can only process so much information. Individually, I think there’s a challenge. I’m feeling exactly the same thing. I generate, you know, I’ll take some information, I’ll process it, and I’ll work with Claude to tune it up in a way that makes sense to me. I’ll get a very professional document. Part of my process is I usually print them. I know it’s very old school, but I find that I don’t edit very well if I’m just looking at a screen. If I look at a piece of paper, I can distance myself for a second and read it, take some notes, and then go back, and that’s kind of the way that I operate. But I’m starting to build these very useful and deep content pieces for the customers that I’m working with that are highly valuable. But I’m filling my day with like doing that work. Earlier in our conversation, I was talking about how sometimes my brain is just like, “Oh, I’ve done… I can’t process it anymore.” One thing I’m noticing—I don’t know if others are noticing this online, but if you are, let us know. One of the things in organizations is the socialization of ideas. We’re used to operating, especially in product development teams, at a certain clock speed. There’s a group of people who start working on an idea, and they start building prototypes and making, and they’re learning in that process. Then they need to bring other people along as that idea starts to gain momentum to empower those people to contribute to or execute aspects of that idea or that project to move it forward. Part of that is human nature; you want to co-create and be a participant in it. Part of it is you need to understand the decisions that have been made so that you can operate and feel like part of something. 
I think the velocity that some of these tools allow you to operate at is not just about the individual’s cognitive ability to manage; there’s anxiety around it that fits the organization’s ability to grok or understand and then ingest so that they can focus on, “Okay, this is how I can contribute or I can join the conversation.” I worry about that because I feel like we haven’t learned good boundary skills with these tools. It’s a little bit like a version of doom scrolling where you generate an endless amount of stuff. How much of it is still relevant the next day? Maybe not as much as you think, right? I think that I have some anxiety about being in that. One of the things I’m anxious about is that we’re going to have to learn new behaviors to manage that. What does that feel like, and how does that change us? A lot of people talk about discernment; that’s an important skill. Anyway, it’s a long-winded way of saying I think we’re only capable of grokking so much in a day.</p>

<p><strong>Jorge</strong>: Yes, and I think we’re talking about it kind of at the individual level, right? We can do all this stuff, so we’re doing it all, right? There is an organizational variation of this, which is we have this design or product team which maybe is not growing. I see some folks posting job openings on LinkedIn, but if anything, I think the tendency has been for teams to shrink. All of a sudden there’s an insurgent request for new features and capabilities. There’s this drive to AI all the things. You have AI, so it’s easy to do, right? It’s like, no, it’s not easy to do. Now we’re overloaded with stuff. I’m thinking you were talking about this and our mutual friend and my podcast co-host, Harry Max, wrote a book on prioritization, right?</p>

<p><strong>Greg</strong>: Yeah.</p>

<p><strong>Jorge</strong>: What you’re pointing to is that we need to, on the one hand, move fast because this does indeed call for a fast coming to grips with the capabilities and constraints of the technology. But we need to do it in a way where we’re focusing our energy, our limited resources, on the things that matter most. It feels to me like right now, for a lot of organizations, at least from what I’m hearing, there’s not very good prioritization happening. It’s more like let’s throw everything against the wall and see what works coming out the other end. I’m kind of making a note here; that might be one practice that we could encourage folks to do to be more conscious as a team of the things that they are taking on and to take it on with the dual purpose of building useful things for people—obviously, we want to create value—but we have to keep in mind that part of what we’re doing here is also becoming competent with the new tools.</p>

<p><strong>Greg</strong>: We have to create new skills, yeah. I think you’re right. One of the things that many teams are struggling with is that these tools also allow us to do each other’s jobs, right? Within a product organization, that creates a lot of anxiety. You know, I’m a designer, but the engineering team can now write code for the UI. I’m an engineer, but the design team can now write code. I’m a product leader, and I can do both of those things. I’m a designer who can write a PRD. Those are very specific to the product development process. The notion of how we work is also in radical change because the boundaries between the disciplines are fuzzier. We need to be open and in conversation, exploring it together rather than staying inside our disciplines; at least that’s my belief. I led a workshop with a client recently on who does what, how, why, and when. It wasn’t really to say that design only owns design, product owns product, and engineering owns engineering; it was, “Hey, these tools allow us to be in each other’s camps.” There may be appropriate moments for that. Maybe we don’t have the capacity to do something with the staffing we have, but as a team, we can use these tools to help us fulfill that capacity. We need to be in dialogue about that. One of the things I’ve learned is that discipline and expertise still really matter, right? Discernment is a powerful thing. Just because someone can write code doesn’t mean the experience is a good one. Someone who can look at that and say, “Here’s how I might modify that, because I have expertise in this area” is valuable. 
Similarly, on the product side, product-market fit is still required. Just because you can ask these tools to help you find product-market fit doesn’t eliminate the need for people on the team with experience in bringing products to market, working with customers, and understanding how you create motion and market demand. All the elements of modern product development are still in play. But we have a lot of anxiety about whether our roles are still important. Going back to your central point, I think smaller, more empowered teams are the future, and they can punch above their weight, to use a boxing metaphor. There are two reasons for that: one, the tools allow you to do it; two, it goes back to my earlier point about cognitive barriers and being able to communicate as a team. You need the intimacy of a small group to share your thinking at the speed that the thinking is happening. It starts to break down in a larger organization that has organized people into doing pieces of the work. I think the future is more empowered teams with more agency and clarity about what they’re about; then just let them do their thing.</p>

<p><strong>Jorge</strong>: And smaller—I heard you say as well, right?</p>

<p><strong>Greg</strong>: And smaller, that’s right.</p>

<p><strong>Jorge</strong>: Yeah. We all know about the two-pizza team—the Amazon thing. Do you have a size in mind?</p>

<p><strong>Greg</strong>: Yeah, it’s not bigger than that. I think the point of the two-pizza team is that you all know each other and have a human relationship with each other, right? You have the ability to communicate, anticipate, complete each other’s thoughts, and know who’s good at certain things. I think it breaks down once you go above that size.</p>

<p><strong>Jorge</strong>: I wanted to circle back to something you said because it made me shudder a little bit. You said something like designers are writing PRDs, and all of a sudden, we don’t need as much expertise because we can all do these roles.</p>

<p><strong>Greg</strong>: Yeah.</p>

<p><strong>Jorge</strong>: One bit of caution that I would drop in here is that a common mistake that many people make is to confuse the outcome of a piece of work—the artifact that comes out the other end—with the value of the work. I’m thinking of an exercise that I was part of many, many years ago, which is something a lot of designers have done. We were part of this workshop where we locked ourselves in a conference room for two days and made this enormous wall-sized journey map, right?</p>

<p><strong>Greg</strong>: Yeah.</p>

<p><strong>Jorge</strong>: The diagram that came out of that workshop was valuable per se because it informed a lot of important design decisions. But the artifact was only part of the value the company got out of it. The other part was the alignment that happened by getting a group of—I think it was 24 people—to work together for two days building the artifact. If you could just feed Claude a bunch of research and say, “Draw me the journey map for this thing,” you might get a really useful diagram in the end. It might even be better than the one the people put together. But you’d be missing out on the opportunity to use the artifact as a MacGuffin for conversations that need to happen. It’s a little bit like the stone soup story, which I’m sure people have heard. We’ve gone from having what are basically stones that get important conversations to happen, to having the equivalent of the Star Trek replicator: you say, “Just give me chicken soup,” and you get the plate of chicken soup, but you don’t get the collaboration that happens in making the soup, right? That collaboration is really important.</p>

<p><strong>Greg</strong>: Just to build on that, I think one of the risks we face is that we spend our day collaborating with AI and not with each other. It’s very easy to do. It came up in one of the workshops I recently led: folks in a product team were spending less time talking with each other and more time giving each other things to read. Not all the things they were giving each other were as tuned as they could have been, but they felt very clear to the person who had been working with their AI assistant. I think we are at risk of finding our way into that relationship with the AI instead of finding our way into relationships with the cross-functional peers we work with. Again, it goes back to healthy boundaries. We need to figure out how to manage that, and I have anxiety around it because I spend a lot of time with these things. There was another piece of the story we wanted to talk about today: an article by Hepburn that I really loved, about going fast and how this is a moment for generalists to be really successful. I excerpted part of it last week, and a lot of people responded to it. I felt seen in that article, and at the same time, I recognized that maybe it was a little bit of wishful thinking on my part. I think we’re all guilty of finding the things that reflect well on our personal point of view and reinforce our vision of ourselves. There was a piece in it about moving fast—not about velocity per se. One of the things about this moment is that some people are using the tools effectively, using them with their teams, and gaining a certain momentum. They’re being intentional about it, learning how to do it, and course-correcting. Others are not, and I think there’s some anxiety around that too, because some organizations don’t enable teams to do that. Are you feeling like you’re left behind? 
For my own self, I have anxiety around keeping up. I know there are people who are way deeper into this than I am, so keeping up is a worry.</p>

<p><strong>Jorge</strong>: The article you’re referring to is a Substack post called “Welcome to the Intelligence Era” by Hepburn. What I’m going to do is, when we release a recording of this, I’m going to add links to these various posts in the description for the video. The metaphor Hepburn uses for this speed thing is learning to ride a bicycle. He makes a good point that one of the risks you run when learning to ride a bicycle is that you try to take it too slow. If you’ve ridden a bicycle before, you know that it’s not until you reach a certain speed that you can maintain your balance. He’s advocating for getting up to a certain speed to get your bearings. He doesn’t say this in the article, but there’s a flip side to this: if you’re learning to ride a bicycle and you strap a jet engine to the bicycle, you’re going to be really stressed out, right? You’re probably going to get in an accident. I think there’s a Goldilocks thing here; I’m trying to reflect back what’s emerging from this conversation. You’ve already said we need smaller teams that have greater agency. It also sounds like they need to focus; they need to prioritize the stuff that they’re working on. There’s the notion of speed—meaning they need to move fast. Maybe the phrase is they need to move fast enough, but it’s possible to move too fast. The organization, the team, the individuals might not be able to handle being asked to do so much so fast with such new stuff because, to your point earlier, there’s cognitive load involved.</p>

<p><strong>Greg</strong>: Yeah, I think the velocity conversation has many vectors to it, too, some of which are super anxiety-producing. You hear a lot of leadership in the Valley right now talking about speed and how fast we have to deliver product outcomes. Now that Claude can write most of the code, we can go 10 times faster. I don’t think that’s necessarily what we’re talking about. By the way, I think there’s a huge risk in going faster; it doesn’t necessarily mean that you get to a good outcome. At the same time, I think what Hepburn is talking about is you need to dive into understanding how these tools work because they are changing the way that we work and, for each of us, they’re changing who we are and the roles that we have and the impact we can make. We can push back on it if it’s going too fast, but you really don’t learn how to use them unless you’re using them. The advice I have for folks is to get your hands into it and be using it. Then you can be intentional about how you want to use it. One of the opportunities I think in this space now is that especially in product development, a lot of time was spent on execution and not enough time on defining the outcome or the product fit. Now, I think we can use these tools to do a lot more discovery earlier and have more clarity about what problem we’re trying to solve, why that problem is valuable to the end customer or end user, and get validation that we’re solving the right problem. Execution—building that piece—should be something that can go much faster. This inverts how we look at the work that we do, and that part of it is exciting to me. But it’s different.</p>

<p><strong>Jorge</strong>: I want to maybe pinch and zoom into the word invert. But before we do that, I want to circle back to the chat. We have a couple of comments in the chat, and I think the first one here is relevant to what you’re just talking about now. So RPUXD671 says, “I agree. We can’t be too precious. Yes, and we need to show up with calm and help the teams we’re advising through trade-offs they face.” Here’s the question: how do you hang on to and transmit that calm through teams?</p>

<p><strong>Greg</strong>: Yeah, that’s right. I think a couple of things are important. One is being curious, right? Having a culture of curiosity, being conscious that you don’t know the answer, and being public about it. One of the challenges is when we think we know the answer and then it pivots and changes; that just undermines team health. It’s the notion that we’re on a collective journey together, and we’re going to explore and find out where we’re headed. Those are some things I would consider. There also needs to be—you said this earlier—a way to prioritize your efforts, because you can go everywhere all at once and not get anywhere. Practice some exercises around which experiments you’re going to run as an organization or a team, and create some space for evaluating their success. This is something you and I did with one of our customers last fall: we sat and helped them understand how they worked, looked at the activities and workflows that were important to their success, and helped them stack-rank the things we felt AI could help them with. Instead of doing all of them, we said, “Let’s pick one and do that.” How did that work? Did we learn something? Okay, let’s go do the next one. I think a structured approach could give teams a little more comfort.</p>

<p><strong>Jorge</strong>: I think this question was framed around how do you, as a leader, communicate with your team? That’s the way I read it anyway. But I think what you’re saying also applies to how do you manage up, right? Because as a chief design officer, as a VP of design or product, you are reporting into the organization’s leadership. They have expectations—whether fair or not—that this stuff is going to change things quickly, right? It’s worth acknowledging that leaders need to manage their teams and the mood of their teams, but they also need to manage upwards, right?</p>

<p><strong>Greg</strong>: Yeah, and there have been all kinds of crazy statements made in the last two years about the possibilities, role definitions, and how product is going to be made, depending on the lens a leader comes from. I think we’re learning right now that those lenses are incomplete. You bring up a really important point: this curiosity, openness, and adaptability need to be shared when you manage down to a team or work with people collectively. The conversation also needs to flow in the opposite direction: what are we learning right now? What advantages is this giving us, and what challenges are we creating? There are challenges being created. Many teams are spending a lot of effort on AI, but their productivity isn’t improving. Many teams are spending a lot of tokens, and their costs are going up. Some organizations are letting people go because they think AI will fill the gap, but they’re letting them go before they’ve figured out how to do that work. Those are the things that are building anxiety right now. The sad part is that we’re having the wrong conversation. People are talking about, “Here’s our current business model; here’s how we work, and now we can just do it faster and more simply.” The conversation I want to have in organizations is, “Here’s the community of people we serve. Here’s how we can deliver better outcomes for them. Here’s how we can grow our business, and here are the new things we can do with the people we have that are valuable to that constituency.” I just don’t think we talk about that enough.</p>

<p><strong>Jorge</strong>: We have another comment here. It’s not a question, but a comment from Albie underscore G: “I agree with Greg. Without intention, it’s easy to lose control of the output. Planning and guardrails are essential.” I will chime in and say, even though you’re name-checked in this comment, I want to point out that when you talked about smaller teams, you did not use the word control; you used the word agency. That is an important distinction. As I hope is becoming evident from this conversation, one of the footballs being tossed around the field right now is precisely control—control over the outputs, control over the process—which is part of why there’s this anxiety. I think it’s going to be important to live with the—I’m going to use the word—discomfort that comes from not feeling like you have full control over the output. I don’t think you want control; I think you want agency. That’s my take anyway.</p>

<p><strong>Greg</strong>: My personal belief is that teams do better when folks have agency. I’ve always tried to build organizations where there’s clarity, and the gift you’re giving is, “Here’s where we’re trying to go. You figure out how to get there.” I do think where I’ve seen AI being used well is in groups that are willing to experiment and communicate and not try to own or control the process of how it works. Instead, they have a conversation with each other about how it’s impacting the way that they’re operating and how the outcomes for which they are responsible are improving or not improving by using the tools. That’s the part that I think is fascinating. My hope is that we’re responsible about it, and we have these conversations, but it’s not easy, and sometimes we don’t have the frameworks to have those conversations. I think you and I have talked a lot about this, and it’s part of what we’re trying to do here with Unfinishe: help people have healthy conversations around how they can use these tools in their environments and provide some structure that allows them to make progress.</p>

<p><strong>Jorge</strong>: That makes a lot of sense. We have only about nine minutes left. If folks who are viewing have any questions or comments, please do post them in the chat. Greg and I want to have conversations about this. We have been monitoring what people are writing, but we are also having one-on-one conversations with folks in organizations. If you want to talk with us, we’d love to set up a quick meeting to compare notes. I’m flashing on the screen a URL you can visit to set up time; we’d love to hear from you. Let’s start rounding the bend here; we are running out of time. Our intent, as we said at the top of the hour, was not to offer a very structured conversation; this is really thinking out loud. It does seem to me that there are a few points worth noting. The first is acknowledging that we are in a time of anxiety. I keep comparing this time to the early days of the web. That was a big disruption, a big new technology; it was clear to many of us that it was going to change things, much as it is now. But I don’t remember there being this level of “it’s going to replace my…” anxiety. I mean, a few people saw the writing on the wall: we wouldn’t be making any more printed financial reports for organizations, because all that stuff was being digitized. That was pretty clear. For the most part, though, there wasn’t the level of replacement anxiety that we’re feeling now. It does feel like there is angst, and there’s an HBR article that I’ll include in the description that names it “AI angst” and outlines what it means and what might be causing it.</p>

<p><strong>Greg</strong>: I would just build on that. This is a moment where our identity is challenged. Each of us, no matter what we do, has made decisions in life and constructed a story around our expertise. That is part of who you are. This moment can feel very unsettling because a lot of that narrative can be challenged. How do we manage through that? I think about this moment personally. I used to lead large teams, and my identity was chief design officer. I’m not doing that anymore; now I’m helping organizations as a fractional leader. I come in, support teams, and do some work. You and I are doing this work helping organizations prioritize. I’m coming to terms with that: what does the new version of me look like moving forward with these capabilities and tools? It’s not the chief design officer I used to be. That’s unsettling. I spent a whole lifetime building that narrative. I have adult kids, and I have curiosity about how that happens. The attitude you have to have is to be curious, mindful, and intentional. I don’t know. What are you anxious about in these final moments?</p>

<p><strong>Jorge</strong>: I’m smiling because this hits so close to home. I’ve been calling myself an information architect for almost three decades at this point. Information architecture is deeply part of my identity. A few days ago, someone posted on LinkedIn saying something like, “I had a conversation with someone who was getting into information architecture. They asked where to begin, and I told them: look at this person’s work, and this person’s work.” There were three references, and the third was me. The post said something like, “Look at Jorge’s website, but he’s more focused on AI and LLMs these days than information architecture.” I felt like, is that true? I immediately wrote back and said, “It is true that a lot of my efforts have been focused in this direction, but I don’t see it as a replacement of my identity. To me, it’s the contrary. I don’t think you can be an information architect and not be all over this stuff, because it’s so obviously important.” The way I put it on my website is that information architecture changes as a result of AI, and AI is made better as a result of information architecture. With all these SaaS replacement narratives in the mainstream media, my canned retort is that information architecture is your moat. You can’t just replace a system that has a lot of carefully structured information with a chatbot that has no context. I’m seeing an evolution of my identity rather than a replacement of it, so I don’t feel as much anxiety there. Where I do feel anxiety is the question of how I make a living doing this. Because, to your point, the perception out there is that now that we have these tools, they can structure information for you. Yes, but there are a bunch of asterisks following that. My last three years have been about investigating those asterisks. I think that’s going to be true for a lot of knowledge work. 
That’s a big part of the anxiety here: the narratives out there say you’re going to be out of a job. I’m not entirely sold on that, because I think these are tools that will definitely change the work, but they still need expertise to produce really good results. That’s where I stand right now on that stuff.</p>

<p><strong>Greg</strong>: I love that. This goes back to whether you’re on the bicycle or not, like we talked about earlier. I don’t know if it will happen, but if the cost to deliver a software outcome drops significantly—which is where we’re headed; the amount of engineering required is shrinking, and the tooling that lets you deploy something is getting cheaper—does that mean less work for all of us? Or does it just mean that a whole set of new use cases that were too expensive to solve before are now solvable? I don’t know what the equilibrium will be. My hope is that we’re intentional about the problems we’re trying to solve in this world and that these tools allow us to solve more of them. I think you’re right: the architecture of intelligence, the organization of information, and the organization toward the outcomes that matter will be a skill set that’s really important in the future. Not everyone will gravitate toward it, but I think folks like you will be very valuable. You are very valuable.</p>

<p><strong>Jorge</strong>: Thank you. You are very valuable too, Greg. We are out of time. I just want to acknowledge there are a couple of comments in the chat we can get to after we release the recording, but there’s one comment that speaks to this from RPUXD671 again: “It’s not going to be replaced, well, by a chatbot, but some organizations will try.” I’ll say this: we are living through the very early days of this, and there are going to be all sorts of really poor decisions made. We’re going to try all sorts of things that are not going to work, and we just have to go through it. This is the bicycle thing: you have to keep going, and you have to find stability. We are out of time, unfortunately. It’s been brilliant catching up as always. Thank you.</p>

<p><strong>Greg</strong>: Awesome. Thank you all for joining us today. We’ll try another one of these soon.</p>

<p><strong>Jorge</strong>: I’ve flashed the slide on the screen. If you want to set up time to talk with us, please visit unfinishe.com/connect, and you can set up some time. All right. Thank you, sir.</p>

<p><strong>Greg</strong>: Thanks, Jorge. See you soon. Bye.</p>]]></content>
    <author>
      <name>Jorge Arango</name>
    </author>
    
    <category term="Technology &amp; Innovation" /><category term="Artificial Intelligence" />
    <summary type="html"><![CDATA[A conversation about the anxiety many product and design leaders are feeling due to AI-driven changes.]]></summary>
    
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://jarango.com/assets/images/jarango-title-card-ph.jpg" />
    <media:content medium="image" url="https://jarango.com/assets/images/jarango-title-card-ph.jpg" xmlns:media="http://search.yahoo.com/mrss/" />
  </entry>
  
  <entry>
    <title type="html">Traction Heroes Ep. 30: Jobs To Be Done</title>
    <link href="https://jarango.com/2026/02/23/traction-heroes-ep-30-jobs-to-be-done/" rel="alternate" type="text/html" title="Traction Heroes Ep. 30: Jobs To Be Done" />
    <published>2026-02-23T00:00:00-08:00</published>
    <updated>2026-02-23T00:00:00-08:00</updated>
    <id>https://jarango.com/2026/02/23/traction-heroes-ep-30-jobs-to-be-done</id>
    <content type="html" xml:base="https://jarango.com/2026/02/23/traction-heroes-ep-30-jobs-to-be-done/"><![CDATA[<div class="embed-container youtube-wrapper">
  <iframe src="https://www.youtube.com/embed/zn0HXV6PUNs" allowfullscreen=""></iframe>
</div>

<p>For <a href="https://www.tractionheroes.com/2439976/episodes/18721067-jobs-to-be-done">episode 30</a> of the <a href="https://www.tractionheroes.com"><em>Traction Heroes</em></a> podcast, I thought we’d try something a bit different. Rather than discuss a text in abstract terms, I brought to Harry a concrete situation where I’m struggling to gain traction. I wanted to see if looking at it through the lens of a classic business idea could help.</p>

<p>So I read the following passage from <a href="https://www.amazon.com/Competing-Against-Luck-Innovation-Customer-ebook/dp/B01BBPZIHM"><em>Competing Against Luck</em></a> by Clayton Christensen, Karen Dillon, Taddy Hall, and David S. Duncan:</p>

<blockquote>
  <p>Is innovation truly a crapshoot? Or is innovation difficult because we don’t know what causes it to succeed? I’ve watched so many smart, capable managers wrestle with all kinds of innovation challenges and nagging questions, but seldom the most fundamental one: What causes a customer to purchase and use a particular product or service?</p>

  <p>…</p>

  <p>customers don’t buy products or services; they pull them into their lives to make progress. We call this progress the “job” they are trying to get done, and in our metaphor we say that customers “hire” products or services to solve these jobs. When you understand that concept, the idea of uncovering consumer jobs makes intuitive sense.</p>

  <p>…</p>

  <p>We define a “job” as the progress that a person is trying to make in a particular circumstance. This definition of a job is not simply a new way of categorizing customers or their problems. It’s key to understanding why they make the choices they make. The choice of the word “progress” is deliberate. It represents movement toward a goal or aspiration. A job is always a process to make progress, it’s rarely a discrete event. A job is not necessarily just a “problem” that arises, though one form the progress can take is the resolution of a specific problem and the struggle it entails.</p>

  <p>…</p>

  <p>To summarize, the key features of our definition are:</p>
  <ul>
    <li>A job is the progress that an individual seeks in a given circumstance.</li>
    <li>Successful innovations enable a customer’s desired progress, resolve struggles, and fulfill unmet aspirations.</li>
    <li>They perform jobs that formerly had only inadequate or nonexistent solutions.</li>
    <li>Jobs are never simply about the functional—they have important social and emotional dimensions, which can be even more powerful than functional ones.</li>
    <li>Because jobs occur in the flow of daily life, the circumstance is central to their definition and becomes the essential unit of innovation work—not customer characteristics, product attributes, new technology, or trends.</li>
    <li>Jobs to Be Done are ongoing and recurring. They’re seldom discrete “events.”</li>
  </ul>
</blockquote>

<p>I read a bit more, but you should be able to grok by now that I’m talking about Jobs To Be Done. It’s such an important idea! At its core: focus not on a product’s superficial manifestations, but on the ultimate needs it serves.</p>

<p>What happens when there’s a dissonance between a service’s ultimate JTBD and how the market “hires” for that job? That’s the question we explored in this episode — using my consulting practice as the study subject.</p>

<p><a href="https://www.tractionheroes.com/2439976/episodes/18721067-jobs-to-be-done"><em>Traction Heroes episode 30: Jobs To Be Done</em></a></p>]]></content>
    <author>
      <name>Jorge Arango</name>
    </author>
    
    <category term="Podcast" /><category term="Values" /><category term="Leadership" />
    <summary type="html"><![CDATA[How do we market a product or service in a way that conveys its real value to customers?]]></summary>
    
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://jarango.com/assets/images/jarango-title-card-ph.jpg" />
    <media:content medium="image" url="https://jarango.com/assets/images/jarango-title-card-ph.jpg" xmlns:media="http://search.yahoo.com/mrss/" />
  </entry>
  
  <entry>
    <title type="html">Don’t Let Your Tools Distort Your Reality</title>
    <link href="https://jarango.com/2026/02/12/dont-let-your-tools-distort-your-reality/" rel="alternate" type="text/html" title="Don’t Let Your Tools Distort Your Reality" />
    <published>2026-02-12T00:00:00-08:00</published>
    <updated>2026-02-12T00:00:00-08:00</updated>
    <id>https://jarango.com/2026/02/12/dont-let-your-tools-distort-your-reality</id>
    <content type="html" xml:base="https://jarango.com/2026/02/12/dont-let-your-tools-distort-your-reality/"><![CDATA[<p>There are extreme, scary claims being made about AI — the speed, extent, and pervasiveness of the changes it’ll bring. Much of the hype is coming from people in tech, especially software developers. They’re seeing themselves replaced, and extrapolating to the rest of the world.</p>

<p>This is a mistake. But it’s understandable. When you’re deep in your tools, they can distort how you perceive reality. Let me give you an example.</p>

<p>Before going all-in on web design, I made architectural 3D renderings using an amazing software product called <a href="https://en.wikipedia.org/wiki/Autodesk_3ds_Max">3D Studio</a>.</p>

<div class="embed-container youtube-wrapper">
  <iframe src="https://www.youtube.com/embed/y8azQEceSJo" allowfullscreen=""></iframe>
</div>

<p>It was a threefold process. First, you had to get the volumes right. Then, you had to get the lighting right. Finally, you had to get the surfaces right. All three had to work together to create realistic simulations. PCs weren’t as powerful then, so I had to carefully tweak variables like the number of facets, the reflectivity and coarseness of surfaces, and the 2D bitmaps to wrap on them.</p>

<p>One day, after a long project, something weird happened: The tool started mediating my experience of reality. I’d walk around wondering how I’d simulate the scene before me: the light reflecting off the breakfast table, the dishes’ reflective gloss, the steam from the coffee cup. I started dreaming about how I’d render particular kinds of marble or a fuzzy carpet.</p>

<p>I knew that under the hood, these were just numbers. But it felt like I’d unlocked an amazing new ability. I could create believable worlds beyond what a camera could capture. Not merely intellectually: I could <em>feel</em> the scene’s parameters. This wasn’t just a superpower — the tools had changed my relationship to reality.</p>

<p>But it was a delusion. To state the obvious, that’s not how reality works. It’s just how you <em>model</em> it. Yes, that’s still very powerful. Simulations can be very useful. And if the web hadn’t happened, I could’ve had a career in game design. But there’s a tangible difference between a building and a <em>simulation of a building</em> — and I was in so deep that that difference had started blurring.</p>

<p>In retrospect, this was a scary time. I felt elated — but wasn’t seeing clearly. I’d lapsed into parsing reality through the tool’s affordances rather than the other way ‘round.</p>

<p>I think about this whenever I see software engineers raving about AI. Today’s LLMs are amazing at software development. And yes, software mediates and enables lots of useful activities. But coding is a very narrow use of language, one with characteristics that make it unlike most others. You can’t extrapolate from there to the rest of society (except, perhaps, the law.)</p>

<p>AI will reshape our world. But the transition will happen more gradually than many people are assuming. There’s a wide gap between software development and most of what we care about, which is <em>much</em> fuzzier than what can be rendered in code. Some of the most important things can’t be objectively represented with words at all — much less systematically codified.</p>]]></content>
    <author>
      <name>Jorge Arango</name>
    </author>
    
    <category term="Artificial Intelligence" /><category term="Cognition &amp; Psychology" />
    <summary type="html"><![CDATA[When faced with hype, consider the sources — and whether they're seeing clearly.]]></summary>
    
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://jarango.com/assets/images/jarango-title-card-ph.jpg" />
    <media:content medium="image" url="https://jarango.com/assets/images/jarango-title-card-ph.jpg" xmlns:media="http://search.yahoo.com/mrss/" />
  </entry>
  
  <entry>
    <title type="html">The Art of Action</title>
    <link href="https://jarango.com/readings/the-art-of-action/" rel="alternate" type="text/html" title="The Art of Action" />
    <published>2026-02-09T00:58:08-08:00</published>
    <updated>2026-02-09T00:58:08-08:00</updated>
    <id>https://jarango.com/readings/the-art-of-action</id>
    <content type="html" xml:base="https://jarango.com/readings/the-art-of-action/"><![CDATA[<p>How do you effectively guide a large organization toward a particular goal?</p>

<p>This book offers answers from military history — in particular, the 19th-century modernization of the Prussian army under its chief of staff <a href="https://en.wikipedia.org/wiki/Helmuth_von_Moltke_the_Elder">Helmuth von Moltke</a> (1800–1891), building on the ideas of <a href="https://en.wikipedia.org/wiki/Carl_von_Clausewitz">Carl von Clausewitz</a>, author of the influential <a href="https://en.wikipedia.org/wiki/On_War"><em>On War</em></a>.</p>

<p>The gist: organizations (e.g., armies) aren’t as intelligent as the sum of the people who comprise them. The organization’s structure greatly affects its effectiveness. As Bungay puts it,</p>

<blockquote>
  <p>unless the structure of the organization broadly reflects the structure of the tasks implied by executing the strategy, the strategy will not be executed.</p>
</blockquote>

<p>Also,</p>

<blockquote>
  <p>if you are serious about the strategy, in the case of conflict you have to change the structure.</p>
</blockquote>

<p>You’ll be familiar with this idea in UX if you’ve read <a href="https://rosenfeldmedia.com/books/living-in-information/"><em>Living In Information</em></a>. (I wish I’d read Bungay’s book before writing <em>LII</em>.)</p>

<h2 id="overcoming-friction">Overcoming Friction</h2>

<p>Von Clausewitz realized that armies on the battlefield encounter “friction” — real-world conditions that cause confusion, delays, inconveniences, etc. — which makes top-down control ineffective. There are three main gaps to overcome:</p>

<ul>
  <li><strong>Knowledge gap:</strong> the delta between plans and outcomes</li>
  <li><strong>Alignment gap:</strong> the delta between plans and actions</li>
  <li><strong>Effects gap:</strong> the delta between actions and outcomes</li>
</ul>

<p>You can’t overcome these gaps by brute force (i.e., even stricter hierarchical control.) Organizations are complex adaptive systems; you must intervene mindfully. Rather than dictate from the top down, you must establish levels to mediate between strategy and on-the-ground execution.</p>

<h2 id="levels-of-command">Levels of Command</h2>

<p>Instead of granular hierarchical control, von Moltke encouraged informed independent thinking. The idea: foster cohesion while allowing for effective command and control. It manifested in three levels of command:</p>

<ol>
  <li>The highest level comprises short, direct orders</li>
  <li>The next level down takes those orders and adds the necessary detail</li>
  <li>The lowest level, execution, requires adapting the level above to conditions on the ground</li>
</ol>

<p>The approach is called “mission command” or, in the context of business, “directed opportunism.” That is, units on the ground are given leeway to execute toward a clearly specified (but not over-specified) direction.</p>

<h2 id="the-role-of-strategy">The Role of Strategy</h2>

<p>Strategy sets the direction — how we’ll win given the resources, capabilities, and constraints that affect us and our adversaries. It’s eminently practical and essential:</p>

<blockquote>
  <p>Strategy is a system of expedients. It is more than science, it is the application of knowledge to practical life, the evolution of an original guiding idea under constantly changing circumstances, the art of taking action under the pressure of the most difficult conditions.</p>
</blockquote>

<p>For von Moltke, strategy was “a practical art of adapting means to ends.” (Wikipedia)</p>

<blockquote>
  <p>Strategy … demands a certain type of thinking. It sets direction and therefore clearly encompasses what von Moltke calls a “goal,” “aim,” or “purpose.” Let us call this element the aim. An aim can be an end-point or destination, and aiming means pointing in that direction, so it encompasses both “going west” and “getting to San Francisco.” The aim defines what the organization is trying to achieve with a view to gaining competitive advantage. How we set about achieving the aim depends on relating possible aims to the external opportunities offered by the market and our internal capabilities.</p>
</blockquote>

<p>The essence of strategy is <em>focus</em> — choosing where we’ll put our efforts and resources. As Bungay puts it, “Strategy is about fighting the right battles, the important ones you are likely to win. Operations are about winning them.”</p>

<h2 id="operations">Operations</h2>

<p>Von Moltke was the first to realize there’s a level needed between strategy and tactics. He called this middle layer “operations,” and its role was to translate strategy into action.</p>

<p>This requires both strategic thinking and operational direction. The operations layer mediates between them, feeding information up from the field and down from the directive layer.</p>

<h2 id="clarity">Clarity</h2>

<p>Von Moltke led with directives. This requires clarity — in thinking and (especially) in communication. You won’t achieve cohesive movement if people are confused about where you’re aiming.</p>

<blockquote>
  <p>The true strategist is a simplifier of complexity. Not many people can consistently do it well.</p>
</blockquote>

<p>It’s not enough to simplify complexity. You must also communicate directions clearly.</p>

<blockquote>
  <p>An important corollary of unity of effort is the emphasis on clarity and simplicity. What matters about creating alignment around a strategy is not the volume of communication, but its quality and precision. In order for something to be clear, it must first be made simple.</p>
</blockquote>

<h2 id="takeaways">Takeaways</h2>

<p>Toward the end of the book, Bungay summarizes his argument in ten pithy points:</p>

<blockquote>
  <ol>
    <li>We are finite beings with limited knowledge and independent wills.</li>
    <li>The business environment is unpredictable and uncertain, so we should expect the unexpected and should not plan beyond the circumstances we can foresee.</li>
    <li>Within the constraints of our limited knowledge we should strive to identify the essentials of a situation and make choices about what it is most important to achieve.</li>
    <li>To allow people to take effective action, we must make sure they understand what they are to achieve and why.</li>
    <li>They should then explain what they are going to do as a result, define the implied tasks, and check back with us.</li>
    <li>They should then assign the tasks they have defined to individuals who are accountable for achieving them, and specify boundaries within which they are free to act.</li>
    <li>Everyone must have the skills and resources to do what is needed and the space to take independent decisions and actions when the unexpected occurs, as it will.</li>
    <li>As the situation changes, everyone should be expected to adapt their actions according to their best judgment in order to achieve the intended outcomes.</li>
    <li>People will only show the level of initiative required if they believe that the organization will support them.</li>
    <li>What has not been made simple cannot be made clear and what is not clear will not get done.</li>
  </ol>
</blockquote>

<p>I learned about <em>The Art of Action</em> from my friend Harry Max when we recorded <a href="https://www.tractionheroes.com/2439976/episodes/18640119-delusion">episode 29 of <em>Traction Heroes</em></a>. It’s become a new favorite — one I’ll refer to (alongside <a href="/readings/playing-to-win/"><em>Playing to Win</em></a>) when working on organizational strategy.</p>]]></content>
    <author>
      <name>Jorge Arango</name>
    </author>
    
    <category term="Business &amp; Leadership" /><category term="Systems Thinking" />
    <summary type="html"><![CDATA[How do you effectively guide a large organization? This book provides answers from military history.]]></summary>
    
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://jarango.com/assets/images/jarango-title-card-ph.jpg" />
    <media:content medium="image" url="https://jarango.com/assets/images/jarango-title-card-ph.jpg" xmlns:media="http://search.yahoo.com/mrss/" />
  </entry>
  
  <entry>
    <title type="html">Traction Heroes Ep. 29: Delusion</title>
    <link href="https://jarango.com/2026/02/09/traction-heroes-ep-29-delusion/" rel="alternate" type="text/html" title="Traction Heroes Ep. 29: Delusion" />
    <published>2026-02-09T00:00:00-08:00</published>
    <updated>2026-02-09T00:00:00-08:00</updated>
    <id>https://jarango.com/2026/02/09/traction-heroes-ep-29-delusion</id>
    <content type="html" xml:base="https://jarango.com/2026/02/09/traction-heroes-ep-29-delusion/"><![CDATA[<div class="embed-container youtube-wrapper">
  <iframe src="https://www.youtube.com/embed/fjf0wpFbGoM" allowfullscreen=""></iframe>
</div>

<p>Harry and I have been doing the <a href="https://www.tractionheroes.com"><em>Traction Heroes</em></a> podcast for over a year, and themes are starting to emerge. The most prominent is the importance of perceiving reality clearly. I’m often reminded of Richard Feynman’s quip, “The first principle is that you must not fool yourself — and you are the easiest person to fool.”</p>

<p>For <a href="https://www.tractionheroes.com/2439976/episodes/18640119-delusion">episode 29</a>, Harry brought the following passage from Stephen Bungay’s <a href="/readings/the-art-of-action/"><em>The Art of Action</em></a>:</p>

<blockquote>
  <p>At its most simple, executing strategy is about planning what to do in order to achieve certain outcomes and making sure that the actions we have planned are actually carried out until the desired outcomes are achieved.</p>

  <p>In a stable, predictable environment it is possible to make quite good plans by gathering and analyzing information. We can learn enough about the outside world and our position in it to set some objectives. We know enough about the effects any actions will have to be able to work out what to do to achieve the objectives. We can then use a mixture of supervision, controls, and incentives to coerce, persuade, or cajole people into doing what we want. We can measure the results until the outcomes we want are achieved. We can make plans, take actions, and achieve outcomes in a linear sequence with some reliability. If we are assiduous enough, pay attention to detail, and exercise rigorous control, the sequence will be seamless.</p>

  <p>In an unpredictable environment, this approach quickly falters. The longer and more rigorously we persist with it, the more quickly and completely things will break down. The environment we are in creates gaps between plans, actions, and outcomes:</p>

  <ul>
    <li>
      <p>The gap between plans and outcomes concerns <em>knowledge</em>: It is the difference between what we would like to know and what we actually know. It means that we cannot create perfect plans.</p>
    </li>
    <li>
      <p>The gap between plans and actions concerns <em>alignment</em>: It is the difference between what we would like people to do and what they actually do. It means that even if we encourage them to switch off their brains, we cannot know enough about them to program them perfectly.</p>
    </li>
    <li>
      <p>The gap between actions and outcomes concerns <em>effects</em>: It is the difference between what we hope our actions will achieve and what they actually achieve. We can never fully predict how the environment will react to what we do. It means that we cannot know in advance exactly what outcomes the actions of our organization are going to create.</p>
    </li>
  </ul>

  <p>Although it is not common to talk about these three gaps, it is common enough to confront them. It is also common enough to react in ways that make intuitive sense. Faced with a lack of knowledge, it seems logical to seek more detailed information. Faced with a problem of alignment, it feels natural to issue more detailed instructions. And faced with disappointment in the effects being achieved, it is quite understandable to impose more detailed controls. Unfortunately, these reactions do not solve the problem. In fact, they make it worse.</p>

  <p>There is a model for creating a link between strategy and operations and bridging the three gaps. It involves applying a few general principles in continually changing specific circumstances. They are not difficult to understand, but their implications are profound.</p>
</blockquote>

<p>I’ve read the book since recording this episode, and it’s become a new favorite. Here’s the gist as it pertains to this conversation: strategy is essential, but it must translate to action. And conditions on the ground can change very fast, so leadership can’t overspecify directions.</p>

<p>That assumes clear perception at every level. But it’s not easy; cognitive biases get in the way. Whether you’re leading or executing, you must overcome self-delusion. There are several ways of doing this. I suggested using AI to challenge assumptions and Harry offered an insightful question to promote honest introspection.</p>

<p>Check out our conversation for more.</p>

<p><a href="https://www.tractionheroes.com/2439976/episodes/18640119-delusion"><em>Traction Heroes episode 29: Delusion</em></a></p>]]></content>
    <author>
      <name>Jorge Arango</name>
    </author>
    
    <category term="Podcast" /><category term="Values" /><category term="Leadership" />
    <summary type="html"><![CDATA[A conversation about how to give better directions by perceiving reality more clearly.]]></summary>
    
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://jarango.com/assets/images/jarango-title-card-ph.jpg" />
    <media:content medium="image" url="https://jarango.com/assets/images/jarango-title-card-ph.jpg" xmlns:media="http://search.yahoo.com/mrss/" />
  </entry>
  
  <entry>
    <title type="html">AI Starting Points</title>
    <link href="https://jarango.com/2026/02/03/ai-starting-points/" rel="alternate" type="text/html" title="AI Starting Points" />
    <published>2026-02-03T00:00:00-08:00</published>
    <updated>2026-02-03T00:00:00-08:00</updated>
    <id>https://jarango.com/2026/02/03/ai-starting-points</id>
    <content type="html" xml:base="https://jarango.com/2026/02/03/ai-starting-points/"><![CDATA[<p>A friend is getting back into UX research after a two-year gap. Recognizing AI is a big deal, she asked where to start learning. Great question! Other people are surely wondering as well, so I’m sharing my answers here.</p>

<p>Caveats:</p>

<ul>
  <li>These answers reflect <em>how I learn</em>; YMMV.</li>
  <li>You’re aiming for employment, so we’ll focus on practical knowledge.</li>
  <li>I wrote this in early Feb 2026 — it won’t age well.</li>
</ul>

<p>Finally, you’re behind the curve, so I’ll abandon nuance and be blunt, broad, and brief. Let’s go!</p>

<h2 id="1-get-hands-on-asap">1. Get Hands-on ASAP</h2>

<p>Don’t bother with AI-powered products that automate particular tasks. Most are front-ends to specialized prompts, contexts, and data sources. In time, most will be replaced by general-purpose systems.</p>

<p>Learn first principles instead. If nothing else, sign up for a paid <a href="https://claude.ai">Claude</a> or <a href="https://chatgpt.com">ChatGPT</a> account. Learn to use projects, Custom GPTs, and other such “advanced” features.</p>

<p>But you should quickly go beyond chatbots. This entails <a href="https://www.python.org">coding</a> and (at a minimum) working in your computer’s <a href="https://en.wikipedia.org/wiki/Command-line_interface">command-line interface</a>. The CLI is decades old; there are lots of guides online.</p>

<p>Once you’re comfortable with the command line, install Simon Willison’s <a href="https://github.com/simonw/llm">llm</a>. Play around. Build little automations for everyday tasks. When you’ve “got” llm, install <a href="https://claude.com/product/claude-code">Claude Code</a>. Use it on something larger than a papercut.</p>
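<p>To make this concrete, here’s a minimal sketch of early experiments with llm. (The prompts and the <code>notes.md</code> file are placeholders; adapt them to your own tasks, and note that providers other than OpenAI work via plugins.)</p>

```shell
# Install the llm CLI (assumes Python and pip are already set up)
pip install llm

# Store an API key once; here, OpenAI as an example provider
llm keys set openai

# Ask a one-off question from the terminal
llm "Summarize the Jobs To Be Done framework in two sentences"

# Pipe a file in as context, with -s setting the system prompt:
# the kind of small everyday automation worth building
cat notes.md | llm -s "Extract the action items as a Markdown list"
```

<p>Once a pipeline like that proves useful, wrap it in a small shell script or alias so it becomes a reusable tool.</p>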

<p>Plain text is the lingua franca here; learn to represent data in <a href="https://daringfireball.net/projects/markdown/basics">Markdown</a> and how to structure context. If your computer has enough memory, install a local model. (If it doesn’t, consider getting a more powerful machine.) Explore <a href="https://huggingface.co">Hugging Face</a>.</p>

<p>If you get stuck, look on <a href="https://www.youtube.com/@AndrejKarpathy">YouTube</a>. There are lots of hands-on guides there. And of course, Claude/ChatGPT can be fabulous tutors. Set up a project in either one to support your learning journey.</p>

<p>Self-serving: if you’re attending the <a href="https://www.theiaconference.com">IA Conference</a>, register for my <a href="https://www.theiaconference.com/sessions/hands-on-ai/">AI Hands-on workshop</a>. I occasionally teach an online cohort as well; sign up to <a href="https://jarango.com/newsletter/">my newsletter</a> to be notified.</p>

<p>Bottom line: Start making as quickly as possible. Use AI to solve real problems. Document your experiments to show potential employers. (<a href="https://jarango.com/ai/">Here are mine</a>.)</p>

<h2 id="2-follow-the-right-people">2. Follow the Right People</h2>

<p>You need theory too. Alas, the field is changing too fast for books. (With one exception, noted below.) Your best bet is to follow the right people.</p>

<p>“Right” means</p>

<ol>
  <li>they know what they’re talking about,</li>
  <li>regularly share useful/insightful stuff, and</li>
  <li>aren’t grossly biased for or against the technology.</li>
</ol>

<p>That sounds stupid, but there’s lots of hucksterism and ideological bloviating around AI.</p>

<p>Here’s who I follow:</p>

<ul>
  <li>
    <p><a href="https://simonwillison.net"><strong>Simon Willison</strong></a>: the prototypical alpha geek, experimenting hands-on and generously sharing what he learns.</p>
  </li>
  <li>
    <p><a href="https://www.oneusefulthing.org"><strong>Ethan Mollick</strong></a>: a practical academic who deeply understands the tech; his book <a href="/readings/co-intelligence/"><em>Co-Intelligence</em></a> is the best general-purpose intro.</p>
  </li>
  <li>
    <p><a href="https://www.youtube.com/@AndrejKarpathy"><strong>Andrej Karpathy</strong></a>: OpenAI co-founder; shares (long!) videos on how the technology works.</p>
  </li>
  <li>
    <p><a href="https://garymarcus.substack.com"><strong>Gary Marcus</strong></a>: highly critical of the hype; understands the technology’s potential but (rightly) calls out its shortcomings.</p>
  </li>
</ul>

<p>I’m skeptical of:</p>

<ul>
  <li>Academics in the “soft” sciences</li>
  <li>Mainstream journalists</li>
<li>Anyone else with a megaphone who stands to lose status</li>
  <li>AI company execs (and others looking to inflate valuations)</li>
  <li>Doomsday prophets</li>
</ul>

<p>Yes, there are ethical, environmental, legal, financial, etc. questions. Lots of people have strong opinions on these subjects, but there’s little solid data. You’re looking for practical advice, so I suggest putting aside these concerns for now.</p>

<h2 id="3-re-think-your-work">3. Re-think Your Work</h2>

<p>As you learn about AI, ruthlessly consider the impact on your work. Look to replace yourself. I assure you, others are. Do it first.</p>

<p>Research is one of the areas of UX that will be most transformed. I guesstimate 80–90% of “traditional” jobs will disappear. New roles will look more like management than traditional IC roles.</p>

<p>Ask yourself:</p>

<ul>
  <li>How would I delegate this task to an AI “intern”?</li>
  <li>What information do they need to do it right?</li>
  <li>How would I measure results?</li>
  <li>How would I give them feedback?</li>
</ul>

<p>Assume the intern has pattern-matching superpowers and knows more about what you’re delegating than you do. What it doesn’t have is humanity and common sense.</p>

<p>That’s where you come in.</p>

<h2 id="be-the-human-in-the-loop">Be the Human in the Loop</h2>

<p>Research is sense-making: gathering relevant data about a problem space and asking the right questions to generate insights that support good decisions. AI can greatly augment humans in the process. It can’t fully replace them — yet.</p>

<p>But it’s not a question of whether organizations will use AI in UX research. They already are. The question is <em>how</em> they do it. Experienced practitioners will guide them to better insights cheaper and faster, while doing it ethically and in service to human goals. But only those who grok the technology will have a say.</p>

<p>Get cracking.</p>]]></content>
    <author>
      <name>Jorge Arango</name>
    </author>
    
    <category term="Artificial Intelligence" /><category term="User Experience" />
    <summary type="html"><![CDATA[Opinionated suggestions for UX researchers getting started with AI in early 2026.]]></summary>
    
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="/assets/images/2026/02/compass.jpg" />
    <media:content medium="image" url="/assets/images/2026/02/compass.jpg" xmlns:media="http://search.yahoo.com/mrss/" />
  </entry>
  
  <entry>
    <title type="html">Traction Heroes Ep. 28: Going Meta</title>
    <link href="https://jarango.com/2026/01/26/traction-heroes-ep-28-going-meta/" rel="alternate" type="text/html" title="Traction Heroes Ep. 28: Going Meta" />
    <published>2026-01-26T00:00:00-08:00</published>
    <updated>2026-01-26T00:00:00-08:00</updated>
    <id>https://jarango.com/2026/01/26/traction-heroes-ep-28-going-meta</id>
    <content type="html" xml:base="https://jarango.com/2026/01/26/traction-heroes-ep-28-going-meta/"><![CDATA[<div class="embed-container youtube-wrapper">
  <iframe src="https://www.youtube.com/embed/l5KkDfoEI2c" allowfullscreen=""></iframe>
</div>

<p>Why does your business exist? Is it to make money? Or is there a higher motive? The answer will define its culture and longevity.</p>

<p>This key question was the focus of <a href="https://www.tractionheroes.com/2439976/episodes/18559645-going-meta">episode 28</a> of the <a href="https://www.tractionheroes.com/2439976/episodes/18559645-going-meta"><em>Traction Heroes</em> podcast</a>. It was prompted by the following fragment from <a href="https://www.oreilly.com/pub/a/tim/articles/beyondbook_0400.html"><em>Beyond the Book</em></a>, an essay by Tim O’Reilly:</p>

<blockquote>
  <p>I like to compare business (or life for that matter) to an extended road trip. Say you want to travel America by the back roads. You need gas for your car, food and water for your body. Especially before heading across Death Valley or the Utah salt flats, you’d better be darn sure that you have enough gas in your tank. But you certainly don’t think of your trip as a tour of gas stations! What’s the real purpose behind what you do?</p>

  <p>Why then do so many companies think that they are just in the business of making money? At O’Reilly, our products aren’t just books, conferences, and web sites: they are tools for conveying critical information to people who are changing the world. Our product is also the lives of the people who work for us, the customers who are changed as a result of interacting with us, and all the “downstream effects” of what we do.</p>

  <p>When I started the company, my stated business goal was a simple one: “Interesting work for interesting people.” Above all, we wanted to be <em>useful</em>. Our financial goals were just to keep afloat while doing something worthwhile.</p>
</blockquote>

<p>This metaphor of a business as something more than a “tour of gas stations” has influenced my life and career. But I suspect many people see gas as an end in itself. I wanted Harry’s take, and our conversation didn’t disappoint. (Turns out Harry worked at O’Reilly!)</p>

<p>We’re probing our deepest values through the show. I hope these conversations are as valuable to you as they are to me. (If they are, please leave a review in your favorite podcast app — it helps the show!)</p>

<p><a href="https://www.tractionheroes.com/2439976/episodes/18559645-going-meta"><em>Traction Heroes episode 28: Going Meta</em></a></p>]]></content>
    <author>
      <name>Jorge Arango</name>
    </author>
    
    <category term="Podcast" /><category term="Values" /><category term="Leadership" />
    <summary type="html"><![CDATA[Why does your business exist? The answer will determine how it plays out in the long term.]]></summary>
    
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://jarango.com/assets/images/jarango-title-card-ph.jpg" />
    <media:content medium="image" url="https://jarango.com/assets/images/jarango-title-card-ph.jpg" xmlns:media="http://search.yahoo.com/mrss/" />
  </entry>
  
  <entry>
    <title type="html">Dabble No More: Toward Disciplined AI Adoption</title>
    <link href="https://jarango.com/2026/01/15/dabble-no-more-toward-disciplined-ai-adoption/" rel="alternate" type="text/html" title="Dabble No More: Toward Disciplined AI Adoption" />
    <published>2026-01-15T00:00:00-08:00</published>
    <updated>2026-01-15T00:00:00-08:00</updated>
    <id>https://jarango.com/2026/01/15/dabble-no-more-toward-disciplined-ai-adoption</id>
    <content type="html" xml:base="https://jarango.com/2026/01/15/dabble-no-more-toward-disciplined-ai-adoption/"><![CDATA[<p>Recently, I had a conversation with an architecture studio lead that went something like this:</p>

<blockquote>
  <p><strong>Architect:</strong> We’re using AI in the studio.</p>

  <p><strong>Me:</strong> Oh yeah? What are you doing?</p>

  <p><strong>Architect:</strong> A few things. [Person A] is using one of those meeting bots to transcribe meetings. And [Person B] is feeding renderings into ChatGPT to explore materials and colors. Clients are impressed.</p>

  <p><strong>Me:</strong> Interesting. Anything else?</p>

  <p><strong>Architect:</strong> Yes, [Person C] has used ChatGPT to create social media posts. Although we haven’t really scaled that.</p>
</blockquote>

<p>This is actually a composite of several similar conversations, and I’ve changed the details — but the spirit stands. I believe this short dialog accurately represents how many service organizations are embracing AI: by <em>dabbling</em>.</p>

<p>Dabbling — or, more gently, “undisciplined adoption” — is experimenting with AI without understanding how information actually flows through the organization to create value. Instead, team members use AI ad-hoc on whatever interests them most. It can happen officially (i.e., using company-provided licenses) or unofficially (bringing their own.)</p>

<p>While dabbling has upsides, it also carries significant risks and precludes getting the most value out of AI. Let’s explore how.</p>

<h2 id="upsides-of-dabbling">Upsides of Dabbling</h2>

<p>I can think of at least three pros to dabbling with AI:</p>

<ul>
  <li>
    <p><strong>Quick learning.</strong> By now, most folks in service industries have heard about AI. Many are wondering how it might help their business. But reading about a technology isn’t the same as using it. Dabbling gets them rolling quickly: Setting up an account is easy, and getting a useful reply to a prompt highly satisfying. A nudge to go deeper — good!</p>
  </li>
  <li>
    <p><strong>Low friction.</strong> Basic LLM accounts are free, and the pro versions cost around $20/month — not a big commitment. A ChatGPT account and YouTube will get you rolling. No need for big culture change initiatives, reorgs, or IT investments. And unless your IT department has put the kibosh on it, you won’t be stepping on anyone’s toes.</p>
  </li>
  <li>
    <p><strong>Nice spread.</strong> AI is a general purpose technology: it can help with research, production, marketing, finance, etc. With different people experimenting, as in the example above, you’ll get glimmers of possible applications. Letting a thousand (or, more likely, half a dozen) flowers bloom will give you a sense of what the garden might include.</p>
  </li>
</ul>

<p>Given these “pros,” it’s understandable why firms dabble: it’s a nonthreatening way to get started on the journey.</p>

<h2 id="but-its-not-all-roses">But It’s Not All Roses</h2>

<p>Dabbling is better than nothing. But it has significant downsides:</p>

<ul>
  <li>
    <p><strong>No governance.</strong> Let’s start with the scariest. Undisciplined AI use is a privacy and security risk. Unless you have a properly configured pro account, your chats will likely be used to train models. Meaning, your private data might show up as an answer to someone else’s prompt. There are good reasons why your IT team wants visibility and control!</p>
  </li>
  <li>
    <p><strong>Learnings don’t scale.</strong> Yes, dabbling lets team members get into AI. But that learning won’t be evenly distributed. And their focus will be on narrow problems (e.g., crafting a social media post, tweaking a rendering) that can’t be leveraged more broadly. They’ll likely have no plans or means to feed data back into the org’s broader data repositories.</p>
  </li>
  <li>
    <p><strong>Wrong mental model.</strong> Fast learning doesn’t mean <em>good</em> learning. By dabbling, team members will come to understand AIs as freestanding tools whose abilities reside in vendors’ clouds. They’ll assume utility lies in the chatbot’s cleverness rather than how they leverage structured information. This is a bad mental model. AIs should be understood as adding smarts to (and with) their firm’s IT infrastructure.</p>
  </li>
  <li>
    <p><strong>Opportunity cost.</strong> By focusing on “paper cut” problems, org leaders can boast that the company is already “using AI.” As a result, they’ll fail to invest in projects that have greater upside potential — something that can only happen when they consider initiatives as holistic responses to strategic directions. By dabbling, the org gets a false sense of closure while leaving lots of value on the table.</p>
  </li>
</ul>

<h2 id="what-to-do-instead">What To Do Instead</h2>

<p>Ok, so dabbling isn’t a good strategy. But that doesn’t mean you shouldn’t use AI at all. So how should you proceed instead?</p>

<h3 id="1-identify-your-businesss-soul">1. Identify your business’s “soul”</h3>

<p>Start where your organization shines. What makes it stand out from competitors? What’s the secret sauce? Where does it create most value?  Don’t threaten those things. Instead, look to automate the chores that keep you from delivering your particular kind of value in a timely and cost-effective manner.</p>

<h3 id="2-define-your-knowledge-pipeline">2. Define your knowledge pipeline</h3>

<p>And how do you do that? To begin with, you must grok the organization’s “knowledge pipeline” — how information is created, transformed, passed on, searched, used, etc. All businesses generate and consume data: leads, proposals, research, responses, invoices, documentation, etc. The more structured this data, the easier it’ll be to integrate into AI-powered workflows.</p>

<h3 id="3-understand-ais-real-capabilities">3. Understand AI’s real capabilities</h3>

<p>Many people are pushing unrealistic ideas of what AI can do. The reality is that while LLMs are a powerful general-purpose technology, you can’t just point them at a problem and say “fix this” — at least not in a scalable and repeatable way. Understanding what the technology can do <em>today</em> is essential to designing systems that create real value consistently, rather than one-off automations.</p>

<p>By mapping how information flows through the organization, where the real value lies, and what AI can (and can’t) do well, you can determine how it might best alleviate information bottlenecks — without threatening your people.</p>

<h2 id="a-real-world-example">A Real-world Example</h2>

<p>Recently, Greg and I helped an architecture studio define a coherent direction for their AI use. Outlining the studio’s knowledge pipeline led to an interesting discovery: a significant portion of their time was spent responding to questions during the construction administration (CA) phase of projects.</p>

<p>Given current LLM capabilities, we determined that helping build CA dossiers would be a good place to start. It’s a time-consuming task that few people want to do, but which must be done to deliver value. But it’s also far enough removed from the studio’s core deliverable — excellent architectural design — that it doesn’t threaten their soul.</p>

<p>This isn’t the “sexiest” use of AI, the sort one brags about. But it solves a real problem in a scalable and repeatable way. It enhances the overall value to clients and improves working conditions for team members. It’s a win-win all around — but you don’t get there by dabbling.</p>

<h2 id="moving-ahead--with-discipline">Moving Ahead — With Discipline</h2>

<p>Dabbling isn’t dangerous just because it’s uncontrolled. It’s dangerous because it gives the firm a false sense of progress. It teaches people to think about AI in the wrong way — as a clever ad hoc tool rather than as part of a broader system — while distracting them from more fruitful explorations.</p>

<p>The opposite of dabbling isn’t stasis; it’s moving ahead in a disciplined way. Starting undirected is natural and easy. But eventually, you must move more deliberately and strategically. The goal of using AI shouldn’t be replacing what makes you special. Instead, it should be freeing your people so they can deliver excellence — and enjoy the process.</p>]]></content>
    <author>
      <name>Jorge Arango</name>
    </author>
    
    <category term="Artificial Intelligence" /><category term="Business &amp; Leadership" />
    <summary type="html"><![CDATA[Experimenting with AI is a starting point. But creating real value requires direction and discipline.]]></summary>
    
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="/assets/images/2026/01/blue-blocks.jpg" />
    <media:content medium="image" url="/assets/images/2026/01/blue-blocks.jpg" xmlns:media="http://search.yahoo.com/mrss/" />
  </entry>
  
  <entry>
    <title type="html">Traction Heroes Ep. 27: Eliciting Information</title>
    <link href="https://jarango.com/2026/01/12/traction-heroes-ep-27-eliciting-information/" rel="alternate" type="text/html" title="Traction Heroes Ep. 27: Eliciting Information" />
    <published>2026-01-12T00:00:00-08:00</published>
    <updated>2026-01-12T00:00:00-08:00</updated>
    <id>https://jarango.com/2026/01/12/traction-heroes-ep-27-eliciting-information</id>
    <content type="html" xml:base="https://jarango.com/2026/01/12/traction-heroes-ep-27-eliciting-information/"><![CDATA[<div class="embed-container youtube-wrapper">
  <iframe src="https://www.youtube.com/embed/zC4jBoS-6Mo" allowfullscreen=""></iframe>
</div>

<p>How do you know what you know? Some things we learn through books or courses. Others, we must suss out ourselves — often, by interviewing people.</p>

<p>But that can be a toss-up. For one thing, they might not be willing to share. But they might also not be clear on the ideas themselves, or be able to put them into words. Fortunately, we can develop a skill that helps: <em>elicitation</em>.</p>

<p>In <a href="https://www.tractionheroes.com/2439976/episodes/18484855-eliciting-information">episode 27</a> of the <a href="https://www.tractionheroes.com"><em>Traction Heroes</em> podcast</a>, Harry brought a reading from John Nolan’s book <a href="https://www.amazon.com/Confidential-Uncover-Competitors-Business-Quickly/dp/006661984X"><em>Confidential</em></a>. It’s all about elicitation, and has its roots in a surprising field. Here’s part of the reading to give you a taste:</p>

<blockquote>
  <p>If we stop to think about it, almost everyone has a need to get information in today’s world. Often, that information is in the hands and minds of people who, for a variety of reasons, aren’t always the most cooperative. Sometimes, the people with the information are what the psychological and psychiatric community referred to as “resistant patients.” The more resistant the client or patient, and therefore, the less effective the intervention, the greater the chance that the response will be defensive, misleading, and untruthful. If we’re limited to one or two sets of skills, our chances of collecting the information decrease significantly.</p>

  <p>The most common styles of obtaining information are interrogation and interviewing. Both styles are question-based. The less elegant the question, the greater degree of suspicion, uncooperativeness, and downright dismissal. These are separate and distinct from elicitation.</p>
</blockquote>

<p>So what is elicitation? Tune in to our conversation to find out. (And while you’re listening, please leave a rating and/or review — it helps other folks find the show.)</p>

<p><a href="https://www.tractionheroes.com/2439976/episodes/18484855-eliciting-information"><em>Traction Heroes episode 27: Eliciting Information</em></a></p>]]></content>
    <author>
      <name>Jorge Arango</name>
    </author>
    
    <category term="Podcast" /><category term="Values" /><category term="Leadership" />
    <summary type="html"><![CDATA[A conversation about obtaining information from unhelpful interlocutors.]]></summary>
    
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://jarango.com/assets/images/jarango-title-card-ph.jpg" />
    <media:content medium="image" url="https://jarango.com/assets/images/jarango-title-card-ph.jpg" xmlns:media="http://search.yahoo.com/mrss/" />
  </entry>
  
  <entry>
    <title type="html">The Moylan Arrow: IA Lessons for AI-Powered Experiences</title>
    <link href="https://jarango.com/2026/01/06/the-moylan-arrow-ia-lessons-for-ai-powered-systems/" rel="alternate" type="text/html" title="The Moylan Arrow: IA Lessons for AI-Powered Experiences" />
    <published>2026-01-06T00:00:00-08:00</published>
    <updated>2026-01-06T00:00:00-08:00</updated>
    <id>https://jarango.com/2026/01/06/the-moylan-arrow-ia-lessons-for-ai-powered-systems</id>
    <content type="html" xml:base="https://jarango.com/2026/01/06/the-moylan-arrow-ia-lessons-for-ai-powered-systems/"><![CDATA[<p><a href="https://www.wsj.com/business/autos/ford-gas-arrow-inventor-jim-moylan-6b2ef066?st=wwpyRk&amp;reflink=desktopwebshare_permalink">Jim Moylan died recently</a>. He was the Ford engineer who proposed that little arrow on the fuel gauge of most cars that indicates the cap’s location. It’s handy when you’re pulling into a gas station to refuel, especially when you’re driving an unfamiliar car.</p>

<p>The <a href="https://en.wikipedia.org/wiki/Fuel_gauge#Moylan_arrow">Moylan arrow</a> is such an obviously useful idea that it was immediately implemented by Ford and widely adopted by other manufacturers. It’s also an excellent example of good <a href="https://jarango.com/what-is-information-architecture/">information architecture</a> — and one that provides important lessons as we navigate the AI age.</p>

<h2 id="how-is-this-information-architecture">How Is This Information Architecture?</h2>

<p>Information allows us to act more skillfully. Imagine you come to a fork on a road. Without a sign, you’d need a compass or a great sense of direction to choose correctly. But with a clear sign, you’d quickly know which road to take. The sign reduces ambiguity.</p>

<p>The Moylan arrow, too, disambiguates a choice. Pulling in on the wrong side of the pump is an annoying inconvenience. By making the driver smarter, the arrow improves the car’s UX. Critically, it does so without much cost to the manufacturer. That’s why it’s become pervasive.</p>

<p>“But,” you may protest, “this isn’t IA; it’s user interface/icon design.” That’s partly true. As usual, users experience IA in an interface. The arrow wouldn’t be as effective if it wasn’t clear and recognizable. Visuals — the choice of symbols (an abstracted gas pump and a triangle) and colors (usually white on black) — are key.</p>

<p>But there’s more to it than that. A big part of the arrow’s effectiveness is its location: on the dashboard, next to the fuel gauge — exactly where you’re looking when your car needs refueling. Consider how much less effective it’d be if it were only noted in the owner’s manual.</p>

<p>The Moylan arrow works because it’s:</p>

<ul>
  <li><strong>Clear</strong>: legible and understandable</li>
  <li><strong>Findable</strong>: located where you’re already looking</li>
  <li><strong>Relevant</strong>: provides the exact answer you need</li>
  <li><strong>Contextual</strong>: available when needed, but “quiet” otherwise</li>
  <li><strong>Obvious</strong>: doesn’t need further instructions</li>
  <li><strong>Cheap</strong>: of negligible cost to manufacturers</li>
</ul>

<p>The arrow isn’t just a clear icon. It disambiguates a key structural distinction of the car. The mental model is clear: most current <a href="https://en.wikipedia.org/wiki/Internal_combustion_engine">ICE</a> cars have their fuel cap on either the left or right side. The question is, “which is it for <em>this</em> car?” The answer is obvious once you know where to look — and it’s cognitively respectful (i.e., it doesn’t scream, “LOOK AT ME!” while you’re driving.)</p>

<p>Which is to say, the Moylan arrow:</p>

<ol>
  <li>answers a latent question (“Which side is the fuel cap on?”)</li>
  <li>at a time when the user is making a key decision (pulling in to a gas station)</li>
  <li>by showing them just what they need (left or right side)</li>
  <li>where they expect to find it (on the dashboard, next to the fuel gauge)</li>
  <li>cheaply, efficiently, and respectfully.</li>
</ol>

<p>That’s classic information architecture.</p>

<h2 id="what-does-this-have-to-do-with-ai">What Does This Have to do With AI?</h2>

<p>This is the <em>opposite</em> approach to many of today’s AI-powered systems. The arrow is low tech (just a bit more paint/pixels!) and therefore relatively cheap. It does just one job — resolving structural ambiguity — effectively and efficiently. It’s there when needed and blends into the background otherwise.</p>

<p>Admittedly, its elegance is due in great part to the binary, static, and universal nature of the information it conveys. The cap can only be in one of two positions: left or right. These concepts are unambiguously represented with arrows across cultures. (The pump is more complicated but still recognizable.) Also, the information is static: the cap won’t change sides between fuelings.</p>

<p>This is a very constrained set of requirements. But compare Moylan’s solution with many AI products today, especially those with chat interfaces. Rather than a constrained structure within an expectable construct (dashboard → fuel gauge → [left|right] arrow), chats offer completely open-ended interfaces. This may be appropriate for systems that require extraordinary flexibility, but it’s overkill otherwise. And while flexibility adds power, it opens the door to complexity and errors. (Consider the risk of hallucinations!)</p>

<p>Chat interfaces also have higher latency than more structured UIs. Conversational interfaces require explicit instructions — either spoken or typed — before they can provide utility, and getting there may take multiple rounds. To put it bluntly: for many tasks, <a href="https://jarango.com/2023/05/18/thinking-with-words/">chat UIs are inefficient</a>. Compare this with the low latency inherent in Moylan’s “ambient” approach: just glance and turn the wheel.</p>

<p>Finally, many AI-powered products call too much attention to themselves. The value to the user (e.g., avoiding the inconvenience/embarrassment of pulling in to the wrong side of the pump) takes a back seat (sorry!) to the fact the product now “has AI.” Lacking good system models, users can only guess at what pressing the pervasive “sparklies” and “copilot” buttons might do. Many users recoil when products add complexity through seemingly gratuitous features.</p>

<h2 id="what-can-we-learn-from-this">What Can We Learn From This?</h2>

<p>I’m not poo-pooing chat UIs. They’re appropriate for some use cases. But they’re also overused. I suspect there are two reasons:</p>

<ul>
  <li><strong>Chat = AI</strong>. Many people associate chat UIs with AI, so they expect conversational interactions.</li>
  <li><strong>Laziness</strong>. It’s easier to graft a chatbot onto a product than redesign its IA to accommodate new capabilities.</li>
</ul>

<p>Both reasons are bad. If you believe your system’s value will come from making it more “intelligent,” it’ll likely turn out overwrought. Users get most value from systems that help them effectively and efficiently and otherwise get out of the way. They don’t want to “AI all the things”; they just want the <em>right</em> information <em>when</em> and <em>where</em> they need it. Everything else is noise.</p>

<p>Rather than ask, “how might we add AI to this system?,” consider the following questions:</p>

<ul>
  <li>What is the person trying to do?</li>
  <li>Do they understand the system?</li>
  <li>What’s keeping them from choosing skillfully?</li>
  <li>What questions do they have? Which come up repeatedly?</li>
  <li>Which structural distinctions are ambiguous?</li>
</ul>

<p>These are information architecture questions. AI might play an important role in answering them — even in real time, as the user interacts with the system. But it won’t happen by simply “adding AI.” Instead, you must understand the user’s needs as they work with the system. Then, you can determine where to judiciously apply AI.</p>

<p>Also, rather than an open-ended UI (such as a chat,) consider whether your system might be better served by a UI that offers clear distinctions and affordances. Buttons and menus don’t just give users means to act: they also help them understand the system. A thoughtful IA will make your AI-powered product easier to use — and likely do it more cheaply and elegantly than a chat UI.</p>

<h2 id="closing-thoughts">Closing Thoughts</h2>

<p>I doubt Jim Moylan thought of himself as an IA. But that doesn’t matter. We can study manifestations of an area of practice retrospectively even if they weren’t explicitly produced as such. (For example, we think of many ancient buildings as “architecture” even though their designers didn’t think of themselves as architects in our current sense.)</p>

<p>As the practice of designing AI-powered systems matures, I expect we’ll move away from general-purpose interfaces to systems that use AI on the back end while presenting a more traditional UX. There’s room for delight and intelligence in simple, less open-ended systems. The Moylan arrow is an excellent example.</p>]]></content>
    <author>
      <name>Jorge Arango</name>
    </author>
    
    <category term="Artificial Intelligence" /><category term="Information Architecture" /><category term="Design &amp; Architecture" />
    <summary type="html"><![CDATA[How traditional structural principles can inform the design of AI-powered products and services.]]></summary>
    
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="/assets/images/2026/01/corolla-moylan-arrow.jpg" />
    <media:content medium="image" url="/assets/images/2026/01/corolla-moylan-arrow.jpg" xmlns:media="http://search.yahoo.com/mrss/" />
  </entry>
  
  <entry>
    <title type="html">My Theme for 2026</title>
    <link href="https://jarango.com/2026/01/01/my-theme-for-2026-agency/" rel="alternate" type="text/html" title="My Theme for 2026" />
    <published>2026-01-01T00:00:00-08:00</published>
    <updated>2026-01-01T00:00:00-08:00</updated>
    <id>https://jarango.com/2026/01/01/my-theme-for-2026-agency</id>
    <content type="html" xml:base="https://jarango.com/2026/01/01/my-theme-for-2026-agency/"><![CDATA[<p>I’m starting off 2026 with a post that likely won’t interest you at all. I’m only sharing it to 1) think it through and 2) make myself accountable. It’s inspired by this video:</p>

<div class="embed-container youtube-wrapper">
  <iframe src="https://www.youtube.com/embed/Q10H5RA3eCA" allowfullscreen=""></iframe>
</div>

<p>Among other things, Pink suggests adopting a one-word theme for the year: “Not a sentence. Not a paragraph. A single word that captures the kind of year you want and the kind of person you want to become.” The point: focus. Rather than squander time and attention on whatever comes along, the word reminds you to align efforts in a coherent direction.</p>

<p>I love it. So here’s my theme word for 2026: <em>Agency</em>.</p>

<p>You may cringe. The word “agent” picked up a bit of a stench in 2025. But agency is more important than ever — on several levels. I’ll unpack a few of them here to give you a sense of how I expect this will unfold for me. That said, I’ll keep it high level. (I’m keeping my SMART goals private.)</p>

<h2 id="which-agency">Which Agency?</h2>

<p>My Mac’s dictionary provides three definitions for “agency.” Here’s a summary:</p>

<ol>
  <li>An organization that provides a particular service, as in an advertising agency.</li>
  <li>An action or intervention meant to produce a particular effect, as in supernatural agency.</li>
  <li>“The ability to make decisions and act independently.”</li>
</ol>

<p>That last one is verbatim and the sense in which I mean it. Let’s unpack it.</p>

<ul>
  <li>
    <p><strong>Ability</strong>: freedom entailed by structural compatibility with proposed courses of action plus no external fetters. (E.g., I can walk around my neighborhood but can’t take flight — not because anyone forbids it, but because I lack wings.)</p>
  </li>
  <li>
    <p><strong>Decide and act</strong>: our context and abilities imply latent possibilities; we must be able to choose among them and change our behavior accordingly.</p>
  </li>
  <li>
    <p><strong>Independently</strong>: we must be able to decide and act within our abilities and toward desired goals <em>without coercion</em>.</p>
  </li>
</ul>

<p>This is the heart of the matter: it must be through our own volition. As long as we can choose freely, our choice can include going along with others. But once we’re forced to act in particular ways either through structural constraints or physical threats, we’re no longer independent.</p>

<p>Hopefully, it’s becoming clear why I think agency is so important in 2026. More than ever, technological and social forces are changing our abilities to decide and act. Some of these changes are positive — i.e., they increase our scope. But many erode our independence.</p>

<p>Personally and professionally, I want to increase <em>human</em> agency — including my own. This doesn’t mean a free-for-all. We live in societies and must strive for their well-being and longevity. And that sometimes calls for sacrificing personal desires. (Again, without coercion!)</p>

<p>The drive toward more agency for myself and my fellow humans will inform all my efforts this year. But there are three areas in particular I expect to be affected: the subject of my work, my information flows, and how I deliver value. Let’s zoom in.</p>

<h2 id="agency-over-artificial-systems">Agency Over Artificial Systems</h2>

<p>There was much hype in 2025 about AI agents. While some people <a href="https://simonwillison.net/2025/Sep/18/agents/">wrote thoughtfully</a> about the subject, there was also much vacuous marketing. Lots of products overpromised and underdelivered: many “agents” were actually fairly standard workflows enhanced with LLMs.</p>

<p>It remains to be seen whether 2026 will deliver on the promise of truly agentic artificial systems. That said, I’m homing in on two questions:</p>

<ol>
  <li>Where are agentic systems feasible, valuable, and desirable?</li>
  <li>How can we preserve human agency where it matters most?</li>
</ol>

<p>The two are obviously related.</p>

<p>I’m not convinced of the desirability or feasibility of fully autonomous systems in many areas. In 2026, most <em>actually</em> useful solutions will consist of fairly standard workflows enhanced with LLMs. (I call these “assistants” rather than “agents.”)</p>

<p>But if we’re to develop truly autonomous agents — and many people are trying — then we must be mindful about how and where we want them to show up in our lives and work. This will be a focus in my consulting and writing this year, especially because…</p>

<h2 id="agency-over-information-environments">Agency Over Information Environments</h2>

<p>Information affects our ability to choose and act. Our mental models consist of what we know and how we understand things — including the contexts in which we experience them. New facts can change your mind, leading you to act differently.</p>

<p>Our information environments are changing. LLMs drive the cost of slop and propaganda to zero, so we can expect more this year. I’m grappling with the implications as a consumer, producer, and designer of information systems.</p>

<p>On the consumer side, I plan to favor older, more trustworthy sources and reduce my reliance on social media. On the producer side, I will focus more on my own platform (e.g., this site, my newsletter) also at the expense of social media.</p>

<p>Design-wise, I will continue exploring the implications of AI for information architecture and information architecture for AI, but with a stronger sense of purpose. The goal in either case is increasing <em>human</em> agency.</p>

<p>BTW, I mean human <em>individuals</em>. Considering how central information is to our lives, we’ve become overly reliant on the services of too few organizations. Personally, I have too many eggs in others’ baskets. This year, I’ll take steps to reclaim my informational agency — and help others do the same.</p>

<h2 id="agency-over-income">Agency Over Income</h2>

<p>Finally, I’ll share a more personal area of focus. This year, I will diversify my revenue streams to increase my agency over my income.</p>

<p>It’s been eight years since I started <a href="/services">consulting independently</a>. Fortunately, I’ve been busy much of that time. But some years have been better than others. I’ve done <a href="/presentations">workshops</a>, written <a href="/writings">two books</a>, and launched an <a href="https://store.jarango.com/ia-wtf">online course</a>, but most of my revenue has come from hourly-based projects.</p>

<p>There are upsides to these kinds of engagements. For one thing, they’re easy to quantify and manage for everyone involved. For another, they allow for flexible, responsive delivery. Charging hourly has helped me develop excellent partnerships and generate lots of value.</p>

<p>But hourly pricing also reduces agency. Freelancers often can’t decide when we’re “on the clock.” If a client needs us, we devote as much of our time to them as we can — often at the expense of other important tasks. (E.g., marketing.) The result: lumpiness, unpredictability, and stress.</p>

<p>This year, I’m aiming for a healthier mix that includes more project-based (or ideally, value-based) engagements, in-house teaching, and products. At this point, I only have outlines of what these might be, but this post is intended to be more directional than specific.</p>

<h2 id="bootstrapping-agency">Bootstrapping Agency</h2>

<p>And that’s why I wanted to share it with you. This isn’t “here’s what you should do” (although that Pink video is worth your time.) Rather, it’s “here’s what I’m focusing on.” I shared it mostly to keep me accountable — but also because public writing is a powerful way to think and learn. And that, too, is an area where our agency is currently threatened.</p>

<p>I’ll circle back at the end of the year to see how this experiment turned out. But for now, consider this public declaration a first step toward bootstrapping greater agency. While it’s my effort, I encourage you to think about how you might grow your own agency — especially now, when there are so many opportunities to trade it for convenience.</p>]]></content>
    <author>
      <name>Jorge Arango</name>
    </author>
    
    <category term="Ethics &amp; Values" /><category term="Artificial Intelligence" />
    <summary type="html"><![CDATA[Making myself accountable by declaring my focus for the year.]]></summary>
    
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://jarango.com/assets/images/jarango-title-card-ph.jpg" />
    <media:content medium="image" url="https://jarango.com/assets/images/jarango-title-card-ph.jpg" xmlns:media="http://search.yahoo.com/mrss/" />
  </entry>
  
  <entry>
    <title type="html">Traction Heroes Ep. 26: The Highest Goal</title>
    <link href="https://jarango.com/2025/12/29/traction-heroes-ep-26-the-highest-goal/" rel="alternate" type="text/html" title="Traction Heroes Ep. 26: The Highest Goal" />
    <published>2025-12-29T00:00:00-08:00</published>
    <updated>2025-12-29T00:00:00-08:00</updated>
    <id>https://jarango.com/2025/12/29/traction-heroes-ep-26-the-highest-goal</id>
    <content type="html" xml:base="https://jarango.com/2025/12/29/traction-heroes-ep-26-the-highest-goal/"><![CDATA[<div class="embed-container youtube-wrapper">
  <iframe src="https://www.youtube.com/embed/whcWcJqTEmU" allowfullscreen=""></iframe>
</div>

<p>As 2025 wanes, I’m revisiting my values and objectives. What do I truly care about now? What’s my work in service to? How can I focus my time and attention so I can get closer to where I <em>actually</em> want to be?</p>

<p><a href="https://www.amazon.com/Highest-Goal-Secret-Sustains-Moment-ebook/dp/B00O99HUYC/"><em>The Highest Goal</em></a>, a book by Stanford professor Michael Ray, has helped me work through such questions several times in my life. In <a href="https://www.tractionheroes.com/2439976/episodes/18418998-the-highest-goal">episode 26</a> of the <a href="https://www.tractionheroes.com/"><em>Traction Heroes</em> podcast</a>, I read Harry a passage that gives a taste of the book:</p>

<blockquote>
  <p>Society’s fundamental assumptions too often lead to negative outcomes. We see the evidence in the growing gap between the haves and have-nots; increasing violence; endemic poverty and starvation; environmental degradation; the breakdown of values, integrity, communication, and community; a sense of unhappiness and fear; and poor health among people in even the richest nations.</p>

  <p>Many of us feel an urgent need to change the status quo and contribute to a new positive direction. The world needs us all to contribute our best. But how can any individual affect what seems to be a massive concatenation of forces and, at the same time, face the challenges of his or her life?</p>

  <p>This book answers that question. In this time of global transformation, we must act creatively and courageously from our deepest knowing and compassion. Only if we are living in service of the highest goal, in whatever way we experience it, can we meet the challenges of our times and fashion lives that work. And only if we discover ways of translating this highest goal into a new way of living, can it be practical and expansive for all.</p>
</blockquote>

<p>Identifying your highest goal isn’t about magical thinking. There are no “law of attraction” vibes here. Instead, it’s about aligning your efforts with what you care about most deeply. Coherence opens doors — and gives you the energy to traverse them.</p>

<p><a href="https://www.tractionheroes.com/2439976/episodes/18418998-the-highest-goal"><em>Traction Heroes episode 26: The Highest Goal</em></a></p>]]></content>
    <author>
      <name>Jorge Arango</name>
    </author>
    
    <category term="Podcast" /><category term="Values" /><category term="Leadership" />
    <summary type="html"><![CDATA[Reflections on a book that has helped me align my efforts with my values.]]></summary>
    
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://jarango.com/assets/images/jarango-title-card-ph.jpg" />
    <media:content medium="image" url="https://jarango.com/assets/images/jarango-title-card-ph.jpg" xmlns:media="http://search.yahoo.com/mrss/" />
  </entry>
  
  <entry>
    <title type="html">Humanities Crash Course Week 52: Growing Up</title>
    <link href="https://jarango.com/2025/12/28/humanities-crash-course-week-52/" rel="alternate" type="text/html" title="Humanities Crash Course Week 52: Growing Up" />
    <published>2025-12-28T00:00:00-08:00</published>
    <updated>2025-12-28T00:00:00-08:00</updated>
    <id>https://jarango.com/2025/12/28/humanities-crash-course-week-52</id>
    <content type="html" xml:base="https://jarango.com/2025/12/28/humanities-crash-course-week-52/"><![CDATA[<p>The final week of the year is also the last of my <a href="https://jarango.com/2025/01/05/a-crash-course-in-the-humanities/">humanities crash course</a>. For this week, <a href="https://www.honest-broker.com/p/my-12-month-immersive-course-in-humanitiesthe">Gioia recommended</a> assorted texts that touched on how we derive meaning and motivation from our responsibilities to ourselves and others. I also watched a classic film by a beloved and recently departed director.</p>

<h2 id="readings">Readings</h2>

<p>Most of this week’s texts were very short — a blessing given my family commitments. Here’s a brief sketch of each:</p>

<ul>
  <li><strong>The White Album</strong>, the opening essay of <a href="https://en.wikipedia.org/wiki/Joan_Didion">Joan Didion</a>’s collection <a href="https://en.wikipedia.org/wiki/The_White_Album_(book)"><em>The White Album</em></a> (1979). An impressionistic first-person account of the closing years of the 1960s. I sensed intellectual and moral confusion — a portrait of a culture unmoored and drifting.</li>
  <li><strong>Bloodchild</strong>, a science fiction novella by <a href="https://en.wikipedia.org/wiki/Octavia_E._Butler">Octavia Butler</a> that leads her collection <a href="https://en.wikipedia.org/wiki/Bloodchild_and_Other_Stories"><em>Bloodchild and Other Stories</em></a> (1995). The narrator is a human on an alien world. The story details humans’ symbiotic relationship with a native species that uses them to incubate its offspring.</li>
  <li><strong>The Things They Carried</strong>, a short story by <a href="https://en.wikipedia.org/wiki/Tim_O%27Brien_(author)">Tim O’Brien</a> that leads his <a href="https://en.wikipedia.org/wiki/The_Things_They_Carried">1990 collection of the same name</a>. A portrait of a platoon during the Vietnam war: their lives, loves, fears, etc. “They carried all they could bear, and then some, including a silent awe for the terrible power of the things they carried.”</li>
  <li><strong>The Awakening of My Interest in Advanced Tax</strong>, chapter 22 of <a href="https://en.wikipedia.org/wiki/David_Foster_Wallace">David Foster Wallace</a>’s posthumous novel <a href="https://en.wikipedia.org/wiki/The_Pale_King"><em>The Pale King</em></a> (2011). The narrator describes his transition from college “wastoid” to the Service (a.k.a. the IRS) during the 1970s. His reflections cover his drug use (including television), his parents’ divorce, his mother’s descent into mental illness, his father’s accidental death, and finding purpose through a chance encounter.</li>
  <li>Chapter five of <a href="https://en.wikipedia.org/wiki/The_Big_Book_(Alcoholics_Anonymous)"><em>The Big Book of Alcoholics Anonymous</em></a> (1939), which describes the AA system. Besides laying out the famous 12-step program, the text emphasizes that alcoholics cannot get better on their own. Instead, they must surrender to higher powers: God and society.</li>
</ul>

<h2 id="audiovisual">Audiovisual</h2>

<p><strong>Music:</strong> The work of <a href="https://en.wikipedia.org/wiki/Stephen_Sondheim">Stephen Sondheim</a>, an influential American composer and lyricist of show tunes. I was most familiar with <a href="https://en.wikipedia.org/wiki/Into_the_Woods"><em>Into the Woods</em></a>, but had also heard selections from <a href="https://en.wikipedia.org/wiki/West_Side_Story"><em>West Side Story</em></a>.</p>

<div class="embed-container youtube-wrapper">
  <iframe src="https://www.youtube.com/embed/kqCsQCsinK4" allowfullscreen=""></iframe>
</div>

<p><strong>Arts:</strong> The paintings of <a href="https://en.wikipedia.org/wiki/Leonardo_da_Vinci">Leonardo da Vinci</a>. I didn’t spend any time this week with Leonardo, since I’ve had the privilege of seeing many of his paintings in person, including a second viewing of the Mona Lisa earlier this year.</p>

<figure class="image">
  <img src="/assets/images/2025/12/leonardo-mona-lisa.jpg" width="100%" alt="Leonardo’s painting of the Mona Lisa: a portrait of a woman with a serene expression sitting with folded hands, wearing a dark dress against a blurred landscape background." />
  <figcaption><p><em>Mona Lisa</em> by Leonardo da Vinci via <a href="https://commons.wikimedia.org/w/index.php?curid=15442524">Wikimedia</a></p>
</figcaption>
</figure>

<p><strong>Cinema:</strong> <a href="https://en.wikipedia.org/wiki/Rob_Reiner">Rob Reiner</a>’s <a href="https://en.wikipedia.org/wiki/Stand_by_Me_(film)">STAND BY ME</a> (1986). I’d seen several other Reiner movies, but not this one. It seemed appropriate given the tragic deaths of the director and his wife earlier this month.</p>

<div class="embed-container youtube-wrapper">
  <iframe src="https://www.youtube.com/embed/jaiZ6ZQoO-Y" allowfullscreen=""></iframe>
</div>

<p>The film is a coming-of-age story based on a Stephen King novella. Set in 1959, it tells of four preteen boys who set out on an overnight adventure to view the body of another boy who’s been accidentally killed by a train. Despite the grim topic, the film is a warm chronicle of childhood’s end.</p>

<h2 id="reflection">Reflection</h2>

<p>This week’s works point to maturity: grappling with becoming a functional adult member of a society. We thrive in stable contexts. That implies some degree of social order. Liberty isn’t freedom from responsibility; it’s freedom <em>through</em> responsibility — especially our responsibilities to each other.</p>

<p>The “flower power” generation missed this. I sensed in Didion’s essay exhaustion and disappointment at where the sixties ended: self-involvement, confusion, lack of direction and meaning, etc. The Manson murders and the Vietnam experience were signs that something had gone wrong.</p>

<p>DFW’s ‘wastoid’ is a natural consequence. Meaning comes through responsibility — and a life devoid of meaning also lacks motivation. His redemption comes through something missing in his life: an authority figure. The substitute Advanced Tax instructor represents social order. The wastoid isn’t so much drawn to the teacher’s worldview as to his M.O. — his demeanor <em>projects</em> confidence and stability. That may be enough.</p>

<p>The four friends’ journey in STAND BY ME is also about maturing. They set off on what they expect will be a cool adventure. Their motivation is fame: their discovery will make them local heroes. Confronting the actual corpse disabuses them of that childish notion.</p>

<p>Treat others as you’d like to be treated. Nobody should be used as a means to an end. There, but for the grace of God, go I. Three sentences that recap many of the lessons from this journey through the humanities.</p>

<h2 id="notes-on-note-taking">Notes on Note-taking</h2>

<p>I went mostly old school for this last week of the course. The readings were simple enough that I didn’t resort to LLMs at all. I did read the Wikipedia entries for the readings, but they weren’t strictly necessary.</p>

<p>As with previous weeks, I captured individual notes for the readings and the movie in Obsidian. Writing helps me think about what I’ve read/seen. Knowing that some of these writings will be public changes their tone a bit, but not their substance.</p>

<p>The notes themselves are not an end. I might revisit some of them over time, but most will lie fallow. The goal hasn’t been to facilitate later recall, but to process ideas in the moment. I expect to write more about this distinction over time.</p>

<h2 id="up-next">Up Next</h2>

<p>I’ve reached the end of Gioia’s syllabus. Of course, this isn’t the end of my self-education: I will continue reading and experiencing other works. But this is also the end of this series of posts. Thanks for reading — I hope this was as valuable to you as it was to me.</p>]]></content>
    <author>
      <name>Jorge Arango</name>
    </author>
    
    <category term="Arts &amp; Humanities" /><category term="Personal Knowledge Management" />
    <summary type="html"><![CDATA[Readings about responsibility to self and others at the end of this year-long course.]]></summary>
    
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="/assets/images/2025/12/leonardo-adoration-of-the-magi.jpg" />
    <media:content medium="image" url="/assets/images/2025/12/leonardo-adoration-of-the-magi.jpg" xmlns:media="http://search.yahoo.com/mrss/" />
  </entry>
  
  <entry>
    <title type="html">Brave New World</title>
    <link href="https://jarango.com/readings/brave-new-world/" rel="alternate" type="text/html" title="Brave New World" />
    <published>2025-12-21T00:00:00-08:00</published>
    <updated>2025-12-21T00:00:00-08:00</updated>
    <id>https://jarango.com/readings/brave-new-world</id>
    <content type="html" xml:base="https://jarango.com/readings/brave-new-world/"><![CDATA[<p><em>Brave New World</em> describes a scenario where a progressive industrialized world state covers large swaths of the Earth. Its primary goal: happiness and stability through social control. Ideal, right? No, it’s a nightmare.</p>

<p>This new “better” world has very different mores from our own. The word “mother” is taboo: human reproduction by natural means has been replaced with a scientifically controlled process. Embryos are manipulated to generate different castes: Alphas, Betas, Gammas, Deltas, and Epsilons. Some are reared for leadership, others for menial labor. The latter are intentionally stunted both physically and intellectually.</p>

<p>All are conditioned through subliminal messaging. The goal: accepting their lot in life and not striving for the alternatives, which would cause strife. Biological needs are provided for. Entertainment is exclusively superficial: titillating and devoid of meaning. There are synthetic — new! improved! — versions of everything, from music to flour. All remaining unpleasantness is sanded off by casual (and frequent!) use of a powerful drug called <em>soma</em>.</p>

<p>Mindless consumption is pushed as a positive, since it drives industrial productivity: people are encouraged to dispose of goods rather than repair them. Religion has been replaced with a worship of progress through efficiency, centered on Henry Ford. The “civilized” world’s calendar now starts with Ford’s birth: the novel takes place in AF (After Ford) 632. All references to “Our Lord” have been replaced with “Our Ford.”</p>

<p>Gender and sexual norms are also very different from those of 1932. Women and men enjoy greater equality, and promiscuity is encouraged. There are no lifelong pairings: “everyone belongs to everyone.” But individuality is also discouraged: people are conditioned to loathe loneliness. Religious rituals have been replaced with communal gatherings that encourage ego dissolution.</p>

<p>I’m already deep into this description without mentioning a plot. That’s not an accident. While the novel does have one, it’s mostly in service to sketching the scenario. John, the novel’s ostensible protagonist, is an outcast: born (the old-fashioned way) to Linda, a citizen of the World State who was accidentally left behind on a Native American reservation.</p>

<p>As a result, he’s been raised with many of the foibles of the old world: a melange of religious superstitions, suboptimal diet and hygiene, and, critically, Shakespeare. (The book’s title — a phrase often repeated in the novel — comes from <em>The Tempest</em>.) Two citizens of the World State — Bernard Marx and Lenina Crowne — bring John to London. Through the ensuing culture clash, we recoil at the obvious failings of this dehumanized society.</p>
    <author>
      <name>Jorge Arango</name>
    </author>
    
    <category term="Technology &amp; Innovation" /><category term="Other" />
    <summary type="html"><![CDATA[A classic science fiction novel that's sadly relevant today.]]></summary>
    
    <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://jarango.com/assets/images/jarango-title-card-ph.jpg" />
    <media:content medium="image" url="https://jarango.com/assets/images/jarango-title-card-ph.jpg" xmlns:media="http://search.yahoo.com/mrss/" />
  </entry>
  
</feed>
