Astronomer and astronaut
Issue 255: Why software builders need to go to space in the era of AI
I've been on the Big Island of Hawai'i for the past two weeks, taking a week off and working remotely the other week. During the quiet evenings, the family started watching the first Jurassic Park movie—a marvel to this day. We ended up watching the next two: The Lost World (the gymnastics kicking a raptor one), and Jurassic Park 3 (Yes, the one with the raptor in the dream yelling, "Alan!").
As much as I love Ian Malcolm, played by the legendary Jeff Goldblum, Sam Neill's Dr. Alan Grant has always been my favorite character. There is a scene in JP3 that got me thinking. Grant is talking to Billy Brennan, a young paleontologist who works with him, and reflecting on his experience on Isla Nublar, the island of the original Jurassic Park:
“You know, if you wanted to see dinosaurs, you should’ve stuck with me. Because I’m the only one who saw them alive. You see, the other paleontologists – they study bones, fossils…Me? I was with them. I was there. There’s a difference between an astronomer and an astronaut.”
The “astronomer vs. astronaut” metaphor captures the gap between studying something from a distance and experiencing it directly. In creative work, it’s the difference between knowing the theory and living the execution—between reading about AI and shipping with it.
The risk of astronomy
Let’s be clear: astronomy is a crucial practice—don’t get it twisted. Astronauts would not exist without the research and systematic study that precedes them. The risk in frontier moments is staying a distant strategist in a field that is about to completely change, as the constructs of Computer Science and Human-Computer Interaction are changing now.
The cost of practical making with AI is falling, the way the cost of going to space is falling. As it drops, the case for spending your time researching and speculating weakens; it becomes cheaper to go into the field and build.
Designing AI with AI
Let’s look at areas where you can apply AI within your design process instead of just talking about it. I’ll continue to use the astronomer and astronaut metaphor to describe some of these concepts.
The astronomer reads about AI design patterns, human-AI interaction models, and “best practices.” The astronaut actually prototypes with LLMs, tests hallucination boundaries, tweaks prompt chains, and sees how users react to imperfect outputs. Designing for AI without experiencing firsthand how flaky or powerful it is leads to mismatched expectations and poor UX.
The astronomer knows what prompt engineering is, and maybe follows OpenAI’s guide. The astronaut experiments with few-shot prompts, chain-of-thought scaffolding, and custom system instructions in real tools. Prompts aren’t static—they’re product features. You only learn their fragility, power, and nuance by shipping them.
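To make “prompts are product features” concrete, here is a minimal sketch of a few-shot prompt treated as versioned, testable code. The examples, tone rules, and function names are hypothetical, not from any specific product:

```python
# A minimal sketch of a few-shot prompt as a product feature.
# The examples and instructions below are hypothetical; the point is
# that the prompt is code you can version, review, and test.

FEW_SHOT_EXAMPLES = [
    ("Your payment didn't go through.",
     "We couldn't process your payment. Want to try another card?"),
    ("Error 502",
     "Something went wrong on our end. Please try again in a minute."),
]

SYSTEM_INSTRUCTIONS = (
    "You rewrite raw error messages into friendly, plain-language UX copy. "
    "Keep it under 20 words and never blame the user."
)

def build_prompt(raw_message: str) -> str:
    """Assemble system instructions + few-shot examples + the new input."""
    lines = [SYSTEM_INSTRUCTIONS, ""]
    for raw, rewritten in FEW_SHOT_EXAMPLES:
        lines.append(f"Raw: {raw}")
        lines.append(f"Rewritten: {rewritten}")
        lines.append("")
    lines.append(f"Raw: {raw_message}")
    lines.append("Rewritten:")
    return "\n".join(lines)

prompt = build_prompt("Error 404: resource not found")
```

Once the prompt lives in code like this, you can write tests that break when someone edits the examples, which is how you discover its fragility before users do.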
The astronomer studies benchmarks like MMLU or BLEU scores. The astronaut tests Claude vs. GPT-4 vs. open models in their product’s specific context (e.g., a UX writing tool or support chat). Benchmarks don’t predict performance in real-world edge cases. You discover that through living it.
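A product-specific eval can be very small. Here is a sketch of the idea, where the “models” are stub functions standing in for real API calls (Claude, GPT-4, an open model) and the cases are your own product scenarios rather than a public benchmark:

```python
# A minimal sketch of evaluating models in your product's own context.
# The two "models" are stubs standing in for real API calls.

def model_a(text: str) -> str:
    """Stand-in for one provider's response."""
    return text.strip().capitalize()

def model_b(text: str) -> str:
    """Stand-in for another provider's response."""
    return text.strip().upper()

# Product-specific cases: (input, pass/fail check), not generic benchmarks.
CASES = [
    ("hello there", lambda out: len(out) > 0 and out[0].isupper()),
    ("  refund my order  ", lambda out: "refund" in out.lower()),
]

def score(model) -> float:
    """Fraction of product-specific cases the model's output passes."""
    passed = sum(1 for text, check in CASES if check(model(text)))
    return passed / len(CASES)

scores = {"model_a": score(model_a), "model_b": score(model_b)}
```

The interesting part is never the harness; it’s the cases, because they encode the edge cases only your product sees.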
The astronomer reads listicles of “Top 10 AI tools for designers.” The astronaut builds a real workflow with those tools (e.g., integrating Figma plugins, using Diagram AI, or making voice prototypes with ElevenLabs). You only learn the friction, the breakthroughs, and the aha moments by stitching tools together yourself.
The astronomer reads about AI ethics and alignment. The astronaut has had to make real product calls around user transparency, edge case hallucinations, and responsible disclosures. It’s easy to talk ethics until you ship a feature that hallucinates sensitive content.
The astronomer talks about “real-time AI” as an idea. The astronaut deals with slow response times, context window issues, and API rate limits. You only understand how these things break your UX when you’ve tried to build with them.
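One of the first things you hit when you build for real is the rate limit. A common way to absorb it is exponential backoff with jitter; this is a hedged sketch where `flaky_call` stands in for a real model API that sometimes returns a 429:

```python
import random
import time

# A minimal sketch of handling API rate limits with exponential backoff.
# `flaky_call` stands in for a real model API that sometimes returns 429.

class RateLimitError(Exception):
    pass

def call_with_backoff(fn, max_retries: int = 5, base_delay: float = 0.01):
    """Retry fn on rate-limit errors, doubling the wait each attempt."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            # Jitter avoids synchronized retries from many clients.
            time.sleep(base_delay * (2 ** attempt) + random.random() * base_delay)

attempts = {"n": 0}

def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:  # fail the first two calls
        raise RateLimitError("429 Too Many Requests")
    return "ok"

result = call_with_backoff(flaky_call)
```

You only learn how long users will actually wait for that retry loop, and when to give up and show a fallback UI, by shipping it.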
The astronomer believes everything should be AI-enhanced. The astronaut has built enough to know when rules-based logic or traditional UI is better. Experience teaches restraint and better product judgment.
Recap
Astronauts learn by doing, and in the era of AI in software, we need to spend more time experimenting and making sense of the environments and materials we work with. Astronauts are the ones who bring rocks back from the moon to study, rather than observing from a distance.
Don’t just read the academic papers—run the models. Insight doesn’t come from citations. It comes from collisions. Don’t just study the prompt—test it until it breaks. Good design starts where the instructions fail. Don’t just plan the roadmap—ship into the unknown. Real traction lives past the first version. Don’t just interview users—be one. The best feedback loop is lived experience.
In my time working in AI, I’ve found you can tell the people who are having conversations about AI apart from the ones who use it and know it. As a friend once said, stop bookmarking links and start building.
Hyperlinks + notes
A collection of references for this post, updates, and weekly reads.
What a weird week to talk about astronomers without referring to astronomer.io.