Using AI Is Design Work

On judgment, materials, and thinking well under uncertainty.

6 min read · Updated January 10, 2026

It’s hard to avoid the prevailing story about AI: that it’s making us dumber.

We hear warnings that critical thinking is eroding, cognition is atrophying, students are cheating, and knowledge work is being hollowed out. AI, we’re told, is a shortcut around thinking itself.

There’s truth in this concern—and research to support it. But I keep returning to the sense that these critiques are aimed at the wrong thing.

The problem isn’t that AI diminishes thinking. The problem is how we’re using it. AI is often framed as a tool. But using AI well isn’t really tool use at all. It’s design work.

That distinction matters. Design, as a discipline, has long been concerned with exactly the conditions generative AI brings into focus: uncertainty, ambiguity, incomplete information, fluent but misleading artifacts, and tools that actively shape the work rather than simply carry it out.

Designers aren’t immune to poor uses of AI. But they tend to be better prepared for thoughtful use—not because they’re more creative or more ethical, but because their way of thinking aligns closely with what working with generative systems actually demands.

Designers don’t seek answers. They frame situations.

In much of today’s discourse, AI is treated like a more capable calculator: you ask a question, it gives you an answer. If that’s the mental model, then cognitive offloading is almost inevitable.

But that isn’t how designers tend to work.

Designers begin not with solutions, but with framing. They expect problems to be ill-defined. They know that much of the work lies in understanding what the question even is. And they assume that the question itself will shift as constraints, values, and trade-offs come into view.

When designers use AI, they bring this stance with them. Prompts aren’t requests for truth; they’re framing moves. Outputs aren’t conclusions; they’re provisional artifacts—things to react to, test against, and reshape.

This reflects a long-standing view of design not as narrow problem-solving or optimization, but as sensemaking under uncertainty. It’s work in which judgment can’t be delegated and responsibility can’t be automated.

Using AI well requires exactly that.

Working with AI is a conversation with a material

Design has always involved a conversation with materials—sketches, prototypes, diagrams, language itself. Designers act, the material responds, and understanding takes shape through that exchange.

As Harold Nelson and Erik Stolterman put it, “materials are not passive in the process of becoming real.” Materials push back. They shape what can be seen, thought, and made. They introduce resistance, affordances, and surprise.

Donald Schön observed something similar when he described designers treating materials as design partners—things that “talk back,” reflecting constraints and possibilities the designer couldn’t fully anticipate.

Generative AI introduces a new kind of responsive material into this familiar process.

Like other design materials, it doesn’t simply execute instructions. It responds—sometimes fluently, sometimes oddly, sometimes misleadingly. It surfaces assumptions embedded in our framing. It makes implicit ideas visible. It opens paths we might not have noticed on our own.

Used well, AI becomes something to think with, not something that thinks for us.

The risk appears when the conversation stops—when outputs are treated as conclusions rather than material. Fluency begins to stand in for authority. Closure arrives early. Thinking quietly recedes.

A designer’s stance resists that move. Fluency doesn’t end the dialogue; it invites further attention.

Thinking has never lived only in our heads

What designers intuitively practice, cognitive science has been describing for decades.

Research on distributed cognition, pioneered by cognitive scientists such as Edwin Hutchins, shows that many forms of thinking unfold across people, tools, representations, and environments. No single individual fully contains the cognitive process. Teams navigating ships, scientists working in labs, and people solving everyday problems all think through interaction with artifacts and systems.

More recently, this idea has entered wider conversation through the notion of the extended mind, introduced by philosophers Andy Clark and David Chalmers. Humans think with their bodies, their surroundings, and their relationships. Notebooks, whiteboards, conversations, and movement aren’t shortcuts; they’re cognitive infrastructure.

Designers have long treated materials this way: as collaborators in thinking, not threats to human agency. In that sense, AI isn’t a break from how thinking works. It’s an extension of something we’ve always done—often without naming it.

That doesn’t make AI harmless. But it does suggest that the danger isn’t collaboration. It’s uncritical delegation.

AI doesn’t replace judgment. It reveals whether judgment is present.

Generative AI is exceptionally good at producing plausible language. That’s where much of the unease comes from.

When AI outputs are treated as answers rather than artifacts, judgment tends to fade from view. Decisions happen by default. Trade-offs remain unexplored. Closure feels efficient—but it’s often premature.

When AI outputs are treated as design material, something else happens. Possibilities widen. Assumptions become visible. The human user has to choose.

What matters here isn’t intelligence or technical fluency. It’s epistemic posture.

People who expect certainty, speed, and closure will tend to use AI reductively. People who expect ambiguity, iteration, and provisionality are more likely to use it generatively.

Designers already inhabit the second posture. That’s why many report that AI sharpens their thinking rather than dulling it. Used as part of a broader cognitive system—and within clear boundaries—AI can support judgment rather than replace it.

From tool literacy to design literacy

This is the deeper shift AI asks of us.

If we treat AI primarily as a tool for producing answers efficiently, concerns about cognitive decline make sense. In that frame, fluency substitutes for understanding, and judgment quietly drops away.

But that frame is incomplete.

Using AI well is design work. It draws on capacities design has always cultivated: framing situations, working with ambiguity, engaging provisional artifacts, exercising judgment, and staying responsible for meaning. These aren’t soft skills. They’re the disciplines required to think well in environments where language, options, and possibilities are generated at scale.

Seen this way, AI doesn’t lower the bar for thinking. It raises it.

Generative systems don’t eliminate judgment; they make its presence—or absence—visible. They don’t replace sensemaking; they reveal whether we’re willing to do it. When outputs are treated as material rather than conclusions—when fluency invites attention instead of deference—thinking doesn’t atrophy. It deepens.

The anxiety surrounding AI often assumes intelligence is fragile and easily outsourced. Design offers a different view: intelligence is cultivated through interaction—with materials that respond and artifacts that make our thinking visible.

AI doesn’t undermine that tradition. It extends it.

The question isn’t whether AI will think for us.
It’s whether we’re willing to meet a tool that calls for judgment with a mindset prepared to exercise it.

If we frame AI less as something to master and more as something to design with, we don’t lose our capacity to think.

We strengthen it.


Author’s note: This essay was written with the help of generative AI, used as a thinking partner to explore framings, surface assumptions, and refine language. AI-generated outputs were treated as provisional material, not authoritative conclusions; all judgment and final decisions remain my own.