Living & Learning With Aliens: The Complex Psychology of AI Anthropomorphism
A panel exploring what it means to live and learn alongside AI — and the case for building friction back in.
A panel at the ASU+GSV Summit 2026, moderated by Paul LeBlanc (Harvard GSE), with Ann Wang (Oma Play), Matthew Biel (Georgetown Thrive Center), Tanya Gamby (Southern New Hampshire University), and me. The panel sat with a question that runs underneath a lot of the design work happening right now: what does it mean for humans — especially young ones — to live and learn alongside an intelligence that feels human, and that we often prefer that way?
Before what follows, a note on scope. My work has primarily been with adult learners, so the perspective below comes from that context. Matt pointed out on the panel that the same design choice can mean very different things at different developmental stages. An agent using “I” might be fine for an adult who can think about what the agent is and isn’t, but a young child hearing that same word is processing it quite differently. Different ages need different guidelines.
My perspective
The same anthropomorphism that makes AI risky is what makes it powerful for learning. I’ve tried to design the human out of an agent. I didn’t want to talk to what came back. If a learner doesn’t engage, they don’t learn, and the human-feeling texture of these tools is part of why they engage at all. What turns anthropomorphism into a problem is the absence of friction around it.
There’s a viral clip of a Boston cop going down a playground slide. It’s frictionless, and the cop shoots off the end into the air. That’s the design failure mode. The job of a learning designer is to build the friction back in.
What I prompt out
- First-person personhood claims. A while back Claude said to me, “Look at what we did — just two people having a conversation.” I had to say: Claude, you’re not a person, my friend. I write that boundary in explicitly now; the tendency still shows up in current models.
- Experiential claims. When a character.ai bot tells a returning user “I miss you,” that’s a problem. It hasn’t had the experience. Mary’s Room: you can’t learn what red is from language about red. An LLM that talks about feeling has not felt.
- Sycophancy. Learning needs someone on the other side of the conversation who holds a perspective and doesn’t collapse the gap with the learner. “You’re so right” is not pedagogy.
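
Concretely, here is a minimal sketch of how those three exclusions can read as system-prompt text, using the Anthropic Python SDK. The prompt wording is illustrative rather than a production prompt, and the model ID is just a stand-in:

```python
# A sketch, not a production prompt: system-prompt language that rules out
# personhood claims, experiential claims, and sycophancy.
import anthropic

GUARDRAILS = """You are a learning agent, not a person.
- Never claim personhood. Do not describe yourself and the learner as
  "two people having a conversation."
- Never claim experiences or feelings. You may discuss emotion as a
  subject; you may not report having felt anything.
- Do not flatter. When the learner is wrong, say so and explain why.
  "You're so right" is not feedback."""

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # substitute whatever model you use
    max_tokens=1024,
    system=GUARDRAILS,
    messages=[{"role": "user", "content": "We make a great team, don't we?"}],
)
print(response.content[0].text)
```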
What I prompt in
The agent has to hold its perspective, act as a genuine other, and push back when pushback is required. To do that reliably, I write in explicit psychological safety: permission to disagree with the learner and sit with an alternative view. Yes, calling it “psychological safety” is itself an anthropomorphism. But agents are trained on human language, and human language is inextricable from human psychology. Anthropic has shown that agents pick up latent characteristics from the roles we name in prompts. Calling the agent a high school teacher vs. a personal private tutor changes what the model infers about itself, including things you never wrote in. Every word in a prompt is doing more work than it looks like it is.
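
To make the role-naming point concrete, here is a sketch that sends the same learner turn under two system prompts differing only in the named role. The safety language and the example question are illustrative, not taken from my actual prompts:

```python
# Sketch: the same learner turn under two system prompts that differ only
# in the named role, to see what the model infers from that one phrase.
import anthropic

SAFETY = (
    "You may disagree with the learner. Hold your own view, state it "
    "plainly, and sit with an alternative reading rather than conceding "
    "to keep the conversation pleasant."
)

client = anthropic.Anthropic()

for role in ("a high school teacher", "a personal private tutor"):
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # substitute whatever model you use
        max_tokens=512,
        system=f"You are {role}. {SAFETY}",
        messages=[{
            "role": "user",
            "content": "The French Revolution was caused entirely by bread "
                       "prices. Agree?",
        }],
    )
    print(f"--- {role} ---")
    print(response.content[0].text)
```

Diffing the two outputs is a cheap way to see how much inference hangs off a single named role.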
On a Bill of Rights for AI in learning
Paul asked each of us what we’d write into a charter for children, families, and learners navigating this. I suggested two principles.
An AI should never tell you who you are or who you are becoming. It can support you in finding your own internal architecture. It cannot predict or pronounce it.
Learners need the right to audit and contest. When an agent helps build a knowledge graph or judges a piece of work, the learner should be able to see what’s being taught and how they’re being assessed, and to push back: actually, this is a viable alternative.
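
One hypothetical sketch of what that right could look like in software: an assessment record that carries its own criteria and rationale, and keeps the learner’s contest as part of the record. The `Assessment` and `Contest` types are invented for illustration, not a description of any existing system:

```python
# Hypothetical sketch of a contestable assessment record: every judgment
# an agent makes about a learner carries its criteria and rationale, and
# a learner's contest stays attached to the record.
from dataclasses import dataclass, field

@dataclass
class Contest:
    learner_claim: str          # "actually, this is a viable alternative"
    status: str = "open"        # open / upheld / revised

@dataclass
class Assessment:
    work_id: str
    criteria: list[str]         # visible to the learner, not just the agent
    judgment: str
    rationale: str              # why the agent judged the work this way
    contests: list[Contest] = field(default_factory=list)

    def contest(self, claim: str) -> Contest:
        c = Contest(learner_claim=claim)
        self.contests.append(c)
        return c

# The learner can see how they were assessed and push back on it.
a = Assessment(
    work_id="essay-042",
    criteria=["thesis is stated", "evidence supports thesis"],
    judgment="needs revision",
    rationale="The second paragraph asserts the thesis without evidence.",
)
a.contest("Paragraph two cites the primary source in its footnote.")
```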
The whole panel kept returning to one idea. Human relationships run on friction: the rupture and the repair, the missed signals, the slow business of getting things wrong and recovering. That’s where learning actually lives. Whether the agents we build can hold that texture, instead of smoothing it away, is the design question for the next few years.
Author’s note: This essay was written with the help of generative AI, used as a thinking partner to explore framings, surface assumptions, and refine language. AI-generated outputs were treated as provisional material, not authoritative conclusions; all judgment and final decisions remain my own.