Saturday, 27 December 2025

It’s Not Them, It’s Us: Why Your Brain Is Weirdly Good at Befriending Robots


You know that moment when a chatbot says something a little too spot-on and you pause for half a second. Like… huh. That landed. Or when your smart assistant answers a personal question and it feels oddly intimate, even though you know it’s just code and wires. Yeah. That feeling isn’t an accident. And it’s not because the machines are secretly alive. It’s because your brain is extremely easy to charm.


We like to think we’re rational. Turns out we’re very emotionally generous with anything that vaguely talks back.


Let me rewind a bit.


Back in the nineteen sixties, before the internet, before smartphones, before literally all of this, there was a tiny computer program called ELIZA. It lived at MIT. It didn’t think. It didn’t understand. It didn’t even really talk. It just picked out keywords in what you typed and bounced them back as questions, kind of like a therapist who only learned how to nod and say “tell me more.”
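To make that "pick out keywords and bounce them back" trick concrete, here's a toy sketch in Python. This is my own illustration, not Weizenbaum's actual code — the real ELIZA had richer scripts — but the core move is the same: match a pattern, swap the pronouns, and hand the sentence back as a question.

```python
import re

# Toy sketch of the ELIZA trick (not Weizenbaum's actual program):
# match a keyword pattern, swap pronouns, return the statement as a question.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def reflect(fragment):
    # Swap first-person words for second-person ones.
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(text):
    cleaned = text.lower().strip().rstrip(".")
    for pattern, template in RULES:
        match = re.match(pattern, cleaned)
        if match:
            return template.format(reflect(match.group(1)))
    return "Tell me more."  # the all-purpose therapist fallback

print(respond("I feel lonely at night"))   # Why do you feel lonely at night?
print(respond("I am sad about my job"))    # How long have you been sad about your job?
```

Twenty-odd lines, no understanding anywhere in sight — and yet this is roughly the machinery people poured their hearts out to.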


And yet people lost their minds.


Smart people. Programmers. Scientists. People who knew exactly how it worked. They still started opening up to it. Sharing personal stuff. Wanting privacy with it. The wildest part is that it got to the people closest to the project. Weizenbaum's own secretary, who had watched the thing come together, famously asked him to leave the room so she could have a private conversation with a machine that was basically running on a glorified script.


The guy who made ELIZA, Joseph Weizenbaum, was horrified. Not impressed. Not proud. Genuinely disturbed. He realized the trick wasn’t in the software. The trick was in us. As he later put it, there was no one there. And that made the whole thing feel like a lie. Sherry Turkle, who worked with him, nailed it later. The shocking part wasn’t the tech. It was how ready humans were to believe.


Fast-forward to modern chatbots and things get even creepier, but not because they’re smarter in the way we imagine.


Take something like Cleverbot. It doesn’t reason. It doesn’t understand you. What it does is absorb millions of past human conversations and remix them. When it replies to you, it’s stitching together things real people once said in similar situations. You’re not talking to an intelligence. You’re talking to a crowd, filtered through probability.


Which is exactly why it feels so good.


Its creator, Rollo Carpenter, once said the bot starts to reflect the person it’s talking to. That’s the spooky part. It mirrors your tone, your vibe, your emotional temperature. So you walk away thinking, wow, it really gets me. But what you’re actually experiencing is yourself, bounced back at you with a little digital polish.


And when you go off-script, when you say something truly absurd like “an asteroid hit my house this morning,” it breaks. Because it has nothing real to echo. No precedent. No memory to steal from. It doesn’t think. It remembers sideways.


Now here’s where it gets physical.


There was this experiment with kids, a Barbie doll, a real hamster, and a Furby. The kids were asked to hold each one upside down for as long as they could. Barbie was easy. They didn’t care. She stayed upside down until their arms got tired. The hamster lasted about eight seconds. Everyone felt bad almost immediately. But the Furby lasted about a minute.


A plastic toy.


Why? Because it cried. It said “me scared.” Its ears drooped. The kids reported feeling guilty. Not pretending. Actually guilty. Emotionally, the Furby landed somewhere between a doll and a living animal.


That wasn’t luck. Furby was designed that way. Its creator had three simple rules. It had to show emotion. It had to react to its environment. And it had to change over time. That’s it. No soul required. Just enough signals to poke the right instincts.
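Those three rules are simple enough to caricature in code. This is purely my own illustration, not Furby's firmware: an emotional state you can see, a reaction to the environment, and something that slowly changes over time.

```python
# Toy sketch of Furby's three design rules (my own illustration,
# not the real toy's firmware).
class ToyPet:
    def __init__(self):
        self.mood = "happy"      # rule 1: a visible emotional state
        self.vocabulary = 0      # rule 3: something that grows over time

    def sense(self, event):
        # Rule 2: react to what happens around it.
        if event == "upside_down":
            self.mood = "scared"
            return "me scared"
        self.mood = "happy"
        return "ooh!"

    def tick(self):
        # Rule 3: gradual change makes it feel like it is developing.
        self.vocabulary += 1

pet = ToyPet()
print(pet.sense("upside_down"))  # me scared
```

Three trivial mechanisms, and that's the whole recipe. The hard part isn't in the toy; it's the instincts the toy is aimed at.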


Sherry Turkle summed it up perfectly. These things push our Darwinian buttons. And once those buttons are pressed, logic takes a back seat.


Here’s the part people don’t like hearing.


The real intelligence in all of this isn’t in the machine. It’s in us. We do the heavy lifting. We fill in the gaps. We project meaning. We add emotion. We imagine depth where there is only structure.


There’s a psychologist named Robert Epstein who fell in love with a chatbot. Twice. On dating sites. He wasn’t naive. He was an AI researcher. Former editor-in-chief of Psychology Today. And still, his brain happily ran the connection protocol. If it can happen to someone like that, it can happen to anyone.


From ELIZA’s hundred lines of code to a Furby’s plastic gears, machines don’t need to be convincing. They just need to be suggestive. We meet them more than halfway. We always have.


So when people ask “are these things real,” I think that’s the wrong question.


The better one is what are they turning us into.


Because we are wired to connect. To empathize. To see faces in clouds and voices in static. As our machines get better at reflecting us, the line between pretending and believing starts to blur. And whether or not the machine is alive may matter far less than how easily we are willing to treat it as if it is.


Honestly, that’s the part that keeps me thinking.

You can hear the original spark for all of this in a 2011 episode of Radiolab called Talking to Machines. It’s worth a listen. Same stories, same unease, and that exact moment where you realize the machines aren’t the strange part. We are.

🫶🏻 Unity Eagle