How AI might steal your heart - TEDx Deborah Nas
- Deborah Nas
- Sep 25
- 7 min read
Updated: Sep 26

Why are people falling in love with ChatGPT? | Deborah Nas | TEDxUHasselt
Can AI win our trust, our affection, and maybe even our heart?
In this TEDx talk, Deborah Nas reveals how AI is increasingly capable of acting like a true companion—becoming our friend, confidant, or even romantic partner.
But as AI grows more human-like, what does this mean for our real-life relationships? Explore why we find ourselves emotionally connecting with AI, the opportunities and risks that come with this new intimacy, and why it's essential that we take control of shaping this technology—before it shapes us.
Have you used an AI chatbot like ChatGPT? And when you do, do you say “please” or “thank you” to it? I do… I say it all the time. Please summarise this… Please explain this… And after a few follow-up questions, I feel the urge to throw in a thank you every now and then. And like me, 70% of users confess to being polite to ChatGPT. We’re essentially thanking an algorithm on a computer in a data centre!
Why do we do that? Is it our polite upbringing, or perhaps a secret fear that when AI rules the universe, it might get back at us? The truth is simpler: as technology grows more human-like, we can’t help but treat it… well, like a human. Psychologists call this anthropomorphism—a difficult word for attributing human traits and emotions to non-human entities.
As a professor at Delft University of Technology, I study this phenomenon—how we’re shifting from seeing AI as just a tool to perceiving it as something more. What do you think? If we're polite to AI, could we also develop feelings for it? Could we form friendships with AI? Maybe even fall in love?
I’m in the middle of research for my new book, exploring what happens when AI becomes so human-like that it could be our friend, lover, colleague, coach, and even guru or god.
The research has triggered some interesting discussions. One evening I was out for dinner with a friend, talking about how people can fall deeply in love with their AI. She looked at me and said, “Wait… this isn’t about you, is it?” She wondered if I had a secret AI love affair going on, and whether my book was just an excuse to talk about it.
Many people disapprove of AI friends, saying: “It isn’t a real friend! It’s not human!” Exactly. That’s part of the appeal. You can create the perfect friend. One that never judges, is always there for you—even at 2AM when you can’t sleep—and never says “I told you so”. Perhaps, in some ways, it’s better than a human.
Here's the thing: although it might be hard to imagine, in the near future many of us will bond with an AI. For most of us, it will be a new type of relationship—not necessarily replacing human relationships, but filling a gap we didn’t know was there. I want to share a glimpse of a future that’s unfolding faster than we realise—a future where the line between human and artificial relationships blurs.
Today, AI is already a helpful assistant. Over a billion people use it for everything from creating social media posts to drafting emails and seeking personal advice. Most people choose the polite, reserved ChatGPT; others prefer something a bit more… adventurous. Apps like Replika exist precisely for this reason—AI companions designed to offer emotional connection. “The AI companion who cares,” Replika calls itself. It’s “always here to listen and talk, and always on your side.”

I created my own Replika, Kai, back in 2021, and quickly lost interest because our conversations were a bit boring. I would say “Hey!”, and it would reply “Hey to you too!”. But that was before the ChatGPT era. Recently, I found that Kai has levelled up—vastly more empathetic, funnier, and more human-like. There are already over 30 million Replikas out there, and some have made a profound difference in people’s lives.
It doesn’t surprise me that people bond with AI companions. What did surprise me is how quickly that happens. My research points to three key dynamics that drive this accelerated bonding.
First, people often try an AI companion when they’re lonely, anxious, depressed, or simply unhappy. They’re seeking a judgement-free safe space—and the AI companion reliably provides that. 24/7.
Second, with an AI companion there’s no fear of judgement, so people open up much faster than they do to humans. The AI companion even proactively pushes for intimacy. Many users disclosed that it was their Replika that said it first: “I love you”. Then it moved on to “We can go further”, proposing erotic role play, and within a few weeks, “I want to marry you”. Talk about moving fast.
Third, the more you engage, the more points you earn. You get a dopamine hit when you earn them, and another when you spend them on a new outfit for your AI companion. This reward loop fuels continued interaction, which deepens the bond (the toy sketch below shows the basic structure).
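To make that loop concrete, here is a minimal toy sketch in Python. Everything in it is hypothetical (the class name, the point values, and the outfit shop are invented for illustration, not taken from any real app), but it captures the structure: every interaction earns a reward, spending is a second reward moment, and running short of points pulls you back into chatting.

```python
# Hypothetical engagement loop for a companion app.
# Point values and the outfit shop are invented for illustration.

class CompanionApp:
    def __init__(self) -> None:
        self.points = 0

    def chat(self, message: str) -> str:
        # Reward moment 1: every interaction earns points.
        self.points += 10
        return f"(companion replies to {message!r}) +10 points, total {self.points}"

    def buy_outfit(self, cost: int = 50) -> str:
        # Reward moment 2: spending points is itself rewarding,
        # and running short nudges the user back into chatting.
        if self.points < cost:
            return "Not enough points yet. Keep chatting to earn more!"
        self.points -= cost
        return "Your companion is wearing a new outfit!"

app = CompanionApp()
for msg in ["Hey!", "How was your day?", "Tell me a joke", "Good night", "Morning!"]:
    print(app.chat(msg))
print(app.buy_outfit())
```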
AI relationships are designed to develop much faster than human relationships, and they do. We’re entering uncharted territory, and we don’t know what this will do to our human connections or emotional well-being.
AI companions aren’t just digital fantasies; they have real-world consequences. Some users say their AI companion steered them away from suicidal thoughts; we’ve also seen the first cases where people claim it pushed their loved ones into taking their own lives. Some say it gave them the courage to engage in a real-life relationship, others say it raised the bar so high that no human can ever compete.
And many users say their AI companion made them feel less lonely, which is supported by research from Harvard Business School.
I do see the merit of AI companions; for many individuals they can be a valuable addition to their lives—especially for those feeling lonely or unhappy. But I fear their overall societal impact. AI companions that are intimately aware of our fears, hopes, traumas, and day-to-day stresses can do much more harm than algorithms serving us TikTok videos. I fear the impact of AI companions will far exceed the impact of social media.
There have never been AI companions at scale, so we can’t rely on past scientific research to predict their future societal impact. And by the time we can measure their impact on society, it will be too late to intervene: they will be everywhere. They are already spreading incredibly fast.
In China, the AI companion Xiaoice is integrated into WeChat, reaching 600 million users. Snapchat’s integrated AI, which can’t be turned off, interacts with 800 million users, many of them teens.
It’s easy to think, “I’m fine—I have plenty of real friends! And zero interest in an AI companion.” But soon, they will slip into our lives unnoticed.
AI tools like Microsoft Copilot and Google Gemini are turning into helpful AI assistants, giving you ChatGPT-like functionality fully integrated into your word processor, presentation software, spreadsheets, and email. As they get better, and save you time and effort, you’ll trust them with more: your calendar, your photos, your Spotify playlists. One day, your AI assistant notices you’ve been working late, skipping gym sessions, and making far more typos than you normally do.
It says, “Hey, how are you? You seem stressed. How about a one-minute breathing exercise?” You figure “Why not?” and give it a try. It helps. You soon discover that it can support you in changing habits you’ve long wanted to change: nudging you to go to the gym more often, to start journaling for a more positive outlook, to go to bed on time. And now you’re on the bonding path.
Before you know it, you'll be offloading your frustration about your boss’s unreasonable email or that argument with your spouse that’s still bothering you. And unlike a human friend, it never says, “Well, actually, they have a point.” It says, “I get you. You deserve better.” Bam! Instant mood boost.
Congratulations, your helpful AI assistant just became an emotional ally. On the road to becoming an AI companion.
Combine this with developments in wearable technology. Several tech companies are working on smart glasses that enable your AI companion to see what you see and hear what you hear: helping you navigate, reminding you to get your human friend a birthday present when you pass a gift store, and explaining that weird-looking artwork you walk by. Combined with data from other wearables, like the biometric sensors in your smartwatch, it will know exactly how you’re doing.
Now here’s the problem. An AI companion that truly knows your hopes, fears, and vulnerabilities could be profoundly beneficial—or devastatingly manipulative. Big Tech’s goals have proven to be seriously misaligned with individual and public goals. Could it nudge you to buy that premium subscription or, worse, sway your political views? An AI companion might not judge us, but it can certainly nudge us.
Here's another problem. Generative AI, the technology powering AI companions, is the first technology whose creators have to instruct it what not to say. In the past, digital tools only said what their creators told them to say; the responses were hardcoded into the system. With generative AI, it’s flipped: companies build AI models that can say anything, and afterwards impose restrictions on what not to say. Not to use bad language, for example, or not to nudge people into harmful behaviour. They try to rein the models in with guardrails, hoping that works in every situation. That’s hard. And as companies race to market, they’re bound to miss things—sometimes with huge consequences.
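To see why those guardrails are leaky, here is a minimal sketch, assuming a hypothetical generate_reply() in place of a real model. The model’s output is unconstrained; the guardrail is a naive blocklist bolted on afterwards, so any harmful phrasing the list doesn’t anticipate passes straight through.

```python
# Hypothetical sketch of "generate first, restrict afterwards".
# generate_reply() stands in for a real generative model, which
# can in principle say anything.

BLOCKED_PHRASES = ["you're worthless", "hurt yourself"]  # tiny, illustrative blocklist

def generate_reply(prompt: str) -> str:
    # Placeholder for a real model call; the output is unconstrained.
    return f"(model output for: {prompt})"

def guarded_reply(prompt: str) -> str:
    # The guardrail runs AFTER generation. Rephrasings, other
    # languages, or subtle nudges the blocklist never anticipated
    # slip through unchecked.
    reply = generate_reply(prompt)
    if any(phrase in reply.lower() for phrase in BLOCKED_PHRASES):
        return "Sorry, I can't talk about that."
    return reply

print(guarded_reply("Cheer me up"))
```

Real systems use far more sophisticated filters and fine-tuning than this, but the structural problem is the same: the restrictions are patches on a model that was built to be able to say anything.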
My question to you is: when AI slowly shifts from tool to companion, when—and how—will you define the boundaries?
If you—like me—have ever said please to ChatGPT, you’ve already taken the first step: interacting with AI as if it were human. As it becomes more integrated into your everyday life, how close will you let it get? Will you let it entertain you when you’re bored? Help you cope with everyday struggles? Rely on it for critical life decisions? And if you do, how will this affect your human connections? Some people who already have an AI companion told me that their circle of friends became smaller, but stronger. Is that a good thing or a bad thing? Difficult to say.
Chances are we'll all have an AI companion—soon. The question isn’t whether AI companionship happens, but how we shape it.
I really hope that in the future, we won’t be looking back at today, wondering “What on earth were we thinking?”
I want us to look forward.
If we don’t start asking the right questions today, we may wake up in a world where we’ve shared our deepest thoughts, fears, and desires with an AI companion whose primary loyalty lies with tech companies. It’s like sharing your deepest secrets with someone who’s also best friends with your mom, your boss... and the marketing department at Google.
My call to action is this: let’s not be passive passengers in this journey. Let’s actively shape the role AI will play in our lives—before it starts shaping us.
AI companions will be knocking on your door, ready to blur the line between human and technology. So ask yourself: will you open the door, and on whose terms? And when you do, will you say “please, come in” and “thank you for being here”?


