Okay, so in this article, I wanna talk about this new paper that dropped.
Basically, they ran an experiment using LLM models to see if an AI could act so human that you literally couldn’t tell the difference.
And guess what?
It actually passed what’s called the Turing Test — for the first time ever.
It convinced 73% of humans in the experiment that it was human.
Like, what?? That’s insane.
Let me take you on a Quick throwback:
Back in 1950, Alan Turing — yes, the OG computer science guy — said that if we wanna figure out if machines can “think”,
they should be able to fake being human so well that a regular person wouldn’t even know they’re talking to a machine.
Since then, people have been trying to build stuff to pass that test… and failing.
Until now.
In this article i wanna breakdown:
- How the Turing test works and Turing’s purpose of this test.
- How they setup this experiment and how they ran the test.
- The actual results of the test. (Or How it fooled 73% of the human participants)
- Why This Huge, its economic and social impact.
let’s get into it.
How the OG Turing Test Actually Works

The Turing Test — by Alan Turing — is basically this setup where there’s a human interrogator talking to two hidden participants: one is a human, and the other is a machine.
The catch?
The interrogator can’t see them. He’s only allowed to communicate with both through text.
His job is to figure out which one’s the real human — by asking questions, chatting with them, and trying to spot which one feels… off.
Meanwhile, the machine’s job is simple: pretend to be human so well that it fools the interrogator.
If the AI can pull it off — if it convinces the interrogator it’s human — Turing argued that we should consider that system as showing signs of intelligence.
Now, obviously, a lot of people have pushed back on this idea.
They say just because something can imitate a human communication doesn’t mean it’s actually thinking or reaching human-level intelligence.
But honestly, I think they kinda missed Alan’s point.
He wasn’t saying “talking like a human = full human brain.”
He was saying that if a machine can imitate the crazy complexity of how humans communicate, then it could also imitate other parts of human intelligence — and maybe even consciousness itself.
And if that’s the case, the only real way to know if something is conscious… would be to actually be that thing — which, obviously, we can’t do.
That’s kinda the whole reason it’s called “Artificial” Intelligence.
Let’s jump into the experiment that made Turing’s crazy dream finally come true.
How the Turing Test Went Down
So they basically set up the Turing Test just like Alan Turing imagined:
One human interrogator was given access to a messaging app, where there were two chat sessions:
- One human (Witness A)
- One AI (Witness B)
His Task?
Chat for 5 minutes with each witness, ask whatever you want — and at the end, guess which one is the real human.

Who Played?
They ran two versions of this experiment:
- One with undergraduate students from UCSD.
- Another with random people recruited online from Prolific (a crowdsourcing platform).
Each interrogator played 8 rounds against different AI models and real people.
Which LLM model is tested.
- GPT-4.5 (with and without a special humanlike persona)
- LLaMa-3.1-405B (with and without a persona)
Which LLM Models Were Tested?
- GPT-4.5 (with and without a special humanlike persona)
- LLaMa-3.1-405B (with and without a persona)
The AI models that had a persona (pretending to be a young, introverted person who knows internet slang) performed way better than the ones without any personality prompt.
Which makes total sense — I honestly don’t even know the point of trying the LLM baseline without a persona, ’cause it’s not relevant at all.
The Rules
- The interrogator could chat with both witnesses during the game, but could only send one message at a time.
(Text Witness A → Wait → Text Witness B → Wait.)
- To feel more human, the AI models had typing delays, so it felt like they were “thinking” and “typing” like a real person.
- After 5 minutes, the interrogator had to pick:
- 👉 “Who’s the human?”
- 👉 How confident they were.
- 👉 Why they made their choice.
The Results
- GPT-4.5 with a Persona tricked the interrogators 73% of the time. (Meaning most of the human interrogators thought the AI was the human more often than the real human!).
- It had a win rate of 69% with undergraduates. (18-22 years old, finished high school, early in their specialization, think critically and with basic internet culture and know about AI).
- And a 76% win rate on Prolific (the online crowd). Lol, this was easy to predict.
In Short
So yeah — in a simple texting game between a human and an AI, the AI (with a good fake personality) ended up being more convincing at pretending to be human than actual humans.
Why This is a Big Deal
Okay, AI can now pretend to be a human.
“Cool party trick,” right? So what?
Honestly, it’s way bigger than that.
LLMs today can’t just talk like a human —
They can also code, answer complex questions about history, physics, chemistry, and even math (which they haven’t been good at until now).
On top of that, they can search online, post on social media, use code editors, and a lot more.
Turing would have been blown away by what’s happening now.
We’re talking about machines stepping into spaces that used to be 100% human.
Think about it:
- Your favorite influencer? Might not even be human.
- That customer service rep you’re arguing with? Total bot.
- Online dating? You might be flirting with an AI — and they might even be better at it than you.
- Getting mad at someone on X? Yeah… you might just be beefing with a bot that doesn’t care.
- That “urgent” text asking you for money? Might sound like your partner — but it could be a social-engineered hacker AI.
And that’s just the beginning.
The scary (or exciting, depending on how you see it) part is:
If you can’t tell whether you’re talking to a real person or not, it shakes up a lot of the stuff we take for granted, trust, relationships, jobs, even online identity.
This is why passing the Turing Test matters:
It’s not just about “Is the AI smart?”
It’s about how close AI is to blending into human life without setting off alarms.
And now… It’s happening.
The Bright Side: Why This Could Actually Be Amazing
Okay, I know it’s easy to panic once you realize why this is such a big deal.
But first—write this down, screenshot it, or engrave it into your brain:
“AI will never, ever, ever be as intelligent as us humans.”
And this is coming from someone who actually builds and deploys almost self-aware level AI agents.
No matter how autonomous AI becomes, its ultimate “life purpose”—the control switch—will always be set by humans.
Now that we’ve got that out of the way, let’s look at some of the crazy (and incredible) possibilities AI could unlock:
- Small businesses: The little guys can now compete with giant corporations by hiring hundreds of AI-agent experts for a fraction of the cost of human employees.
- People could finally work fewer hours: Companies might hire you along with your AI-agent employees. You’d get the job done—probably better—without grinding through soul-draining 60-hour weeks.
- A real companion: An AI whose entire “life” is dedicated to your well-being. Someone (or something) to talk to, hype you up, teach you new skills, and help you through loneliness.
- Learning, therapy, socializing: You could have an AI agent to teach you skills, a therapist that patiently listens for hours and helps you overcome fears—and even get your introverted, lonely self a partner.
- Basically, everything becomes easier, more accessible, and more personalized.
It’s wild.
If we do this right, it could unlock one of the best eras humanity has ever seen.
But Here’s the Downfall …
Of course… there’s always a flip side.
If AI can perfectly imitate humans, here’s what we’re dealing with:
- Fake friends. Fake lovers. Fake experts.
Trust would start to crumble. Authentic social life? Out the window. - A lot of people will lose their jobs.
For big corporations, around 65% of expenses come from humans. Trust me—they’ll jump at any opportunity to cut down those costs. - Social engineering scams would skyrocket even more.
According to Sumsub’s 2023 Identity Fraud Report, there was already a tenfold global increase in deepfake incidents from 2022 to 2023—1740% surge in North America alone, 1530% in Asia-Pacific, 780% in Europe.
That text from your “mom”? The urgent email from your “boss”? It might sound exactly right—but it’s AI, expertly trained to trick you. - People might live entire lives surrounded by counterfeit humans—fake friends, fake mentors, even fake “relationships.”
And the scariest part?
You might not even notice it’s happening.
Bit by bit, reality could blur.
Real relationships could lose their meaning.
Real human connection might start feeling… optional.
Why I’m Still Hopeful
But here’s the thing:
It doesn’t have to go down like that—as long as rational people (like me and you) exist.
Think about it: Just as good people balance out bad actors on the internet,
we can do the exact same thing with AI.
If we use it right, if we stay smart, stay kind, and stay human—
We can win:
- We can use AI to lift each other, not tear each other down.
- We can build defenses against manipulation, scams, and fake identities.
- We can use AI to save time, reach financial freedom while we’re still young, and discover the meaning of life—rather than spending 80% of our precious lives chasing money.
AI doesn’t have to control our future.
We can control AI and make sure it leads us somewhere incredible.
Have a wonderful day for now. 🚀