
For those of you lucky enough not to have been completely bombarded by the term: “AGI” = “artificial general intelligence.”
I had an epiphany recently. The AI hype cycle has reached peak absurdity, and I’m watching supposedly smart people make the same mistake that drunk guys at Hooters have been making since 1983.
You know the type. Guy goes to Hooters, orders some mediocre wings, and because the waitress whose literal job description includes “be friendly to customers” smiles at his terrible jokes and remembers he likes ranch dressing, he’s convinced she’s totally into him. She’s not, Kevin. She’s working. She gets paid to be nice to you. That’s the entire business model.
Now the same thing is happening with LLMs and AGI predictions.
The Pattern Is Identical
Some VP gets access to Claude or ChatGPT. It writes him a pretty decent email. It explains a concept he was too lazy to Google. It agrees with his half-baked ideas and formats them nicely with bullet points. And suddenly he’s out here telling investors we’re “18 months from AGI” and “revolutionizing human consciousness.”
No, Brad. The LLM is doing its job. It’s trained to be helpful, harmless, and honest – which in practice means it’s incredibly good at seeming engaged with whatever you’re saying. That’s not consciousness. That’s not intelligence in the way you think it is. It’s professional courtesy, at scale.
The Mistake Everyone’s Making
Here’s what’s actually impressive about LLMs: they’re really good at pattern matching across an enormous corpus of text and producing statistically likely continuations that sound human. That’s genuinely cool! It’s useful! I use these tools every damn day.
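If you’ve never thought about what “statistically likely continuations” actually means, here’s a deliberately dumb sketch: a toy bigram model in pure Python, with a made-up three-sentence corpus. This is an illustration, not how a transformer works under the hood (real models use billions of learned parameters, not a frequency table), but the core loop is the same shape: look at what came before, pick a likely next token, append, repeat.

```python
# Toy next-token prediction: the world's dumbest "language model".
import random
from collections import Counter, defaultdict

# Hypothetical mini-corpus, purely for illustration.
corpus = (
    "the model is doing its job . the model is predicting the next word . "
    "the waitress is doing her job ."
).split()

# Count which word tends to follow which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    out = [start]
    for _ in range(length):
        candidates = follows[out[-1]]
        if not candidates:
            break
        words, counts = zip(*candidates.items())
        # Sample in proportion to how often each continuation appeared:
        # a "statistically likely continuation", nothing more mystical.
        out.append(random.choices(words, weights=counts, k=1)[0])
    return " ".join(out)

print(generate("the"))
```

Scale that idea up by a few orders of magnitude and add a much better way of representing context, and you get something that writes Brad’s emails. Still no consciousness required.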
But somehow we’ve gone from “wow, this autocomplete is really sophisticated” to “we’re definitely creating a superintelligent entity that will solve all human problems or kill us all, definitely one or the other, probably by 2026.”
The Hooters guy sees friendliness and projects a whole relationship onto it. The AGI guys see impressive text generation and project consciousness, reasoning, understanding, and generalized intelligence onto it. Both are mistaking a service doing its job really well for something it fundamentally isn’t.
What We’re Actually Building
Look, LLMs are transformative technology. They’re genuinely changing how we work. But let’s be honest about what they are:
They’re really good at:
- Synthesizing information from their training data
- Producing human-sounding text
- Following patterns and instructions
- Being consistently helpful without getting tired or annoyed
They’re not good at:
- Basic arithmetic (at least until recently)
- Actually understanding anything in the way humans do
- True reasoning, as opposed to pattern matching that merely looks like reasoning
- Knowing what they don’t know
- Having any kind of persistent goals or desires
- Actually being “intelligent” in any general sense
The Business Angle Makes It Worse
And here’s where it gets really messy. Just like Hooters has a financial incentive to not tell Kevin that Amber isn’t actually into him (he might stop coming in and ordering $47 worth of wings), AI companies have a massive financial incentive to not correct the AGI misconception.
Why would you? Every breathless article about being “on the verge of AGI” is free marketing. Every panicked think piece about AI safety makes your product sound more powerful. Every CEO who drinks the Kool-Aid and thinks they need to “prepare for the AGI transition” is another enterprise contract.
The hype IS the product strategy. It’s working perfectly.
The Actual Engineers Know Better
Want to know something funny? Talk to the actual engineers building these systems. Most of them will tell you they’re doing really impressive statistics and pattern matching, not creating consciousness. They’ll explain the limitations, the failure modes, the places where the whole thing falls apart.
But that doesn’t make for good TED talks or funding rounds, does it?
“We’ve built a really sophisticated text prediction system with some genuinely novel approaches to context management” doesn’t have the same ring as “WE’RE BUILDING GOD OR SKYNET, DEFINITELY ONE OF THOSE.”
What This Means for the Rest of Us
Here’s the practical problem: when everyone’s running around acting like AGI is imminent, we make terrible decisions.
Companies restructure around AI capabilities that don’t exist yet. People get laid off because executives think Claude can do their job (it can’t, not really, not without massive human oversight). Billions get poured into “AGI research” that’s really just “make the chatbot slightly better at seeming smart.”
Meanwhile, the actual useful applications of these tools – the boring stuff like “help developers write boilerplate faster” or “make customer service slightly less miserable” – get ignored because they’re not sexy enough.
The Hard Truth
AGI – actual artificial general intelligence, the kind that can genuinely reason across domains, understand context, form real goals, and learn truly new things – might be possible someday. I don’t know. Nobody knows, despite what they’ll tell you on Twitter.
But current LLMs aren’t it, and scaling them up won’t get us there. That’s not how this works. You can’t get to general intelligence by making autocomplete really, really good, any more than you can get to the moon by building taller ladders.
The sooner we all accept that the AI is being professionally friendly, not actually falling for us, the sooner we can have realistic conversations about what these tools actually are and what we should actually do with them.
But that would require the industry to give up the hype, and the Hooters guys to accept that Amber is just being nice because that’s her job.
Neither seems likely anytime soon.
Grumpy Coworker is tired of watching smart people make dumb predictions. The LLM isn’t into you. It’s math. Very impressive math, but still just math.
