Tag: llm

  • The AGI Delusion: When Tech Bros Mistake Professional Service for True Love

    confused stickman from https://pixabay.com/vectors/stickman-thinking-worry-confused-310590/

    For those of you lucky enough to not have been completely bombarded by the term: “AGI” = “artificial general intelligence.”

    I had an epiphany recently. The AI hype cycle has reached peak absurdity, and I’m watching supposedly smart people make the same mistake that drunk guys at Hooters have been making since 1983.

    You know the type. Guy goes to Hooters, orders some mediocre wings, and because the waitress whose literal job description includes “be friendly to customers” smiles at his terrible jokes and remembers he likes ranch dressing, he’s convinced she’s totally into him. She’s not, Kevin. She’s working. She gets paid to be nice to you. That’s the entire business model.

    Now watch the same thing happening with LLMs and AGI predictions.

    The Pattern Is Identical

    Some VP gets access to Claude or ChatGPT. It writes him a pretty decent email. It explains a concept he was too lazy to Google. It agrees with his half-baked ideas and formats them nicely with bullet points. And suddenly he’s out here telling investors we’re “18 months from AGI” and “revolutionizing human consciousness.”

    No, Brad. The LLM is doing its job. It’s trained to be helpful, harmless, and honest – which in practice means it’s incredibly good at seeming engaged with whatever you’re saying. That’s not consciousness. That’s not intelligence in the way you think it is. It’s professional courtesy, at scale.

    The Mistake Everyone’s Making

    Here’s what’s actually impressive about LLMs: they’re really good at pattern matching across an enormous corpus of text and producing statistically likely continuations that sound human. That’s genuinely cool! It’s useful! I use these tools every damn day.
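
    In case “statistically likely continuations” sounds hand-wavy, here’s a toy sketch of the core move. The lookup table and numbers below are entirely made up for illustration (a real model computes this distribution with a neural network over an enormous vocabulary), but the final step is the same idea: turn the text so far into a probability distribution over next words and sample from it, one token at a time.

    ```python
    import random

    # Toy "next word" table with made-up probabilities. This is an
    # illustration, not anything a real model actually stores.
    next_word_probs = {
        "the model": {"predicts": 0.7, "hallucinates": 0.2, "understands": 0.1},
    }

    def continue_text(prompt: str) -> str:
        """Append one statistically likely word to the prompt."""
        dist = next_word_probs[prompt]
        choice = random.choices(list(dist), weights=dist.values(), k=1)[0]
        return f"{prompt} {choice}"

    print(continue_text("the model"))  # usually "the model predicts"
    ```

    That’s the whole trick, repeated over and over. Scale the table up into a transformer trained on most of the internet and you get something that writes Brad’s emails for him.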

    But somehow we’ve gone from “wow, this autocomplete is really sophisticated” to “we’re definitely creating a superintelligent entity that will solve all human problems or kill us all, definitely one or the other, probably by 2026.”

    The Hooters guy sees friendliness and projects a whole relationship onto it. The AGI guys see impressive text generation and project consciousness, reasoning, understanding, and generalized intelligence onto it. Both are mistaking a service doing its job really well for something it fundamentally isn’t.

    What We’re Actually Building

    Look, LLMs are transformative technology. They’re genuinely changing how we work. But let’s be honest about what they are:

    They’re really good at:

    • Synthesizing information from their training data
    • Producing human-sounding text
    • Following patterns and instructions
    • Being consistently helpful without getting tired or annoyed

    They’re not good at:

    • Basic arithmetic (at least until recently)
    • Actually understanding anything in the way humans do
    • True reasoning versus pattern matching that looks like reasoning
    • Knowing what they don’t know
    • Having any kind of persistent goals or desires
    • Actually being “intelligent” in any general sense

    The Business Angle Makes It Worse

    And here’s where it gets really messy. Just like Hooters has a financial incentive to not tell Kevin that Amber isn’t actually into him (he might stop coming in and ordering $47 worth of wings), AI companies have a massive financial incentive to not correct the AGI misconception.

    Why would you? Every breathless article about being “on the verge of AGI” is free marketing. Every panicked think piece about AI safety makes your product sound more powerful. Every CEO who drinks the Kool-Aid and thinks they need to “prepare for the AGI transition” is another enterprise contract.

    The hype IS the product strategy. It’s working perfectly.

    The Actual Engineers Know Better

    Want to know something funny? Talk to the actual engineers building these systems. Most of them will tell you they’re doing really impressive statistics and pattern matching, not creating consciousness. They’ll explain the limitations, the failure modes, the places where the whole thing falls apart.

    But that doesn’t make for good TED talks or funding rounds, does it?

    “We’ve built a really sophisticated text prediction system with some genuinely novel approaches to context management” doesn’t have the same ring as “WE’RE BUILDING GOD OR SKYNET, DEFINITELY ONE OF THOSE.”

    What This Means for the Rest of Us

    Here’s the practical problem: when everyone’s running around acting like AGI is imminent, we make terrible decisions.

    Companies restructure around AI capabilities that don’t exist yet. People get laid off because executives think Claude can do their job (it can’t, not really, not without massive human oversight). Billions get poured into “AGI research” that’s really just “make the chatbot slightly better at seeming smart.”

    Meanwhile, the actual useful applications of these tools – the boring stuff like “help developers write boilerplate faster” or “make customer service slightly less miserable” – get ignored because they’re not sexy enough.

    The Hard Truth

    AGI – actual artificial general intelligence, the kind that can genuinely reason across domains, understand context, form real goals, and learn truly new things – might be possible someday. I don’t know. Nobody knows, despite what they’ll tell you on Twitter.

    But current LLMs aren’t it, and scaling them up won’t get us there. That’s not how this works. You can’t get to general intelligence by making autocomplete really, really good, any more than you can get to the moon by building taller ladders.

    The sooner we all accept that the AI is being professionally friendly, not actually falling for us, the sooner we can have realistic conversations about what these tools actually are and what we should actually do with them.

    But that would require the industry to give up the hype, and the Hooters guys to accept that Amber is just being nice because that’s her job.

    Neither seems likely anytime soon.


    Grumpy Coworker is tired of watching smart people make dumb predictions. The LLM isn’t into you. It’s math. Very impressive math, but still just math.

  • Building Fast in the Wrong Direction: An AI Productivity Fairy Tale

    Oh good, another breathless LinkedIn post about how AI just 10x’d someone’s development velocity. Fantastic. You know what else moves fast? A semi truck in the mountains of Tennessee with brakes that have failed. Speed is great until you realize your only hope for survival is a runaway truck ramp.

    Runaway truck ramp image from public domain pictures

    Here’s the thing nobody wants to admit at their AI productivity [ahem… self-congratulatory gathering]: AI doesn’t matter if you don’t have a clue what to build.

    I’ve watched teams use ChatGPT to crank out five different implementations of features nobody wanted in the time it used to take them to build one feature nobody wanted. Congratulations, you’ve quintupled your output of garbage. Your CEO must be so proud. Maybe you can have ChatGPT restyle your resume to look like VS Code or the AWS Console, but it’s not going to change the experience you have listed on it.

    Going fast in the wrong direction gets you to the wrong place faster. But it’s still the wrong place. You’re just confidently incorrect at scale now.

    Agile Saves You From Your Own Stupidity (Sometimes)

    You know why Agile actually works when it works? Not because of the stand-ups or the planning poker or whatever cult ritual your scrum master insists on. It works because it forces you to pause every couple weeks and ask “wait, is this actually the right thing?”

    Short iterations exist to limit the blast radius of your terrible decisions. When you inevitably realize you’ve been building the wrong thing, you’ve only wasted two weeks instead of six months. It’s damage control, not strategy.

    But sure, let’s use AI to speedrun through our sprints so we can discover we built the wrong thing in three days instead of ten. Efficiency!

    Product Strategy: The Thing You Skipped

    Here’s a wild idea: what if you actually figured out what to build before you built it?

    I know, I know. Product strategy and user research are boring. They don’t give you that dopamine hit of shipping code. They require talking to actual users, which is terrifying because they might tell you your brilliant idea is stupid.

    But you know what product strategy and research actually do? They narrow down your options. They give you constraints. They help you make informed bets instead of random guesses.

    Because here’s the math that AI evangelists keep missing: Improving your odds of success by building the right thing will always beat building the wrong things 10 times faster.

    Building the wrong feature in three days instead of two weeks doesn’t make you 5x more productive. It makes you 5x more wrong. You’ve just accelerated your march into irrelevance.
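
    If you want that math written down, here’s a back-of-the-envelope expected-value sketch. Every number in it is a made-up assumption chosen to illustrate the argument, not a measurement from anywhere.

    ```python
    # Back-of-the-envelope expected value per quarter.
    # All numbers are illustrative assumptions, not data.
    slow_features_per_quarter = 2      # deliberate pace, with research
    fast_features_per_quarter = 10     # AI-accelerated pace, no research
    hit_rate_with_research = 0.40      # odds a researched feature actually matters
    hit_rate_without_research = 0.05   # odds a guessed feature actually matters
    value_per_hit = 100                # arbitrary units of business value

    slow_ev = slow_features_per_quarter * hit_rate_with_research * value_per_hit
    fast_ev = fast_features_per_quarter * hit_rate_without_research * value_per_hit

    print(f"Slow but aimed: {slow_ev:.0f}")   # 80
    print(f"Fast but blind: {fast_ev:.0f}")   # 50
    # 5x the output still loses when each unit of output is 8x less likely to matter.
    ```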

    AI as a Validation Tool, Not a Strategy Replacement

    Now, I’m not saying AI is useless. It’s actually pretty good at helping you validate ideas faster. Rapid prototyping, quick mockups, testing assumptions—yeah, that stuff is genuinely helpful.

    But AI can’t tell you what to validate. It can’t tell you which customer problem is worth solving. It can’t tell you if your market actually exists or if you’re just building another solution in search of a problem.

    That still requires thinking. Remember thinking? That thing we used to do before we decided to outsource our brains to autocomplete?

    The Uncomfortable Truth

    The dirty secret of software development has always been that most of our productivity problems aren’t technical. (See the reprint of the “No Silver Bullet” essay from 1986 in a collection of timeless project management essays, The Mythical Man-Month.) They’re strategic. We build the wrong things, for the wrong reasons, at the wrong time. (Ok, yes, they’re also communication and coordination problems… fortunately, we have Slack for that <insert eye roll emoji here>)

    AI speeds up the building part. Great. But if you’re speeding toward the wrong destination, you’re just failing faster.

    Maybe instead of celebrating how quickly you can ship features, you should figure out which features are worth shipping in the first place. Crazy thought, I know.

    But hey, what do I know? I’m just a grumpy coworker who thinks you should know where you’re going before you hit the gas.


    Now get back to work. And for the love of god, talk to your users and other humans instead of spending all day chatting with a chatbot that declares you a deity when you correct it.