Tag: ai

  • The AI Code Generation Technical Debt Crisis Nobody Sees Coming

    photo of minified JavaScript code

    slams coffee mug on desk

    Oh PERFECT. You know what’s going to be absolutely chef’s kiss hilarious in about 18 months? When we’re all drowning in an ocean of plausible-looking garbage code that nobody understands because it was generated by someone who thought “well the AI wrote it so it must be good!”

You know what I saw yesterday? YESTERDAY? A simple dependency change (removing one, in this case) turned into a 1000+ line refactor of the tests, probably because the linter complained about syntax and someone asked Claude to “fix the style problem.”

    Why are AI PRs so big???

And here’s the thing that makes me want to scream into the void: the Dunning-Kruger effect is about to go NUCLEAR. The people who are most blown away by AI code are the exact same people who can’t evaluate whether it’s actually any good. They don’t know what “good” even looks like! They just know it compiles and maybe passes the happy-path test they wrote. This isn’t a judgment… it’s self-awareness: I’ve caught myself amazed at Claude’s ability to write in languages I’m not strong in, in awe of how well it works. OK, except when it’s PowerShell. I don’t know PowerShell all that well, and I can tell that most LLMs don’t either, mainly because I watch them get basic syntax wrong and spend multiple iterations trying to fix it.

Meanwhile (back in the language I know well), I’m sitting here thinking “I could write this function in 10 minutes,” but instead I’m watching someone spend 45 minutes arguing with ChatGPT, getting five different implementations that each solve slightly different problems, copying bits from each one, and ending up with some Frankenstein monster that technically works but has the architectural elegance of a highway pile-up.

    The expert devs? We’re MAYBE getting a 20% speedup on boilerplate, if that. Because guess what – for anything actually complex, the time isn’t in typing, it’s in thinking! It’s in understanding the problem! And the LLM doesn’t understand ANYTHING. So you end up explaining the problem to the AI, then fixing what it gives you, and congratulations, you’ve just added a slow, mediocre middleman to your development process.

But the devs who don’t really understand the domain? Oh, they’re FLYING now. They’re 10x faster! They’re shipping features! Never mind that every single one is a ticking time bomb of technical debt that nobody can maintain because the code doesn’t follow any of our patterns, uses deprecated APIs the LLM learned from 2019 StackOverflow posts, and has this absolutely DELIGHTFUL habit of working fine until you hit an edge case, at which point it fails in ways that make no sense because the underlying logic is fundamentally flawed. (Also, why did you use create-react-app? Even *I* know that it’s been deprecated. Its own README says as much.)

    And you can’t even review it properly because there’s SO MUCH of it! “Please review my 500-line PR” – oh cool, did you write this or did a robot? Do YOU even understand what it does? Can you explain why it’s using a WeakHashMap here? No? GREAT. AWESOME. LOVE THAT FOR US.

    The worst part? MANAGEMENT LOVES IT. “Look how much faster we’re shipping!” Yeah, we’re shipping, all right. Shipping technical debt at unprecedented velocity. We’re going to be maintaining this garbage for YEARS. Every bug fix is going to be an archaeological expedition trying to figure out what the original generated code was even attempting to do.

    And when something breaks in production – and OH IT WILL – nobody’s going to understand it well enough to fix it quickly. We’ll just… generate more code to patch around it. Code on top of code on top of code, like geological layers of sedimentary garbage accumulating over time.

    Five years from now, we’re all going to be sitting in a “legacy code cleanup” initiative wondering how everything got so incomprehensible so fast.

    But sure, yeah, AI is making us all 10x developers. Can’t wait.

    returns to actually reading the codebase like some kind of dinosaur

  • Building Fast in the Wrong Direction: An AI Productivity Fairy Tale

Oh good, another breathless LinkedIn post about how AI just 10x’d someone’s development velocity. Fantastic. You know what else moves fast? A semi truck with failed brakes coming down a Tennessee mountain. Speed is great until you realize your only hope for survival is a runaway truck ramp.

Runaway truck ramp (image from Public Domain Pictures)

    Here’s the thing nobody wants to admit at their AI productivity [ahem… self-congratulatory gathering]: AI doesn’t matter if you don’t have a clue what to build.

    I’ve watched teams use ChatGPT to crank out five different implementations of features nobody wanted in the time it used to take them to build one feature nobody wanted. Congratulations, you’ve quintupled your output of garbage. Your CEO must be so proud. Maybe you can have ChatGPT restyle your resume to look like VS Code or the AWS Console, but it’s not going to change the experience you have listed on it.

    Going fast in the wrong direction gets you to the wrong place faster. But it’s still the wrong place. You’re just confidently incorrect at scale now.

    Agile Saves You From Your Own Stupidity (Sometimes)

You know why Agile actually works when it works? Not because of the stand-ups or the planning poker or whatever cult ritual your scrum master insists on. It works because it forces you to pause every couple of weeks and ask “wait, is this actually the right thing?”

    Short iterations exist to limit the blast radius of your terrible decisions. When you inevitably realize you’ve been building the wrong thing, you’ve only wasted two weeks instead of six months. It’s damage control, not strategy.

    But sure, let’s use AI to speedrun through our sprints so we can discover we built the wrong thing in three days instead of ten. Efficiency!

    Product Strategy: The Thing You Skipped

    Here’s a wild idea: what if you actually figured out what to build before you built it?

    I know, I know. Product strategy and user research are boring. They don’t give you that dopamine hit of shipping code. They require talking to actual users, which is terrifying because they might tell you your brilliant idea is stupid.

    But you know what product strategy and research actually do? They narrow down your options. They give you constraints. They help you make informed bets instead of random guesses.

    Because here’s the math that AI evangelists keep missing: Improving your odds of success by building the right thing will always beat building the wrong things 10 times faster.

    Building the wrong feature in three days instead of two weeks doesn’t make you 5x more productive. It makes you 5x more wrong. You’ve just accelerated your march into irrelevance.

    AI as a Validation Tool, Not a Strategy Replacement

    Now, I’m not saying AI is useless. It’s actually pretty good at helping you validate ideas faster. Rapid prototyping, quick mockups, testing assumptions—yeah, that stuff is genuinely helpful.

    But AI can’t tell you what to validate. It can’t tell you which customer problem is worth solving. It can’t tell you if your market actually exists or if you’re just building another solution in search of a problem.

    That still requires thinking. Remember thinking? That thing we used to do before we decided to outsource our brains to autocomplete?

    The Uncomfortable Truth

The dirty secret of software development has always been that most of our productivity problems aren’t technical. (See the 1986 essay “No Silver Bullet,” reprinted in that collection of timeless project management essays, The Mythical Man-Month.) They’re strategic. We build the wrong things, for the wrong reasons, at the wrong time. (OK, yes, they’re also communication and coordination problems… fortunately, we have Slack for that <insert eye roll emoji here>)

    AI speeds up the building part. Great. But if you’re speeding toward the wrong destination, you’re just failing faster.

    Maybe instead of celebrating how quickly you can ship features, you should figure out which features are worth shipping in the first place. Crazy thought, I know.

    But hey, what do I know? I’m just a grumpy coworker who thinks you should know where you’re going before you hit the gas.


    Now get back to work. And for the love of god, talk to your users and other humans instead of spending all day chatting with a chatbot that declares you a deity when you correct it.