Tag: claude

  • Is Your CTO Dabbling in LLM Cults? Here Are the Signs

    Look, I’m not saying your CTO has been compromised by the Church of the Latter-day Tokens, but if they’ve started using “MBiC” unironically in Slack, we need to talk.

    Here are common acronyms your CTO might start using and their LLM cult meanings:

    MBiC – “My Brother in Copilot/Cursor/Claude”

    • Normal people think: My Brother in Christ, the Gen Z riff that borrows Christian phrasing regardless of the recipient's actual religion
    • What they mean: A term of endearment for fellow AI-assisted developers
    • Red flag level: 🚩🚩 (Yellow – concerning but not terminal)

    LGTM – “Let GPT Train Me”

    • Normal people think: Looks Good To Me
    • What they mean: They’ve stopped learning and just accept whatever the spicy autocomplete says
    • Red flag level: 🚩🚩🚩 (Orange – intervention recommended)

    YOLO – “Your Output’s Likely Off”

    • Normal people think: You Only Live Once
    • What they mean: Dismissive response when someone questions AI-generated code that definitely has bugs
    • Red flag level: 🚩🚩🚩🚩 (Red – quarantine immediately)

    SMH – “Seeking More Hallucinations”

    • Normal people think: Shaking My Head
    • What they mean: When the AI’s first answer wasn’t convincing enough, so they’re regenerating
    • Red flag level: 🚩🚩🚩 (Orange – they know it’s wrong but persist)

    IMHO – “In My HuggingFace Opinion”

    • Normal people think: In My Humble Opinion
    • What they mean: About to cite some open-source LLM as an authority on architecture decisions
    • Red flag level: 🚩🚩🚩🚩 (Red – open source models have opinions now)

    TBH – “Tokens Be Hallucinating”

    • Normal people think: To Be Honest
    • What they mean: Acknowledging the AI made something up, but they’re going with it anyway
    • Red flag level: 🚩🚩🚩🚩🚩 (Critical – they’ve accepted hallucinations as reality)

    FWIW – “Fine-tuned With Insufficient Weights”

    • Normal people think: For What It’s Worth
    • What they mean: Excuse for why their custom model is confidently wrong about everything
    • Red flag level: 🚩🚩🚩🚩 (Red – they fine-tuned something)

    IDK – “Inference Definitely Knows”

    • Normal people think: I Don’t Know
    • What they mean: They don’t know, but Claude/GPT probably does, hold on
    • Red flag level: 🚩🚩 (Yellow – at least they’re honest about outsourcing cognition)

    RTFM – “Run The F***ing Model”

    • Normal people think: Read The F***ing Manual
    • What they mean: Why read documentation when you can just ask an AI that was trained on it?
    • Red flag level: 🚩🚩🚩🚩🚩 (Critical – manuals are now deprecated)

    WFH – “Working From HuggingFace”

    • Normal people think: Working From Home
    • What they mean: Entire day spent on model repos instead of actual work
    • Red flag level: 🚩🚩🚩 (Orange – at least they’re still technically working?)

    BRB – “Be Right Back (asking Claude)”

    • Normal people think: Be Right Back
    • What they mean: Every conversation now has a 30-second AI consultation pause
    • Red flag level: 🚩🚩🚩 (Orange – human-to-human communication deprecated)

    AFAIK – “According to Fine-tuned AI Knowledge”

    • Normal people think: As Far As I Know
    • What they mean: They asked an LLM and stopped researching
    • Red flag level: 🚩🚩🚩🚩 (Red – epistemology has left the building)

    TL;DR – “Too Long; Didn’t Rewrite (with AI)”

    • Normal people think: Too Long; Didn’t Read
    • What they mean: Everything must now be AI-summarized, including two-sentence emails
    • Red flag level: 🚩🚩🚩 (Orange – reading comprehension outsourced)

    IIRC – “If I Regenerate Context”

    • Normal people think: If I Recall Correctly
    • What they mean: They’ve lost track of which conversation was with humans vs. chatbots
    • Red flag level: 🚩🚩🚩🚩🚩 (Critical – reality boundaries dissolving)

    FYI – “Feed Your Inference”

    • Normal people think: For Your Information
    • What they mean: Attaching 47 documents to “give the AI context” for a simple question
    • Red flag level: 🚩🚩🚩 (Orange – prompt engineering has become lifestyle)

    NGL – “Not Gonna Lint”

    • Normal people think: Not Gonna Lie
    • What they mean: AI wrote it, AI approved it, linting is for people who don’t trust the silicon
    • Red flag level: 🚩🚩🚩🚩🚩 (Critical – code quality gates removed)

    BTW – “Before Training Weights”

    • Normal people think: By The Way
    • What they mean: Referencing the mythical pre-LLM era when people coded with their actual brains
    • Red flag level: 🚩 (Green – nostalgia is healthy)

    ICYMI – “In Case Your Model Ignored”

    • Normal people think: In Case You Missed It
    • What they mean: Reposting because they think you’re also using AI to read Slack
    • Red flag level: 🚩🚩🚩 (Orange – assumes everyone else is also AI-dependent)

    Warning Signs Your CTO Has Fully Converted:

    1. Begins sentences with “As an AI language model” in standup
    2. Refers to the engineering team as “the training data”
    3. Insists all PRs include a “prompt” section explaining what was asked
    4. Says “regenerate that thought” when they don’t like someone’s opinion
    5. Measures performance reviews in “tokens per second”
    6. Has replaced their profile picture with a neural network diagram
    7. Sends meeting agendas as “system prompts”
    8. Refers to coffee breaks as “context window refreshes”
    9. Calls the office “the inference cluster”
    10. Has started ending emails with “Stop sequence: [END]”

    What To Do If Your CTO Is Converting:

    Stage 1 (Early): Gentle reminders that humans still write code sometimes

    Stage 2 (Moderate): Intervention involving unplugged coding exercises and whiteboard sessions

    Stage 3 (Advanced): Emergency contact with the CTO's former mentors from the pre-LLM era

    Stage 4 (Terminal): Accept your new AI overlords and start learning prompt engineering

    The Reality Check:

    Look, AI coding assistants are genuinely useful tools. I use them. You probably use them. But when your leadership starts communicating primarily in LLM-cult acronyms and treating the AI as a team member with voting rights in architecture decisions, we’ve crossed from “productivity tool” to “cargo cult.”

    The warning sign isn’t that they’re using AI. It’s that they’ve stopped being able to tell where the AI stops and their own judgment begins.

    If your CTO asks you to “vibe check the embeddings” one more time, it might be time to update your LinkedIn.

    MBiC (My Buddy in Coding, the normal way),

    Grumpy


    Is your CTO showing signs of LLM cult membership? Drop a 👇 in the comments with the weirdest AI-related acronym you’ve heard in your workplace.

    Disclaimer: No CTOs were harmed in the making of this post. Several were mildly roasted. All AI assistants cited gave their consent to be satirized. Probably. I didn’t actually ask them. They’re just autocomplete.

  • Building Fast in the Wrong Direction: An AI Productivity Fairy Tale

    Oh good, another breathless LinkedIn post about how AI just 10x’d someone’s development velocity. Fantastic. You know what else moves fast? A semi truck in the mountains of Tennessee with brakes that have failed. Speed is great until you realize your only hope for survival is a runaway truck ramp.

    [Image: a runaway truck ramp, from public domain pictures]

    Here’s the thing nobody wants to admit at their AI productivity [ahem… self-congratulatory gathering]: AI doesn’t matter if you don’t have a clue what to build.

    I’ve watched teams use ChatGPT to crank out five different implementations of features nobody wanted in the time it used to take them to build one feature nobody wanted. Congratulations, you’ve quintupled your output of garbage. Your CEO must be so proud. Maybe you can have ChatGPT restyle your resume to look like VS Code or the AWS Console, but it’s not going to change the experience you have listed on it.

    Going fast in the wrong direction gets you to the wrong place faster. But it’s still the wrong place. You’re just confidently incorrect at scale now.

    Agile Saves You From Your Own Stupidity (Sometimes)

    You know why Agile actually works when it works? Not because of the stand-ups or the planning poker or whatever cult ritual your scrum master insists on. It works because it forces you to pause every couple of weeks and ask “wait, is this actually the right thing?”

    Short iterations exist to limit the blast radius of your terrible decisions. When you inevitably realize you’ve been building the wrong thing, you’ve only wasted two weeks instead of six months. It’s damage control, not strategy.

    But sure, let’s use AI to speedrun through our sprints so we can discover we built the wrong thing in three days instead of ten. Efficiency!

    Product Strategy: The Thing You Skipped

    Here’s a wild idea: what if you actually figured out what to build before you built it?

    I know, I know. Product strategy and user research are boring. They don’t give you that dopamine hit of shipping code. They require talking to actual users, which is terrifying because they might tell you your brilliant idea is stupid.

    But you know what product strategy and research actually do? They narrow down your options. They give you constraints. They help you make informed bets instead of random guesses.

    Because here’s the math that AI evangelists keep missing: Improving your odds of success by building the right thing will always beat building the wrong things 10 times faster.

    Building the wrong feature in three days instead of two weeks doesn’t make you 5x more productive. It makes you 5x more wrong. You’ve just accelerated your march into irrelevance.

    AI as a Validation Tool, Not a Strategy Replacement

    Now, I’m not saying AI is useless. It’s actually pretty good at helping you validate ideas faster. Rapid prototyping, quick mockups, testing assumptions—yeah, that stuff is genuinely helpful.

    But AI can’t tell you what to validate. It can’t tell you which customer problem is worth solving. It can’t tell you if your market actually exists or if you’re just building another solution in search of a problem.

    That still requires thinking. Remember thinking? That thing we used to do before we decided to outsource our brains to autocomplete?

    The Uncomfortable Truth

    The dirty secret of software development has always been that most of our productivity problems aren’t technical. (See “No Silver Bullet,” Fred Brooks’ 1986 essay, reprinted in that collection of timeless project management essays, The Mythical Man-Month.) They’re strategic. We build the wrong things, for the wrong reasons, at the wrong time. (OK, yes, they’re also communication and coordination problems… fortunately, we have Slack for that <insert eye roll emoji here>)

    AI speeds up the building part. Great. But if you’re speeding toward the wrong destination, you’re just failing faster.

    Maybe instead of celebrating how quickly you can ship features, you should figure out which features are worth shipping in the first place. Crazy thought, I know.

    But hey, what do I know? I’m just a grumpy coworker who thinks you should know where you’re going before you hit the gas.


    Now get back to work. And for the love of god, talk to your users and other humans instead of spending all day chatting with a chatbot that declares you a deity when you correct it.