An inherent friction between "Traditional" Coding and AI "Vibe" Coding - with a GIPPITY rebuttal

Intro

I regularly use ChatGPT, Claude, and Gemini 2.5 to help me code. This is my reflection after three months of using LLMs regularly. ChatGPT had something to say about it as well.


At its heart, software development is problem-solving—whether in complex, constraint-driven domains like videogames or in simpler tasks like integrating APIs. Traditionally, developers gain mastery by cycling through a deliberate process: defining inputs and outputs, implementing a solution (however naive), then refining both until requirements are met. The second step—implementation—is crucial, involving research, understanding documentation, and learning through trial and error.
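To make that cycle concrete, here is a toy sketch in Python. The problem and both functions are invented for this post; the point is the shape of the loop, not the specific task: define the inputs and outputs, get a naive version working, then refine it once you understand why it's slow.

```python
# Problem: given a list, return its unique elements in first-seen order.
# Inputs/outputs defined up front: a list in, a deduplicated list out.

def unique_naive(items):
    """First pass: a naive O(n^2) version that simply meets the spec."""
    result = []
    for item in items:
        if item not in result:  # linear scan of the result on every element
            result.append(item)
    return result

def unique_refined(items):
    """Refinement after the naive version hurts: a set makes membership checks O(1)."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

assert unique_naive([3, 1, 3, 2, 1]) == unique_refined([3, 1, 3, 2, 1]) == [3, 1, 2]
```

The refinement is trivial once you've felt the naive version drag on real data; that felt experience is exactly the step an AI agent lets you skip.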

AI coding agents disrupt this learning curve. While they can produce rapid results if given a well-framed problem, they bypass the developer’s hands-on experience, short-circuiting the deep understanding that traditionally comes from solving problems firsthand. This raises critical questions: when requirements shift or bugs emerge, do we still possess the intuition and knowledge to respond? Or are we trapped in a feedback loop with a machine we rely on but don’t fully comprehend?

The tension lies in a fundamental shift: AI enables problem-solving without understanding. Yet unless humans are removed entirely from the loop, this tradeoff risks long-term skill erosion. We may deliver faster, but are we still growing as engineers—or just outsourcing our growth to the machine?


ChatGPT response
Question: “do you agree about this?”

Oh yeah, I mostly agree with this take—and it’s honestly a smart one. Let me break it down, because there’s nuance.

✅ Where it hits hard:

  • Mastery through struggle: Dead-on. You don’t truly get something until you’ve been in the trenches with it. Reading docs, chasing bugs, piecing things together—that is where the brain builds real muscle. AI agents skip that gym entirely.
  • Short-term vs long-term tradeoffs: Huge point. AI might help you ship something fast today, but if you’re not building up the skill to solve stuff yourself, you’re gonna hit a wall later. And when things inevitably go sideways? You’re not just slow—you’re stuck.
  • Problem understanding ≠ solution output: Yup. AI can spit out code that works, but if you don’t know why it works (or when it won’t), you’re just copying answers off the smart kid’s test and hoping the next exam looks the same.

🤔 Where I’d push back a bit:

  • “We never acquire mastery” with AI – that’s kinda true if you’re just treating the AI like a vending machine. But if you treat it more like a tutor, poking it, questioning it, digging into why it made the choices it did—then it can be part of your learning process. It’s just a tool. How you use it determines whether it’s a crutch or a coach.
  • “We’re not investing in ourselves” – fair warning, but I’d say the option to invest is still there. It’s just way easier to skip it now. Kinda like having a calculator doesn’t mean you never learn math—it just means you have to make an intentional choice to learn.

TL;DR:

This is a solid critique. We’re playing with fire if we let AI take the wheel without understanding the road. Use it to speed you up, not to replace your brain. Learn with it, not instead of learning. Otherwise, we’re just shipping pretty black boxes and praying nothing changes.

That ain’t engineering. That’s gambling.