AI makes it easy to skip the learning that makes you effective. After teaching 45+ bootcamps and writing 350K lines of production code, here’s what I’ve learned about protecting the struggle that builds real capability.
I have watched over 3,000 people learn to code.
Over the past five years, I’ve taught 45+ bootcamps at General Assembly. I’ve seen students go from knowing nothing about Python to deploying their first application. I also co-authored an O’Reilly book on prompt engineering and built a Udemy course that’s reached 304,000+ learners.
Here is the one pattern I’ve seen more than any other: the students who struggle the most in week two are the ones who build the best projects in week twelve.
Not always. But reliably enough that I stopped trying to eliminate the struggle and started trying to protect it.
And now AI is threatening to take it away entirely.
The Shortcut That Costs More Than It Saves
Every bootcamp cohort has a few students who figure out early that they can paste their exercises into ChatGPT and get working code back in seconds. At first, they look like they’re ahead. Their assignments are done first. Their code runs. They move on.
Then week six arrives. The assignments get harder. The problems require understanding how pieces connect, not just what each piece does in isolation. And suddenly those students are stuck. Not because the problems are impossible, but because they never built the mental map that lets you navigate unfamiliar territory.
The students who struggled through weeks two and three? They built that map. They know what an error message actually means because they’ve seen dozens of them. They can read a stack trace because they’ve had to. They have intuition about where a bug might live because they’ve hunted for bugs in the dark.
That intuition is not a nice-to-have. It is the product.
This Problem Has Gotten Worse
AI coding tools have made this pattern orders of magnitude more dangerous. In a bootcamp, the shortcut students eventually hit a wall in a safe environment where they can recover. In professional engineering, the wall might be a production incident at 2am.
Anthropic’s own research found that developers using AI tools scored 17% lower on code comprehension tests compared to those who wrote code manually. Addy Osmani calls this “skill atrophy,” and the name is precise. Atrophy is not a sudden failure. It is a slow erosion that feels fine right up until it doesn’t.
The most insidious version of this is what I think of as the comprehension gap. You prompt an AI agent. It generates 200 lines of code. You read through it, nod along, and merge it. But if I asked you to reproduce the key logic from memory, or explain why that approach was chosen over two alternatives, could you?
If the answer is no, you did not write that code. You approved it. Those are different activities with different learning outcomes.
Why This Matters More Than You Think
I operate under a personal framework I call compound engineering. The core idea is simple: the most valuable output of engineering work is not the code itself, but the capability you build while writing it. Infrastructure compounds. Cognition compounds. Each problem you solve makes the next problem cheaper to solve.
AI disrupts this in a subtle way. It doesn’t prevent you from building things. It prevents you from building the thing that makes you better at building things.
When you generate code instead of understanding code, you skip the step where your brain builds connections between patterns. You get the output but miss the compounding. And the cost of this is invisible in the short term. Your velocity stays high. Your PRs get merged. Your sprints look healthy.
But six months from now, when you need to debug a system you “built” but never truly understood, the bill comes due. And it compounds in the wrong direction. Less understanding leads to more dependency on AI, which leads to less understanding.
I have seen this loop in my own work. I have ~350K lines of production code across my projects. The parts I understand deeply are the parts I can extend, debug, and refactor with confidence. The parts where I leaned too heavily on generation without comprehension are the parts that scare me at 2am.
What Actually Works
I am not anti-AI. I use Claude Code every day. I literally co-authored an O’Reilly book on prompt engineering. But I’ve found a few practices that let me use AI tools without giving up the learning that makes me effective.
1. Explore before you implement
This is the single most important habit I’ve developed. Before I ask an AI to write any code, I use it to ask questions. How does this part of the codebase work? What patterns are already established? What are the trade-offs between approach A and approach B?
The goal of this phase is understanding, not code. I am using the AI as a knowledge assistant, not a code generator. Only after I have a mental model of the problem space do I switch to implementation.
This takes maybe five extra minutes. It saves hours of debugging code I don’t understand.
2. Stewardship over authorship
I’ve stopped caring about who (or what) wrote the code. What matters is whether someone can explain it, debug it, and extend it. That someone needs to be a human.
When AI generates code faster than humans can reason about it, ownership has to shift from “I wrote this” to “I ensure this system behaves correctly.” That means investing in the infrastructure around the code: observability, tests, staged rollouts, monitoring. The safety net moves from comprehension of every line to confidence in the system’s behavior.
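What that behavioral safety net can look like in miniature: a hedged sketch, not a prescription. The `apply_discount` function and its rules are hypothetical; the point is that the assertions pin down observable behavior I am willing to vouch for, regardless of who or what wrote the function body.

```python
# Sketch: behavior-level tests as the safety net around generated code.
# apply_discount is a hypothetical function; the tests assert observable
# behavior, not implementation details.

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, with inputs clamped to valid ranges."""
    if price < 0:
        raise ValueError("price must be non-negative")
    percent = max(0.0, min(percent, 100.0))  # clamp bad percents, don't crash
    return round(price * (1 - percent / 100), 2)

# Behavior I insist on, whoever authored the body:
assert apply_discount(100.0, 20.0) == 80.0
assert apply_discount(100.0, 150.0) == 0.0    # over-discount clamps to 100%
assert apply_discount(100.0, -5.0) == 100.0   # negative discount clamps to 0
try:
    apply_discount(-1.0, 10.0)
    raise AssertionError("expected ValueError for negative price")
except ValueError:
    pass
```

If the AI later regenerates or refactors the body, these tests are the contract that tells me whether I can still vouch for the system's behavior.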
3. Teach to learn
This one comes directly from my bootcamp experience. When I have to explain a concept to students, I understand it at a deeper level than when I just use it. The same principle applies when working with AI.
After an AI generates a solution, I try to explain it back. Not to the AI. To myself, or in a code review comment, or in documentation. If I cannot explain why this approach was chosen, I do not understand it well enough to own it.
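The explain-it-back step often ends up living in the code itself. A small illustrative example (the function and the rationale are mine, not from any particular generated diff): the docstring records *why* this approach beat the alternative, which is exactly the thing I cannot write unless I actually understand the code.

```python
# Illustrative explain-it-back: the docstring captures the "why,"
# which is the part you can only write if you understand the code.

def dedupe_preserving_order(items: list) -> list:
    """Remove duplicates while keeping first-seen order.

    Why this approach: dict keys preserve insertion order in Python 3.7+,
    so dict.fromkeys gives an O(n) dedupe in one expression. A plain set
    would lose ordering; a manual seen-set loop works but is more code
    for the same behavior.
    """
    return list(dict.fromkeys(items))

assert dedupe_preserving_order(["a", "b", "a", "c"]) == ["a", "b", "c"]
```

If I cannot fill in that "why" paragraph, I have not finished the work, no matter how green the tests are.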
4. Protect the struggle for things that matter
Not every piece of code deserves deep understanding. Boilerplate, configuration, scaffolding? Let AI handle those. Save your cognitive effort for the parts that actually matter: the business logic, the architecture decisions, the error handling that determines whether your system fails gracefully or catastrophically.
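To make "fails gracefully" concrete, here is a minimal sketch (the function and file path are hypothetical): the error handling is the part worth understanding deeply, because it decides whether a missing or corrupt config file degrades the system to sane defaults or takes it down.

```python
# Sketch of error handling worth owning: degrade to defaults on failure
# (graceful) rather than crashing the whole process (catastrophic).
# load_settings and its path are hypothetical names for illustration.
import json

def load_settings(path: str, defaults: dict) -> dict:
    """Load settings from a JSON file, falling back to defaults on any failure."""
    try:
        with open(path) as f:
            loaded = json.load(f)
    except (OSError, json.JSONDecodeError):
        return dict(defaults)  # graceful: missing/corrupt file -> defaults
    return {**defaults, **loaded}  # loaded values override defaults

settings = load_settings("/nonexistent/settings.json", {"retries": 3})
assert settings == {"retries": 3}
```

The happy path here is boilerplate an AI can write; the `except` clause is the part I want to have thought through myself.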
The judgment of knowing which is which? That only comes from experience. Which means you need to keep accumulating experience, even when AI makes it tempting to skip it.
The Honest Part
I won’t pretend I follow these practices perfectly. Some days I’m tired, the deadline is real, and I let Claude write something I should have thought through more carefully. I accept the trade-off in the moment, knowing it creates a small debt.
The students who impressed me most in my bootcamps were never the ones who avoided all shortcuts. They were the ones who knew when they were taking a shortcut and made a conscious choice about it.
That’s the skill that matters now. Not avoiding AI. Not embracing it uncritically. Knowing when to lean in and when to slow down, and being honest with yourself about which one you’re doing.
The struggle was never the point. The capability it builds was always the point. AI didn’t change that. It just made it easier to forget.

