Why Elite Developers Are Writing Code by Hand Again
The Best Programmers Are Going Backwards on Purpose
Sam Hogan, CEO of inference.net, posted something on April 25, 2026, that made a lot of developers uncomfortable: “All the best programmers I know are starting to write code by hand again.”
Not beginners. Not Luddites. The people building production systems, running engineering orgs, shipping software at scale. They are picking up the keyboard and writing code the old way. Line by line. Thinking through each decision.
His follow-up cut even deeper: “It is too easy to slopify a codebase.”
That word, slopify, captures something specific. It is not that AI writes bad code. It is that AI writes code that is just good enough to merge and just vague enough to become a maintenance problem six months later. The syntax is clean. The tests pass. But the tradeoffs were never considered by a human who understands the system.
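To make that concrete, here is a hypothetical sketch of slop in Python. The function, the FakeDB stand-in, and the schema are all invented for illustration; the pattern is what matters.

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    days_since_login: int

class FakeDB:
    """Stand-in for a real database client, seeded with a few test rows."""
    def __init__(self, rows):
        self.rows = rows

    def query(self, sql):
        return list(self.rows)  # ignores the SQL entirely: fine in a toy, fatal at scale

def get_active_users(db) -> list[User]:
    """Return users who logged in within the last 30 days."""
    users = db.query("SELECT * FROM users")  # pulls the entire table into memory
    return [u for u in users if u.days_since_login <= 30]  # filters in Python, not in SQL

# Three fixture rows, so the test passes and the diff looks clean.
db = FakeDB([User("ana", 3), User("ben", 45), User("cho", 12)])
assert [u.name for u in get_active_users(db)] == ["ana", "cho"]
```

Against a table with ten million rows, the full scan and the in-process filter become the maintenance problem nobody chose on purpose. Nothing here is wrong enough to block the merge. That is exactly the point.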
This is not a rejection of AI tools. It is a response to a problem that gets worse the better the tools get. And if you are using Copilot, Cursor, or Claude Code every day without thinking about this, you are probably already experiencing it. You just have not named it yet.
AI Coding Skill Atrophy Is Real, and It Is Predictable
There is a pattern that shows up in engineering teams that go heavy on AI code generation. It does not happen overnight. It follows a curve.
In the first two to four weeks of relying on AI for most of your code, you stop noticing syntax patterns. The small stuff: variable naming conventions, import organization, the way your team structures function signatures. You used to catch these in review because you wrote them yourself. Now you accept what the model generates and it is close enough.
After two to three months, architectural awareness starts to fade. You stop questioning why a service is structured a certain way. You stop asking whether this abstraction will hold up when the next feature lands. The AI gave you a clean solution for the current task, and you moved on.
By six to twelve months, something more serious has shifted. Your ability to reason about system behavior under constraints has degraded. You can still describe what a system does. But you struggle to predict what happens when it breaks. You struggle to explain why the previous engineer made a specific choice. You have lost the feel for how the pieces interact under pressure.
This is AI coding skill atrophy. It is not hypothetical. Adam Rackis captured the mechanism in a tweet that got 89,000 views: “Senior engineers have to review AI code. But how do junior and mid-level engineers become senior without years of writing code by hand? Smashing a button and tossing output to a senior for review does not make you senior.”
The uncomfortable truth is that the path from junior to senior has always run through struggle. Through the friction of writing code that does not work and figuring out why. Through the experience of choosing an architecture that fights back and having to live with it. AI skips all of that friction. And friction is where judgment lives.
The Muscle Memory You Are Losing
Pratham, a developer at APILayer, framed the problem from a different angle: AI tools like Claude Code widen the gap between junior and senior engineers. Top engineers spot AI mistakes in seconds because they know how the system should work. A junior who lacks that foundation cannot evaluate what the AI generates.
His conclusion was blunt: “If you use AI while learning any skill, you are outsourcing your brain.”
That sounds extreme. But think about what actually happens when you write code by hand versus when you accept AI-generated code.
When you write code by hand, you sit with the problem. You feel the constraints. You try an approach, realize it does not handle the edge case, and rethink. You read the existing code to understand why it was written that way. You make explicit tradeoffs: performance versus readability, simplicity versus flexibility. Every decision is conscious.
When you accept AI output, you skip straight to evaluation. But evaluation without the experience of creation is shallow. You can check if the code compiles. You can verify it handles the obvious cases. But you miss the subtle problems because you never wrestled with the design space yourself.
This is similar to the difference between being shown a chess position cold and having played the game that produced it. If you played the game, you understand the logic behind each piece’s placement. If you just see the position cold, you can analyze what is there, but you miss the tensions and threats that only make sense in context.
The AI code generation tools your team uses today produce code that looks correct on the surface. The question is whether you have the depth to catch the problems that hide underneath. And that depth comes from the practice of writing, not just reading.
When AI tools fluctuate in quality, and they do more often than you would expect, the engineers with strong hand-coding foundations are the ones who notice. Everyone else just merges the pull request.
What Elite Developers Actually Do Differently
The engineers who are writing code by hand are not abandoning AI. That would be foolish. They are using it selectively. And the selectivity is what makes them effective.
Here is what the pattern looks like in practice:
They delegate the routine. Boilerplate code, standard CRUD operations, test scaffolding, configuration files. This is where AI shines. The patterns are well-established, the correctness criteria are clear, and the consequences of a subtle mistake are low.
They write the critical parts themselves. Domain logic, error handling for complex failure modes, data pipeline transformations, anything where the tradeoffs are specific to their system and the cost of a wrong decision compounds over time. These are the places where judgment matters most, and judgment only stays sharp through use.
They review AI code as if a junior wrote it. Not with contempt, but with the same careful attention you would give to someone who is technically capable but does not know your system’s history. What assumptions is this code making? What context is it missing? What will happen to this when the requirements change next quarter?
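Here is what that review looks like in practice, sketched in Python. The helper and its flaws are invented, but the annotations are the kind a reviewer with system context would leave.

```python
from datetime import datetime, timezone

# A hypothetical AI-generated helper, annotated with the questions a reviewer
# who knows the system's history would ask.

def is_expired(expires_at: str) -> bool:
    """Return True if an ISO 8601 timestamp is in the past."""
    expiry = datetime.fromisoformat(expires_at)
    # What assumptions is this making? That every producer writes ISO 8601.
    # Does the legacy importer still emit epoch seconds into this column?
    # What context is it missing? datetime.now() is naive, server-local time.
    # If expires_at is stored as UTC, this check silently drifts by the UTC
    # offset; if it carries an offset, the comparison raises TypeError instead.
    return datetime.now() > expiry

def is_expired_reviewed(expires_at: str) -> bool:
    """The version that survives review: the UTC convention is explicit."""
    expiry = datetime.fromisoformat(expires_at)
    if expiry.tzinfo is None:
        expiry = expiry.replace(tzinfo=timezone.utc)  # documented team convention
    return datetime.now(timezone.utc) > expiry
```

The first version compiles, reads cleanly, and passes any test that runs in UTC. Only a reviewer asking what the code assumes, not just what it does, catches the difference.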
This selective approach is the opposite of vibe coding, where you trust the AI output and hope it works. It is deliberate, context-aware, and it prioritizes long-term codebase health over short-term velocity.
The difference between an engineer who uses AI effectively and one who is slowly being replaced by it comes down to one question: do you still understand why the code looks the way it does? Not what the code does. Why it was written this way instead of another way.
A Deliberate Practice Framework for the AI Era
You do not have to write everything by hand. That is not realistic and it is not necessary. But you need a system for maintaining the skills that AI cannot replace. Here is a framework built from the patterns of the engineers who are getting this right.
Strategy 1: Hand-code one critical component per week.
Pick the most important piece of logic you build each week and write it without AI assistance. Not the whole feature. Just the core logic. The authorization check. The state machine transition. The data validation pipeline.
The goal is not efficiency. It is resistance. You want to feel the problem. You want to sit with the constraints and make conscious decisions about tradeoffs. That thirty minutes of friction is what keeps your judgment calibrated.
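As a sketch of what qualifies, here is a hand-written state machine transition in Python. The states and events are invented; the point is that every legal transition is a conscious decision.

```python
# Order lifecycle as an explicit transition table. Anything not listed is
# illegal by construction, which is itself a design decision.
LEGAL_TRANSITIONS = {
    ("pending", "pay"): "paid",
    ("pending", "cancel"): "cancelled",
    ("paid", "ship"): "shipped",
    # Tradeoff made consciously: cancelling after payment routes through a
    # refund state instead of jumping straight to cancelled.
    ("paid", "cancel"): "refunding",
}

def transition(state: str, event: str) -> str:
    """Apply an event to a state, rejecting anything not explicitly allowed."""
    try:
        return LEGAL_TRANSITIONS[(state, event)]
    except KeyError:
        # Fail loudly: an illegal transition is a bug upstream, not a no-op.
        raise ValueError(f"illegal transition: {event!r} from state {state!r}")

assert transition("pending", "pay") == "paid"
assert transition("paid", "cancel") == "refunding"
```

An AI will happily generate a table like this for you. The value is in being the person who decided that paid plus cancel means refunding, because that decision is where the domain lives.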
Strategy 2: Review AI output as if you built it.
When you review AI-generated code, whether it is your own or a teammate’s, do not just check if it works. Ask yourself: would I have written it this way? If not, why not? Is the AI’s approach better, or does it just look plausible?
This is harder than it sounds. AI-generated code has a confident surface. The variable names are reasonable. The structure looks clean. But underneath, there are often assumptions that do not match your system’s reality. Training yourself to see past the surface is the core evaluation skill.
Strategy 3: Practice code evaluation as a standalone skill.
Code review is not just a quality gate. It is the primary way experienced engineers build and maintain engineering judgment. When you review someone else’s code, you see different approaches to the same problem. You develop the pattern recognition that lets you instantly sense when something is off.
If your team’s code review process has devolved into rubber-stamping, that is a signal. The engineers who grow fastest treat every review as a chance to strengthen their ability to evaluate solutions, on both sides of the pull request. And AI-generated code gives you more to review than ever.
Strategy 4: Build before you prompt.
Before you ask AI to solve a problem, spend five minutes thinking about how you would solve it. You do not need to write the code. Just sketch the approach in your head. What are the key components? What are the tradeoffs? Where are the edge cases?
Then use AI. Compare what it produces with your mental model. Where did it match? Where did it diverge? The divergence points are where you learn the most, because they reveal either a gap in your thinking or a gap in the AI’s understanding of your context.
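One way to make the comparison concrete is to write your sketch down as the docstring of a stub before you prompt, then diff the AI’s output against your own decisions. Everything in this Python example is invented for illustration.

```python
def dedupe_events(events):
    """My five-minute sketch, written before prompting:
    - key on (user_id, type, minute bucket), not the full payload
    - keep the FIRST event per bucket: the audit trail cares about origin
    - must stream: the event log does not fit in memory
    """
    seen = set()
    for e in events:  # generator in, generator out: the streaming constraint
        key = (e["user_id"], e["type"], e["ts"] // 60)
        if key not in seen:
            seen.add(key)
            yield e

events = [
    {"user_id": 1, "type": "click", "ts": 60},
    {"user_id": 1, "type": "click", "ts": 90},   # same minute bucket: dropped
    {"user_id": 1, "type": "click", "ts": 130},  # new bucket: kept
]
assert [e["ts"] for e in dedupe_events(events)] == [60, 130]
```

If the AI’s version keeps the last event per bucket, or materializes the whole log as a list, that divergence is the lesson: one of you is wrong about the requirements, and now you know to find out which.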
The Skill Atrophy Numbers Nobody Is Tracking
Here is what makes this problem insidious: there is no dashboard that shows you your engineering judgment declining.
Based on observations across multiple engineering teams that shifted to heavy AI-assisted development, a consistent pattern emerges. Teams that adopted AI code generation for 80%+ of their output without deliberate skill maintenance saw measurable effects within predictable timeframes.
Code review catch rates dropped. In one team of eight engineers, the number of substantive code review comments (comments that identified actual issues, not style nits) fell by roughly 40% over a six-month period. The reviews were still happening. They were just shallower. Engineers were accepting AI-generated patterns they would have questioned a year earlier.
Architecture decision quality shifted. When asked to design a new service, engineers who had spent six months primarily prompting AI tools produced designs that were more generic and less tailored to their system’s specific constraints. They defaulted to patterns the AI would suggest rather than patterns that fit their production reality.
Debugging speed degraded for unfamiliar code. Engineers who wrote less code by hand took measurably longer to diagnose production issues in code they had not personally written. The mental model that comes from creating code, the intuition for “this is probably where the bug is,” had weakened.
None of these effects showed up in sprint velocity. The teams were shipping features as fast as ever. Faster, actually. The atrophy was invisible in every metric that engineering managers typically track. It only showed up when something went wrong, when a production incident required deep system reasoning, or when an architectural decision had consequences that a generic AI pattern could not anticipate.
This Is Not Nostalgia. It Is Strategy.
Writing code by hand in 2026 is not about rejecting progress. The engineers who are doing it are the same ones who use AI tools more effectively than anyone else on their teams. They write code by hand precisely because it makes them better at working with AI.
Think of it like a professional musician who still practices scales. They have access to digital tools that can generate any melody. But the scales keep their fingers precise and their ears trained. The practice is not the performance. It is what makes the performance possible.
The same principle applies to engineering. The code you write by hand is not about the code itself. It is about maintaining the neural pathways that let you evaluate whether AI-generated code actually solves the problem, handles the edge cases, and will not create a maintenance nightmare in six months.
When AI code quality varies, and it always does, the engineers who have maintained their judgment through deliberate practice are the ones who catch the problems. Everyone else just merges the pull request.
The question is not whether to use AI for coding. That ship has sailed. The question is whether you are maintaining the judgment to use it well. The elite developers have already answered that question. They are writing code by hand again.
Not all of it. Just the parts that matter.
Ready to sharpen your engineering skills?
Practice architecture decisions, code review, and system design with AI-powered exercises. 5 minutes a day builds judgment that compounds.
Request Early Access

Small cohorts. Personal onboarding. No credit card.