Full disclosure: An LLM helped me write this post. I am part of the problem.
I have been running my small business, Bastion Data, for 8 years now. That’s a pretty good run for going out on your own, I think. I have been mostly a one-person shop, with a few expert contractors brought in to help me with things here and there. One of the great milestones that always seemed like a natural progression was bringing in a junior developer to train up and help me get work done for my customers. In the context of my company, it would be much more like a traditional apprenticeship: I’m a craftsman and businessman bringing in a novice I’d have to spend a good bit of my time getting up to speed. That milestone was removed the second I had an LLM write semi-functional code.
As I started to use and figure out the best way to accelerate my work with Claude, ChatGPT, and other LLMs, it occurred to me that I was interacting with the AI the same way that I would with a junior. I’d come up with the bite-size task I needed to accomplish, design the logical solution that I wanted, and then write up a detailed description of the problem and the solution I wanted. The AI would think for a minute and propose a solution. It would also provide a description of the code it was proposing (or modifications it wanted to make), and more importantly why it wanted to do that. I’d review the code, see if I was happy with it, and either propose modifications, ask for a different approach, or accept the changes. Lots of times it would take many back-and-forths to come up with an acceptable solution.
This interaction model was producing great results, and I began to default to starting most solutions in my work by writing up a problem and solution description and throwing it at the AI. I was aware from the start that this was exactly the way I’d work with a junior developer who I didn’t yet trust with any autonomy.
Coding with an LLM is like having a blazing fast junior developer who knows everything but has no wisdom. And will never improve.
So, when in full flow in a coding session with an LLM, I can run the full task, feedback, re-task cycle virtually without waiting. While an actual developer would go off for an hour or two, the AI does it in seconds. The cognitive overhead of getting so much done so quickly is pretty high, but it can turn a full day of junior work into 30 minutes to an hour of supervisory work.
And therein lies the problem. I now have no incentive to bring in a junior. I have no incentive to train up another engineer or developer. I have no incentive to pay a salary for work that I can get done with a $15/month subscription. I accelerated, but the ladder is pulled up behind me.
Perhaps the busy work won’t matter. I might flatter myself in thinking that the low-level grinding I did when I was an apprentice laid the groundwork for my skills, but there’s no reason to think what’s considered low-level will always be the same. I conceptually understand assembly code and foundational systems design, but I never did busy work with them. Perhaps algorithms and systems design will be abstracted away by LLM coding the same way that I run interpreted languages instead of microprocessor code. I think, though, that the “softness” of vibe coding goes a level further than that: trusting human knowledge (and all its bugs) funneled through a probability distribution is a different kind of abstraction altogether.
Regardless of the actual work that gets abstracted away in the future of software, LLM assisted coding has removed the need for me to hire an apprentice. I could always use another senior developer, but I don’t see a future need for a bright-green junior developer. And that scares the crap out of me for anybody wanting to start a career as a developer.