AI might be useless in any hands, useful in any hands, or useful only when used skillfully, and I feel like a lot of people want that last option to be true. Hence "Prompt Engineering", "Context Engineering", "Agentic Engineering", "Harness Engineering", etc., like guys, we may not be writing code, but we are still doing *Engineering*. This always comes with some phrase like "Humans steer. Agents execute."
The idea that you design a system and the agent implements your vision really appeals to people, imo because it means we're still doing the *Engineering*.
I'm not sure this is wrong, but not sure it's right either.
Per Boris Cherny, creator of Claude Code: "Someone has to prompt the Claudes, talk to customers, coordinate with other teams, decide what to build next. Engineering is changing and great engineers are more important than ever." Is managing a team of junior robots really a skill you can be bad or good or great at? Imo managers definitely have to deal with the administrative side of work: email, meetings, and so forth, and a manager shielding you from that stuff is a genuine blessing. I'm more skeptical of managers as software architects, though, which is where engineers seem to want to slot themselves in above the AI.
I've only worked at two companies, both of which started in hardware and belatedly created software for it (with some jankiness), but I've really appreciated when managers have had faith that, having worked through low level problems and gained familiarity from them, I'm equipped to make high level choices. For example, I once had to generate messages in a particular protocol for another team to test against. That meant a fair amount of staring at PDFs and Wireshark, and honestly maybe this is the sort of thing Claude Code would just grind through. My counterpart on the other team and I were struggling because neither of us could really verify our own code worked without the other's to run against. In a meeting where the other team's manager was asking about deadlines and whether our team could try something else or add people to finish faster, my manager said (paraphrasing): look, David's your guy, nobody else is familiar with the protocol, the best thing we can do is end the meeting and let them figure it out. Of course this is a happy story and we did get it done. The moral is that managers are valuable for removing non-technical roadblocks, not for contributing technical design.
Don't take it from me though. Boris apparently used to work at Instagram, and at one point moved to Japan; due to time zones, he switched from attending meetings to writing code: "I had essentially turned myself into an intern, coding 80% of the day. This was a powerful change, which let me identify and execute on opportunities that others simply couldn't." His low level familiarity with the codebase let him identify opportunities! He roped in other people to communicate and coordinate, sort of managerish functions, but the design part of management came from his knowledge of the code.
This is my big sticking point: design is bottom up, not top down. Rather than some boss starting with a vision and the software minions implementing it, the low level work you might dismiss as tedious is actually where original ideas come from.
Tony Hoare won a Turing award "For his fundamental contributions to the definition and design of programming languages". He made the "billion-dollar mistake" of adding null. His Wikipedia page lists: "Hoare developed the sorting algorithm quicksort in 1959–1960. He developed Hoare logic, an axiomatic basis for verifying program correctness. In the semantics of concurrency, he introduced the formal language communicating sequential processes (CSP) to specify the interactions of concurrent processes, and along with Edsger Dijkstra, formulated the dining philosophers problem". Wow!
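If you've never looked at quicksort in its original form, Hoare's partition scheme is a nice example of the kind of low level idea this post is about. Here's a minimal Python sketch (the names are mine, not from any particular source):

```python
def hoare_partition(a, lo, hi):
    """Hoare's partition: two indices walk inward, swapping out-of-place
    elements, until they cross. Afterward every element in a[lo..j] is
    <= pivot and every element in a[j+1..hi] is >= pivot."""
    pivot = a[lo]
    i, j = lo - 1, hi + 1
    while True:
        i += 1
        while a[i] < pivot:   # scan right for an element >= pivot
            i += 1
        j -= 1
        while a[j] > pivot:   # scan left for an element <= pivot
            j -= 1
        if i >= j:            # indices crossed: partition done
            return j
        a[i], a[j] = a[j], a[i]

def quicksort(a, lo=0, hi=None):
    """In-place quicksort using Hoare partitioning."""
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        p = hoare_partition(a, lo, hi)
        quicksort(a, lo, p)      # note: index p stays on the left side,
        quicksort(a, p + 1, hi)  # since the pivot isn't fixed in place
```

Unlike the more common Lomuto scheme, the pivot doesn't end up in its final position after partitioning, which is why the left recursive call includes index `p`.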
But even he had ups and downs, which he goes through in The Emperor's Old Clothes: He worked on a very successful ALGOL compiler, then on the "Mark II" second system, which was way too complicated and collapsed under its own weight. The story is so good and well written you should just go read it, click the link, it's on page 4 of the PDF, go read it. But to summarize, the project added tons of bigger and better features, he was promoted, given responsibility for company hardware and software products, and started with a team of 15 programmers. In other words, he was set up for failure.
Problems with complexity and delegating work are much more obvious in hindsight, though. Hoare only realized later:
> here breezed into my office the most senior manager of all, a general manager of our parent company, Andrew St. Johnston. I was surprised that he had even heard of me. "You know what went wrong?" he shouted—he always shouted—"You let your programmers do things which you yourself do not understand." I stared in astonishment. He was obviously out of touch with present day realities. How could one person ever understand the whole of a modern software product like the Elliott 503 Mark II software system?
>
> I realized later that he was absolutely right; he had diagnosed the true cause of the problem and he had planted the seed of its later solution.
"O.K. Tony," the company said, "you got us into this mess and now you're going to get us out." They helpfully took away his responsibility for hardware and reduced his programmer team size. The engineers reflected a lot on their mistakes and started fighting feature creep. They got their designs and schedules down to realistic levels, and Hoare noted "Above all, I did not allow anything to be done which I did not myself understand. It worked!"
It's no coincidence that Hoare's good principles and architectural decisions came while he was grappling with low level problems. Good taste comes from the feedback you get doing the work and bumping into problems and limits, not from ascending to management and saying ok now make it better. Obviously teams can be larger than one person, but there does need to be one person with all the little technical details in their head in order to make good higher level decisions.
Agents make it very easy to remove yourself from that position, because not only do they give you a team of 15 programmers, they will happily do work you yourself do not understand. Writing code yourself forces you to think about it; giving work to another human maybe pushes you into thinking about it, and at least some human will think about it; giving work to an agent defaults to: ok, it generated some code, and nobody learned anything from the experience.
But ok, it's 2026, who cares about a 1980 opinion on work from 1963? Maybe a team of Claudes could one-shot an ALGOL compiler. I mean, maybe every time your Claude hits a roadblock you add a skill, and it's kind of learning and building its own context for decisions, idk. I just doubt you're learning the details you need to make decisions that keep complexity under control and the code maintainable.
Back in high school, a friend said that in the process of debugging one problem he often learned a bunch of unrelated stuff about his code, which seemed very wise at the time and still seems very wise today. Imo when you sit there struggling and working through a bunch of potential problems, even if they're unrelated to your actual problem, they might be relevant later. Whereas with an AI it's very easy to get an instant answer, not necessarily one that solves your problem the best way or teaches you anything. I'm not sure it's impossible to learn using AI, maybe it can even be helpful, but it doesn't force you to learn the way figuring things out yourself does. So I'm skeptical when someone says "I learned XYZ with chatgpt": did you really learn anything, or did it just burp out a product you won't be able to maintain?
Debugging is twice as hard as writing code, readability is most important, bla bla bla. Imo the person who originally wrote the code is by far the best qualified to maintain it, because they know what they were thinking. If nobody writes the code, will anyone learn what's needed to maintain it?
AI has only been around for a few years, and around late 2025 became capable of generating a lot more. So it's simply not possible for anyone to say "AI generated this code and I've had no problems maintaining it for decades." Maybe the AI will maintain it for us, or throw it away and rewrite a replacement in a day, idk. But the default position should be that code is only valuable insofar as people understand it; it would be a weird new paradigm if code nobody understands is valuable because you can just point Claude at it.
This is the part where I performatively say AI is good at some things but bad at others, I am so measured and reasonable. I mean really, maybe AI can do everything and you should just sit back, or maybe letting it do anything erodes your understanding, idk. Still, this is my current feeling as I work on Gleam FHIR:
Maybe there's a world where AI gets so good it can get its own feedback from the low level problems and use that to make high level decisions, write good docs, etc. Maybe that world is already here but unevenly distributed! I don't see where us humans fit into that world, though. And I feel like 9/10 AI rapture comments I see don't link any code, and of those that do, 9/10 build some kind of AI tooling slop I'd never touch. Still, some cases, e.g. Vibing a Non-Trivial Ghostty Feature, are interesting. Imo these compelling cases are less about ignoring how things work while making high level decisions, and more about learning how everything works and basing decisions on that.