A new essay by software practitioner Graeme Lockley, published on March 14, 2026, on his blog "Ideas in Software," argues that experienced developers are not simply being stubborn when they resist AI coding tools — they are following a well-documented psychological pattern that has recurred at major technological transitions throughout history. Drawing on cases from 19th-century medicine, textile manufacturing, accounting, and the arts, Lockley contends that the people most invested in a craft consistently resist the tools that threaten to shortcut it, while junior practitioners and those at the margins adopt first. In software, the same pattern holds: product managers, business analysts, and junior engineers have embraced AI-assisted coding tools faster than senior developers with years of accumulated expertise.
The essay's most detailed historical case is that of Ignaz Semmelweis, the Hungarian physician who in 1847 demonstrated that mandatory hand-washing could reduce maternal mortality at Vienna General Hospital from 10–35% down to 1–2%. The medical establishment's response was not skepticism but hostility — because accepting the evidence meant accepting that physicians themselves had been killing patients. Lockley uses this case, alongside the resistance of elite surgeons to anesthesia and of master weavers to power looms, to argue that expert resistance is psychologically rooted in identity rather than technophobia. Senior developers who have spent years building mental models of code quality, architecture, and debugging may perceive AI tools as devaluing the very cognitive work that defines their professional identity — <a href="/news/2026-03-14-grief-and-the-ai-split-how-ai-coding-tools-are-exposing-a-long-hidden-developer-divide">a form of grief over the loss of craft</a> that some practitioners articulate in response to AI tools.
Lockley is careful to note that his analysis is explicitly experiential rather than empirical — the essay draws on his direct experience leading software delivery teams through AI adoption transitions, and he acknowledges that the frameworks he references, including Steve Yegge's individual adoption stages and Khare's team maturity model, are similarly practitioner-observed rather than formally researched. This matters for how the argument should be weighed: it is a thoughtful practitioner's framework, not a controlled study. His core policy argument, aimed at organizational leaders, is that dismissing expert developer concerns as mere resistance risks alienating the most capable engineers and inviting quality regressions, while deferring entirely to that skepticism risks competitive disadvantage as AI tooling matures.
The essay's practical challenge — distinguishing concerns that reflect genuine tool limitations from those that are primarily identity-defensive — is where it offers the least concrete guidance, which Lockley himself implicitly acknowledges. The Semmelweis case is a useful corrective to the common assumption that expert resistance is simply a change management problem to be overcome: sometimes the experts are right about the limitations, and sometimes they are protecting an identity that the technology genuinely threatens. The difficulty, as Lockley frames it, is telling the difference—a challenge that <a href="/news/2026-03-14-longitudinal-study-ai-tools-boost-developer-productivity-10-percent-not-hyped-2-3x">empirical productivity studies</a> can help illuminate.