Automation and Skill Retention

Anthropic recently published research examining how AI use impacts skill development and efficiency:

In a randomized controlled trial, we examined 1) how quickly software developers picked up a new skill (in this case, a Python library) with and without AI assistance; and 2) whether using AI made them less likely to understand the code they’d just written.

And the findings:

We found that using AI assistance led to a statistically significant decrease in mastery. On a quiz that covered concepts they’d used just a few minutes before, participants in the AI group scored 17% lower than those who coded by hand, or the equivalent of nearly two letter grades. Using AI sped up the task slightly, but this didn’t reach the threshold of statistical significance.^1

They conclude that “Given time constraints and organizational pressures, junior developers or other professionals may rely on AI to complete tasks as fast as possible at the cost of skill development—and notably the ability to debug issues when something goes wrong.”

Their findings echo Lisanne Bainbridge’s “Ironies of Automation,” which examines the “ways in which automation of industrial processes may expand rather than eliminate problems with the human operator.”^2

Bainbridge identifies a variety of ways in which cognitive and physical skills atrophy as more of a process is automated. Paraphrasing her points:

- Manual control skills deteriorate without regular practice, yet the operator is expected to take over precisely when the automation fails.
- Cognitive skills — the working knowledge needed for diagnosis — likewise decay when they are exercised only in rare abnormal conditions.
- Humans are poor at the sustained vigilance that monitoring an automated process demands.
- The operator is left with an arbitrary residue of tasks, with attendant deskilling and loss of status.

The first two directly speak to the specific question examined by Anthropic, and the remaining speak to problems that seem likely to follow.

I believe we should expect all of the above to apply just as much to cognitive automation. The Anthropic finding covers only one of these points, and it does not take much looking to find contemporary concerns about, e.g., loss of status.

As cognitive work becomes more automated, we should expect the above to become more pronounced.

It also raises serious questions concerning monitoring and oversight. As more and more cognitive labor is automated, how can breakdowns in processes be identified and rectified? From the outside, it does not appear that there are good answers to these questions, although they are doubtless in part the focus of alignment research.

Ultimately, there is no escaping the burden of judgment. The more a process or system is automated, the more responsibility is concentrated into singular moments of judgment under acute uncertainty.

^1: https://www.anthropic.com/research/AI-assistance-coding-skills

^2: Bainbridge, Lisanne, “Ironies of Automation,” Automatica 19, no. 6 (1983): 775–779.