When digital computers were invented, the first task was to instruct them to do what we wanted. The problem was that the machines didn’t understand English – they only knew ones and zeros. You could program them with long sequences of these two digits and if you got the sequence right then the machines would do what you wanted. But life’s too short for composing infinite strings of ones and zeros, so we began designing programming languages that allowed us to express our wishes in a human-readable form that could then be translated (by a piece of software called a “compiler”) into terms that machines could understand and obey.
Over the next 60 years or so, these programming languages – with names such as Fortran, BASIC, Algol, COBOL, PL/I, LISP, C, C++, Python – proliferated like rabbits, so that there are now many hundreds, perhaps even thousands, of them. At any rate, it takes quite a while to scroll down to the end of the Wikipedia page that lists them. Some are very specialised, others more general, and over the years programmers created libraries of snippets of code (called subroutines) for common tasks – searching and sorting, for example – that you could incorporate when writing a particular program.
For more than half a century, therefore, there evolved an arcane, exclusive priesthood of people who had mastered one or more of these specialised languages and could make computers do their bidding. Membership of the priesthood gave one an intoxicating feeling of absolute power. In software, remember, you can program a set of pixels to move endlessly in a circle, say, and they will continue to do that for ever if you leave them to it. They need neither fuel nor food, and they will never complain. “In that sense,” I once wrote in a history of this technology, “being a programmer is like being Napoleon before the retreat from Moscow. Software is the only medium in which the limits are exclusively those set by your imagination.”
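For the curious, that endlessly circling pixel takes only a few lines. Here is a minimal sketch in Python – the centre, radius and speed below are arbitrary choices, purely for illustration:

```python
# A dot orbiting a fixed centre, for ever: it needs neither fuel nor food.
# Pure standard library; interrupt with Ctrl-C when you tire of it.
import math
import time

CENTRE_X, CENTRE_Y, RADIUS = 40.0, 12.0, 10.0  # arbitrary coordinates
angle = 0.0
while True:
    x = CENTRE_X + RADIUS * math.cos(angle)
    y = CENTRE_Y + RADIUS * math.sin(angle)
    print(f"pixel at ({x:5.1f}, {y:5.1f})", end="\r")
    angle += 0.1       # advance a little way around the circle
    time.sleep(0.05)   # roughly 20 updates a second
```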
This is why, when large language models (LLMs) such as ChatGPT emerged, many people were flabbergasted to discover that not only could these machines compose coherent English sentences, but they could also write computer programs! Instead of having to master the byzantine intricacies of C++ or Python in order to converse with the machine, you could explain what you wanted it to do and it would spit out the necessary code. You could program the machine in plain English!
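To make that concrete: ask in plain English, say, “write me a function that tells me whether a year is a leap year”, and back comes something like the code below. The exchange is hypothetical – the reply is typical of what such models produce, not the output of any particular one:

```python
# Request (plain English): "Write me a function that tells me whether
# a year is a leap year."  The reply below is illustrative.

def is_leap_year(year: int) -> bool:
    """Return True if `year` is a leap year in the Gregorian calendar."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap_year(2024))  # True
print(is_leap_year(1900))  # False: divisible by 100 but not by 400
```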
How was this possible? Essentially because, in its training phase, the machine has ingested a lot of published computer code – just as it has also ingested virtually every examination paper that has ever been published. And although the code it produces often has flaws, these can usually be ironed out in successive iterations. The technology is already pretty good, which is why programmers have been early adopters of it as a kind of “co-pilot”. And it will get steadily better.
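That iterative ironing-out typically looks something like this – both drafts below are invented for illustration, not real model output:

```python
# First draft from the model: correct on most inputs, but it crashes
# on an empty list with a ZeroDivisionError.
def average_v1(numbers):
    return sum(numbers) / len(numbers)

# You report the failure; the second draft handles the empty case.
def average_v2(numbers):
    if not numbers:
        return 0.0  # one explicit choice for the empty case
    return sum(numbers) / len(numbers)

print(average_v2([3, 4, 5]))  # 4.0
print(average_v2([]))         # 0.0 rather than a crash
```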
So are we seeing the twilight of the software priesthood, as some of the more apocalyptic reactions to LLMs claim? Personally I doubt it, if only because we always overestimate the short-term impact of tech change, while underestimating its longer-term effects. What these AI “co-pilots” really do is take the grunt work out of programming, freeing those who understand software to do more interesting and productive things.
When GitHub, the programmers’ repository owned by Microsoft, quizzed more than 2,000 software professionals about the technology, the results supported that view: 88% said that it made them more productive; 59% said it made the job less frustrating; 74% said that it had enabled them to focus on “more satisfying work”; 96% found that it made them faster when doing repetitive tasks; and 77% said that they now spent less time searching. This is a picture not of Armageddon, but of something more positive.
And already, AI co-pilots are beginning to change how programming itself is taught. Introductory computer science courses have tended to focus on code syntax and on getting programs to run, and while knowing how to read and write code remains essential, testing and debugging now need to be taught more explicitly. Academics are finding that students’ use of AI tools frees up time “to teach higher-level thinking – for example, how to design software, what is the right problem to solve, and what are the solutions? Students can spend more time on optimisation, ethical issues and the user-friendliness of a system rather than focusing on the syntax of the code.”
Mario Fusco, a great software developer, once said: “The code you write makes you a programmer. The code you delete makes you a good one. The code you don’t have to write makes you a great one.” So maybe, for once, AI is on to something.
What I’ve been reading
Student disunion
Universities As Factories, a very perceptive blogpost by the economist Branko Milanovic about the Columbia demonstrations.
Short is sweet
The Tyranny of Content Algorithms: a succinct essay on the importance of quality over quantity by Silicon Valley veteran Om Malik.
Where Walter went wrong
What Elon Musk’s Favourite Game Tells Us About Him is the title of Dave Karpf’s blistering critique of Musk’s biographer, Walter Isaacson.