PL Perspectives

Perspectives on computing and technology from and for those with an interest in programming languages.

Artificial intelligence is eating our jobs. In manufacturing, transport, sales, catering, customer service, and many other domains, tasks that once required human involvement are now being handled by machines. What about programming—is that another profession that is at risk of being automated out of existence? Would that be a good thing, at least for anyone who isn’t themselves a programmer?

The prospects are intriguing. Most routine physical activities are best handled by robots, from assembling cars to sorting the post. Information-oriented activities are not immune; in fact, in many ways they are even better suited to automation than tasks that require mechanical robots. Although the immediate effects of automation on social structures can be disruptive and require careful management, surely in the long term it is beneficial to free people from tedious and error-prone repetition, and to make space for more creative and fulfilling activities.

What is it that distinguishes a task suitable for automation from one best handled by a human? Whatever it is, it isn’t mere complexity: that’s a dimension along which computers are very quickly getting better and better, especially when given enough data to work with. Machine translation between natural languages has come on in leaps and bounds in the last decade. Certainly, award-winning automated translation of literary novels and poetry is currently out of reach, but it is possible now at the press of a button for a human reader to get the gist of a factual text in an unfamiliar language.

Automating creativity

The dimension that seems hardest to automate is creativity. Can a computer compose a piece of music, or paint a picture, that can pass as a “work of art”? Again, remarkable advances are being made. Algorithmic composition systems can fill in extended gaps in Bach chorales or generate Joplin rags in such a way that the joins are indiscernible to the listener who doesn’t know the full repertoire. Similarly, generative adversarial networks can produce facsimiles of modern abstract art that can pass as human-generated. Marcus du Sautoy’s recent book The Creativity Code: How AI is Learning to Write, Paint and Think (Fourth Estate, 2019) has many such examples. He describes his motivation for writing the book as a concern about whether his job as a mathematician was at risk of automation.

Ada Lovelace famously observed that her collaborator Charles Babbage’s Analytical Engine was not limited to numerical computation, but could “weave algebraic patterns” in any domain. “Supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of [mathematical] expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent”, she wrote. That is, she foresaw symbolic computation. But she was also realistic, cautioning against hyperbolic hopes for a mechanical brain: “It is desirable to guard against the possibility of exaggerated ideas that might arise as to the powers of the Analytical Engine. The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.”

Marcus du Sautoy invokes Ada Lovelace in proposing a new test for AI: “To pass the Lovelace Test, an algorithm must originate a creative work of art such that the process is repeatable (i.e. it isn’t the result of a hardware error) and yet the programmer is unable to explain how the algorithm produced its output” (The Creativity Code, pp. 7–8). Du Sautoy discusses many examples, including the musical and graphical ones mentioned above. He also spends considerable time discussing Google DeepMind’s AlphaGo, which astonished the world in 2016 by soundly beating the 18-time world champion Lee Sedol in a public five-game match. Go commentators regard expert play as creative, and apparently AlphaGo can play the game better than any human can. So does AlphaGo exhibit creativity?

Artificial programmers

The explainability criterion in du Sautoy’s definition of the Lovelace Test is really just pushing the lump under the carpet to a different position. What does it mean to “explain” a computation? Certainly, AlphaGo evoked surprised responses, both on the large scale from winning the match and on the small scale with individual moves. Observers could not predict how well AlphaGo would perform, and there was genuine jeopardy in the match. Nevertheless, AlphaGo is just a program, exploiting sophisticated yet ultimately explainable engineering techniques involving linear algebra and tree search. There is no magic. Every programmer has had the experience of being surprised by the behaviour of a program they have written, often while trying to debug it. Even short correct programs can be surprising: who would have imagined, before Benoit Mandelbrot and others explained it, that a simple deterministic program could generate such startlingly rich fractal images? Alan Turing himself wrote that “machines take me by surprise with great frequency.”
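
As a minimal sketch of that kind of surprise, consider the following complete Haskell program, a dozen lines long, which renders a crude ASCII picture of the Mandelbrot set by iterating the map z ↦ z² + c and checking for divergence. The specific choices here (50 iterations, escape radius 2, the size of the character grid) are illustrative assumptions rather than anything canonical; the point is how much intricacy falls out of how little code.

```haskell
-- A short deterministic program whose output is far richer than its
-- text suggests: an ASCII rendering of (an approximation to) the
-- Mandelbrot set.
import Data.Complex (Complex ((:+)), magnitude)

-- Is c apparently in the set? Iterate z -> z*z + c from zero and check
-- that the first 50 iterates stay within the escape radius 2.
inSet :: Complex Double -> Bool
inSet c = all ((< 2) . magnitude) (take 50 (iterate (\z -> z * z + c) 0))

main :: IO ()
main = mapM_ putStrLn
  [ [ if inSet (x :+ y) then '*' else ' '
    | x <- [-2, -1.9625 .. 1] ]   -- sweep the real axis in 80 steps
  | y <- [1, 0.9 .. -1] ]         -- sweep the imaginary axis in 20 steps
```

Run it, and the familiar cardioid-and-bulbs silhouette appears; nothing in the dozen lines above obviously foretells that shape.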

Du Sautoy writes: “Up until a few years ago it was felt that humans understood what their algorithms were doing and how they were doing it. Like Lovelace, they believed that you couldn’t really get more out than you put in. But then a new sort of algorithm began to emerge, an algorithm that could adapt and change as it interacted with its data. After a while the programmer may not understand quite why it is making the choices it is. These programs were starting to produce surprises, and for once you could get out more than you put in. They were beginning to be more creative. These were the algorithms DeepMind exploited in its crushing of humanity in the game of Go. They ushered in the new age of machine learning” (The Creativity Code, p. 65). One might say that AI and machine learning are changing the way programs are constructed: no longer just written top-down, following the grand vision of a software architect, but now sometimes constructed bottom-up instead, by synthesis and training based on large amounts of data. But it is all still programming, albeit at a different level—someone has to write the synthesiser or trainer. Perhaps no human has directly programmed all the weights in the neural network; but some human has programmed the program that computes those weights.
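
To make that last point concrete, here is the smallest imaginable “trainer”, sketched in Haskell: a human-written gradient-descent loop that computes a model weight from data. Every specific in it is an illustrative assumption (a one-parameter model, data drawn from the line y = 2x, a learning rate of 0.01), but the shape is the essence of the argument: the learned weight is the output of an ordinary program.

```haskell
-- A toy "trainer": the weight printed at the end is computed, not
-- hand-written, yet the program that computes it is entirely conventional.

-- Training data: inputs paired with targets (here sampled from y = 2x).
trainingData :: [(Double, Double)]
trainingData = [(1, 2), (2, 4), (3, 6), (4, 8)]

-- Gradient of the mean squared error with respect to the weight w.
gradient :: Double -> Double
gradient w = sum [ 2 * (w * x - y) * x | (x, y) <- trainingData ]
           / fromIntegral (length trainingData)

-- One gradient-descent step with learning rate eta.
step :: Double -> Double -> Double
step eta w = w - eta * gradient w

-- Repeatedly improve an arbitrary initial weight.
train :: Int -> Double -> Double
train n w0 = iterate (step 0.01) w0 !! n

main :: IO ()
main = print (train 1000 0)   -- prints a value very close to 2.0
```

Scale the single weight up to billions, and the gradient up to backpropagation through a deep network, and the structure of the loop is unchanged: weights that no human wrote, computed by a program that a human did write.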

To put it another way, it has been the case for decades that the programs we run have not been directly programmed by a human, but by a computer. Register allocators decide better than we can how to use the limited hardware resources on a chip. Compilers and assemblers generate executable programs from more abstract descriptions: few people directly write x86 assembly, and fewer still have to remember x86 opcodes. (Early programming languages were even called “autocodes”, because they automated the coding task inherent in translating operation names to numerical codes and in calculating jump offsets.) When a machine generates x86 opcodes, we still call it programming; when a machine generates neural network weights, isn’t that programming too? Machine learning algorithms may be quantitatively different from more traditional programming, but they are not qualitatively different simply on account of involving computer-generated code.
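
As a toy illustration of the chore that autocodes mechanised, here is a few-line “assembler” in Haskell that translates mnemonic operation names into numeric codes. The mnemonics and opcode numbers are invented for the example (real instruction sets differ, and real autocodes also computed jump offsets), but the clerical character of the task is the point.

```haskell
-- A toy "autocode": translating operation names to numeric codes.
import Data.Maybe (fromMaybe)

-- An invented opcode table, purely for illustration.
opcodes :: [(String, Int)]
opcodes = [("LOAD", 1), ("ADD", 2), ("STORE", 3), ("HALT", 0)]

-- Translate one mnemonic to its numeric code.
encode :: String -> Int
encode op = fromMaybe (error ("unknown operation: " ++ op))
                      (lookup op opcodes)

-- "Assembling" a program is just mapping the translation over it.
assemble :: [String] -> [Int]
assemble = map encode

main :: IO ()
main = print (assemble ["LOAD", "ADD", "STORE", "HALT"])   -- [1,2,3,0]
```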

Conclusion

To return to the question posed at the beginning, I do not think there is any danger of programming being automated out of existence. Certainly, tedious aspects of the programming task continue to be fruitfully automated, just as compilation and register allocation have been automated in the past. That’s good—for society, but also for programmers—because it frees us to think about more interesting aspects of the task, and to construct programs that were beyond our reach before. Eliminating tedium from programming may reduce the number of programming jobs on offer—but we cannot get anywhere near meeting current demand anyway, so perhaps that is another beneficial side-effect.

It seems unlikely that we will ever run out of ideas for new programs to write, or that machines will converge on the supremum in the generality ordering—the metaprogram to end all programs. Computing and data processing magazines in the early 1980s used to carry adverts for a programming system called The Last One, essentially a 4GL for report generation. Needless to say, that was not the Last Program, or even the Last Programming Language. At some level of abstraction, there will always be the small matter of programming.

Bio: Jeremy Gibbons is Professor of Computing at the University of Oxford, where he leads the Algebra of Programming research group and is former Deputy Head of Department. He served as Vice Chair then Past Vice Chair of ACM SIGPLAN, with a particular focus on Open Access. He is also Editor-in-Chief of the Journal of Functional Programming, SC Chair of ICFP, on the Advisory Board of PACMPL, an editor of Compositionality, and former Chair of IFIP Working Group 2.1 on Algorithmic Languages and Calculi.

Disclaimer: These posts are written by individual contributors to share their thoughts on the SIGPLAN blog for the benefit of the community. Any views or opinions represented in this blog are personal, belong solely to the blog author and do not represent those of ACM SIGPLAN or its parent organization, ACM.