
PL Perspectives

Perspectives on computing and technology from and for those with an interest in programming languages.

In a series of two posts, I’d like to share some thoughts I’ve accumulated over the past few years about how to draw better graphs. In this post, I recommend that normalised data should usually be plotted on a logarithmic scale. To elaborate on my recommendation, I draw upon many examples of graphs I found in the proceedings of PLDI 2019. In my next post, I argue that scatter plots can be easier to understand than bar charts.

Logarithmic Scales for Normalised Data

Normalised data is the ratio of two measurements that have the same dimension. Such data comes up frequently in PL research: for example, the ratio between a program’s execution time before a proposed compiler optimisation has been applied and its execution time afterwards. I believe that such data should be plotted on a logarithmic scale. Indeed, the SIGPLAN Empirical Evaluation Checklist suggests logarithmic scales for speedups, among several other recommendations for drawing good graphs.

I will illustrate my reasons with reference to the two graphs below, which both show some sort of “speedup” that has been obtained on four benchmark programs, A, B, C, and D. The left graph uses a linear scale on the y-axis, while the right one plots the same data on a logarithmic scale.
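As a rough illustration of how such a pair of graphs can be produced, here is a minimal matplotlib sketch. The speedup values (and the use of point markers) are invented purely for illustration; they are not the data shown in the figure.

    # A minimal sketch: the same invented speedup ratios plotted on a linear
    # and on a logarithmic y-axis. Values below 1 are slowdowns, above 1 speedups.
    import matplotlib.pyplot as plt

    benchmarks = ["A", "B", "C", "D"]
    speedups = [0.8, 0.5, 2.0, 4.0]   # made-up ratios, not real measurements

    fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(8, 3))
    for ax, title in [(ax_lin, "linear scale"), (ax_log, "logarithmic scale")]:
        ax.plot(benchmarks, speedups, "o")
        ax.axhline(1, color="grey", linewidth=0.8)   # the natural origin for a ratio
        ax.set_title(title)
    ax_log.set_yscale("log")
    plt.tight_layout()
    plt.show()

Note how, on the logarithmic axis, a 2x speedup and a 2x slowdown sit symmetrically about the y=1 line.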

There are four reasons why the logarithmic scale is better:

  1. The natural origin for a speedup ratio is 1, not 0. That is, we are primarily interested in seeing whether a data point lies above 1 (which indicates a speedup) or below 1 (which indicates a slowdown). This fits nicely with logarithmic scales, which can’t go down to 0. In the right-hand graph above, it is immediately obvious that A and B experience a slowdown; this is slightly less obvious in the left-hand graph.
  2. Going from a 1x speedup to a 2x speedup is surely more impressive than going from a 3x speedup to a 4x speedup. But on the linear y-axis in the left-hand graph above, the distance between 1 and 2 is the same as the distance between 3 and 4, so these feats would appear equally impressive.
  3. Often, it is just as good to get a 2x speedup as it is bad to get a 2x slowdown. But on the linear y-axis in the left-hand graph above, the distance from 1 to 0.5 is much smaller than the distance from 1 to 2, so the speedup experienced by benchmark C is emphasised over the slowdown experienced by benchmark B, even though both have the same magnitude.
  4. On a linear scale, the “centre of gravity” is at the arithmetic mean, while on a logarithmic scale, the centre of gravity is at the geometric mean. When averaging dimensionless ratios, most authors tend to use the geometric mean, so it makes sense for the readers’ eyes to be drawn to this value. (A small numerical illustration follows this list.)
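To illustrate points 3 and 4 with made-up ratios: a 2x speedup and a 2x slowdown cancel out under the geometric mean, but not under the arithmetic mean.

    # Made-up ratios: one benchmark gets a 2x speedup, another a 2x slowdown.
    import math

    ratios = [2.0, 0.5]
    arithmetic_mean = sum(ratios) / len(ratios)                          # 1.25: suggests a net speedup
    geometric_mean = math.exp(sum(map(math.log, ratios)) / len(ratios))  # 1.0: the effects cancel
    print(arithmetic_mean, geometric_mean)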

One caveat: the logarithmic scale doesn’t work well if the normalised data can be zero. This is quite rare though – I don’t think I saw this in any of the PLDI 2019 papers I scanned through when writing this post.

Examples from PLDI 2019

I found sixteen papers from PLDI 2019 that contained graphs where normalised values were plotted on a linear scale. I suspect that all of these graphs would be improved by using a logarithmic scale. The full version of this article lists all of the examples I found; here I’m just going to focus on a handful of interesting ones.

The first example is from Continuously reasoning about programs using differential Bayesian inference by Heo et al.

This graph demonstrates three common problems with plotting normalised values on linear scales.

  1. The blue bars are redundant. They represent the measurement against which the other bars are normalised, so all have a height of 1. On a logarithmic scale, all the bars would naturally start at y=1 rather than at y=0, so the blue bars would disappear.
  2. It is not immediately clear which bars indicate a speedup and which indicate a slowdown. If the bars all started at y=1 rather than at y=0, then slowdowns would point down and speedups would point up. (The blue bars, though technically redundant, are somewhat helpful for visually distinguishing slowdowns from speedups. Similar graphs I saw in other papers have used other visual features to serve this purpose, such as a horizontal line superimposed on the graph at y=1.) A sketch of a bar chart anchored at y=1 appears after this list.
  3. Some of the bars extend beyond the top of the scale. A logarithmic scale would handle unusually large (and unusually small) values more gracefully.
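Here is the sketch promised above: a normalised bar chart whose bars are anchored at y=1 on a logarithmic axis, so that speedups point up and slowdowns point down. The data is invented for illustration and is not the paper’s measurements.

    # Invented normalised values; bars are anchored at y=1 on a log-scaled axis.
    import matplotlib.pyplot as plt

    benchmarks = ["A", "B", "C", "D", "E"]
    ratios = [3.0, 0.7, 1.8, 0.4, 12.0]   # >1 means speedup, <1 means slowdown

    fig, ax = plt.subplots(figsize=(5, 3))
    ax.set_yscale("log")
    # Each bar spans from 1 to its ratio, so its direction distinguishes
    # speedups (pointing up) from slowdowns (pointing down).
    ax.bar(benchmarks, [r - 1 for r in ratios], bottom=1)
    ax.axhline(1, color="black", linewidth=0.8)
    ax.set_ylabel("normalised value (log scale)")
    plt.tight_layout()
    plt.show()

With this arrangement the redundant baseline bars disappear (every bar already starts at the baseline of 1), and an unusually large value such as the 12x here no longer runs off the top of the chart.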

I’m a little torn about my second example, which is from Scalable verification of probabilistic networks by Smolka et al., and plots speedup against the degree of parallelism.


The graph uses a linear scale on the y-axis, which goes against my advice. Yet this does not seem unreasonable here because it means that we can easily compare against the ideal situation of “speedup factor equals parallelism degree”, which is depicted as a straight line at y=x. Perhaps the best compromise in these situations would be to use logarithmic scales for both axes.
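A sketch of that compromise, again with invented measurements, might look as follows; the dashed line is the ideal situation in which the speedup equals the degree of parallelism, and it remains a straight line on log-log axes.

    # Invented speedup measurements plotted against the degree of parallelism,
    # with logarithmic scales on both axes and the ideal y = x line for reference.
    import matplotlib.pyplot as plt

    cores = [1, 2, 4, 8, 16, 32]
    speedups = [1.0, 1.9, 3.6, 6.5, 11.0, 17.0]   # made-up numbers

    fig, ax = plt.subplots(figsize=(4, 3))
    ax.plot(cores, cores, "--", label="ideal: speedup = parallelism")
    ax.plot(cores, speedups, "o-", label="measured (invented)")
    ax.set_xscale("log")
    ax.set_yscale("log")
    ax.set_xlabel("degree of parallelism")
    ax.set_ylabel("speedup")
    ax.legend()
    plt.tight_layout()
    plt.show()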

My final example is from Toward efficient gradual typing for structural types via coercions by Kuhlenschmidt et al.

It is technically a non-example, because the graph does use a logarithmic scale. However, the benefits of the logarithmic scale would be better exploited by starting the bars at y=1 rather than at an arbitrarily chosen small number such as y=0.01.

With that change made, we would have a fine example of a bar chart. But opportunities for improvement remain! In my next post, I explore whether a bar chart is really the best kind of graph for these situations, or whether a scatter plot may be better still.

Further Reading

  • An extended version of this article is available on my blog.
  • The LaTeX code for the graphs I drew myself is available.
  • A 1983 article “On graphing rate ratios” in the American Journal of Epidemiology argues that relative rates should be plotted on logarithmic rather than linear scales. A counterpoint is provided by James R. Hebert and Donald R. Miller’s 1989 article “Plotting and discussion of rate ratios and relative risk estimates” in the Journal of Clinical Epidemiology, which argues that relative rates should actually be plotted not on logarithmic scales but on reciprocal scales!

Bio: John Wickerson is a Lecturer in the Department of Electrical and Electronic Engineering at Imperial College London, where he researches programming languages and hardware design.

Disclaimer: These posts are written by individual contributors to share their thoughts on the SIGPLAN blog for the benefit of the community. Any views or opinions represented in this blog are personal, belong solely to the blog author and do not represent those of ACM SIGPLAN or its parent organization, ACM.