PL Perspectives

Perspectives on computing and technology from and for those with an interest in programming languages.

This is the third part of a three-part article describing the approach that Richard Bird and I took to using functional programming for algorithm design in our recent book Algorithm Design with Haskell.

Part 0 of the article argued for using a functional language rather than an imperative one, because it supports the calculation of algorithms from specifications using good old-fashioned equational reasoning. In particular, the fusion law is fundamental. We specify a problem as clearly as possible, in a modular fashion; for example, many optimization problems take the form

\displaystyle \mathit{algorithm} = \mathit{aggregate} \cdot \mathit{test} \cdot \mathit{generate}

where the generation phase is some combinatorial enumeration, testing filters out some ineligible candidates, and aggregation selects the optimal remaining candidate. We then use fusion to promote aggregation and testing into the generator. The result is hopefully an efficient program; it may be more obscure than the very clear specification, but that’s fine, because it is demonstrably equivalent to it.
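To make the shape concrete, here is a tiny Haskell sketch of the pattern (my illustration, not from the book); the predicate {ok} and the cost function {cost} are placeholders for whatever the problem at hand requires.

```haskell
import Data.List (subsequences, minimumBy)
import Data.Ord (comparing)

-- A toy instance of aggregate . test . generate: among the nonempty
-- subsequences of the input, pick one of minimal cost. The names
-- "ok" and "cost" are illustrative placeholders, and the function is
-- partial on the empty input, as befits a sketch.
algorithm :: [Int] -> [Int]
algorithm = aggregate . test . generate
  where
    generate  = subsequences                -- combinatorial enumeration
    test      = filter ok                   -- discard ineligible candidates
    aggregate = minimumBy (comparing cost)  -- select an optimal candidate
    ok        = not . null                  -- placeholder eligibility test
    cost      = sum                         -- placeholder cost function
```

As written, such a specification is exponential; the point of fusion is to transform it into an efficient program.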

Part 1 used greedy algorithms as an illustration (the book also covers other algorithm design techniques, such as dynamic programming and exhaustive search). Just following one’s nose leads to a theorem indicating when an optimization problem as above has a greedy solution. But there is an obstacle: the premises of that theorem are too strong for most applications.

The essence of the obstacle is that the ordering we are optimizing (to find the cheapest, or smallest, or whatever) is often not linear, because it is not antisymmetric: there typically exist distinct solutions tied on cost. Because both specification and implementation are expressed as functions, both choose a specific cheapest solution—but maybe not the same solution. So the specification and implementation are different functions; and of course we then cannot prove that the implementation implements the specification.

The fix is to allow nondeterministic functions in a few, carefully controlled places. Specifically, when there is a tie, we consider all optimal partial candidates, not an arbitrary single one of them. Here is where we pick up the story.

Nondeterministic functions

Optimization problems are often underdetermined: there are typically multiple distinct optimal solutions. The specification of the problem should admit any optimal solution, without committing in advance to one; this allows maximal freedom in implementing the specification. But ultimately, the implementation (especially if it is to be a greedy algorithm) must commit to one optimal solution: it must be optimal, of course, but not all optimal solutions need be reachable. Therefore the implementation no longer equals the specification; it refines it.

Taking this perspective seriously inherently leads to a calculus of relations rather than functions, with relational inclusion as refinement. This is the approach taken in the Algebra of Programming book by Richard Bird and Oege de Moor, now sadly out of print. I think this is morally the right approach to take; but it is rather complicated. Experience has shown that—with a few honorable exceptions—no-one can keep all the necessary rules in their head long enough to use them fluently.

Fortunately, we can get away without a wholesale switch from functions to relations; a much gentler approach suffices. All we need is to extend our language of algorithm development with one special nondeterministic function {\mathit{MinWith}}, denoted with an initial capital to indicate that it is not a true function, and with one piece of meta-notation {\leftarrow}. These satisfy the following property:

\displaystyle x \leftarrow \mathit{MinWith}\;f\;\mathit{xs} \quad\Leftrightarrow\quad x \in \mathit{xs} \land \forall y \in \mathit{xs} : f\;x \le f\;y

Read {\mathit{MinWith}\;f\;\mathit{xs}} as “the collection of all minimal elements of finite list {\mathit{xs}} under cost function {f}”, and {x \leftarrow F\;\mathit{xs}} as “{x} is a possible result of applying nondeterministic function {F} to {\mathit{xs}}”.
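In the implementation language, any deterministic function that returns some minimal element is a refinement of {\mathit{MinWith}}. Here is a sketch of one such refinement in Haskell, which commits to the leftmost minimum (its tie-breaking behaviour is what the change-making example below turns on):

```haskell
import Data.List (minimumBy)
import Data.Ord (comparing)

-- One deterministic refinement of MinWith: minimumBy resolves ties in
-- favour of the earliest element, so minWith f xs commits to the first
-- minimum under f. It requires a nonempty list.
minWith :: Ord b => (a -> b) -> [a] -> a
minWith f = minimumBy (comparing f)
```

Any such {\mathit{minWith}} satisfies {\mathit{minWith}\;f\;\mathit{xs} \leftarrow \mathit{MinWith}\;f\;\mathit{xs}}.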

It’s important to note that this extension is just for problem specifications; there is no change to the implementation language. The game is then to manipulate the nondeterministic specification to eliminate all the nondeterminism. Crucially, we can still use most of the familiar results; for example, the distributive law from Part 1 becomes

\displaystyle \mathit{MinWith}\;f\;(\mathit{concat}\;\mathit{xss}) = \mathit{MinWith}\;f\;(\mathit{map}\;(\mathit{MinWith}\;f)\;\mathit{xss})

This means:

\displaystyle \begin{array}{@{}l} \quad x \leftarrow \mathit{MinWith}\;f\;(\mathit{concat}\;\mathit{xss}) \\ \Leftrightarrow \quad \\ \quad x \leftarrow \mathit{MinWith}\;f\;(\mathit{map}\;(\mathit{MinWith}\;f)\;\mathit{xss}) \\ \Leftrightarrow \quad \\ \quad\exists \mathit{xs} : \mathit{xs} \leftarrow \mathit{map}\;(\mathit{MinWith}\;f)\;\mathit{xss} \land x \leftarrow \mathit{MinWith}\;f\;\mathit{xs} \\ \end{array}

The most important result we need is a fusion theorem for {\mathit{foldr}} with nondeterministic post-processing (in particular, with {\mathit{MinWith}}):

\displaystyle \mathit{foldr}\;f'\;e'\;\mathit{xs} \leftarrow H\;(\mathit{foldr}\;f\;e\;\mathit{xs}) \quad\Leftarrow\quad f'\;x\;(H\;y) \leftarrow H\;(f\;x\;y) \land e' \leftarrow H\;e
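It may help to see the deterministic special case: when {H} is an ordinary function, every {\leftarrow} becomes {=} and the statement reduces to the familiar fusion law of Part 0. A toy Haskell instance (my example, for orientation only):

```haskell
-- Fusion, deterministically: h (foldr f e xs) = foldr f' e' xs,
-- provided h (f x y) = f' x (h y) and h e = e'.
-- Toy instance: h = (2 *), f = (+), e = 0; then f' x y = 2 * x + y, e' = 0.
sumDoubled, sumDoubledFused :: [Integer] -> Integer
sumDoubled xs   = 2 * foldr (+) 0 xs
sumDoubledFused = foldr (\x y -> 2 * x + y) 0
-- The side conditions hold: 2 * (x + y) = 2 * x + (2 * y), and 2 * 0 = 0.
```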

With this theorem, we can make much better progress with greedy algorithms. Starting with the nondeterministic specification

\displaystyle \mathit{MCC} = \mathit{MinWith}\;\mathit{cost} \cdot \mathit{candidates}

we can calculate the same deterministic greedy algorithm

\displaystyle \mathit{mcc} = \mathit{foldr}\;\mathit{gstep}\;c_0

satisfying {\mathit{mcc}\;\mathit{xs} \leftarrow \mathit{MCC}\;\mathit{xs}}, where

\displaystyle \mathit{gstep}\;x\;\mathit{cs} \leftarrow \mathit{MinWith}\;\mathit{cost}\;(\mathit{extend}\;x\;\mathit{cs})

That is, {\mathit{gstep}} is any deterministic refinement of the nondeterministic greedy step. The calculation goes through under the weaker assumption of refinement rather than equality for the greedy condition:

\displaystyle \mathit{gstep}\;x\;(\mathit{MinWith}\;\mathit{cost}\;\mathit{cs}) \leftarrow \mathit{MinWith}\;\mathit{cost}\;(\mathit{map}\;(\mathit{gstep}\;x)\;\mathit{cs})

Making change

For example, consider the problem of making change in UK currency, which has coins of denominations 1p, 2p, 5p, 10p, 20p, 50p, £1 (i.e. 100p), and £2 (200p).

The smallest denomination is 1p, so any whole number sum can be achieved; the problem is to do so with the fewest coins. It is well known that the greedy algorithm works for making change in most currencies, including this one: from largest denomination to smallest in turn, use as many coins of that denomination as you can. Let’s see if we can calculate that!

Define a tuple to be a list of coin counts, of the same length and in the same order as the denominations (smallest denomination first), with count the total number of coins in a tuple:

\displaystyle \begin{array}{@{}l} \mathbf{type}\;\mathit{Tuple} = [\mathit{Integer}] \medskip \\ \mathit{count} :: \mathit{Tuple} \rightarrow \mathit{Integer} \\ \mathit{count} = \mathit{sum} \end{array}

Then the change-making problem is specified by

\displaystyle \mathit{change}\;\mathit{ds}\;n = \mathit{minWith}\;\mathit{count}\;(\mathit{tuples}\;n\;\mathit{ds})

where

\displaystyle \begin{array}{@{}l} \mathit{tuples} :: \mathit{Integer} \rightarrow [\mathit{Integer}] \rightarrow [\mathit{Tuple}] \\ \mathit{tuples}\;n = \mathit{finish} \cdot \mathit{foldr}\;\mathit{step}\;[([\,],n)] \quad\mathbf{where} \\ \quad \begin{array}[t]{@{}ll} \mathit{step}\;d & = \mathit{concat} \cdot \mathit{map}\;(\mathit{extend}\;d) \\ \mathit{extend}\;d\;(\mathit{cs},r) & = [ (c:\mathit{cs}, r - c \times d) \mid c \leftarrow [0 .. r \mathbin{\underline{\mathrm{div}}} d ]] \\ \mathit{finish} & = \mathit{map}\;\mathit{fst} \cdot \mathit{filter}\;((0==) \cdot \mathit{snd}) \end{array} \end{array}

In words, components are coin denominations, and candidate solutions are tuples. The initial candidate {([\,],n)} represents the empty tuple and remaining sum {n}. Partial solutions are constructed from largest denomination to smallest: for each successive denomination {d}, the current candidate {(\mathit{cs},r)} is extended by {c} coins of denomination {d}, for each possible value of {c} from {0} up to {r \mathbin{\underline{\mathrm{div}}} d}. Finally, we keep only those candidates whose remainder is {0}, project out the tuples (discarding the remainders), and take the smallest tuple by count.
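For reference, here is a direct Haskell transcription of this specification, as a self-contained sketch: {\mathit{minWith}} is the deterministic refinement sketched earlier (repeated here), and Haskell’s `div` plays the role of the underlined division operator. The enumeration is of course exponential, as a specification is entitled to be.

```haskell
import Data.List (minimumBy)
import Data.Ord (comparing)

type Tuple = [Integer]

count :: Tuple -> Integer
count = sum

-- Deterministic refinement of MinWith, as sketched earlier.
minWith :: Ord b => (a -> b) -> [a] -> a
minWith f = minimumBy (comparing f)

-- All ways of making sum n from denominations ds (smallest first).
tuples :: Integer -> [Integer] -> [Tuple]
tuples n = finish . foldr step [([], n)]
  where
    step d           = concat . map (extend d)
    extend d (cs, r) = [(c : cs, r - c * d) | c <- [0 .. r `div` d]]
    finish           = map fst . filter ((0 ==) . snd)

change :: [Integer] -> Integer -> Tuple
change ds n = minWith count (tuples n ds)
```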

Then the big question is: does the functional calculation above give us the familiar greedy algorithm for making change? Not always. By happy accident, it does so for UK currency. But consider a currency with denominations 1p, 3p, 7p, for which the greedy algorithm does work, and a sum of 9p. The well-known algorithm (largest coins first, as many coins as possible) will use one 7p and two 1p coins—three coins in total. However, our function {\mathit{change}} instead uses three 3p coins—also three coins, but a different solution. This is because of the particular order in which {\mathit{tuples}} happens to generate results, and the particular tie-breaking rule that {\mathit{minWith}} uses. The specification and the implementation are not equal; therefore the deterministic greedy condition cannot hold. However, the nondeterministic greedy condition does hold, and the familiar algorithm is indeed a refinement of the specification that admits all optimal tuples instead of prematurely committing to one.
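For comparison, the familiar greedy algorithm can be rendered directly as follows (again a sketch of mine, continuing the transcription above; recall that denominations are listed smallest first, so {\mathit{foldr}} processes them largest first):

```haskell
-- The familiar greedy algorithm: for each denomination, largest first,
-- take as many coins as the remainder allows.
greedy :: [Integer] -> Integer -> Tuple
greedy ds n = fst (foldr step ([], n) ds)
  where
    step d (cs, r) = let c = r `div` d in (c : cs, r - c * d)

-- ghci> greedy [1,3,7] 9   -- [2,0,1]: two 1p and one 7p
-- ghci> change [1,3,7] 9   -- [0,3,0]: three 3p
-- Both use three coins: greedy refines the nondeterministic
-- specification, yet differs from the deterministic change.
```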

Beyond greedy algorithms

Most optimization problems are like the coin-changing problem: the most natural specification admits multiple distinct optimal solutions. The specification should not prematurely rule out any optimal solution. But a given implementation strategy must eventually make a choice.

Therefore any calculus of program equality is too strict; program refinement is necessary. But one need not fully commit to a calculus of relations, and all the complexity that entails; it suffices to judiciously introduce just a couple of specification primitives into an otherwise plain calculus of functions.

The book obviously goes beyond greedy algorithms. In particular, even using the nondeterministic greedy condition, not all currencies accommodate a greedy algorithm. One that does not is the pre-decimal UK currency {[ 1, 3, 6, 12, 24, 30 ]} (denominations in pence, ignoring fractional denominations). For example, using the greedy algorithm for 48d will yield one half-crown (30d), one shilling (12d), and one sixpence (6d), rather than two florins (24d each). For this instance we need an algorithm that is allowed to maintain multiple partial candidates rather than just one; the book discusses thinning algorithms of this kind, and the corresponding nondeterministic {\mathit{ThinWith}} operation needed to specify them.
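Using the sketches above (the hypothetical greedy and the transcribed change), the failure is easy to observe:

```haskell
-- A ghci session with the earlier sketches:
--   ghci> greedy [1,3,6,12,24,30] 48
--   [0,0,1,1,0,1]   -- sixpence, shilling, half-crown: three coins
--   ghci> change [1,3,6,12,24,30] 48
--   [0,0,0,2,0,0]   -- two florins: two coins
```

Here the greedy result is not even optimal, so no choice of tie-breaking can rescue it.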

Conclusion

Functional programming is a convenient vehicle for presenting algorithm design techniques and algorithmic developments: higher-order functions encapsulate the important patterns of computation, and good old-fashioned equational reasoning allows the calculation of programs from specifications, directly in terms of program texts rather than in a separate formalism such as predicate calculus. But specifically for optimization problems, it is convenient to step slightly outside the boundaries of pure functional programming, to accommodate nondeterministic functions in a handful of places.

Bio

Jeremy Gibbons is Professor of Computing at the University of Oxford, where he leads the Algebra of Programming research group and is former Deputy Head of Department. He served as Vice Chair then Past Vice Chair of ACM SIGPLAN, with a particular focus on Open Access. He is also Editor-in-Chief of the Journal of Functional Programming, SC Chair of ICFP, on the Advisory Board of PACMPL, an editor of Compositionality, and former Chair of IFIP Working Group 2.1 on Algorithmic Languages and Calculi.

Disclaimer: These posts are written by individual contributors to share their thoughts on the SIGPLAN blog for the benefit of the community. Any views or opinions represented in this blog are personal, belong solely to the blog author and do not represent those of ACM SIGPLAN or its parent organization, ACM.