This is the third part of a three-part article describing the approach that Richard Bird and I took to using functional programming for algorithm design in our recent book *Algorithm Design with Haskell*.

Part 0 of the article argued for using a functional language rather than an imperative one, because it supports the calculation of algorithms from specifications using good old-fashioned equational reasoning. In particular, the *fusion law* is fundamental. We specify a problem as clearly as possible, in a modular fashion; for example, many optimization problems take the form

`solve = aggregate . test . generate`
where the generation phase is some combinatorial enumeration, testing filters out some ineligible candidates, and aggregation selects the optimal remaining candidate. We then use fusion to promote aggregation and testing into the generator. The result is hopefully an efficient program; it may be more obscure than the very clear specification, but that’s fine, because it is demonstrably equivalent to it.
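As a toy instance of this three-phase shape (the problem and all the names here are my own choosing, not the book's), consider finding a fewest-element subsequence with a given sum:

```haskell
import Data.List (minimumBy, subsequences)
import Data.Ord (comparing)

-- generate: all subsequences; test: keep those with the target sum;
-- aggregate: choose a solution with the fewest elements.
-- (Partial: errors if no subsequence has the target sum.)
solve :: Int -> [Int] -> [Int]
solve target =
    minimumBy (comparing length)    -- aggregate
  . filter ((== target) . sum)      -- test
  . subsequences                    -- generate
```

For example, `solve 6 [1,2,3,4,5]` returns a two-element solution such as `[2,4]`; which of the tied optimal answers you get depends on the order in which `subsequences` enumerates them, which is exactly the issue discussed below.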

Part 1 used greedy algorithms as an illustration (the book also covers other algorithm design techniques, such as dynamic programming and exhaustive search). Just following one’s nose leads to a theorem indicating when an optimization problem as above has a greedy solution. But there is an obstacle: the premises of that theorem are too strong for most applications.

The essence of the obstacle is that the ordering we are optimizing (to find the cheapest, or smallest, or whatever) is often not linear, because it is not antisymmetric: there typically exist distinct solutions tied on cost. Because both specification and implementation are expressed as functions, both choose a specific cheapest solution—but maybe not the same solution. So the specification and implementation are different functions; and of course we then cannot prove that the implementation implements the specification.

The fix is to allow *nondeterministic functions* in a few, carefully controlled places. Specifically, when there is a tie, we consider *all* optimal partial candidates, not an arbitrary single one of them. Here is where we pick up the story.

## Nondeterministic functions

Optimization problems are often underdetermined: there are typically *multiple distinct optimal solutions*. The specification of the problem should *admit any optimal solution*, without committing in advance to one; this allows maximal freedom in implementing the specification. But ultimately, the implementation (especially if it is to be a greedy algorithm) must *commit to one optimal solution*: it must be optimal, of course, but not all optimal solutions need be reachable. Therefore, the implementation no longer equals the specification; it *refines* it.

Taking this perspective seriously inherently leads to a calculus of *relations* rather than functions, with relational inclusion as refinement. This is the approach taken in the *Algebra of Programming* book by Richard Bird and Oege de Moor, now sadly out of print. I think this is morally the right approach to take; but it is rather complicated. Experience has shown that—with a few honorable exceptions—no-one can keep all the necessary rules in their head long enough to use them fluently.

Fortunately, we can get away without a wholesale switch from functions to relations; a much gentler approach suffices. All we need is to extend our language of algorithm development with *one* special nondeterministic function `MinWith`, denoted with an initial capital to indicate that it is not a true function, and with one piece of meta-notation `x ← E`. These satisfy the following property:

`x ← MinWith cost xs ⟺ x ∈ xs ∧ (∀ y ∈ xs . cost x ≤ cost y)`

Read `MinWith cost xs` as “the collection of all minimal elements of finite list `xs` under cost function `cost`”, and `x ← F y` as “`x` is a possible result of applying nondeterministic function `F` to `y`”.
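To build intuition for this property (a model of my own, not the book's development), one can represent a nondeterministic function as returning the list of all its possible results; “`x` is a possible result of `MinWith cost xs`” then becomes membership in the list of all minimal elements:

```haskell
-- Model MinWith cost xs by the list of ALL minimal elements of xs;
-- "x is a possible result" then means that x occurs in this list.
minWithAll :: Ord b => (a -> b) -> [a] -> [a]
minWithAll cost xs = [ x | x <- xs, cost x == m ]
  where m = minimum (map cost xs)   -- xs assumed finite and non-empty
```

For instance, `minWithAll abs [3, -2, 2, 5]` yields `[-2, 2]`: two distinct elements tie on cost, and the specification admits either of them.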

It’s important to note that this extension is just for problem *specifications*; there is no change to the implementation language. The game is then to manipulate the nondeterministic specification to eliminate all the nondeterminism. Crucially, we can still use most of the familiar results; for example, the distributive law from Part 1 becomes

`x ← MinWith cost (xs ++ ys) ⟸ x ← MinWith cost [u, v] ∧ u ← MinWith cost xs ∧ v ← MinWith cost ys`

This means: to choose a minimal element of `xs ++ ys`, it suffices to choose a minimal element `u` of `xs` and a minimal element `v` of `ys`, and then a minimal one of those two.
The most important result we need is a fusion theorem for `foldr` with nondeterministic post-processing `M` (in particular, with `M = MinWith cost`):

`foldr g e' xs ← M (foldr f e xs)` for all finite lists `xs`, provided that `e' ← M e`, and that `g x y ← M (f x z)` whenever `y ← M z`.
With this theorem, we can make much better progress with greedy algorithms. Starting with the nondeterministic specification

`MCC = MinWith cost . candidates`
we can calculate the same deterministic greedy algorithm

`mcc = foldr gstep c0`

satisfying `mcc ← MCC`, where

`gstep x c ← MinWith cost (extend x c)`
That is, `gstep` is *any* deterministic refinement of the nondeterministic greedy step. The calculation goes through under the weaker assumption of refinement rather than equality for the greedy condition:

`gstep x c ← MinWith cost (concatMap (extend x) cs)` whenever `c ← MinWith cost cs`
## Making change

For example, consider the problem of making change in UK currency, which has coins of denominations 1p, 2p, 5p, 10p, 20p, 50p, £1 (i.e. 100p), £2 (200p):

`ukds = [1, 2, 5, 10, 20, 50, 100, 200]`
The smallest denomination is 1p, so any whole number sum can be achieved; the problem is to do so with the fewest coins. It is well known that the greedy algorithm works for making change in most currencies, including this one: from largest denomination to smallest in turn, use the most coins that you can. Let’s see if we can calculate that!

Define a *tuple* to be a list of coin counts, of the same length and in the same order as the denominations (smallest denomination first), with *count* the total number of coins in a tuple:

`type Tuple = [Nat]`

`count :: Tuple -> Nat`
`count = sum`

Then the change-making problem is specified by

`mkchange n ← MinWith count [ t | (t, 0) <- mkcandidates (reverse ukds) ([ ], n) ]`

where

`mkcandidates [ ] cr = [cr]`
`mkcandidates (d : ds) (t, r) = concat [ mkcandidates ds (c : t, r - c * d) | c <- [0 .. r div d] ]`
In words, components are coin denominations, and candidate solutions are tuples paired with the sum still to be made up. The initial candidate represents the empty tuple and remaining sum `n`. Partial solutions are constructed from largest denomination to smallest; for each smaller denomination `d`, the current candidate with remainder `r` is extended by `c` coins of denomination `d`, for each possible value of `c` from `0` up to `r div d`. Finally, we select only those candidates whose remainder is `0`, project out the tuples and discard the remainders, and take the smallest (by count) tuple.
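To check the specification on small cases, here is a directly executable (exponential-time) transliteration of it, with the nondeterministic choice modelled by the list of *all* optimal tuples; the rendering is mine, the book keeps the specification abstract:

```haskell
type Tuple = [Int]  -- coin counts, smallest denomination first

ukds :: [Int]
ukds = [1, 2, 5, 10, 20, 50, 100, 200]

count :: Tuple -> Int
count = sum

-- all candidate (tuple, remainder) pairs, denominations given largest first
mkcandidates :: [Int] -> (Tuple, Int) -> [(Tuple, Int)]
mkcandidates []       cr     = [cr]
mkcandidates (d : ds) (t, r) =
  concat [ mkcandidates ds (c : t, r - c * d) | c <- [0 .. r `div` d] ]

-- the specification, with the nondeterministic MinWith modelled by
-- the list of all tuples of minimal count making exact change
mkchangeAll :: Int -> [Tuple]
mkchangeAll n = [ t | t <- ts, count t == m ]
  where ts = [ t | (t, 0) <- mkcandidates (reverse ukds) ([], n) ]
        m  = minimum (map count ts)
```

For example, `mkchangeAll 6` yields the single optimal tuple `[1,0,1,0,0,0,0,0]`: one 1p and one 5p coin.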

Then the big question is: does the functional calculation above give us the familiar greedy algorithm for making change? Not always. By happy accident, it does so for UK currency. But consider a currency with denominations 1p, 3p, 7p, for which the greedy algorithm does work, and a sum of 9p. The well-known algorithm (largest coins first, as many coins as possible) will use one 7p and two 1p coins, three coins in total. However, our function instead uses three 3p coins: also three coins, but a different solution. This is because of the particular order in which the generator happens to produce candidates, and the particular tie-breaking rule that the deterministic selection function `minWith` uses. The specification and the implementation are not equal; therefore the deterministic greedy condition cannot hold. However, the nondeterministic greedy condition *does* hold, and the familiar algorithm is indeed a refinement of the specification that admits *all* optimal tuples instead of prematurely committing to one.
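The 9p tie can be confirmed by enumerating all optimal tuples for denominations 1p, 3p and 7p (throwaway code of mine, not the book's):

```haskell
-- denominations 1p, 3p, 7p; a tuple [a,b,c] lists counts, smallest first
change137 :: Int -> [[Int]]
change137 n = [ t | t <- ts, sum t == m ]
  where ts = [ [a, b, c] | c <- [0 .. n `div` 7]
                         , b <- [0 .. (n - 7 * c) `div` 3]
                         , let a = n - 7 * c - 3 * b ]
        m  = minimum (map sum ts)
```

Here `change137 9` yields `[[0,3,0],[2,0,1]]`: three 3p coins, or one 7p and two 1p coins; both use three coins, and the nondeterministic specification admits both.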

## Beyond greedy algorithms

Most optimization problems are like the coin-changing problem: the most natural specification admits multiple distinct optimal solutions. The specification should not prematurely rule out any optimal solution. But a given implementation strategy must eventually make a choice.

Therefore any calculus of program *equality* is too strict; program *refinement* is necessary. But one need not fully commit to a calculus of relations, and all the complexity that entails; it suffices to judiciously introduce just a couple of specification primitives into an otherwise plain calculus of functions.

The book obviously goes beyond greedy algorithms. In particular, even using the nondeterministic greedy condition, not all currencies accommodate a greedy algorithm. One that does not is the pre-decimal UK currency, with denominations

`predecimal = [1, 3, 6, 12, 24, 30]`

that is, the penny (1d), threepence (3d), sixpence (6d), shilling (12d), florin (24d), and half-crown (30d)
(ignoring fractional denominations). For example, using the greedy algorithm for 48d will yield one half-crown (30d), one shilling (12d), and one sixpence (6d), rather than two florins (24d each). For this instance we need an algorithm that is allowed to maintain multiple partial candidates rather than just one; the book discusses *thinning algorithms* of this kind, and the corresponding nondeterministic operation needed to specify them.
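A quick brute-force check (again my own throwaway code, not the book's) confirms the 48d failure:

```haskell
-- pre-decimal denominations in pence, ignoring fractional ones
predecimal :: [Int]
predecimal = [1, 3, 6, 12, 24, 30]

-- greedy: largest denomination first, as many coins as possible
greedy :: [Int] -> Int -> [Int]   -- expects denominations largest first
greedy []       _ = []
greedy (d : ds) n = n `div` d : greedy ds (n `mod` d)

-- fewest coins, by exhaustive search over exact tuples
best :: [Int] -> Int -> Int
best ds n = minimum [ sum t | t <- tuples ds n ]
  where tuples []       r = if r == 0 then [[]] else []
        tuples (d : es) r = [ c : t | c <- [0 .. r `div` d], t <- tuples es (r - c * d) ]
```

For 48d, `greedy (reverse predecimal) 48` uses three coins (half-crown, shilling, sixpence), whereas `best predecimal 48` is two (the two florins).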

## Conclusion

Functional programming is a convenient vehicle for presenting algorithm design techniques and algorithmic developments: *higher-order functions* encapsulate the important patterns of computation, and *good old-fashioned equational reasoning* allows the calculation of programs from specifications, directly in terms of program texts rather than in a separate formalism such as predicate calculus. But specifically for optimization problems, it is convenient to step slightly outside the boundaries of pure functional programming, to accommodate *nondeterministic functions* in a handful of places.

**Bio**: Jeremy Gibbons is Professor of Computing at the University of Oxford, where he leads the Algebra of Programming research group and is former Deputy Head of Department. He served as Vice Chair then Past Vice Chair of ACM SIGPLAN, with a particular focus on Open Access. He is also Editor-in-Chief of the Journal of Functional Programming, SC Chair of ICFP, on the Advisory Board of PACMPL, an editor of Compositionality, and former Chair of IFIP Working Group 2.1 on Algorithmic Languages and Calculi.

**Disclaimer:** *These posts are written by individual contributors to share their thoughts on the SIGPLAN blog for the benefit of the community. Any views or opinions represented in this blog are personal, belong solely to the blog author and do not represent those of ACM SIGPLAN or its parent organization, ACM.*