PL Perspectives

Perspectives on computing and technology from and for those with an interest in programming languages.

Ideas from programming languages research find application in many domains, even ones that don’t seem to involve a programming language. In this post, I’ll talk about how key insights around compositionality, a trait that is much desired and utilised in language semantics, can simplify the challenge of designing and explaining realistic distributed systems. We will see how functional-style combinators enable building complex systems from “primitive” protocols. We will survey the state of the art in behavioural type systems that can specify protocol interfaces. Finally, we will see how system correctness proofs can be structured in a way similar to compositional proofs of compiler correctness.

What Makes Distributed Systems Hard to Build and Understand?

There are several reasons why developing distributed systems is particularly challenging.

First, distributed systems, being composed of many independent entities (aka nodes), are inherently concurrent. The nodes communicate with each other by exchanging messages asynchronously, making it difficult for the programmer to foresee all possible interaction scenarios. Second, distributed systems must cope with failures. If one or more nodes stop following the protocol, due to a fault, network partitioning, or an adversary taking over, the remaining majority of well-behaved participants are still expected to keep the system functioning properly. Finally, distributed systems are usually expected to run for a long period of time. Therefore, one should be able to maintain and upgrade their implementation without ever stopping their operation.

These requirements mean that even the most basic distributed problems involve sophisticated protocols. Consider, for instance, the problem of reaching an agreement amongst N independent parties on a single value chosen from several proposed ones — the so-called distributed consensus problem. To tolerate the failure of fewer than N/2 nodes (i.e., any minority), distributed consensus requires a two-phase protocol with at least four kinds of messages, two rounds, and two different roles assumed by its participants at different moments of time. The most famous version of such a protocol, known as Paxos Consensus, was proposed by Leslie Lamport in 1989. It has since been widely adopted in industry and is now taught at universities.
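The arithmetic behind that fault-tolerance bound can be seen in a few lines. The following sketch (my own illustration, not part of any Paxos implementation) checks the key fact that makes majority-based consensus safe: any two majority quorums of N nodes share at least one node, so a decision reached by one quorum cannot be contradicted by another.

```python
# Illustrative sketch: majority quorums always intersect, which is the
# pigeonhole fact underlying the safety of Paxos-style consensus.
from itertools import combinations

def majority(n: int) -> int:
    """Smallest quorum size such that any two quorums share a node."""
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    """Failures survivable while a live majority quorum still exists."""
    return n - majority(n)  # equals (n - 1) // 2, i.e. any minority

def quorums_intersect(n: int) -> bool:
    """Exhaustively check that every pair of majority quorums overlaps."""
    nodes = range(n)
    q = majority(n)
    return all(set(a) & set(b)
               for a, b in combinations(combinations(nodes, q), 2))
```

For N = 5, a quorum is 3 nodes and the system tolerates 2 failures; the exhaustive check confirms that no two quorums can be disjoint.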

Those who, like the author of this post, attempt to present Paxos to students (or even professionals) without the necessary systems background, might discover that this is a nontrivial algorithm to explain. Even the fact that it works at all is far from obvious. In his widely-cited short follow-up paper Paxos Made Simple, Lamport argues that “The Paxos algorithm, when presented in plain English, is very simple.” That paper, however, only presents a core part of the protocol. It ignores, for instance, the system upgrade aspect and leaves many subtle design decisions open to the reader’s interpretation. Further papers (Lampson, 2001, Van Renesse and Altinbuken, 2015) attempt to better align the Paxos description with its existing encodings. Yet there is still a large conceptual gap between descriptions of core ideas in distributed systems and their real-world implementations.

In the past five years, researchers in the programming languages community started to tackle the challenge of bridging this gap. To do so, they employed the paramount PL insight for building complex yet intellectually manageable systems — composition.

Axes of System Composition

The most natural way to structure large programs is to compose them from smaller, independent units. Such units could be implemented using functions, classes, modules, or other programming abstractions.

When talking about system composition, it is common to distinguish two dimensions: horizontal and vertical. In this dichotomy, horizontal composition is a way to accommodate multiple protocols implemented within a single application. The complementary vertical composition is a way to “hide” the complexity that stems from implementation subtleties. Those subtleties usually have to do with various models of message passing and faults, as well as efficient implementations of the low-level protocol logic, which is typically not a concern of the protocol designers.

This image illustrates the two axes of composition. The horizontal dimension contains two protocols, for consensus and persistence, used for communication between the involved nodes. Notice that nodes C and D are unaware of the agreement, as it is irrelevant to their functionality, while the code of nodes A and B contains components implementing both protocols. The vertical dimension “zooms in” on a specific node implementation. For instance, the persistence layer may involve concurrent processing of updates from multiple sources, contain code to cope with local failures, and implement networking primitives. All of those aspects are irrelevant to the other nodes in the system (which may implement them differently), hence they should not be exposed at the protocol level.

Vertical System Composition via Layered Design

A great example of the use of vertical system decomposition appears in the Verdi verification framework (Wilcox et al., 2015). In Verdi, one can verify a realistic distributed system by building a stack of Verified System Transformers, each of which encapsulates the logic of handling a particular notion of distributed faults. This way, the top-level system implementation can be verified in the simplest network model (assumed to have no faults). Verdi has proven to be powerful enough to build and verify in Coq a real-world implementation of the Raft consensus protocol.
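To give a flavour of the transformer idea, here is a small sketch in the spirit of Verdi’s design (the names and shapes are my own, not Verdi’s Coq definitions). A wrapper turns a handler written against a duplicate-free network into one that is safe when the network may redeliver messages; the inner handler can then be understood, and verified, in the simpler model.

```python
# Hypothetical sketch in the spirit of Verdi's system transformers:
# lift a handler that assumes no message duplication to one that
# tolerates duplicates, by filtering out already-seen message ids.

def dedup_transformer(handler):
    """Wrap `handler` so that redelivered messages are ignored."""
    seen = set()
    def wrapped(msg_id, payload, state):
        if msg_id in seen:        # duplicate delivery: state unchanged
            return state
        seen.add(msg_id)
        return handler(payload, state)
    return wrapped

# The inner handler is written as if each message arrives exactly once.
def counter_handler(payload, state):
    return state + payload

robust = dedup_transformer(counter_handler)
s = 0
for mid, p in [(1, 10), (2, 5), (1, 10)]:   # message 1 is duplicated
    s = robust(mid, p, s)
# s is 15, not 25: the duplicate was filtered by the transformer
```

The point of the layering is that correctness of `counter_handler` is argued in the no-duplication model once, while the transformer’s correctness (duplicates never change the state) is argued separately and reused across systems.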

Vertical composition has also been employed to address the aforementioned challenge of explaining, implementing, and verifying Paxos in a modular fashion. To do so, a recent work by Garcia-Perez et al. (2018) represented Paxos as a combination of layers, starting from its “simple” version and gradually adding more “features” to it, each implemented as a separate layer. By doing so, the approach eventually restored the performance of the realistic version of the protocol, while retaining the simplicity of its idealistic description. This approach to vertical decomposition drew heavily from the ideas on compositional verification of compilers (Kumar et al., 2014) as well as of OS kernels (Gu et al., 2015). A version of vertical composition has also recently been employed in the Asphalion framework (Vukotic et al., 2019) for verifying Byzantine fault-tolerant protocols.

Distributed Systems as Combinations of Protocols

A clean horizontal decomposition of a distributed system into independent units is, unfortunately, not so easy. While each node typically operates independently from others, it might simultaneously run code that is responsible for providing fault-tolerance, replication, allowing upgrades, and implementing the core logic of the protocol (e.g., reaching consensus). One promising idea that has been explored recently is to consider each of those activities as a separate “primitive” protocol. Instead of keeping in mind all the processes a distributed node needs to take care of, one can just decide which component protocols (e.g., replication, consensus, etc.) the node needs to incorporate. The resulting implementation can thus be obtained as a combination of primitive protocols. Recent work implements this idea by providing a Haskell library of “protocol combinators” (Andersen and Sergey, 2019). The functions offered by the library are very similar in spirit to more traditional combinators used for constructing parsers and financial contracts.
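The combinator style can be conveyed with a toy sketch. The following is my own illustration (the names are invented and bear no relation to the API of the Haskell library mentioned above): each primitive protocol is a message handler that reacts only to its own messages, and a combinator assembles a node’s behaviour from several such handlers.

```python
# Hypothetical sketch of "protocol combinators": a node's behaviour is
# assembled from primitive protocols, each a function from an incoming
# tagged message to an updated local state.

def protocol(tag, step):
    """A primitive protocol handles only messages carrying its tag."""
    def run(msg, state):
        t, body = msg
        return step(body, state) if t == tag else state
    return run

def combine(*protocols):
    """Horizontal composition: offer each message to every component
    protocol; components the message is not addressed to leave the
    state unchanged."""
    def run(msg, state):
        for p in protocols:
            state = p(msg, state)
        return state
    return run

# A node participating in both a consensus and a persistence protocol.
consensus = protocol("consensus", lambda v, st: {**st, "vote": v})
persist   = protocol("persist",   lambda v, st: {**st, "log": st["log"] + [v]})
node = combine(consensus, persist)

st = {"vote": None, "log": []}
st = node(("consensus", 42), st)
st = node(("persist", "entry-1"), st)
# st == {"vote": 42, "log": ["entry-1"]}
```

As with parser combinators, the payoff is that each primitive protocol can be written and understood in isolation, while `combine` handles the plumbing of running them side by side on one node.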

Types for Inter-Protocol Interfaces

Alas, it is frequently difficult or even impossible to have a clean split of a complex protocol into independently specified and implemented components. What if, for instance, the behaviour of the consensus logic depends on whether certain nodes have been properly kept up-to-date via a separate upgrade protocol? Such a scenario is not uncommon, for example, in Byzantine fault-tolerant protocols such as PBFT. To allow for composition, one must specify the interaction interface between different primitive protocols.

Fortunately, techniques to tackle the problem of interfacing several concurrent processes (Jones, 1983) have been previously explored in the context of multi-threaded programming. In the last two years, those techniques have been successfully adapted to reason about complex distributed systems. While the initial goal of the decomposition was to enable scalable mechanised systems verification (Taube et al., 2018), the same ideas have proven to be very useful for the purpose of systems implementation and testing (Desai et al., 2018).

The contemporary research on PL design offers a number of proposals for conveniently capturing the inter-protocol interfaces as a form of behavioural types. One of the promising approaches is based on dependent types, which specify the conditions a node implementation needs to satisfy in order to conform to the corresponding protocol. Several protocol-aware dependent typing disciplines have been implemented as embeddings in Idris (Brady, 2017) and Coq (Sergey et al., 2018).
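To hint at what such protocol conformance means without dependent types, here is a deliberately simplified sketch of my own: the ordering constraints that a behavioural type would enforce statically are approximated by a runtime-checked state machine over a toy acceptor’s phases.

```python
# Hypothetical sketch: the conformance that behavioural types capture
# statically, approximated here by a runtime-checked state machine.
# A toy acceptor may first promise, and only then accept.

PROTOCOL = {
    "start":    {"promise": "promised"},
    "promised": {"accept": "accepted"},
    "accepted": {},                      # no further messages allowed
}

class Acceptor:
    def __init__(self):
        self.phase = "start"

    def handle(self, msg):
        allowed = PROTOCOL[self.phase]
        if msg not in allowed:
            raise RuntimeError(f"protocol violation: {msg} in phase {self.phase}")
        self.phase = allowed[msg]

a = Acceptor()
a.handle("promise")
a.handle("accept")       # fine: follows the allowed order
# a.handle("promise")    # would raise: out-of-order message
```

A dependently typed encoding, such as those in Idris or Coq, would rule out the commented-out violation at compile time rather than at run time, which is precisely what makes such interfaces useful for composing protocols safely.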

The Cost of Compositional Reasoning

Horizontal and vertical composition complement each other for structuring the implementation and reasoning about distributed systems. As I argued in my PODC’19 talk, in order to enjoy both of them when doing systems implementation and verification, one has to build on a higher-order logic, such as that used by Coq. This point has been independently demonstrated by the work on the Aneris verification framework (Krogh-Jespersen, 2019), which is implemented on top of Coq-embedded Iris logic.

The expressivity of a higher-order logic as a framework for implementing and verifying distributed systems has a significant practical downside: most of the proofs need to be conducted with a human in the loop. Conversely, by giving up on either horizontal or vertical composition, one can avoid the need for higher-order logic and achieve much better support for proof automation. For instance, the verification-friendly languages PSync (Dragoi et al., 2016) and Goolong (Gleissenthall et al., 2019) reduce the reasoning about asynchronous systems to verification in a round-based model. This makes verification decidable (the user only needs to specify the properties of interest), but only for “monolithic” protocols.
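The round-based view can be conveyed by a tiny sketch (my own illustration, unrelated to the actual PSync or Goolong languages): instead of arbitrary asynchronous interleavings, execution proceeds in lockstep rounds in which every node first sends, then receives all of the round’s messages, then updates its state.

```python
# Hypothetical sketch of a round-based (lockstep) execution model:
# reasoning is done per round, not per asynchronous interleaving.

def run_rounds(nodes, send, update, rounds):
    """nodes: dict node_id -> state; execute synchronous rounds."""
    for _ in range(rounds):
        # every node sends a message based on its current state
        inbox = [send(nid, st) for nid, st in nodes.items()]
        # every node then updates using the full set of round messages
        nodes = {nid: update(st, inbox) for nid, st in nodes.items()}
    return nodes

# Toy "agreement": each round, every node adopts the maximum proposal.
send   = lambda nid, st: st
update = lambda st, inbox: max(inbox)

final = run_rounds({"A": 1, "B": 7, "C": 3}, send, update, rounds=1)
# final == {"A": 7, "B": 7, "C": 7}: all nodes agree after one round
```

Because every execution is a sequence of such rounds, the properties to check become first-order statements about round boundaries, which is what puts the verification problem within reach of automation.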


In his 1997 paper “Composition: A Way to Make Proofs Harder”, Lamport notes that “engineers rarely specify and reason formally about the systems they build. […] It seems unlikely that reasoning about the composition of open-system specifications will be a practical concern within the next 15 years”. This observation has proven valid. It took the programming languages community almost twenty years to develop tools and insights for making composition an enabling tool for building well-understood, realistic, and trustworthy distributed systems. But we are still only at the beginning of the long road ahead. It took many more than fifteen years for us to understand the benefits and drawbacks of structuring regular code in various ways. It might take at least as long to distill the essence of the principled protocol design before the task of implementing a distributed system will become as easy as writing a simple function.

Bio: Ilya Sergey is a tenure-track Associate Professor at Yale-NUS College and the School of Computing of the National University of Singapore. He does research in programming language theory, semantics, software verification, and program synthesis. He was awarded the 2019 AITO Dahl-Nygaard Junior Prize for his contributions in the development and application of programming language techniques to various problems across the programming spectrum.

Disclaimer: These posts are written by individual contributors to share their thoughts on the SIGPLAN blog for the benefit of the community. Any views or opinions represented in this blog are personal, belong solely to the blog author and do not represent those of ACM SIGPLAN or its parent organization, ACM.