
PL Perspectives

Perspectives on computing and technology from and for those with an interest in programming languages.


Shortly before I started as a new faculty member, I was given my teaching assignment: revamping the undergraduate programming languages class. Since the class hadn’t been taught for many years—and since I was a postdoc at the time, living a life of relative leisure—I decided that it would be a great idea to redesign the course from scratch. What should go into a course titled Theory and Design of Programming Languages?

Since this post is a bit long, I will lead with the punchline: I ended up designing an undergraduate course focused on languages with strong static guarantees—Haskell and Rust—intermixed with more theoretical material on core languages and operational semantics; readers who want to see the details can skip ahead to the linked sections. As a side benefit, the course topics can be easily connected to current ideas and trends in PL research.

Programming Languages at the Undergraduate Level

Unlike in some fields of computer science, in programming languages there isn’t widespread agreement on the undergraduate curriculum (though there are ongoing efforts to improve standards). Most undergraduate degrees in the United States have two kinds of programming languages courses. The first one is compilers, a topic that can be taught in a variety of interesting ways.

Theory and Design of Programming Languages is an example of the second kind of course, programming paradigms. In principle, this course is organized around major programming paradigms, such as object-oriented programming, functional programming, logic programming, and scripting. In practice, this organization often resembles a kind of programming language buffet, introducing a bunch of programming languages in rapid succession and having students write programs in each.

The purpose of this kind of course is less clear, and more controversial. Philosophically, the idea of classifying languages by paradigm seems a bit fishy. Practically, a paradigms course risks students taking away a superficial view of languages—if students only spend a short time with each language, they may conclude that languages are mostly the same besides the punctuation. However, these courses do serve a useful purpose: they give students working experience with novel programming languages that they likely have not seen before. Different languages encourage different styles of programming; in short, languages influence how programmers think. But how can we design a course that has the benefits of traditional paradigms courses without the focus on “paradigms”?

Principles for a New Course

Trying for a fresh spin on the paradigms course, I decided to refocus the material on a narrower topic: languages with strong static guarantees. Such languages are becoming increasingly mature and suitable for practical use, and I wanted to show how language technology could eliminate entire classes of errors at compile-time. At the same time, working in a language with strong static checks is radically different from programming in more permissive languages, a point that can only be appreciated through direct programming experience.

With this choice, the central messages became clear. First and foremost:

Language technology can eliminate entire classes of common errors at compile-time.

This point leads naturally into a question: how can we know, with any certainty, what bugs are possible in a language, and more broadly, how can we reason about programming languages? Thus, the second message:

It is possible to formally model and reason about programming languages.

A proper treatment of this point would require another course, but I hoped to teach students how to specify languages on paper, at least, and give them a glimpse of contemporary PL research.

Of course, there are always practical requirements:

  • Accessible to upper-level undergraduates. Target third- and fourth-year undergraduate CS students; half had seen some systems programming, and none had seen any functional programming.
  • Feasible to cover in a semester. Design for a 15-week course with about 35 hours of lectures.
  • Doable with typical resources and staff. Rely as much as possible on existing, freely-available course materials and tools; no custom teaching languages or new textbooks.
  • Self-contained, as much as possible. No redesigning the whole CS curriculum (yet).

Below, I’ll outline the three main components of the course. For readers who are curious about the details, the course schedule and teaching materials are available here.

Programming in Haskell

The first half of the course introduces Haskell, a lazy, strongly-typed functional language.

Why Haskell?

Haskell and other languages from the ML family are common choices for teaching purposes, usually serving as an example of functional programming. While functional programming doesn’t align directly with the course goals and would ideally be covered in a separate course, programming paradigm courses are often the only place in the CS curriculum where students learn about this important style of programming.

  • Strong type system. Haskell’s type system is likely to be stronger and richer than anything students have encountered before, showing them the power of structuring data with rich types. Haskell’s type inference also helps dispel the myth that strongly-typed languages must be verbose and full of tedious annotations.
  • Control of effects. Haskell’s strong control of effects, while sometimes awkward for programming, forces students to think about purity in a way that is less obvious in more typical, non-pure languages.
  • Advanced language features. Haskell popularized a variety of language concepts that are influential in contemporary programming language design. Examples include algebraic datatypes and pattern matching; polymorphism and typeclasses; and even monads. (A small sketch of the first two appears just after this list.)
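To make this concrete, here is a tiny sketch of an algebraic datatype, pattern matching, and a typeclass instance in Haskell. This is my own illustration, not taken from the course materials:

```haskell
-- An algebraic datatype for binary trees, polymorphic in the element type.
data Tree a
  = Leaf
  | Node (Tree a) a (Tree a)

-- Structural recursion by pattern matching on each constructor.
size :: Tree a -> Int
size Leaf         = 0
size (Node l _ r) = 1 + size l + size r

-- A typeclass instance: how to print a Tree, given how to print its elements.
instance Show a => Show (Tree a) where
  show Leaf         = "."
  show (Node l x r) = "(" ++ show l ++ " " ++ show x ++ " " ++ show r ++ ")"

main :: IO ()
main = do
  let t = Node Leaf (1 :: Int) (Node Leaf 2 Leaf)
  print (size t)   -- prints 2
  print t          -- prints (. 1 (. 2 .)) via our Show instance
```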

Programming Assignments

The Haskell portion of the course features three programming assignments.

  1. A simple puzzle solver. As a warmup to the language, students develop a solver for a simple, Sudoku-like puzzle.
  2. A purely-functional datastructure. To see some interesting applications of Haskell datatypes and work with a functional-style datastructure, students implement several kinds of difference lists and zippers. (A minimal sketch of a difference list appears just after this list.)
  3. A language implementation. The final project of the first half is an interpreter for a simple, Scheme-like language, starting from an operational semantics specification. Students learn how to use monads for three applications: maintaining state during parsing, handling errors during evaluation, and handling input and output for the top-level user interface. (A fourth, bonus application has students write QuickCheck generators for property-based testing of their parser.)
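To give a flavor of the second assignment, here is a minimal difference list (again my own sketch; the actual assignment explores several variants, plus zippers). The whole point of the structure is that appending is just function composition:

```haskell
-- A difference list represents a list as a function that prepends
-- the list's elements onto whatever list it is eventually given.
newtype DList a = DList ([a] -> [a])

empty :: DList a
empty = DList id

singleton :: a -> DList a
singleton x = DList (x:)

-- Appending is function composition, so repeated appends stay cheap
-- (no left-nested (++) to re-traverse).
append :: DList a -> DList a -> DList a
append (DList f) (DList g) = DList (f . g)

toList :: DList a -> [a]
toList (DList f) = f []

main :: IO ()
main = print (toList (append (singleton 1) (append (singleton 2) (singleton 3))))
-- prints [1,2,3]
```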

Programming in Rust

The second half of the course introduces Rust, a strongly-typed language for safe systems programming developed by Mozilla.

Why Rust?

Being a relatively new language, Rust is currently not a common choice in programming languages courses. However, Rust is an interesting example of a language with strong static checks, and fits well within the course:

  • Naturally builds on concepts from Haskell. Rust has direct analogues of many Haskell features, like algebraic datatypes (“struct” and “enum” types), typeclasses (“traits”), and polymorphism (“generics”), with restrictions for systems programming applications. Furthermore, while Rust resembles imperative languages like C or C++, it has closures and supports a functional-programming style when convenient. (A small sketch of these analogues appears just after this list.)
  • Mechanisms for tracking ownership and controlling aliasing. Perhaps the most interesting aspect of Rust is how it blends ideas from cutting-edge PL research to bring safety to a new domain: low-level systems programming. Rust’s concept of ownership—inspired by linear logic and C++’s RAII—prevents memory leaks without manual memory management or garbage collection. It also exposes the idea of a resource: something that must be released after it is acquired. There are many resources besides memory, such as file handles, sockets, locks, etc. Rust is also notable for its strong control of aliasing. While languages like Haskell highlighted the importance of considering a program’s side-effects, the dangers of aliasing have been less well-known outside of PL research. Rust’s borrow checker—an alias analysis inspired by PL research on region-based memory management and its realization in the Cyclone language—prevents dangling pointers and data races in concurrent programs. Rust dispels the myth that strongly-typed programming languages are only suitable for high-level programming.
  • Advanced compiler. Rust’s compiler relies on a variety of sophisticated program analyses to ensure safety while reducing programmer annotations. Rust’s error messages have been a key focus of the language’s developers, and they are highly useful for working out why the compiler has rejected a program.
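Here is a small sketch (mine, not course material) of the Haskell analogues mentioned above—an enum, a trait, and a generic function—with one line that the compiler would reject left in a comment:

```rust
// Algebraic datatype with pattern matching.
enum Shape {
    Circle(f64),
    Rect(f64, f64),
}

// Trait: the Rust counterpart of a typeclass.
trait Area {
    fn area(&self) -> f64;
}

impl Area for Shape {
    fn area(&self) -> f64 {
        match self {
            Shape::Circle(r) => std::f64::consts::PI * r * r,
            Shape::Rect(w, h) => w * h,
        }
    }
}

// Generic function bounded by the trait, like a typeclass constraint.
fn total_area<T: Area>(shapes: &[T]) -> f64 {
    shapes.iter().map(|s| s.area()).sum()
}

fn main() {
    let shapes = vec![Shape::Circle(1.0), Shape::Rect(2.0, 3.0)];
    println!("total area: {}", total_area(&shapes)); // `shapes` is only borrowed here
    let owner = shapes; // ownership of the vector moves to `owner`...
    println!("still {} shapes", owner.len());
    // println!("{}", total_area(&shapes)); // ...so using `shapes` here would not compile
}
```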

Programming Assignments

The Rust portion of the course also features three programming assignments.

  1. A few simple programs. As a warmup to the language, students implement a Towers of Hanoi solver and a basic calculator.
  2. A pointer-based datastructure. To learn about Rust’s core ownership and borrow checking analyses, students implement a full-featured binary search tree with iterators.
  3. A concurrent RSS indexer. The final project of the second half of the course gives students experience with another core strength of Rust: concurrency. Students turn a single-threaded RSS indexer into a concurrent indexer in two ways: first using locks and semaphores (a simplified sketch appears just after this list), and then by writing a simple thread pool. (A third, bonus implementation uses cooperative concurrency through Rust’s recent async/await features.)
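To give a flavor of the lock-based version, here is a heavily simplified sketch of my own (the real assignment fetches and parses actual RSS feeds) in which worker threads push entries into a shared, Mutex-protected index:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // A shared index, protected by a Mutex and shared via Arc.
    let index = Arc::new(Mutex::new(Vec::<String>::new()));

    let handles: Vec<_> = (0..4)
        .map(|worker| {
            let index = Arc::clone(&index);
            thread::spawn(move || {
                // In the real assignment each worker would fetch and parse
                // an RSS feed; here we just record a placeholder entry.
                let entry = format!("article from worker {}", worker);
                index.lock().unwrap().push(entry);
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    println!("indexed {} articles", index.lock().unwrap().len());
}
```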

A note on the borrow checker. Rust’s alias analysis is notoriously challenging to learn, and challenging to teach. On the conceptual side, the course spends some time discussing the concept of aliasing: what it is, why it is hard to reason about, and how it can lead to bugs. Then, we introduce a toy version of the Rust borrow checker, sketching how an alias analysis works. On the practical side, Rust’s borrow checker is constantly evolving, seeking to accept more and more safe programs by using increasingly sophisticated program analyses. The actual analysis is too complex to teach in the classroom, and students mostly learn how it functions by working through the homework assignments. In my experience teaching the course, the borrow checker is now good enough that most students can learn how to satisfy it without much trouble.
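As a tiny illustration of the kind of aliasing error the borrow checker rules out (my example, not a course exercise):

```rust
fn main() {
    let mut names = vec![String::from("ada"), String::from("grace")];

    // An immutable borrow: `first` aliases data inside `names`.
    let first = &names[0];

    // Mutating `names` here would be rejected, because pushing might
    // reallocate the vector and leave `first` dangling:
    // names.push(String::from("alan")); // error: cannot borrow `names` as
    //                                   // mutable because it is also
    //                                   // borrowed as immutable

    println!("{}", first);

    // Once the borrow is no longer live, mutation is allowed again.
    names.push(String::from("alan"));
}
```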

Core Languages

While the Haskell and Rust portions of the course give students hands-on programming experience, I also wanted the course to introduce some theoretical concepts that might be broadly useful for designing programming languages (the course is titled Theory and Design of Programming Languages, after all). A proper treatment of PL theory requires a dedicated course, so I made two simplifying choices:

  1. Focus on operational semantics. This seemed like a fairly easy choice: operational semantics is a reusable tool that is flexible enough to model many kinds of programming languages.
  2. Don’t do any proofs. A formal treatment of operational semantics requires more mathematical knowledge than I could assume, so I opted for an informal presentation. While this meant that the course couldn’t show how to prove things about programming languages, the intuitive treatment is already interesting and useful.

Written Assignments

Rather than putting the theoretical topics in a separate section, the course introduces operational semantics for several core languages throughout the semester. Each programming assignment is paired with a written assignment that uses operational semantics to illuminate some particular aspect of the (real) languages we are covering in class:

  • The lambda calculus as a core language (Haskell)
  • Simple type systems and eager datatypes (Haskell)
  • Lazy datatypes and evaluation (Haskell)
  • A core imperative language (Rust)
  • The lambda calculus with references (Rust)
  • A toy process calculus for concurrency (Rust)

Most written assignments follow a similar pattern: we define a core language and an operational semantics, ask students to write and step toy programs by hand, and then have them extend the core language with new features.
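For instance, a core language along the lines of the first written assignment could be the call-by-value lambda calculus, whose small-step rules look roughly like this (my rendering, not the exact assignment handout):

```latex
% Syntax:  e ::= x | \x. e | e1 e2      values:  v ::= \x. e
%
% Small-step, call-by-value evaluation rules:
\[
  \frac{e_1 \longrightarrow e_1'}{e_1 \; e_2 \longrightarrow e_1' \; e_2}
  \qquad
  \frac{e_2 \longrightarrow e_2'}{v \; e_2 \longrightarrow v \; e_2'}
  \qquad
  (\lambda x.\, e) \; v \longrightarrow e[v/x]
\]
```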

Lessons Learned

While designing the course turned out to be a bit more work than I had bargained for, I learned a lot throughout the experience. A few takeaways:

  • There’s plenty of room to experiment within “Programming Paradigms”. Narrowing the scope and going deeper into certain topics is a good idea. The course I described makes certain choices, but many other course designs are possible and worth exploring.
  • Students are curious about PL research. Academic PL research has an unfortunate reputation as being highly esoteric and hard to understand. Bringing ideas from research into undergraduate education, whether through a special end-of-class lecture, or integrated throughout the course, can help dispel this reputation. To my surprise, some students told me that the core language topics were their favorite.
  • This is absolutely doable at the undergraduate level! Throughout the first iteration of the course, I was constantly worrying that the course was trying to cover too much material—two challenging programming languages, several core languages, and operational semantics. While I did have to scale back a few of my more outlandish ideas, in the end I think that many ideas from PL have matured to the point that they absolutely can (and maybe should) be taught at the undergraduate level.

Resources and Inspirations

This course is primarily influenced by previous courses covering strongly-typed languages and programming language semantics, like CSCI 198 at Pomona and CS 242 at Stanford. There are now plenty of regular courses covering Haskell and functional programming (e.g., CS 552 at Penn), and a few experimental courses covering Rust (e.g., CS 198 at Penn, EECS 3/496 at Northwestern, and CS 3210 at Georgia Tech). The course material is also strongly influenced by CMSC 330, UMD’s undergraduate PL course, which is detailed in a nice series of posts on the predecessor of this blog.

Acknowledgments: Launching a new course would not have been possible without a fantastic group of teaching assistants and undergraduate peer mentors: Stephen Lee, Zach Susag, Link Lin, and Jeremy Intan played a huge role in getting this course off the ground. Mark Mansi helped clarify lots of Rust confusions, and saved a lot of compiler time by telling me about cargo check. Michael Greenberg gave feedback on the course topics and emphasized the importance of not skipping Applicative in Haskell. Finally, Mike Hicks greatly improved this post with numerous edits and suggestions.

Bio: Justin Hsu is an Assistant Professor of Computer Sciences at the University of Wisconsin–Madison. He accidentally got into PL research after taking a one-off course on Haskell, and he hopes that teaching more PL topics at the undergraduate level will inspire other students to make similarly questionable life choices.

Disclaimer: These posts are written by individual contributors to share their thoughts on the SIGPLAN blog for the benefit of the community. Any views or opinions represented in this blog are personal, belong solely to the blog author and do not represent those of ACM SIGPLAN or its parent organization, ACM.