You have heard your grandmother tell you many times: parallel programming is hard. In 2022, does it still have to be? Back in grandma’s heyday, they knew a cool and breezy way to do parallelism: pure functional programming. They knew that pure functions are parallel by default, being free of pesky concurrency bugs and all. But parallel functional programming remained slow and steady, resisting practical efficiency for decades. This post shows a way to solve the performance problems of functional programming.
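To make “parallel by default” concrete: because a pure function depends only on its arguments, its applications are independent and can run in any order. Here is a minimal sketch in Python (my illustration only; the post itself concerns functional languages, and the function name is hypothetical):

```python
from multiprocessing import Pool

def square(x):
    # Pure: no shared state, no side effects, so every call is
    # independent and free of data races by construction.
    return x * x

if __name__ == "__main__":
    with Pool() as pool:
        # The pool may evaluate these calls in any order, on any
        # worker, and the result is still deterministic.
        print(pool.map(square, range(10)))
```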
Modern analog computers exhibit a host of analog behaviors that affect the fidelity of the mapped computation. How can we mitigate these analog behaviors during compilation?
Peer review is an essential aspect of academic research, providing a feedback loop that stimulates and rewards high-quality work – but, as we all know, it doesn’t always function well. To help maintain a consensus on what constitutes good reviewing, this note spells out some bad and good reasons to reject and accept papers.
The ASPLOS Steering Committee is considering two changes to the ASPLOS submission process: 1) three submission deadlines spread over the year, and 2) the possibility for papers near acceptance to be revised and resubmitted. This proposal outlines these changes.
“Undefined Behavior” often has a bad reputation. But what, really, is Undefined Behavior, and is it actually that bad?
In this blog post, I will look at this topic from a PL perspective, and argue that Undefined Behavior is a valuable tool in a language designer’s toolbox.
How does compiler optimization affect binary code differences? In this work, we perform a systematic study using search-based iterative compilation. We have built an auto-tuning framework called BinTuner that iteratively compiles a program under different optimization settings to tune the differences in the resulting binary code. Our results demonstrate that the effect of modern compiler optimization on binary code differences has been swept under the carpet for a long time. We hope our study can help the research community redesign optimization-resistance experiments and evaluate compiler-agnostic capabilities.
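As a rough picture of what search-based iterative compilation looks like, consider the hedged Python sketch below (not BinTuner’s actual code; the source file, flag set, and byte-level difference metric are illustrative assumptions): compile the same program under different optimization settings and measure how much the binaries diverge.

```python
import subprocess

# Illustrative flag set; an iterative-compilation search would explore
# a much larger space of optimization sequences.
FLAGS = ["-O0", "-O1", "-O2", "-O3", "-Os"]

def compile_with(flag):
    # Assumes gcc is installed and prog.c exists (hypothetical input).
    subprocess.run(["gcc", flag, "-o", "a.out", "prog.c"], check=True)
    with open("a.out", "rb") as f:
        return f.read()

baseline = compile_with(FLAGS[0])
for flag in FLAGS[1:]:
    binary = compile_with(flag)
    # Crude difference metric: fraction of differing bytes at equal
    # offsets (real binary-diffing tools compare code semantics).
    n = min(len(baseline), len(binary))
    score = sum(a != b for a, b in zip(baseline, binary)) / n
    print(f"{flag}: ~{score:.0%} of bytes differ from {FLAGS[0]}")
```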
Deep learning has transformed the way we think of software and what it can do. But deep neural networks are fragile and their behaviors are often surprising. In many settings, we need to provide formal guarantees on the safety, security, correctness, or robustness of neural networks. In this post, I will talk about the verification problem for neural networks and some of the prominent verification techniques that are being developed. I will also discuss the great challenges that our community is well positioned to address and some of the ideas that we can port from the machine-learning community.
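One prominent family of verification techniques is bound propagation. Below is a minimal interval bound propagation sketch in Python/NumPy (my illustration, not code from the post; real verifiers use far tighter relaxations): it pushes an input box through affine-plus-ReLU layers to obtain sound outer bounds on the outputs, so a safety property checked on the bounds holds for the network.

```python
import numpy as np

def interval_bounds(layers, lo, hi):
    # Propagate an input box [lo, hi] through affine + ReLU layers.
    # Sound but incomplete: the output box over-approximates the
    # network's true reachable set.
    for W, b in layers:
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        # Interval arithmetic for W @ x + b: positive weights carry
        # lower bounds to lower bounds; negative weights swap them.
        new_lo = W_pos @ lo + W_neg @ hi + b
        new_hi = W_pos @ hi + W_neg @ lo + b
        # ReLU is monotone, so clamping the bounds stays sound.
        lo, hi = np.maximum(new_lo, 0.0), np.maximum(new_hi, 0.0)
    return lo, hi

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    layers = [(rng.standard_normal((4, 3)), np.zeros(4)),
              (rng.standard_normal((2, 4)), np.zeros(2))]
    lo, hi = interval_bounds(layers, np.full(3, -0.1), np.full(3, 0.1))
    print(lo, hi)  # every output for inputs in the box lies in [lo, hi]
```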
With about half a century of life, the Unix shell is pervasive and entrenched in our computing infrastructure—with recent virtualization and containerization trends only propelling its use. A fresh surge of academic research highlights the potential for tackling long-standing open problems that are central to the shell and for enabling further progress. A recent panel discussion at HotOS ’21 concluded that improvements and research on the shell can be impactful and identified several such research directions. Maybe it’s time for your research to be applied to the shell too?
My POPL 2011 “Flash Fill” paper was the most important turning point in my research career. I went from searching for the hardest problem I could solve to searching for the simplest problem that would have the most impact. It sensitized me to customer connection and enlightened me as to how practical requirements can inspire foundational research ideas and directions. And most of all, it led to a blissful connection with my loved ones.