We accept that data structure determines program structure. But we should not forget that it is not just the input data that may be structured: output data may be structured too, and both may determine program structure.
This post showcases three papers published in SIGPLAN venues which will appear as Research Highlights in upcoming issues of the Communications of the ACM.
Telling the truth about all program behaviors collectively is hard. Can an analysis say something useful and true without making assumptions that are violated by nearly all real programs?
The use of the term “program verification” has expanded well beyond its original meaning. As research in this space advances and expands, is it time to reconsider the term?
Although the computer science community successfully harnessed exponential increases in computer performance to drive societal and economic change, the exponential growth in publications is proving harder to accommodate. To gain a deeper understanding of publication growth and inform how the computer science community should handle this growth, we analyze publication practices from several perspectives.
A Checklist Manifesto for Empirical Evaluation: A Preemptive Strike Against a Replication Crisis in Computer Science
To avoid an empirical replication crisis in programming languages research, PL researchers should employ the best scientific practices for empirical evaluation. A SIGPLAN empirical evaluation committee has assembled a checklist to help.
The wealth of code now available online is fertile ground for applying machine learning to programming tasks. This post discusses the promise of, and some progress on, the “deep code” problem. It is the first in a series.
Journals broaden the impact of PL. One way to make journals a more attractive publication vehicle is to allow presentations of journal papers at PL conferences, as TOPLAS does.
The purpose of a program analysis is to infer whether a certain property of a program execution can be observed at runtime. The notion of an analysis’ soundness defines how much confidence one should put in its results. This notion is not uniform: it depends on whether the analysis is intended to be used as a testing tool or as a verification tool.
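The two notions of soundness can be sketched concretely: a verification-oriented analysis over-approximates the set of program behaviors (so a “no bad behavior” verdict is trustworthy, at the cost of false alarms), while a testing-oriented analysis under-approximates it (so every reported behavior is real, at the cost of missed behaviors). A minimal illustration in Python, with a toy program and two invented toy “analyses” (not from the post):

```python
# Toy program under analysis: maps inputs in [0, 10] to x * 2 - 10.
def program(x):
    return x * 2 - 10

# Verification-style (over-approximate) analysis: interval abstraction.
# It covers ALL executions of program() on [lo, hi], so a claim like
# "no output below -10 is possible" is trustworthy -- but the interval
# may include values no concrete run produces (false alarms).
def interval_analysis(lo, hi):
    # Sound here because program() is monotone on the input interval.
    return (lo * 2 - 10, hi * 2 - 10)

# Testing-style (under-approximate) analysis: run a few concrete inputs.
# Every reported output really occurs, but some behaviors are missed.
def sample_analysis(inputs):
    return {program(x) for x in inputs}

out_lo, out_hi = interval_analysis(0, 10)
print(out_lo, out_hi)            # -10 10: bounds on every possible output
print(sample_analysis([0, 5]))   # {-10, 0}: real outputs only, incomplete
```

A verifier built on the first analysis can prove the absence of certain behaviors; a bug-finder built on the second can prove their presence. Neither result contradicts the other, but “sound” means something different for each.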