If you want to learn about soundiness, you are in the right place. Below is a brief excerpt from our soundiness manifesto...

Static program analysis is a key component of many software development tools, including compilers, development environments, and verification tools. Practical applications of static analysis have grown in recent years to include tools by companies such as Coverity, Fortify, GrammaTech, IBM, and others. Analyses are often expected to be sound in that their result models all possible executions of the program under analysis. Soundness implies that the analysis computes an over-approximation in order to stay tractable; the analysis result will also model behaviors that do not actually occur in any program execution. The precision of an analysis is the degree to which it avoids such spurious results. Users expect analyses to be sound as a matter of course, and desire analyses to be as precise as possible, while being able to scale to large programs.
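To make "over-approximation" and "spurious results" concrete, here is a minimal sketch of our own (the function and values are illustrative, not from the manifesto): a sound analysis must model every path that could execute, so it may also report behaviors that never occur at run time.

```javascript
// Illustrative only: a sound analysis over-approximates the set of
// values `x` may hold when it cannot decide which branch executes.
function pick(flag) {
  let x;
  if (flag) {
    x = "safe";   // the only value that occurs in the call below
  } else {
    x = null;     // never reached at run time in this program
  }
  return x;
}

// At run time, only the true-branch executes:
const result = pick(true);

// A sound analysis that does not track the value of `flag` must assume
// x ∈ {"safe", null}, so it warns that `result.length` may dereference
// null -- a spurious result. Precision is the degree to which such
// spurious results are avoided.
console.log(result.length); // prints 4; no error actually occurs
```

The spurious null warning here is the price of soundness; a more precise analysis would track `flag` and eliminate it, at extra cost.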

Soundness would seem essential for any kind of static program analysis. Soundness is also widely emphasized in the academic literature. Yet, in practice, soundness is commonly eschewed: we are not aware of a single realistic whole-program analysis tool (e.g., tools widely used for bug detection, refactoring assistance, programming automation, etc.) that does not purposely make unsound choices. Similarly, virtually all published whole-program analyses are unsound and omit conservative handling of common language features when applied to real programming languages.

The typical reasons for such choices are engineering compromises: implementers of such tools are well aware of how they could handle complex language features soundly (e.g., by assuming that a complex language feature can exhibit any behavior), but do not do so because this would make the analysis unscalable or imprecise to the point of being useless. Therefore, the dominant practice is one of treating soundness as an engineering choice.
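As a hedged illustration of such a compromise (the snippet is ours, not from the article), consider JavaScript's `eval`, one of the "hard" features targeted by several papers in the bibliography below. A sound analysis must assume that evaluating an unknown string can do anything — read or write any reachable variable, call any function — which destroys precision; practical tools therefore model only common, statically recognizable `eval` patterns and deliberately ignore the rest.

```javascript
// Illustrative example of why `eval` forces an unsound engineering choice.
let balance = 100;

function applyUpdate(code) {
  // If `code` comes from data the analysis cannot see, a *sound* tool
  // must assume this call may read or write ANY reachable state.
  eval(code);
}

// A benign, statically recognizable pattern that tools can special-case:
applyUpdate("balance = balance - 30;");
console.log(balance); // prints 70

// An arbitrary string (e.g., loaded from the network) could instead have
// redefined functions or mutated unrelated globals. Modeling that soundly
// means "anything may happen here", so practical analyzers handle only
// patterns like the one above and are knowingly unsound on the rest.
```

Treating every `eval` site as "anything may happen" is the sound choice the paragraph above describes; it is rejected in practice precisely because it makes nearly every downstream result imprecise.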

All in all, we are faced with a paradox: on the one hand we have the ubiquity of unsoundness in any practical whole-program analysis tool that has a claim to precision and scalability; on the other, we have a research community that, outside a small group of experts, is oblivious to any unsoundness, let alone its preponderance in practice.

The term "soundiness" has been inspired in part by Stephen Colbert's "truthiness."

Our article "In Defense of Soundiness: A Manifesto" (HTML, pre-print PDF) appeared in the February 2015 issue of Communications of the ACM. Also see these presentation slides on the subject.

Below is a short bibliography of papers related to soundiness produced from this bibtex file. We aim to list papers that either (1) measure the unsoundness of whole-program analyses in some way, (2) give solutions to analyzing "hard" language features, or (3) measure the utility of soundness in a particular context. If you have further suggestions for references, submit an issue or a pull request.

  • Li, Y., Tan, T. and Xue, J. Effective Soundness-Guided Reflection Analysis. SAS (2015).
  • Smaragdakis, Y., Kastrinis, G., Balatsouras, G. and Bravenboer, M. More Sound Static Handling of Java Reflection. 2014.
  • Christakis, M., Müller, P. and Wüstholz, V. An Experimental Evaluation of Deliberate Unsoundness in a Static Program Analyzer. VMCAI (2015).
  • Christakis, M., Müller, P. and Wüstholz, V. Collaborative Verification and Testing with Explicit Assumptions. FM (2012).
  • Li, Y., Tan, T., Sui, Y. and Xue, J. Self-Inferencing Reflection Resolution for Java. ECOOP (2014).
  • Feldthaus, A., Schäfer, M., Sridharan, M., Dolby, J. and Tip, F. Efficient Construction of Approximate Call Graphs for JavaScript IDE Services. ICSE (2013), 752–761.
  • Richards, G. Eval Begone! Semi-Automated Removal of Eval from JavaScript Programs. OOPSLA (2012).
  • Soares, G., Gheyi, R. and Massoni, T. Automated Behavioral Testing of Refactoring Engines. IEEE TSE (2013).
  • Bodden, E., Sewe, A., Sinschek, J., Oueslati, H. and Mezini, M. Taming Reflection: Aiding Static Analysis in the Presence of Reflection and Custom Class Loaders. ICSE (2011), 241–250.
  • Richards, G., Hammer, C. and Burg, B. The Eval that Men Do: A Large-Scale Study of the Use of Eval in JavaScript Applications. ECOOP (2011).
  • Bravenboer, M. and Smaragdakis, Y. Strictly Declarative Specification of Sophisticated Points-to Analyses. OOPSLA (2009).
  • Smaragdakis, Y. and Csallner, C. Combining Static and Dynamic Reasoning for Bug Detection. TAP (2007).
  • Lhoták, O. Comparing Call Graphs. PASTE (2007).
  • Xue, J. and Nguyen, P.H. Completeness Analysis for Incomplete Object-Oriented Programs. CC (2005).
  • Livshits, B., Whaley, J. and Lam, M. Reflection Analysis for Java. APLAS (2005).
  • Flanagan, C., Leino, K.R.M., Lillibridge, M., Nelson, G., Saxe, J.B. and Stata, R. Extended Static Checking for Java. PLDI (2002).
