Atoms of Confusion: Tracking the Tiny Causes of Programmer Misunderstanding

2017-06-05 · Posted by: Dan Gopstein · Categories: Atoms of Confusion

Atoms of Confusion is a project designed to understand the root causes of programmers’ misunderstanding of source code. It is anchored in the idea that empirical software engineering research can be at once rigorous, objective, quantitative, and insightful. To this end, the project has core members and collaborators from diverse fields, including psychology and neuroscience as well as computer science. The project builds on traditions from software engineering research and adds wisdom from more mature scientific disciplines, offering a fresh view on what causes misunderstandings in programming.

Like most psychological phenomena, “confusion” is an intricate topic. To ensure rigor throughout the project, the initial foundational experiments adopted an objective, measurable definition of confusion and focused on the most minimal causes of that state in programmers. Confusion, as used in this work, occurs when a programmer believes source code will behave differently than it actually does when run on a computer. Our goal is to find the smallest pieces of source code that can cause this type of misunderstanding in programmers.
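
As a concrete illustration of this definition, consider the snippet below. It is our own minimal sketch, in the spirit of the study’s post-increment atom, rather than one of the actual experimental stimuli:

    #include <stdio.h>

    int main(void) {
        int v = 0;
        /* Post-increment yields the old value of v, so this prints 0,
           though many readers predict 1. A reader who predicts 1 is
           "confused" in this project's sense: their mental model of the
           code diverges from what the machine actually does. */
        printf("%d\n", v++);
        return 0;
    }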

The initial work, accepted for publication at ESEC/FSE 2017, focuses on the C programming language. C is a natural fit since it is both very popular and very conducive to misunderstanding. C is so prone to confusing programmers that it has even spawned a contest, the IOCCC (International Obfuscated C Code Contest), to create the most deliberately difficult-to-read small C programs. These “obfuscated” programs were inspected to find recurring patterns that made them difficult to understand. In turn, an experiment showed these extracted “atoms” of confusion to 73 programmers to measure empirically how confusing they were. Each participant was asked to hand-execute ~50 of the distilled confusing pieces of code, as well as equivalent code snippets that had been transformed to use less confusing constructs. Afterwards, a separate group of programmers was shown the original IOCCC programs, both before and after the atoms of confusion had been transformed away. These studies were designed to test (1) whether very small pieces of code can cause programmers to misinterpret programs and (2) whether these small patterns affect the understanding of larger programs. All evidence so far points towards the potential for small patterns in code to cause large gaps in understanding in programmers.
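
To give a flavor of an atom and its transformation, here is a sketch of our own in the style of the study’s comma-operator atom, paired with a functionally equivalent but less confusing form (the pairing is illustrative, not copied from the study’s materials):

    #include <stdio.h>

    int main(void) {
        /* Confusing form: the comma expression evaluates both operands
           and yields the value of the right-hand one, so v1 becomes 2,
           not 1. */
        int v1 = (1, 2);

        /* Transformed form: the discarded left operand is removed,
           leaving an initialization whose meaning is immediate. */
        int v2 = 2;

        printf("%d %d\n", v1, v2);  /* prints "2 2" */
        return 0;
    }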

Atoms of Confusion is among the first research efforts to explicitly investigate patterns of this nature. Similar concepts have traditionally been addressed in style guidelines and coding standards, but those typically reflect opinions and anecdotal observations. The experiments conducted as part of this project so far indicate that popular style guides for large projects are often incomplete, and occasionally make recommendations that may harm programmer comprehension.

This work is rooted in simple facts that can be scientifically validated. The experiments performed for this project were designed to make as few assumptions as possible, and all hypotheses were stated in objective, falsifiable terms. On this solid foundation, we plan to grow the body of research to test more sophisticated theories. By using the data already collected, and by extending the methodology, new groups of questions can be addressed. In the future, this work will seek answers that allow automatic analysis of code bases for comprehensibility hot spots. It will also investigate questions such as:

  • Why programmers make the mistakes they do
  • How these code patterns generalize across different programming languages
  • How programmers’ natural languages and cultures may affect the way they react to these patterns

The project offers a way to deepen our understanding of how languages do or do not mesh in the minds of developers. Insights of this nature could not only lead to fewer coding misunderstandings in global computing projects, but also help evolve much more “human-friendly” programming languages and methodologies.

With time, Atoms of Confusion can help inform the age-old question of why and how programmers write bugs. In the meantime, we welcome other researchers to replicate our work or explore new questions using our data sets. All of our raw study materials and data are available on our website.