Papers by Melina Mongiovi

2017 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)
Recent advances in program synthesis offer means to automatically debug student submissions and generate personalized feedback in massive programming classrooms. When automatically generating feedback for programming assignments, a key challenge is designing pedagogically useful hints that are as effective as the manual feedback given by teachers. Through an analysis of teachers' hint-giving practices in 132 online Q&A posts, we establish three design guidelines that an effective feedback design should follow. Based on these guidelines, we develop a feedback system that leverages both program synthesis and visualization techniques. Our system compares the dynamic code execution of both incorrect and fixed code and highlights how the error leads to a difference in behavior and where the incorrect code trace diverges from the expected solution. Results from our study suggest that our system enables students to detect and fix bugs that are not caught by students using another existing visual debugging tool.
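
To make the trace-comparison idea concrete, the following is a minimal sketch (not the authors' implementation) of locating the first step at which the execution trace of the incorrect program diverges from that of the fixed program. Traces are modeled as lists of state snapshots, and all names are hypothetical.

import java.util.List;
import java.util.Objects;

// Sketch: report the first step where two execution traces diverge.
// A "trace" is modeled as a list of state snapshots (e.g., variable values
// per step); how snapshots are captured is left abstract.
public final class TraceDiff {

    // Returns the index of the first differing step, or -1 if no step differs.
    static int firstDivergence(List<String> buggyTrace, List<String> fixedTrace) {
        int steps = Math.min(buggyTrace.size(), fixedTrace.size());
        for (int i = 0; i < steps; i++) {
            if (!Objects.equals(buggyTrace.get(i), fixedTrace.get(i))) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        List<String> buggy = List.of("x=0", "x=1", "x=1", "x=2");
        List<String> fixed = List.of("x=0", "x=1", "x=2", "x=3");
        int step = firstDivergence(buggy, fixed);
        if (step >= 0) {
            System.out.println("Traces diverge at step " + step
                    + ": got " + buggy.get(step) + ", expected " + fixed.get(step));
        }
    }
}
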
Understanding the impact of refactoring on smells: a longitudinal study of 23 software projects
Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering
Revisiting Refactoring Mechanics from Tool Developers’ Perspective
Lecture Notes in Computer Science
Avoiding useless mutants
ACM SIGPLAN Notices
Revisiting the Refactoring Mechanics
Information and Software Technology
A change-aware per-file analysis to compile configurable systems with #ifdefs
Computer Languages, Systems & Structures
Detecting overly strong preconditions in refactoring engines
IEEE Transactions on Software Engineering
A change-centric approach to compile configurable systems with #ifdefs
Proceedings of the 2016 ACM SIGPLAN International Conference on Generative Programming: Concepts and Experiences (GPCE 2016), 2016

2014 IEEE International Conference on Software Maintenance and Evolution, 2014
Defining and implementing refactorings is a nontrivial task, since it is difficult to define preconditions that guarantee the transformation preserves the program behavior. Therefore, refactoring engines may apply incorrect transformations in which the resulting program does not compile, does not preserve behavior, or does not follow the refactoring definitions. These engines may also prevent correct transformations due to overly strong preconditions. We find that 84% of the test suites of Eclipse and JRRT are concerned with detecting these kinds of bugs; even so, the engines still contain them. Researchers have proposed a number of techniques for testing refactoring engines. Nevertheless, these techniques may have limitations related to the types of bugs covered, program generation, time consumption, and the number of refactoring engines needed to evaluate the implementations. We propose and implement a technique to scale testing of refactoring engines. We improve the expressiveness of a program generator and skip some test inputs to improve performance. Moreover, we propose new oracles that detect behavioral changes using change impact analysis, overly strong preconditions by disabling preconditions, and transformation issues. We evaluate our technique on 28 refactoring implementations for Java (Eclipse and JRRT) and C (Eclipse) and find 119 bugs. With skips, the technique reduces testing time by 96% while missing only 6% of the bugs, and it generally finds the first failure in a few seconds. Finally, we evaluate the technique with other test inputs, such as the input programs of the Eclipse and JRRT refactoring test suites, and find 31 bugs not detected by the developers.
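
A minimal sketch of the skip strategy described above, assuming generated programs arrive as a list, the engine under test maps an input program to its refactored output, and each oracle flags a failure; the types and names here are hypothetical stand-ins, not the paper's implementation.

import java.util.List;
import java.util.function.Function;

// Sketch: test only every `skip`-th generated program, checking each applied
// transformation against a set of oracles (e.g., compilation, behavioral
// change via impact analysis, overly strong preconditions).
public final class SkipTester {

    interface Oracle {
        boolean revealsBug(String input, String output);
    }

    static int runWithSkips(List<String> generatedPrograms,
                            Function<String, String> engineUnderTest,
                            List<Oracle> oracles,
                            int skip) { // skip >= 1; skip == 1 tests every input
        int failures = 0;
        for (int i = 0; i < generatedPrograms.size(); i += skip) {
            String input = generatedPrograms.get(i);
            String output = engineUnderTest.apply(input);
            for (Oracle oracle : oracles) {
                if (oracle.revealsBug(input, output)) {
                    failures++;
                    break; // one failure per input is enough
                }
            }
        }
        return failures;
    }
}

Larger skip values trade detected bugs for speed; the abstract reports a 96% reduction in time at the cost of missing 6% of the bugs.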

2011 27th IEEE International Conference on Software Maintenance (ICSM), 2011
Each refactoring implementation must check a number of conditions to guarantee behavior preservation, but specifying and checking them is difficult. Refactoring tool developers may define overly strong conditions that prevent useful behavior-preserving transformations from being performed. We propose an approach for identifying overly strong conditions in refactoring implementations. We automatically generate a number of programs as test inputs for the refactoring implementations. Then, we apply the same refactoring to each test input using two different implementations and compare the results, using Safe Refactor to evaluate whether each transformation preserves behavior. We evaluated our approach on 10 kinds of Java refactorings implemented by three tools: Eclipse, NetBeans, and the JastAdd Refactoring Tool (JRRT). In a sample of 42,774 transformations, we identified 17 and 7 kinds of overly strong conditions in Eclipse and JRRT, respectively.

Listing 4. A subset of the Java metamodel specified in Alloy:

sig Type { ••• }
sig Class extends Type {
  extend: lone Class,
  fields: set Field,
  methods: set Method,
  •••
}
sig Field { ••• }
sig Method { ••• }
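
To illustrate the differential check, here is a sketch under stated assumptions, not the paper's code: engine A's precondition is suspected to be overly strong when A rejects a transformation that engine B applies and that a behavior-preservation oracle, such as Safe Refactor, accepts. All types below are hypothetical.

import java.util.Optional;
import java.util.function.BiPredicate;

// Sketch of the differential check for overly strong conditions.
public final class OverlyStrongCheck {

    interface Engine {
        // Returns the refactored program, or empty when preconditions reject it.
        Optional<String> refactor(String program);
    }

    // Flags a potentially overly strong condition in engineA: it rejected a
    // transformation that engineB applied and that the oracle judged
    // behavior preserving.
    static boolean overlyStrongInA(Engine engineA, Engine engineB, String program,
                                   BiPredicate<String, String> preservesBehavior) {
        if (engineA.refactor(program).isPresent()) {
            return false; // engineA applied the transformation; nothing to report
        }
        return engineB.refactor(program)
                .filter(result -> preservesBehavior.test(program, result))
                .isPresent();
    }
}
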
Safira
Proceedings of the ACM international conference companion on Object oriented programming systems languages and applications companion - SPLASH '11, 2011
We propose a tool (Safira) capable of determining whether a transformation is behavior preserving through test generation for the entities impacted by the transformation. We use Safira to evaluate mutation testing and refactoring tools: we have detected 17 bugs in MuJava and 27 bugs in refactorings implemented by Eclipse and JRRT.
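
A minimal sketch of the underlying check, assuming a transformation is judged behavior preserving when every generated test observes the same result on the original and transformed programs; the names below are hypothetical, not Safira's API.

import java.util.List;
import java.util.function.Function;

// Sketch: run the same generated tests against both program versions and
// report whether any test observes a different result.
public final class BehaviorCheck {

    static boolean preservesBehavior(List<String> testsForImpactedEntities,
                                     Function<String, String> runOnOriginal,
                                     Function<String, String> runOnTransformed) {
        for (String test : testsForImpactedEntities) {
            String before = runOnOriginal.apply(test);
            String after = runOnTransformed.apply(test);
            if (!before.equals(after)) {
                return false; // differing observation: behavior changed
            }
        }
        return true;
    }
}
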

Making refactoring safer through impact analysis
Currently, most developers have to apply manual steps and use test suites to improve confidence that transformations applied to object-oriented (OO) and aspect-oriented (AO) programs are correct. However, manual reasoning is not simple, due to the nontrivial semantics of OO and AO languages. Moreover, most refactoring implementations contain a number of bugs, since it is difficult to establish all the conditions required for a transformation to be behavior preserving. In this article, we propose a tool (SafeRefactorImpact) that analyzes a transformation and generates tests only for the methods impacted by it, as identified by our change impact analyzer (Safira). We compare SafeRefactorImpact with our previous tool (SafeRefactor) with respect to correctness, performance, number of methods passed to the automatic test suite generator, change coverage, and number of relevant tests generated on 45 transformations. SafeRefactorImpact identifies behavioral changes undetected by SafeRefactor and reduces the number of methods passed to the test suite generator. Finally, SafeRefactorImpact achieves better change coverage on larger subjects and generates more relevant tests than SafeRefactor.
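
A minimal sketch of the impact-narrowing step, under the assumption that the impact analyzer returns a set of method names and the test generator accepts such a set as its targets; ImpactAnalyzer and TestGenerator are hypothetical stand-ins for Safira and the automatic test suite generator.

import java.util.List;
import java.util.Set;

// Sketch: generate tests only for the methods impacted by a transformation,
// so the generator spends its budget on code that can actually reveal a
// behavioral change.
public final class ImpactedTestGeneration {

    interface ImpactAnalyzer {
        Set<String> impactedMethods(String original, String transformed);
    }

    interface TestGenerator {
        List<String> generateTests(Set<String> targetMethods);
    }

    static List<String> testsForTransformation(String original, String transformed,
                                               ImpactAnalyzer impactAnalyzer,
                                               TestGenerator generator) {
        Set<String> impacted = impactAnalyzer.impactedMethods(original, transformed);
        return generator.generateTests(impacted);
    }
}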