My students took 4 of the 6 awards in the FSE 2016 Student Research Competition (SRC), including both first-place awards.
Undergraduate category:
- "Combining bug detection and test case generation", Martin Kellogg, University of Washington
- "Bounded model checking of state-space diginal systems", Felipe Monteiro, Federal University of Amazonas
- "Preventing signedness errors in numerical computations in Java", Christopher Mackie, University of Washington
Graduate category:
- "Cozy: Synthesizing collection data structures", Calvin Loncaric, University of Washington
- "How should static analysis tools explain anomalies to developers?" Titus Barik, North Carolina State University
- "Evaluation of fault localization techniques", Spencer Pearson, University of Washington
Congratulations to all the winners!
Here are more details about each of the UW projects.
"Combining bug detection and test case generation", by Martin Kellogg
Software developers often write tests or run bug-finding tools. Automated tools for these activities sometimes waste developer time: bug finders produce false positives, and test generators may use incorrect oracles. We present a new technique that combines the two approaches to find interesting, untested behavior, in a way that reduces wasted effort. Our approach generates tests that are guaranteed to rule out ("kill") some mutant that could not be killed by the existing test suite. The developer's only task is to determine whether the program under test behaves correctly on each generated test. If it does, then the new test case, which improves mutation coverage, can be added to the test suite. If it does not, then our approach has discovered and reproduced a bug. We demonstrated that the technique can find about a third of historical defects while only bothering the developer with truly novel inputs.
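To make the workflow concrete, here is a minimal sketch of that loop. The Mutant, TestCase, TestGenerator, and Developer interfaces are invented names for this illustration, not the authors' implementation:

```java
import java.util.List;
import java.util.Optional;

// Hypothetical sketch of the mutant-driven test-generation loop.
// All the interfaces below are invented for illustration.
public class MutantDrivenTestGen {

    /** A mutant: the program under test with one small syntactic change. */
    interface Mutant {
        boolean isKilledBy(TestCase t);  // does t's outcome differ on the mutant?
    }

    interface TestCase {}

    interface TestGenerator {
        /** Try to generate a test whose outcome differs on the given mutant. */
        Optional<TestCase> generateKillingTest(Mutant m);
    }

    interface Developer {
        /** Ask the developer: is the original program's behavior on t correct? */
        boolean approves(TestCase t);
    }

    static void run(List<Mutant> mutants, List<TestCase> suite,
                    TestGenerator gen, Developer dev) {
        for (Mutant m : mutants) {
            // Skip mutants the existing suite already kills: a test that
            // kills them would exercise nothing new.
            if (suite.stream().anyMatch(m::isKilledBy)) continue;

            gen.generateKillingTest(m).ifPresent(t -> {
                if (dev.approves(t)) {
                    suite.add(t);  // new test improves mutation coverage
                } else {
                    System.out.println("Bug reproduced by generated test: " + t);
                }
            });
        }
    }
}
```

The key invariant is that every generated test kills a previously unkilled mutant, so the developer's answer is never wasted: a "yes" grows the suite's mutation coverage, and a "no" comes with a reproducing test case in hand.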
"Preventing signedness errors in numerical computations in Java", by Christopher Mackie
If a program mixes signed and unsigned values, it can produce meaningless results. We have developed a verification tool that prevents such errors. It is built on a type system that segregates signed from unsigned integers. In a case study, our type system proved easy to use, and it detected a previously unknown bug. Our type system is implemented as the Signedness Checker and is distributed with the Checker Framework.
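For a flavor of what the checker sees, here is a minimal sketch using the @Unsigned qualifier from org.checkerframework.checker.signedness.qual. The commented-out lines are illustrative; the precise set of operations the checker rejects is specified in the Checker Framework manual:

```java
import org.checkerframework.checker.signedness.qual.Unsigned;

public class SignednessDemo {
    void demo() {
        @Unsigned int u = 0xFFFF_FFFF;  // as unsigned: 4294967295, not -1
        int s = -1;

        @Unsigned int masked = u & 0xFF;  // bitwise ops are signedness-safe

        // The checker rejects operations whose result depends on how the
        // bits are interpreted; for example, both of these would be errors:
        // boolean cmp = u < s;  // mixing unsigned and signed operands
        // int half = u / 2;     // Java's / is signed division
    }
}
```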
"Cozy: Synthesizing collection data structures", by Calvin Loncaric
Many applications require specialized data structures not found in standard libraries. Implementing new data structures by hand is tedious and error-prone. To alleviate this difficulty, we have built a tool called Cozy that synthesizes data structures using counterexample-guided inductive synthesis. We evaluated Cozy by comparing its synthesized implementations to handwritten implementations, in terms of correctness and performance, on four real-world programs. Cozy's data structures match the performance of the handwritten implementations while avoiding human error.
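The counterexample-guided loop itself is simple. Below is a toy, self-contained sketch of CEGIS that synthesizes a linear function rather than a data structure; the candidate space, specification, and bounded verifier are all invented for illustration and are far simpler than what Cozy does:

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of counterexample-guided inductive synthesis (CEGIS).
public class CegisSketch {

    /** The specification the synthesized function must match. */
    static int spec(int x) { return 3 * x + 1; }

    /** Candidate space: linear functions a*x + b with small coefficients. */
    record Candidate(int a, int b) {
        int apply(int x) { return a * x + b; }
    }

    /** Inductive step: propose a candidate consistent with all examples so far. */
    static Candidate synthesize(List<Integer> examples) {
        for (int a = -10; a <= 10; a++) {
            for (int b = -10; b <= 10; b++) {
                Candidate c = new Candidate(a, b);
                if (examples.stream().allMatch(x -> c.apply(x) == spec(x))) {
                    return c;
                }
            }
        }
        throw new IllegalStateException("no candidate in the search space");
    }

    /** Verification step: search (here, bounded enumeration) for a counterexample. */
    static Integer verify(Candidate c) {
        for (int x = -1000; x <= 1000; x++) {
            if (c.apply(x) != spec(x)) return x;  // counterexample found
        }
        return null;  // candidate matches the spec on the whole bounded domain
    }

    public static void main(String[] args) {
        List<Integer> examples = new ArrayList<>();
        examples.add(0);  // seed with one example
        while (true) {
            Candidate c = synthesize(examples);
            Integer cex = verify(c);
            if (cex == null) {
                System.out.println("Synthesized: f(x) = " + c.a() + "*x + " + c.b());
                return;
            }
            examples.add(cex);  // the counterexample guides the next round
        }
    }
}
```

The loop's shape is the same in Cozy: propose an implementation consistent with the counterexamples seen so far, then search for an input on which it diverges from the specification, and repeat until no counterexample is found.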
"Evaluation of fault localization techniques", by Spencer Pearson
Given failing tests, a fault localization tool attempts to identify which lines of source code are responsible for the failures. So far, evaluations of fault localization tools have overwhelmingly relied on artificial faults, generated by mutating correct programs. Researchers have assumed that whichever tools best localize artificial faults will also best localize real-world faults. We tested this assumption by repeating several previous evaluations on both kinds of faults, and found that it was false on our data set: artificial faults were not useful for identifying the best fault localization tools! Since this result calls into question all previous studies of these tools, we examined what makes a tool do well at localizing real faults, and we designed a new technique that outperforms, on real faults, all the prior techniques we studied. Our technical report is available at http://www.cs.washington.edu/tr/2016/08/UW-CSE-16-08-03.pdf.
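As background on what these tools compute, here is a toy example of one widely studied spectrum-based technique, the Ochiai formula, run on invented coverage data. It is meant only to show what a fault localization tool's input and output look like, not the paper's new technique:

```java
// Minimal sketch of spectrum-based fault localization (Ochiai formula).
// The coverage matrix and test outcomes below are invented for illustration.
public class OchiaiSketch {

    /**
     * suspiciousness(s) = fail(s) / sqrt(totalFail * (fail(s) + pass(s))),
     * where fail(s)/pass(s) count failing/passing tests that execute statement s.
     */
    static double ochiai(int failCover, int passCover, int totalFail) {
        double denom = Math.sqrt((double) totalFail * (failCover + passCover));
        return denom == 0 ? 0 : failCover / denom;
    }

    public static void main(String[] args) {
        // coverage[test][statement]: did the test execute the statement?
        boolean[][] coverage = {
            {true,  true,  false, true},   // test 0
            {true,  false, true,  true},   // test 1
            {false, true,  true,  true},   // test 2
        };
        boolean[] failed = {false, true, true};  // tests 1 and 2 fail

        int totalFail = 0;
        for (boolean f : failed) if (f) totalFail++;

        for (int s = 0; s < coverage[0].length; s++) {
            int failCover = 0, passCover = 0;
            for (int t = 0; t < coverage.length; t++) {
                if (!coverage[t][s]) continue;
                if (failed[t]) failCover++; else passCover++;
            }
            // Statements executed mostly by failing tests score highest.
            System.out.printf("statement %d: suspiciousness %.3f%n",
                              s, ochiai(failCover, passCover, totalFail));
        }
    }
}
```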