39. Are we done?
• Best performing techniques still require the tester to inspect 10% of the code...
• 100 LOC → 10 LOC
• 10,000 LOC → 1,000 LOC
• 100,000 LOC → 10,000 LOC
40. Metrics
• Are we measuring the right thing?
• rank-based (see the sketch below)
• PDG-based
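To make the rank-based family concrete, here is a minimal, hypothetical Java sketch (the class and method names are illustrative, not part of GZoltar): it measures the fraction of program elements a developer would have to inspect, in descending order of suspiciousness, before reaching the known faulty element, counting ties pessimistically.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Illustrative rank-based evaluation metric for fault localization.
    public class RankMetricSketch {

        // Fraction of elements inspected before (or while tied with) the fault.
        static double examScore(Map<String, Double> suspiciousness, String faultyElement) {
            double faultScore = suspiciousness.get(faultyElement);
            long inspected = suspiciousness.values().stream()
                    .filter(s -> s >= faultScore)   // pessimistic tie handling
                    .count();
            return (double) inspected / suspiciousness.size();
        }

        public static void main(String[] args) {
            Map<String, Double> scores = new LinkedHashMap<>();
            scores.put("Foo.java:12", 0.95);  // most suspicious, but not faulty
            scores.put("Foo.java:30", 0.80);  // the actual fault
            scores.put("Bar.java:7",  0.40);
            scores.put("Bar.java:21", 0.10);

            // 2 of 4 elements must be inspected -> prints 0.5
            System.out.println(examScore(scores, "Foo.java:30"));
        }
    }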
43. Why do we need human studies?
• Do developers follow the ranking?
• Does perfect bug understanding exist?
• How can we quantify isolation efforts?
44. Ecosystem in need
• Wide adoption will only be possible if there is a framework which provides
• testing functionalities
• debugging capabilities
• integrated in an IDE
• Better Cues for Debugging – a framework
• Check it out at www.gzoltar.org
45. Interested?
• Do you wanna try it out?
• We are always interested in receiving feedback
• Email José Carlos Campos to participate
• jose.carlos.campos@fe.up.pt
• Thanks!
47. Open Research Questions
• Can we automatically decide if a test fails?
• Using program invariants
• Could partially replace asserts in JUnit tests (see the sketch after this list)
• Can we automatically suggest fixes?
• Other intuitive visualisations?
• How to reduce the overall overhead?
• Can we apply these principles to Web/Mobile environments?
• Self-healing: Architecture-based run-time fault localization (NSF project with CMU)
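As a rough illustration of the first question (invariants as oracles), the following hypothetical Java sketch checks inferred invariants on a method's result instead of a hand-written JUnit assert; the method, the bug, and the invariants are all made up for illustration, and in practice the invariants would be mined from passing runs by a Daikon-style tool.

    import java.util.List;
    import java.util.function.Predicate;

    // Illustrative "invariants as oracles" check replacing an explicit assert.
    public class InvariantOracleSketch {

        // Method under test (deliberately buggy: double-counts positives).
        static int countPositives(int[] values) {
            int count = 0;
            for (int v : values) {
                if (v > 0) count += 2;   // bug
            }
            return count;
        }

        public static void main(String[] args) {
            int[] input = {1, -2, 3};
            int result = countPositives(input);

            // Invariants inferred from passing executions act as the oracle.
            List<Predicate<Integer>> invariants = List.of(
                    r -> r >= 0,                // count is never negative
                    r -> r <= input.length      // count cannot exceed the input size
            );

            boolean violated = invariants.stream().anyMatch(inv -> !inv.test(result));
            // The buggy run returns 4 > 3, so the second invariant flags a failure.
            System.out.println(violated ? "test FAILS (invariant violated)" : "test passes");
        }
    }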