iFL: An Interactive Environment for Understanding Feature Implementations

1. iFL: An Interactive Environment for Understanding Feature Implementations
Shinpei Hayashi, Katsuyuki Sekine, and Motoshi Saeki
Department of Computer Science, Tokyo Institute of Technology, Japan
ICSM 2010 ERA, 14 Sep., 2010
2. Abstract
We have developed iFL
− An environment for program understanding
− Interactively supports the understanding of feature implementations using a feature location technique
− Can reduce understanding costs
3. Background
Program understanding is costly
− When extending or fixing existing features, understanding the implementation of the target feature is necessary
− It dominates maintenance costs [Vestdam 04]
Our focus: feature/concept location (FL)
− Locating/extracting the code fragments that implement a given feature/concept

[Vestdam 04]: "Maintaining Program Understanding – Issues, Tools, and Future Directions", Nordic Journal of Computing, 2004.
4. FL Example (Search-based)
A new maintainer: "I want to understand the feature converting input time strings to schedule objects..."
Query: schedule time → Search → FL (search-based approach)

Source code:
public Time(String hour, ...) { ... }
...
public void createSchedule() { ... }
public void updateSchedule(...) { ... }

Reading these located methods for understanding
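The search-based FL step above can be sketched as follows. This is a minimal, hypothetical illustration, not the tool's actual algorithm: it simply keeps the methods whose identifiers contain a query keyword, which is the essence of search-based feature location.

```java
import java.util.*;

// Sketch of search-based feature location (hypothetical names):
// keep the methods whose identifiers contain any query keyword.
public class SearchBasedFL {
    static List<String> locate(List<String> methods, Set<String> query) {
        List<String> hits = new ArrayList<>();
        for (String m : methods) {
            String lower = m.toLowerCase();
            for (String q : query) {
                if (lower.contains(q.toLowerCase())) {
                    hits.add(m);
                    break; // one matching keyword is enough
                }
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        List<String> methods =
            Arrays.asList("Time", "createSchedule", "updateSchedule", "initWindow");
        Set<String> query = new HashSet<>(Arrays.asList("schedule", "time"));
        // Time, createSchedule, and updateSchedule match; initWindow does not
        System.out.println(locate(methods, query));
    }
}
```

The matched methods are what the maintainer reads first; the two problems on the next slides concern choosing the keywords and fixing the result.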
5. Problem 1: How to Find Appropriate Queries?
Constructing appropriate queries requires rich knowledge of the implementation
− Times: time, date, hour/minute/second
− Images: image, picture, figure
In practice, developers try several keywords for FL through trial and error
6. Problem 2: How to Fix FL Results?
Complete (optimum) FL results are rare, due to:
− The accuracy of the FL technique used
− Individual differences in which code is appropriate
An FL result (code fragments) differs from the optimum result (the code fragments that should be understood):
− Unnecessary code is included (false positives)
− Necessary code is missed (false negatives)
7. Our Solution: Feedbacks
We added two feedback processes to the FL loop:
Query input (e.g., schedule → Search)
→ Feature location (calculating scores)
   1st: ScheduleManager.addSchedule()
   2nd: EditSchedule.inputCheck()
   ...
→ Selection and understanding of code fragments
→ Feedback: updating queries / relevance feedback (addition of hints)
Finish when the user judges that he/she has read all the necessary code fragments
8. Query Expansion
Wide query for the initial FL
− By expanding query terms with their synonyms from a thesaurus
  (e.g., schedule, date → schedule, agenda, plan, list, time, date)
1st FL: schedule* date* → Search
Narrow query for subsequent FLs
− By using concrete identifiers in the source code
A code fragment in an FL result:
public void createSchedule() {
  ...
  String hour = ...
  Time time = new Time(hour, ...);
  ...
}
2nd FL: schedule time → Search
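The widening step can be sketched as below. The thesaurus map here is a hypothetical stand-in for WordNet, which the tool actually uses; only the expand-then-union shape is the point.

```java
import java.util.*;

// Sketch: widen a query by adding each term's synonyms from a toy
// thesaurus (a hypothetical stand-in for WordNet).
public class QueryExpansion {
    static final Map<String, List<String>> THESAURUS = Map.of(
        "schedule", List.of("agenda", "plan", "list"),
        "date", List.of("time"));

    static Set<String> expand(List<String> query) {
        Set<String> expanded = new LinkedHashSet<>(query); // keep originals
        for (String term : query)
            expanded.addAll(THESAURUS.getOrDefault(term, List.of()));
        return expanded;
    }

    public static void main(String[] args) {
        // wide initial query: the original terms plus their synonyms
        System.out.println(expand(List.of("schedule", "date")));
    }
}
```

Subsequent, narrower queries would skip the expansion and use identifiers taken directly from the located code, as in the 2nd FL above.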
9. Relevance Feedback
Improving FL results through user feedback
− The user adds a hint that a selected code fragment is relevant or irrelevant to the feature
− The feedback is propagated to other fragments through dependencies
Example (scores of code fragments):
i-th FL result:     1  2  9  6   (one fragment is marked relevant)
(i+1)-th FL result: 1  8 11  6   (scores raised by propagation through dependencies)
10. Supporting Tool: iFL
Implemented as an Eclipse plug-in (iFL-core)
− For static analysis: Eclipse JDT (syntactic information)
− For dynamic analysis: Reticella [Noda 09] (execution traces / dependencies)
− For a thesaurus: WordNet (synonym information)
22. Evaluation Results

Subj. | # Correct events | # FL executions | Interactive costs | Non-interactive costs | Δ costs | Overheads | # Query updates
S1    | 19 | 5 | 20 |  31 | 0.92 |  1 | 2
S2    |  7 | 5 |  8 |  10 | 0.67 |  1 | 1
S3    |  1 | 2 |  2 |   2 | 0.00 |  1 | 0
S4    | 10 | 6 | 10 |  13 | 1.00 |  0 | 2
S5    |  3 | 6 |  6 |  15 | 0.75 |  3 | 2
J1    | 10 | 4 | 20 | 156 | 0.93 | 10 | 2
J2    |  4 | 6 | 18 | 173 | 0.92 | 14 | 3

No effect in S3
− Because the non-interactive approach was already sufficient for understanding
− Not because of a fault in the interactive approach
23. Conclusion
Summary
− We developed iFL, which interactively supports the understanding of feature implementations using FL
− iFL reduced understanding costs in 6 out of 7 cases
Future Work
− Evaluation++: on larger-scale projects
− Feedback++: more efficient relevance feedback
  • Observing code browsing activities on the IDE
25. The FL Approach
Based on search-based FL; uses static + dynamic analyses
− Inputs: source code, a query (e.g., schedule), a test case, and hints
− Static analysis: evaluating methods → methods with their scores
− Dynamic analysis: execution trace / dependencies → evaluating events
− Output: events with their scores (the FL result)
26. Static Analysis: Evaluating Methods
Matching queries to identifiers
Query: schedule time → expanded via the thesaurus to: schedule, agenda, time, date
Weights: 20 for method names, 1 for local variables

public void createSchedule() {
  ...
  String hour = ...
  Time time = new Time(hour, ...);
  ...
}

The basic score (BS) of createSchedule: 21 (20 for the method-name match + 1 for the local variable time)
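The basic score above can be reproduced with a small sketch. The weights (20 for a method-name match, 1 for a local-variable match) are from the slide; the substring-matching function itself is an assumed simplification of the tool's identifier matching.

```java
import java.util.*;

// Sketch of the basic score (BS) from the slide: a method-name match
// with an expanded query term weighs 20, a local-variable match weighs 1.
public class BasicScore {
    static int score(String methodName, List<String> localVars, Set<String> query) {
        int s = 0;
        String name = methodName.toLowerCase();
        for (String q : query) {
            if (name.contains(q)) s += 20;       // method-name match
            for (String v : localVars)
                if (v.toLowerCase().contains(q)) s += 1; // local-variable match
        }
        return s;
    }

    public static void main(String[] args) {
        Set<String> query = Set.of("schedule", "agenda", "time", "date");
        // createSchedule: name matches "schedule" (20), local "time" matches (1)
        System.out.println(score("createSchedule", List.of("hour", "time"), query)); // 21
    }
}
```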
27. Dynamic Analysis
Extracting execution traces and their dependencies by executing the source code with a test case

Execution trace:
e1: loadSchedule()
e2: initWindow()
e3: createSchedule()
e4: Time()
e5: ScheduleModel()
e6: updateList()

Dependencies (method invocation relations):
e1 → e2, e3, e6
e3 → e4, e5
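Recovering such invocation dependencies from a trace can be sketched with a call stack over enter/exit events. The event format and names here are hypothetical; Reticella's actual trace format will differ.

```java
import java.util.*;

// Sketch: recover method-invocation dependencies from an execution
// trace of "enter X" / "exit X" events using a call stack.
public class TraceAnalysis {
    static List<String> dependencies(List<String> trace) {
        Deque<String> stack = new ArrayDeque<>();
        List<String> edges = new ArrayList<>();
        for (String ev : trace) {
            if (ev.startsWith("enter ")) {
                String callee = ev.substring("enter ".length());
                if (!stack.isEmpty())
                    edges.add(stack.peek() + " -> " + callee); // caller -> callee
                stack.push(callee);
            } else { // "exit ..." leaves the current method
                stack.pop();
            }
        }
        return edges;
    }

    public static void main(String[] args) {
        List<String> trace = List.of(
            "enter createSchedule", "enter Time", "exit Time",
            "enter ScheduleModel", "exit ScheduleModel", "exit createSchedule");
        // createSchedule invokes Time and ScheduleModel
        System.out.println(dependencies(trace));
    }
}
```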