
iFL: An Interactive Environment for Understanding Feature Implementations

Presented at ICSM 2010
http://dx.doi.org/10.1109/ICSM.2010.5609669


  1. iFL: An Interactive Environment for Understanding Feature Implementations. Shinpei Hayashi, Katsuyuki Sekine, and Motoshi Saeki, Department of Computer Science, Tokyo Institute of Technology, Japan. ICSM 2010 (ERA), 14 Sep. 2010.
  2. Abstract  We have developed iFL − An environment for program understanding − It interactively supports the understanding of feature implementations using a feature location technique − It can reduce understanding costs
  3. Background  Program understanding is costly − Extending or fixing an existing feature requires understanding the implementation of the target feature − This dominates maintenance costs [Vestdam 04]  Our focus: feature/concept location (FL) − Locating/extracting the code fragments that implement a given feature/concept. [Vestdam 04]: “Maintaining Program Understanding – Issues, Tools, and Future Directions”, Nordic Journal of Computing, 2004.
  4. FL Example (Search-based)  A new maintainer wants to understand the feature that converts input time strings to schedule objects in a scheduler − Searching the source code with the query "schedule time", feature location returns methods such as Time(String hour, …), createSchedule(), and updateSchedule(…) − The maintainer reads these methods to understand the feature
  5. Problem 1: How to Find Appropriate Queries?  Constructing appropriate queries requires rich knowledge of the implementation − Times: time, date, hour/minute/second − Images: image, picture, figure  In practice, developers try several keywords for FL through trial and error
  6. Problem 2: How to Fix FL Results?  Complete (optimum) FL results are rare − Limited accuracy of the FL techniques used − Individual differences in which code is appropriate  Compared with the optimum result (the code fragments that should be understood), an FL result may miss necessary code (false negatives) and include unnecessary code (false positives)
  7. Our Solution: Feedbacks  We added two feedback processes to the FL loop (sketched below): (1) query input (e.g., "schedule") → (2) feature location, calculating scores and ranking fragments (1st: ScheduleManager.addSchedule(), 2nd: EditSchedule.inputCheck(), …) → (3) selection and understanding of code fragments → feedback via query updating and relevance feedback (addition of hints), returning to (2) − Finish when the user judges that he/she has read all the necessary code fragments
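     The loop above can be pictured in code. The sketch below is purely illustrative: the type names (Fragment, FeatureLocator, Maintainer) and the shape of the calls are assumptions for this transcript, not iFL's actual API.

     import java.util.*;

     // Illustrative sketch of the iFL interaction loop on slide 7; all names are hypothetical.
     public class InteractionLoop {

         record Fragment(String method, String body) {}

         interface FeatureLocator {           // "feature location (calculating scores)"
             List<Fragment> locate(String query, Map<Fragment, Boolean> hints);
         }

         interface Maintainer {               // the human in the loop
             Fragment select(List<Fragment> ranked);    // selection and understanding
             boolean isRelevant(Fragment read);         // relevance feedback (hint)
             Optional<String> updatedQuery();           // query updating
             boolean allNecessaryCodeRead();            // termination criterion
         }

         static void run(String initialQuery, FeatureLocator fl, Maintainer user) {
             String query = initialQuery;               // e.g. "schedule"
             Map<Fragment, Boolean> hints = new HashMap<>();
             while (!user.allNecessaryCodeRead()) {
                 List<Fragment> ranked = fl.locate(query, hints); // ranked FL result
                 Fragment read = user.select(ranked);             // read one fragment
                 hints.put(read, user.isRelevant(read));          // feed hint back into scoring
                 query = user.updatedQuery().orElse(query);       // optionally refine the query
             }
         }
     }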
  8. Query Expansion  Wide query for the initial FL − By expanding query terms to their synonyms using a thesaurus (e.g., "schedule time" expands to schedule, agenda, plan, list, time, date)  Narrow query for subsequent FLs − By using concrete identifiers found in the code fragments of the previous FL result (e.g., from createSchedule(): String hour = …; Time time = new Time(hour, …);) − A minimal sketch of both modes follows below
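     The sketch below illustrates the two query modes under stated assumptions: the toy thesaurus entries are taken from the slide's example, and "narrowing" is read here as keeping only terms that occur as identifiers in the previous result; neither is iFL's actual implementation.

     import java.util.*;

     // Illustrative sketch of slide 8's query expansion; thesaurus and narrowing rule are assumptions.
     public class QueryExpansion {
         static final Map<String, List<String>> THESAURUS = Map.of(
                 "schedule", List.of("schedule", "agenda", "plan"),
                 "time",     List.of("time", "date"));

         // Wide query for the initial FL: expand every term to its synonyms.
         static Set<String> widen(List<String> terms) {
             Set<String> expanded = new LinkedHashSet<>();
             for (String t : terms)
                 expanded.addAll(THESAURUS.getOrDefault(t, List.of(t)));
             return expanded;
         }

         // Narrow query for a subsequent FL: keep only terms that appear as
         // identifiers in the previously located code fragments.
         static Set<String> narrow(Set<String> terms, Set<String> identifiersInResult) {
             Set<String> narrowed = new LinkedHashSet<>(terms);
             narrowed.retainAll(identifiersInResult);
             return narrowed;
         }

         public static void main(String[] args) {
             Set<String> first = widen(List.of("schedule", "time"));
             System.out.println(first);  // [schedule, agenda, plan, time, date]
             Set<String> second = narrow(first, Set.of("schedule", "hour", "time"));
             System.out.println(second); // [schedule, time]
         }
     }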
  9. Relevance Feedback  Improving FL results by user feedback − The user adds a hint stating that a selected code fragment is relevant or irrelevant to the feature − The feedback is propagated to other fragments along dependencies, so the scores of code fragments in the (i+1)-th FL result differ from those in the i-th result
  10. Supporting Tool: iFL  Implemented as an Eclipse plug-in − For static analysis: Eclipse JDT (syntactic information) − For dynamic analysis: Reticella [Noda 09] (execution traces and dependencies) − For a thesaurus: WordNet (synonym information) − The iFL core combines these components on top of Eclipse
  11. Supporting Tool: iFL (screenshot of the tool)
  12. How iFL Works − Inputting a query
  13. How iFL Works − Calculating scores; the evaluated code fragments are listed with their scores
  14. How iFL Works − When the user selects a code fragment, the associated method is shown in the code editor
  15. How iFL Works − Adding hints and calculating scores again
  16. How iFL Works − Scores are updated
  17. How iFL Works − Code reading (alternating with FL)
  18. Preliminary Evaluation  A user (familiar with Java and iFL) actually understood feature implementations for 5 change requirements and related features from Sched (home-grown, small-sized: S1-S5) and 2 change requirements and related features from JDraw (open-source, medium-sized: J1-J2)
     Case | # Correct Events | # FL executions | Interactive costs | Non-interactive costs | Δ Costs | Overheads | # Query updates
     S1   | 19 |  5 |  20 |  31 | 0.92 |  1 | 2
     S2   |  7 |  5 |   8 |  10 | 0.67 |  1 | 1
     S3   |  1 |  2 |   2 |   2 | 0.00 |  1 | 0
     S4   | 10 |  6 |  10 |  13 | 1.00 |  0 | 2
     S5   |  3 |  6 |   6 |  15 | 0.75 |  3 | 2
     J1   | 10 |  4 |  20 | 156 | 0.93 | 10 | 2
     J2   |  4 |  6 |  18 | 173 | 0.92 | 14 | 3
  19. Evaluation Criteria  Overheads: # selected, but unnecessary code fragments  Δ Costs: reduced ratio of overheads between the interactive and non-interactive approaches (a worked example follows below; table as on slide 18)
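     One reading of the Δ Costs column that is consistent with every row of the table (an inference from the data, not stated explicitly on the slides): take an approach's overhead as its costs minus the number of correct events, i.e. the fragments read but unnecessary, and report the relative reduction:
         Overhead = costs − # correct events
         Δ Costs  = 1 − Overhead(interactive) / Overhead(non-interactive)
     For S1: 1 − (20 − 19)/(31 − 19) = 1 − 1/12 ≈ 0.92; for J1: 1 − (20 − 10)/(156 − 10) = 1 − 10/146 ≈ 0.93, matching the table.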
  20. Evaluation Results  Reduced costs in 6 out of 7 cases − In particular, costs were reduced by roughly 90% or more in 4 cases (table as on slide 18)
  21. Evaluation Results  Small overheads − Sched: 1.2 on average, JDraw: 12 on average (table as on slide 18)
  22. Evaluation Results  No effect in S3 − Because the non-interactive approach was already sufficient for understanding there − Not because of a fault in the interactive approach (table as on slide 18)
  23. Conclusion  Summary − We developed iFL, which interactively supports the understanding of feature implementations using FL − iFL reduced understanding costs in 6 out of 7 cases  Future Work − Evaluation++: on larger-scale projects − Feedback++: more efficient relevance feedback • Observing code browsing activities in the IDE
  24. Additional Slides
  25. The FL Approach  Based on search-based FL  Uses static + dynamic analyses − Static analysis: the query and hints (e.g., "schedule") are matched against the source code, yielding methods with their scores − Dynamic analysis: a test case is executed to obtain an execution trace and event dependencies − The events are then evaluated, producing events with their scores as the FL result
  26. Static Analysis: Evaluating Methods  Matching queries to identifiers − The query "schedule time" is expanded via the thesaurus (schedule, agenda, time, date) − A match in a method name counts 20; a match in a local variable counts 1 − Example: createSchedule() matches "schedule" in its name (20) and the local variable time (1), giving a basic score (BS) of 21 (see the sketch below)
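     A minimal sketch of the basic-score idea, using the weights shown on the slide (20 for a method-name match, 1 per matching local variable). The method model and the substring matching rule are simplifying assumptions, not iFL's code.

     import java.util.*;

     // Illustrative basic-score (BS) computation for slide 26; matching details are assumed.
     public class BasicScore {
         record Method(String name, List<String> localVariables) {}

         static int basicScore(Method m, Set<String> expandedQuery) {
             int score = 0;
             for (String term : expandedQuery) {
                 if (m.name().toLowerCase().contains(term))   // e.g. "createSchedule" contains "schedule"
                     score += 20;
                 for (String var : m.localVariables())
                     if (var.toLowerCase().contains(term))    // e.g. local variable "time"
                         score += 1;
             }
             return score;
         }

         public static void main(String[] args) {
             Method createSchedule = new Method("createSchedule", List.of("hour", "time"));
             Set<String> query = Set.of("schedule", "agenda", "time", "date");
             System.out.println(basicScore(createSchedule, query)); // 21, as on the slide
         }
     }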
  27. Dynamic Analysis  Extracting execution traces and their dependencies (method invocation relations) by executing the source code with a test case − Example trace: e1: loadSchedule(), e2: initWindow(), e3: createSchedule(), e4: Time(), e5: ScheduleModel(), e6: updateList(); e1 invokes e2, e3, and e6, and e3 invokes e4 and e5
  28. Evaluating Methods  Highly scored events are those that − execute methods having high basic scores, or − are close (via dependencies) to highly scored events  Example (BS = basic score; a toy propagation sketch follows below):
     Events | Methods          | BS | Score
     e1     | loadSchedule()   | 20 | 52.2
     e2     | initWindow()     |  0 | 18.7
     e3     | createSchedule() | 21 | 38.7
     e4     | Time()           | 20 | 66.0
     e5     | ScheduleModel()  | 31 | 42.6
     e6     | updateList()     |  2 | 19.0
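     iFL's actual score-propagation formula is not given on the slides, so the sketch below only illustrates the stated idea: an event's score is its method's basic score plus a damped contribution from its neighbours in the invocation graph. The damping factor and single-pass update are assumptions, so these numbers will not reproduce the slide's (52.2, 66.0, …).

     import java.util.*;

     // Illustrative score propagation over the event dependency graph of slides 27-28.
     // The propagation rule is an assumption; only the basic scores come from the slide.
     public class EventScoring {
         public static void main(String[] args) {
             String[] events = {"loadSchedule", "initWindow", "createSchedule", "Time", "ScheduleModel", "updateList"};
             double[] basic  = {20, 0, 21, 20, 31, 2};    // basic scores from slide 28
             // undirected neighbourhoods from the invocations e1-e2, e1-e3, e1-e6, e3-e4, e3-e5
             int[][] neighbours = {{1, 2, 5}, {0}, {0, 3, 4}, {2}, {2}, {0}};

             double damping = 0.5;                        // assumed weight of neighbour influence
             double[] score = basic.clone();
             for (int i = 0; i < events.length; i++) {
                 double fromNeighbours = 0;
                 for (int n : neighbours[i]) fromNeighbours += basic[n];
                 score[i] += damping * fromNeighbours / neighbours[i].length;
             }
             for (int i = 0; i < events.length; i++)
                 System.out.printf("%-14s BS=%4.1f score=%5.1f%n", events[i], basic[i], score[i]);
         }
     }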
  29. Selection and Understanding  Selecting highly ranked events − Extracting the associated code fragment (method body) and reading it to understand the feature
     Events | Methods          | Score | Rank
     e1     | loadSchedule()   | 52.2  | 2
     e2     | initWindow()     | 18.7  | 6
     e3     | createSchedule() | 38.7  | 4
     e4     | Time()           | 66.0  | 1
     e5     | ScheduleModel()  | 42.6  | 3
     e6     | updateList()     | 19.0  | 5
     Extracted code fragment for the top-ranked event e4: public Time(String hour, …) { … String hour = …; String minute = …; … }
  30. Relevance Feedback  The user reads the selected code fragments and adds hints − Relevant: the fragment's basic score is raised to the maximum basic score − Irrelevant to the feature: its score becomes 0  Effect on the example (before → after; a sketch of applying hints follows below):
     Events | Methods          | Hint     | Basic score | Score       | Rank
     e1     | loadSchedule()   |          | 20          | 52.2 → 77.6 | 2
     e2     | initWindow()     |          | 0           | 18.7        | 6
     e3     | createSchedule() |          | 21          | 38.7 → 96.4 | 4 → 1
     e4     | Time()           | relevant | 20 → 46.5   | 66.0 → 70.2 | 1 → 3
     e5     | ScheduleModel()  |          | 31          | 42.6 → 51.0 | 3 → 4
     e6     | updateList()     |          | 2           | 19.0        | 5
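     A small sketch of folding the hints back into scoring. The slide's exact notion of "maximum basic score" is not recoverable from the transcript (on the slide, Time()'s basic score becomes 46.5), so this sketch simply uses the maximum of the shown basic scores; all names and values other than the basic scores are illustrative.

     import java.util.*;

     // Illustrative relevance feedback for slide 30: a "relevant" hint raises a fragment's
     // basic score to the (assumed) maximum basic score, an "irrelevant" hint sets it to 0,
     // after which event scores would be recomputed as in the previous sketch.
     public class RelevanceFeedback {
         enum Hint { RELEVANT, IRRELEVANT }

         static double[] applyHints(double[] basicScores, Map<Integer, Hint> hints) {
             double max = Arrays.stream(basicScores).max().orElse(0);
             double[] adjusted = basicScores.clone();
             hints.forEach((event, hint) ->
                     adjusted[event] = (hint == Hint.RELEVANT) ? max : 0);
             return adjusted;   // feed back into event scoring / propagation
         }

         public static void main(String[] args) {
             double[] basic = {20, 0, 21, 20, 31, 2};                        // slide 28's basic scores
             double[] adjusted = applyHints(basic, Map.of(3, Hint.RELEVANT)); // e4 Time() marked relevant
             System.out.println(Arrays.toString(adjusted));                   // [20.0, 0.0, 21.0, 31.0, 31.0, 2.0]
         }
     }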
