Christopher N. Bull

History-Sensitive Detection of Design Flaws

B.Sc. Computer Science with Software Engineering

20th March, 2009

"I certify that the material contained in this dissertation is my own work and does not contain unreferenced or unacknowledged material. I also warrant that the above statement applies to the implementation of the project and all associated documentation. Regarding the electronically submitted version of this submitted work, I consent to this being stored electronically and copied for assessment purposes, including the Department's use of plagiarism detection systems in order to check the integrity of assessed work.

I agree to my dissertation being placed in the public domain, with my name explicitly included as the author of the work."

Date: 20th March, 2009
Signed:

Abstract

This project presents Jistory: an exploration of history-sensitive software metrics and detection strategies through prototypical implementation and empirical evaluation. Jistory has been designed and implemented as a plug-in for Eclipse (a popular IDE). Tight integration with source versioning technologies such as SVN is used to gather a measure of 'constancy', highlighting areas of a project that are prone to change and likely to cause problems. The project has been evaluated with respect to two separate and well-recognised software analysis tools, 'Together' and 'Eclipse Metrics'. The results show that Jistory performs very well, laying the groundwork for avenues of further research.

Table of Contents
Chapter 1 – Introduction
  1.1 Project Aims
  1.2 Report Overview
Chapter 2 – Background
  2.1 Design Flaws
  2.2 Detecting Design Flaws
  2.3 History-Sensitive Detection Strategies
Chapter 3 – Feasibility Study
  3.1 Detection Strategies
    3.1.1 Different metric definitions
  3.2 History-Sensitive Metrics
    3.2.1 Constancy
    3.2.2 Application of Constancy
Chapter 4 – Design and Implementation
  4.1 Considerations
  4.2 Architecture
  4.3 Tools & Technologies
Chapter 5 – The System in Operation
  5.1 Walkthrough
Chapter 6 – Testing & Evaluation
  6.1 Methodology
  6.2 Metrics Evaluation
  6.3 Detection Strategy Evaluation
Chapter 7 – Conclusions
  7.1 Review of Goals
  7.2 Critical Assessment of Design & Implementation
  7.3 Future Work
References

Appendix
A – Metrics for HealthWatcher (version 1)
B – Metrics for HealthWatcher (version 10)
C – Metrics for MobileMedia (version 1)
D – Metrics for MobileMedia (version 5)
E – Final Year Project Proposal

Working Documents: http://www.lancs.ac.uk/ug/bullc/

Chapter 1 – Introduction

Software development teams are constantly increasing their efforts to detect and reduce design flaws in software systems, with the aim of improving the reusability, comprehensibility and maintainability of the systems they develop. Software systems are continuously modified throughout the development and maintenance stages of the software life cycle, and these modifications are recorded through new versions or iterations of the software. A single version of software is a snapshot of a system at a point in time; history, on the other hand, is "an ordered set of versions" [1]. This project analyses the resultant "history" of those modifications to improve the detection of design flaws, by integrating the analysis of the software's history into automated detection strategies.

Most software analysis uses only the current version of the software to identify and remove design flaws from the system. This very linear approach does not take advantage of the software's history, which would give the developer a greater understanding of how the system has evolved, and therefore a better understanding of potential design flaws. Henceforth, the term history-sensitive analysis will denote a detection strategy that is aware of the software system's history. Incorporating history-sensitivity into automatic design flaw detection strategies allows a strategy to measure another dimension of the software system, which, it is argued, will lead to analyses with improved accuracy [2].

There are plenty of tools available that analyse software to detect design flaws; however, there are none, to the author's knowledge, that use a software system's history to aid the detection of design flaws, despite there being research in this area [2]. This challenging and novel project is intended to create a tool that implements an automatic history-sensitive detection strategy for analysing software, and to evaluate the tool's effectiveness in reducing the number of false positives and false negatives that are common among detection strategies that do not incorporate historical data.

1.1 Project Aims

The aim of this project is to create a tool, Jistory, which implements a history-sensitive detection strategy to detect one of the common design flaws, the god class. A god class "refers to those classes that tend to centralize the intelligence of the system. An instance of a god-class performs most of the work, delegating only minor details to a set of trivial classes and using the data from other classes" [3].

The project will research the state of the art in object-oriented design flaw detection strategies, which will form the basis of the history-sensitive detection strategy used in Jistory. This detection strategy identifies the god class design flaw in software systems automatically, allowing the developer to avoid incomprehensible and changeable program structures, as well as improving adherence to software modularity principles.
Modularity principles [4] are rules that should be followed in order to achieve modular code: small modules (smaller modules are better), information hiding (modules should shield their internal structures and data), least privilege (a module should only have access to the resources it needs), minimal coupling (pairs of modules should be as loosely connected as possible) and maximum cohesion (the parts of a module should be as strongly related as possible).

The aim of the project is to successfully implement this strategy in Jistory and to evaluate its output against the following hypotheses:

1st hypothesis: History-sensitive detection strategies can improve the detection of design flaws.

2nd hypothesis: History-sensitive detection strategies can be automated and provide accurate results.

The project's output will provide evidence to support or refute these hypotheses.

1.2 Report Overview

This section provides a brief overview of the remaining chapters of this report:

Background
This chapter describes the background to the research areas discussed in this report and the implications of previous research for the project. It describes the state of the art in conventional metrics and detecting design flaws, the limitations of conventional metrics, and previous research on history-sensitive detection strategies.

Feasibility Study
This chapter discusses the research performed prior to any implementation to determine whether the project was feasible. It discusses the methodology for the research, research on current detection strategies, definitions of history-sensitive metrics, and the implications of the research for the project.

Design and Implementation
This chapter describes the high-level design of Jistory, including a description of the data flow in the system, and explains design decisions and how they affected the implementation.

The System in Operation
This chapter explains how Jistory is used in detail and gives a walkthrough of its processes.

Testing and Evaluation
This chapter describes the testing and evaluation methodology for validating Jistory and the detection strategy it implements. It also presents the results of the evaluations and assesses Jistory against well-defined success criteria.

Conclusions
This chapter summarises the findings of the project presented in this report and makes suggestions for future research directions.

Chapter 2 – Background

Research into design flaws and detection strategies is a relatively well-established area. This chapter discusses the state of the art in these areas and then goes on to discuss the limitations of the current approaches. Lastly, research on proposed solutions to these limitations is presented in the form of history-sensitive detection strategies.

2.1 Design Flaws

Software design is essentially incremental, mainly due to time constraints. This incremental nature leads to changes in the design of software as the code evolves, which is often detrimental to modularity principles [4]. If any of these principles are not adhered to, software becomes harder to understand, maintain or reuse. To avoid these negative changes in a system, more commonly known as design flaws or design anomalies, the design needs to be continuously assessed by applying detection strategies, and when a design flaw is detected it should be corrected by refactoring the code. An example of a design flaw is the god class: a very complex class that centralises too much of the system's intelligence.
The solution to this particular design flaw is to refactor the class into multiple new classes, in an attempt to break the class up and distribute its intelligence.

Refactoring is the process of rewriting existing source code to improve its structure and reusability without changing its behaviour [5]. Automated refactoring support in IDEs has expanded rapidly and is readily used by developers. It is also one of the main methods for resolving design flaws.

2.2 Detecting Design Flaws

Detection strategies are techniques applied by developers in an attempt to detect design flaws. These strategies are continuously evolving, and new ones are created to improve their accuracy, because there are many situations in which design entities are incorrectly detected and/or design entities that should be detected are missed [6]. Conventional strategies consist of two parts: metrics that are gathered on a piece of code, and thresholds that the detection strategy applies to those metrics to determine whether a certain design flaw exists or not.

A metric is a generic term for a measurement of software. Metrics can measure almost any aspect of software, including size, complexity, cohesion, coupling and many more. A simple example of a metric is NOM (Number of Methods) which, in this case, is easily calculated by counting the number of methods in a given class. The problem with using metrics on their own is that they offer an understanding of symptoms, but they cannot provide an understanding of the cause of a problem; it is therefore harder to detect and resolve design flaws with metrics alone.

A detection strategy is a rule that takes metric values as input and evaluates them against a set of thresholds to determine whether the associated entity suffers from a design flaw. Detection strategies are commonly represented visually as a set of logic gates. Figure 2.1 shows an example of a conventional god class detection strategy, defined by Radu Marinescu [7]. This strategy takes three metrics as input and applies an individual threshold to each. If they all evaluate to true, then the class that the metrics were gathered from suffers from the god class design flaw. If even one of the metrics evaluates to false, then the logical 'and' gate evaluates to false, which means the class does not suffer from the god class design flaw.

Figure 2.1 – A 'god class' detection strategy

The three metrics in Figure 2.1 will henceforth be referred to as the three god class metrics. These metrics are defined as follows:

- AOFD (Access of Foreign Data) is the number of attributes from other classes that a class accesses directly.
- WMC (Weighted Method Count) is the sum of the complexities of all of the methods in a class.
- TCC (Tight Class Cohesion) is the percentage of pairs of visible methods that are directly connected.

Larger codebases incur a larger design flaw detection task. This problem is addressed by automating the detection of conventional design flaws, but automation makes the detection strategies less accurate. The reason for this, according to [6], is that the threshold values used in manual detection methods are chosen based on a designer's experience. As soon as a detection strategy is automated, it uses absolute values, which could easily be disputed; however, the benefits of an automated detection strategy, in terms of person-hours, are too great to forgo automation. With that in mind, there is a great need to increase the accuracy of detection strategies.

2.3 History-Sensitive Detection Strategies

Conventional detection strategies only look at the current version of software, but a system's history adds an additional dimension to be analysed. It has been theorised that analysing a software system's history can improve the detection of design flaws [2]; Ratiu et al. also argue that system history can improve the understanding of a flaw, by inferring whether or not it is harmful from how it has evolved through the source code's history. A harmful god class is detrimental to modularity principles and makes a system harder to understand, maintain and reuse. A harmless god class, on the other hand, is still a god class, but it has changed very little during its existence and is unlikely to change in the future; it is therefore not harmful to reusability and maintainability. It should be noted that the analysis of a software system's history is not intended to replace conventional methods; it is intended to enhance them.

Tool Support

There are many tools that attempt to collect metrics, but few that use those metrics and apply them to a detection strategy to find design flaws. Moreover, there are none, to the author's knowledge, that take a software system's history into account to improve the detection of these anomalies. There are some tools available that use a system's history; however, they are commercial tools, and they do not use history for anomaly detection. Those tools use history for other purposes, mainly as a visualisation feature. CodeCrawler, for example, has a visualisation feature for showing classes that have survived the entire evolution of a system, classes that have been added and removed, and classes whose size attributes vary.

Managing History

Version control systems (VCSs) manage multiple revisions of the same file or data. VCSs have many features, which vary between implementations, but the main reason they are of interest to this project is their ability to retain historical data about each individual item under version control. VCSs such as Subversion (SVN) or Concurrent Versions System (CVS) hold data about a file and exactly how it has changed through different revisions. VCSs hold all of the data that could be needed by any detection strategy that requires historical elements.

There are currently no tools, to the author's knowledge, that take advantage of a VCS for the purpose of enhancing design flaw detection strategies. However, there are tools that use a VCS for other statistical analysis.
StatSVN is an example of how a VCS can be used to analyse a system's historical data for statistical purposes: it reads an SVN repository and can gather data on what has changed, when it changed and by whom. A good example of the sort of output that can be produced is a LOC (Lines of Code) over time graph, which can show the LOC value of a system through time and can even show the LOC value per developer, allowing developers to be compared against each other. Although this is a good feature, it is only beneficial for tracking changes and progress, and it is more useful for detecting project process flaws than actual design flaws.

Refactoring is widely used and valuable, but it poses a problem for any history-related analysis. As entities in software evolve, they move, get renamed, get grouped together or get split up. This makes data collection on specific entities harder, as it tends to require manual intervention by a developer to tell a system how the entities have evolved.

Chapter 3 – Feasibility Study

It is well established that deciding which metrics to use in a detection strategy is easy, but that the thresholds applied to those metrics are usually based on an individual's experience [6]. This poses an obvious problem when trying to automate detection strategies, so it seemed prudent to evaluate current strategies for the god class and feature envy flaws, to determine which ones to implement in Jistory before applying any history-sensitivity. The two main applications used to analyse detection strategies were HealthWatcher and MobileMedia.

One of the main reasons for using the HealthWatcher application was that manual design flaw assessments had previously been performed on the software by its original developer(s), and a list of suspect god classes had been created:

- HealthWatcherFacade
- HealthWatcherFacadeInit
- PersistenceMechanism

After some manual inspection of the code, it was clear why these were suspects, so one of the first tasks was to apply some automatic detection strategies or metric analysers and to evaluate and compare their output. Once this was complete, the desired detection strategy would be analysed to see how best to incorporate history-sensitivity.

3.1 Detection Strategies

Conventional detection strategies are constantly evolving, because the understanding and experience of metrics and related detection strategies is continuously growing. Therefore, applying these detection strategies in an automated environment may not fully achieve the desired results without proper evaluation. An example of an evolving metric is cyclomatic complexity (CC) [8]. CC is a measure of the complexity of a program's flow of control, or of a module's decision structure, and is simply calculated by counting every decision point encountered in a method (e.g. an if statement or a for loop). This metric, when first proposed, was based on the procedural programming paradigm, and adaptations have emerged over time to better fit the object-oriented paradigm [9]. In most cases, CC forms part of the WMC metric, which is used in a large proportion of god class detection strategies.

To evaluate the effectiveness of detection strategies and related metrics, several tools were used, such as Together and Eclipse Metrics. Some of the tools had a feature that could be explicitly asked to detect particular design flaws. This option returned mixed results: one program returned no results, while another detected three suspect god classes:

- HealthWatcherFacade
- HealthWatcherFacadeInit
- Date

These results help confirm that 'HealthWatcherFacade' and 'HealthWatcherFacadeInit' are god classes, but they raise the question of why 'Date' was detected. After some in-depth analysis of the class 'Date', it was deemed a false positive, as the class does not centralise intelligence in the system, which forms part of the description of a god class [7].

To further analyse conventional detection strategies, the metrics that make up the most common god class detection strategies were gathered and compared using the different tools. The metrics gathered were AOFD, TCC and WMC, which were explained in the previous chapter. However, during the early stages of this analysis something became exceptionally clear: the tools and plug-ins that gathered metrics were all returning different values for the same metrics. Metrics are supposed to be simple and precise [10], but this was clearly not the case. If the metric values produced are not accurate enough, then how are detection strategies that are built on top of these metrics supposed to be trusted?

The analysis of the metrics was going to contribute to the architecture of Jistory, because it would have helped find a candidate tool to extend so that it gathered metrics for Jistory, leaving Jistory to deal with the main focus of this project. Because the tools returned different metric values, it was decided to implement the metrics collection in Jistory rather than re-use another tool. This discovery prompted further research into how each of the metrics required for this project should be measured. It then became apparent that there is a lack of set methodologies for exactly how these metrics should be measured or implemented. Returning to the CC metric as an example, there are many websites that describe ways of calculating CC, but all calculate the metric in slightly different ways. As CC is a measure of the number of linearly independent paths through a section of code or a method, an object-oriented language such as Java should consider exceptions, and therefore add 1 to the CC value for every catch clause encountered. In the same vein, else, default and finally statements should not be considered, as they do not provide an additional path through a section of code. However, many of the available approaches to calculating CC "mix and match" these points.

3.1.1 Different metric definitions

A big problem with the metrics implemented inside conventional detection strategies is that multiple definitions are freely available. The metric TCC, for example, is defined as the percentage of visible, directly linked method pairs; most definitions state that only public methods should be considered, but some definitions state that "a method is visible unless it is private", even though protected methods are not truly visible. WMC has the same problem as the CC example above, because WMC is essentially the sum of the CC values of all of a class's methods.

3.2 History-Sensitive Metrics

History-sensitive metrics are measurements of software that take the software's history into consideration. These metrics can complement, or in some cases even replace, conventional metrics: although history-sensitive metrics are not intended as replacements, particular scenarios or detection strategies could be devised in which they act as such.

3.2.1 Constancy

Constancy is a measurement of change over two or more versions of a software system's history.
This metric is intended to enrich conventional detection strategies, as will be discussed later. Constancy is based on Marinescu's stability metric [2], with some clear differences.

Firstly, 'stability' is a misleading metric name and does not accurately capture what the metric is intended to do. Stability is "the capacity of an object to return to its original position, or to equilibrium, after having been displaced" [11]. The name 'constancy' more accurately captures the correct intent of the metric, which is to measure "the quality of being enduring and free from change or variation".

Secondly, Marinescu's paper states that "a class was changed if at least one method was added or removed", thereby using the metric with respect to the Number of Methods (NOM). This is highly inaccurate, because functionality can be added to a class within existing methods without the NOM metric changing, giving the illusion that the class is, in their words, "stable" when in fact it is not. This could be partly addressed by using other metrics, such as Lines of Code (LOC) or WMC, which are much more accurate than NOM alone, but there remains the drawback that a class can be modified without its metric values changing. For example, the same number of lines can be added in one method and removed in another: this could be a major change, yet the LOC value stays the same. A similar scenario is applicable to WMC.

To solve these issues, some constancy metrics have been defined:

LOCC_n (Lines of Code Changed, version n)
This metric is the sum of the lines that have been added, modified and removed between version n and version n-1.

CCON_n (Class Constancy, version n)
This metric returns a value between 0 and 1 that represents how constant a class has been between version n and version n-1, and is calculated as:

    CCON_n = 1 - LOCC_n / max(LOC_n, LOC_{n-1})

CCON_{m..n} (Class Constancy, versions m..n)
This metric is the same as the previous metric, but calculates the constancy of a class from version m to version n. It is calculated by acquiring the CCON_j value for each version from m+1 to n and then taking the mean of the results. More formally:

    CCON_{m..n} = ( Σ_{j=m+1..n} CCON_j ) / (n - m)

These metrics successfully calculate how much a file has changed over the versions of the software, and they could be taken a step further by defining variants that acknowledge or ignore whitespace and comments (however, this is outside the scope of this project and will not be implemented).

3.2.2 Application of Constancy

Constancy is envisaged mainly as an addition to conventional detection strategies, enhancing a strategy with the knowledge of whether a design flaw is truly harmful or not. If a design flaw is harmful, steps should be taken to resolve it; if it is harmless, the file has changed very little in its lifetime. If constancy were added to a god class detection strategy and a class were flagged as a harmless god class, the developer would have a choice: resolve the design flaw or leave it. No matter which option is chosen, the design flaw must not be ignored: being harmless now does not mean that it cannot harm the system it resides in later.
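To make the definitions above concrete, the following is a minimal sketch of how constancy could feed this harmful/harmless decision. It assumes the LOC and LOCC figures have already been gathered; the sample values and the 0.9 classification threshold are invented for illustration and are not Jistory's actual values.

```java
/**
 * A sketch of the constancy arithmetic and its application to the
 * harmful/harmless decision. All values and the threshold are invented.
 */
public class ConstancySketch {

    /** CCON_n = 1 - LOCC_n / max(LOC_n, LOC_{n-1}). */
    static double ccon(int loccN, int locN, int locPrev) {
        return 1.0 - (double) loccN / Math.max(locN, locPrev);
    }

    /** CCON_{m..n}: the mean of the per-step CCON values, as defined above. */
    static double ccon(int[] loc, int[] locc, int m, int n) {
        double sum = 0.0;
        for (int j = m + 1; j <= n; j++) {
            sum += ccon(locc[j], loc[j], loc[j - 1]);
        }
        return sum / (n - m);
    }

    public static void main(String[] args) {
        // LOC of one flagged god class across three versions, and the LOCC
        // of each version against its predecessor (locc[0] is unused).
        int[] loc  = {200, 210, 212};
        int[] locc = {  0,  30,   4};
        double constancy = ccon(loc, locc, 0, 2);   // (0.857 + 0.981) / 2 ≈ 0.92
        boolean harmless = constancy > 0.9;         // hypothetical threshold
        System.out.printf("CCON_0..2 = %.2f -> %s god class%n",
                constancy, harmless ? "harmless" : "harmful");
    }
}
```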
An example of a situation in which a developer may want to leave a harmless god class is when that class was designed specifically to perform a task, and the developer does not want to edit the file in any way.

Constancy can also be applied to other detection strategies (for example, feature envy). As feature envy is a method-level design flaw, and not a class-level flaw like god class, a new metric would need to be defined: MCON (Method Constancy). This new metric would behave very similarly to CCON, but would be implemented slightly differently, to calculate the lines of code that have changed for a given method.

Chapter 4 – Design and Implementation

This chapter describes design considerations for Jistory and the decisions that were made to finalise the overall design of the system. To give a clear mental model of the system, the high-level architecture is covered, with an explanation of how each module contributes to the system. Lastly, based on the architecture described, all of the tools and technologies used in Jistory are listed, along with what they are intended to do and why they were chosen.

4.1 Considerations

The first consideration for Jistory was which detection strategy to implement, which was mentioned in chapter 1 (the god class). This strategy was chosen because the god class is a well-known design flaw, commonly used as an example in research papers. Knowing which detection strategy will be implemented means that the metrics that need to be collected for that strategy are known, so particular tools can be considered based on their ability to collect those metrics. However, as mentioned in the Feasibility Study chapter, many tools returned a wide range of values when collecting the same metric on the same resource. Furthermore, no non-commercial tools were found that collected all of the metrics for either of the intended detection strategies and that could be extended or easily integrated into an IDE. For all of these reasons it was decided to implement the collection of the metrics in Jistory, rather than rely on another tool.

The second consideration was to make Jistory easy to integrate into the software development process. This can be achieved by extending, or making a plug-in for, an Integrated Development Environment (IDE), which also means that Jistory can take advantage of the features the IDE offers to plug-in developers. Examples of such features include search functionality optimised for that IDE, and the ability to perform static analyses (analyses performed without executing the program) on projects stored in the IDE.

Another consideration is that the envisaged task of analysing the history of software is extensive, especially if the history is stored on the network/internet, because network latency will extend the execution time of a detection strategy. Therefore, the collected or calculated data should persist in a database, to reduce the amount of calculation and the impact of network latency. As the history of a system does not change, when a new version of the software is released only that newest version needs to be queried from the history; the rest of the required data can be retrieved from the database, and the newly queried data can be placed into the database to improve the next detection run.

Lastly, Jistory should be extensible, to allow the addition of other detection strategies and metrics.
Even though Eclipse is Java-based and aimed primarily at Java projects, it also has support for C++ projects, so Jistory could additionally be extended for C++, as it too is an object-oriented language.

4.2 Architecture

This section gives an overview of Jistory's architecture, along with an overview of the data flow in the system. The high-level architecture in Figure 4.1 shows the main modules that need to be implemented to provide the system's core functionality.

Figure 4.1 – High Level Architecture

The high-level architecture diagram is made up of six main modules:

Source Code Analyser
This analyses source code and outputs an AST, which enables the rest of the system to easily query and manipulate the tree, and therefore the source code.

Metric Collector
This module collects conventional and history metrics from the AST given by the Source Code Analyser and from data returned by the Repository Manager; examples of these metrics are given in the Background and Feasibility Study chapters.

Repository Manager
The Repository Manager analyses a repository of historical data about a system, for example an SVN or CVS repository.

Database Manager
All collected metrics are stored by the Database Manager in a database, and on request all required metrics that are stored in the database are retrieved.

Detection Strategies
This module performs a specified detection strategy on a project and outputs the results.

Jistory has two main inputs: implementation artefacts and a system's history. Implementation artefacts are the source files of an Eclipse project, i.e. ".java" files, and a system's history is a representation of the history, or incremental version changes, of the implementation artefacts (for example, versions 1.0, 2.0 and 2.1). The main output of Jistory is warnings to the developer about design flaws in the system that was analysed. Figure 4.2 shows the inputs and outputs of Jistory and the different data flows in the system.

Figure 4.2 – Data Flow Diagram (implementation artefacts and the system history flow through the Source Code Analyser, Metric Collector, Repository Manager and Database Manager with its persistent metric store, into the Detection Strategy module, which produces warnings of design flaws)

Figure 4.2 can be explained using the god class detection strategy as an example. When a history-sensitive god class detection strategy is performed, the first thing that happens is the analysis of the current project in the Eclipse environment. The source code is analysed, and metrics are collected from the generated ASTs and held until the second phase of the analysis is complete. The second phase is to gather history metrics, for which there are two methods. The first is to query the persistent metric store (the database) and retrieve all of the required history metrics that are already known. If all of the desired metrics for all of the defined versions of the software reside in the database, they are passed on to the Detection Strategy module, along with the metrics collected from the implementation artefacts. However, if after querying the database the list of desired metrics is not complete, the remainder are retrieved from the system's history.
To do this, the Repository Manager retrieves any necessary data from the repository, such as particular files or file diffs (a diff is generated output describing the differences between two files), and metrics are then gathered on the returned history data. Once the metrics have been gathered from the history data, they are placed into the database for future analyses and then sent on to the Detection Strategy module, along with all of the other metrics gathered from the database and the implementation artefacts. Lastly, the Detection Strategy module applies a design flaw detection method to the metrics and outputs the results of the analysis, which can then be shown to the developer.

4.3 Tools & Technologies

This section discusses all of the tools and technologies that Jistory uses or extends in its implementation.

Eclipse
As mentioned in the Considerations section, Jistory is implemented as a plug-in to an IDE. The IDE chosen for this purpose was Eclipse, because "many programmers prefer Eclipse because of its ease of use", which supports the main reason for implementing Jistory as a plug-in in the first place: making it easy to integrate into the development process. One of the most important Eclipse features used is the built-in ability to create Abstract Syntax Trees (ASTs) from the source code of a project. Eclipse offers the ability to efficiently traverse and manipulate this tree and then reflect the changes back into the project source files.

SVNKit
As Jistory implements history-sensitive detection strategies, it requires historical data about the project being analysed. It was previously mentioned that Jistory uses SVN repositories to gather all necessary history data about a project. SVNKit is a Java library for interacting with SVN repositories, and it allows Jistory to gather all the data it may require from a repository. All SVN activity is hidden from the developer as much as possible, with all communication with a repository happening automatically when an analysis is performed.

DB4O
Jistory requires an element of persistence, so it takes advantage of DB4O, an object database implemented in Java. The most important aspect of DB4O is that it can run with or without a server, which means that it integrates well with an Eclipse plug-in. All metrics gathered from a project's history are stored in this database to reduce computation and possible network latency in future analyses.

Chapter 5 – The System in Operation

Jistory has been designed to integrate easily into the development process and to be as effortless to use as possible. In this chapter a walkthrough of Jistory is given to outline the main features of the plug-in, and the generated output is explained. A quick overview of Jistory's properties page is also provided for a better understanding of how Jistory works.

5.1 Walkthrough

Jistory was developed as a proof of concept, and as such the UI (user interface) was not the main focus during design and development. Consequently, the UI was designed to be small, concise and simple to understand from a user's perspective, with minimal time required to begin using the tool.

Figure 5.1 – Multiple-project analysis
Figure 5.2 – Available detection strategies

The main purpose of this tool is to detect god classes and determine whether the discovered flaws are harmful or not.
This was done by refining a conventional detection strategy to incorporate the use of the software's history. This approach leads to two detection strategies being available to the user, as seen in Figure 5.2. The "Find god classes" menu item refers to the conventional strategy, and the "Find god classes (using History)" menu item refers to the history-sensitive detection strategy. These options appear when the drop-down arrow of the circled toolbar button in Figure 5.2 is pressed; pressing the toolbar button itself simply performs the conventional detection strategy. Before either of these methods is performed, at least one project in the package explorer, shown in Figure 5.1, needs to be selected. If more than one project is selected then all of the selected projects are analysed, and, as seen in Figure 5.1, closed projects can be selected too, as the user will be prompted to open the project to continue the analysis.

The output of Jistory is console-based and deliberately concise. The two detection strategies have slightly different output styles. An example of the conventional detection strategy's output is shown in Figure 5.4, which simply declares which files were detected as god classes. The history-sensitive strategy's output (see Figure 5.3 for an example) is only slightly more complex: it states which files are god classes, but separates them into two sub-sets, harmful and harmless. The definitions of harmful and harmless can be found in the Background chapter.

Figure 5.3 – Output of the history-sensitive strategy
Figure 5.4 – Output of the conventional strategy

Lastly, this walkthrough discusses the properties page, as it is vital to the execution of the history-sensitive detection strategy. The properties page, shown for the Jistory project itself in Figure 5.5, is used to store project-specific properties.

Figure 5.5 – Properties page for the Jistory project

Clicking on the "Jistory" button in the left-hand tree menu displays a properties page for the currently selected project. Each properties page allows a user of Jistory to set up the plug-in for history-sensitive analysis. The first option is a drop-down box for choosing the Version Control System to be used (currently only SVN is implemented, but it is designed this way for easy expansion). The next property is 'SVN Repository', which should be set to the location of the repository. There is also a button to test the given URL (Figure 5.6), which displays an indeterminate progress bar until the test connection succeeds or an exception is thrown.

Figure 5.6 – Testing a connection

The next property is the "Repository URL ext", which is the remainder of the URL at which the Eclipse project resides. In the example in Figure 5.5, the URL extension property is set to "/trunk", as that is the folder within the repository in which the Jistory project resides. The "Manage Versions" button displays the dialog shown in Figure 5.7, which allows a user to inform Jistory which revisions are needed in its analysis.
This is done by defining a version name and associating it with a revision number; in Figure 5.7, incremental numbers are used for the version names rather than textual definitions such as "V1.0". The last option, "Analyse workspace version", tells the analysis process whether the files in the workspace should be used in the history analysis. This is useful when a project in the workspace is exactly the same as the last version in the repository, because the history analysis then does not need to analyse the workspace version against the latest repository version.

Figure 5.7 – Version management

Chapter 6 – Testing & Evaluation

Jistory's history-sensitive detection strategy provides a new technique for analysing design flaws in a software system; it therefore needs to be tested for validity and then evaluated against other tools or methods to determine its usefulness and applicability. This chapter first describes the methodology for testing and evaluating the techniques implemented by Jistory, before leading on to sections discussing the evaluation of the metrics and of the detection strategies.

6.1 Methodology

To establish the validity of the metrics and detection strategies implemented in Jistory, several stages of testing and evaluation were undertaken. The testing stages indicate whether history-sensitive detection strategies can be automated, and automated accurately. The evaluation stages either support or refute the hypothesis that a history-sensitive detection strategy can improve the detection of design flaws. Before the methodology is explained in detail, note that the conventional metrics and detection strategy are tested and evaluated first, because so many tools gather or calculate different values for those metrics; the history-sensitive metrics and strategy build on top of the conventional metrics, so they can be tested and evaluated only once the conventional metrics are completely finished.

The first step was to test the accuracy of the conventional and history-sensitive metrics implemented in Jistory by comparing their output with metrics gathered from a manual inspection. This tests for discrepancies in the metrics and ensures that they calculate the correct values. The metrics were tested with JUnit test cases, implemented in an Eclipse plug-in project ("Jistory_Test-Platform"). When executed, the test plug-in loads a new Eclipse environment with Jistory and then performs the test cases on example files that had previously been measured manually.

With the metrics calculating the correct values for the example files, the next stage of testing could commence. This stage focused on testing Jistory's communication with SVN repositories and its ability to retrieve the required data for processing (entire files and diffs of files). This was a manual testing process which started by retrieving specific files from a repository and then retrieving diffs of those files. Small test programs were written for this purpose, and every test passed.
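To illustrate the shape of these metric tests, here is a minimal sketch; the MetricCollector class and the expected values are hypothetical stand-ins, since the real test cases run against Jistory's own classes inside the freshly launched Eclipse environment.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// A sketch of the metric unit tests; MetricCollector and the expected
// numbers are hypothetical, but the pattern follows the description above:
// each example file was measured by hand before the test was written.
public class GodClassMetricTest {

    @Test
    public void wmcMatchesManualMeasurement() {
        MetricCollector collector = new MetricCollector("examples/ComplexClass.java");
        assertEquals(27, collector.wmc()); // hand-counted complexity total
    }

    @Test
    public void tccConsidersOnlyPublicMethods() {
        MetricCollector collector = new MetricCollector("examples/CohesiveClass.java");
        assertEquals(1.0, collector.tcc(), 0.001); // every visible method pair connected
    }
}
```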
Once Jistory passed the testing stage, it had to be evaluated. The main focus of the evaluation was the history-sensitive detection strategy for god classes and whether it enhances the detection of design flaws; however, given the controversy over the metrics and their implementations (pointed out in the Feasibility Study chapter), the metrics are evaluated first. Three projects were used to evaluate Jistory:

- HealthWatcher
- MobileMedia
- XPairtise

HealthWatcher and MobileMedia have been introduced before, but not XPairtise. XPairtise was chosen to aid Jistory's evaluation because of the project's relatively large size and clearly defined versions. To be evaluated by Jistory, a project must be an Eclipse project stored in an SVN repository, and there must be information about which revision of the repository holds each particular version. In the case of XPairtise, all of the versions had been tagged, which is the process of copying all of the contents of the main 'trunk' folder of a repository into a 'tags' folder; each subfolder of 'tags' contains a different version. A custom SVN repository was created for each of the above projects, either because the project did not reside in a repository at all or because its repository was accessed over the internet, which, as discussed in the design chapter, would prove slow. Each version of every project was committed to its respective repository on a local computer and then replaced with the next version of that project, and so on until there were no more versions to commit.

To evaluate Jistory effectively, other tools were required to perform a relative comparison against Jistory's output. Few suitable tools were available, as the tools were required either to detect god classes or to calculate all three god class metrics. After much research, two pre-existing tools were chosen for the evaluation: 'Together' and 'Eclipse Metrics'. 'Together' has two main functionalities of interest to Jistory's evaluation process: the audit feature and the metric collection feature. The audit feature allows a developer to perform an analysis of the source code of a project and search for predetermined problems; for the purpose of this evaluation, the audit feature is used to search for god classes. The metric collection feature has an extensive library of metrics that can be collected, but the evaluation process only employs its ability to collect the three god class metrics (AOFD, TCC & WMC). 'Eclipse Metrics' is an Eclipse plug-in for calculating metrics and displaying warnings to the user; the metric of interest that it calculates is WMC. As previously mentioned, there are very few tools that calculate all of the god class metrics, so a tool that calculated just one of them had to be settled on. This plug-in was chosen because of the difficulty of finding a definition of the WMC metric and of how it should be correctly calculated; it therefore specifically helps to evaluate the WMC metric implemented in Jistory. It was also one of the few plug-ins tried that was not out of date and that seemingly worked correctly. Many other plug-ins required an older version of Eclipse to run correctly, which could be worked around, but this created new problems because the projects themselves relied on newer versions of Eclipse or of the JRE (Java Runtime Environment).

The next two sections discuss the evaluation of the metrics and of the detection strategy in more depth.

6.2 Metrics Evaluation

Before the detection strategies implemented in Jistory can be evaluated, the metrics the strategies are built upon must themselves be evaluated, to determine whether they accurately gather the values described by their textual definitions.
These definitions, discussed in the 'Design and Implementation' chapter, describe the purpose of each metric and how each should be gathered for the Java language. The metrics were evaluated manually and, where possible, against the previously discussed tools that collect them. This section discusses the evaluation of each metric implemented in Jistory individually, starting with AOFD, then TCC and WMC, and finally LOCC & CCON. The projects mentioned above (HealthWatcher, MobileMedia & XPairtise) were all intended to be used as studies in the evaluation process; however, only HealthWatcher and MobileMedia are used for the main evaluations, because XPairtise could not be analysed by 'Together' (the tool threw errors when analysing that project). 'Together' was the main tool compared against Jistory, as it collects all of the required metrics; even though 'Together' could not evaluate XPairtise, the project is still used for other aspects of the evaluation, discussed later in this chapter in the Detection Strategy Evaluation section.

AOFD (Access of Foreign Data)

Finding a textual description of this metric was very straightforward, and it proved simple to implement thanks to the Abstract Syntax Tree generated by Eclipse when analysing a file (see section 4.4 for more details). A comparison of the AOFD metrics gathered by Jistory and 'Together' for HealthWatcher can be found in appendices A and B, and for MobileMedia in appendix C. The evaluation of this metric was essentially done in the testing stage, because of its simplicity, as part of the unit tests; it was determined that the metric calculates its values correctly with respect to the description of what it should achieve. In appendices A, B and C it can be clearly seen that Jistory's AOFD values are consistently equal to or higher than those of 'Together': 73% of Jistory's AOFD values are equal, and the remaining 27% are higher by 1 or 2. The classes with different values were manually re-checked, and it was determined that the metric is being collected correctly. It is theorised that 'Together' calculates the metric based on a slightly different AOFD definition.

TCC (Tight Class Cohesion)

This metric also proved straightforward to find a description for, although there was a little confusion at the start as to which definition to use; the differences between the definitions are discussed in the Feasibility Study chapter. The version implemented in Jistory considers only public methods, keeping it within the scope of its description. Using the HealthWatcher data in appendices A and B, Jistory calculates the same TCC value as Together 64% of the time; however, in all of those cases the TCC value was one of the extreme values (0 or 1). Jistory and 'Together' never agreed on intermediate values. This again prompted additional testing to ensure that the metric values calculated by Jistory were correct; the results showed that the metrics were correct within the scope of their definition. The in-built TCC definition that 'Together' provides does not state its stance on visible methods, i.e. whether it considers only public methods or all methods. This, however, does not explain the large differences between a few of the values.
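For reference, the public-methods-only TCC variant implemented in Jistory can be sketched as follows. This is a minimal sketch: in Jistory the per-method field-access sets come from the Eclipse AST, whereas here they are supplied by hand, and all names and values are illustrative.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// A sketch of TCC: the percentage of pairs of visible (public) methods
// that are directly connected, i.e. that access at least one common field.
public class TccSketch {

    static double tcc(Map<String, Set<String>> fieldAccessesByPublicMethod) {
        List<Set<String>> methods = new ArrayList<>(fieldAccessesByPublicMethod.values());
        int n = methods.size();
        if (n < 2) return 1.0; // degenerate case: no pairs to compare (a design choice)
        int connected = 0;
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                if (!Collections.disjoint(methods.get(i), methods.get(j)))
                    connected++;
        return (double) connected / (n * (n - 1) / 2);
    }

    public static void main(String[] args) {
        Map<String, Set<String>> accesses = new LinkedHashMap<>();
        accesses.put("getName", Set.of("name"));
        accesses.put("setName", Set.of("name"));
        accesses.put("getTotal", Set.of("total"));
        System.out.printf("TCC = %.2f%n", tcc(accesses)); // 1 of 3 pairs share a field
    }
}
```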
Further evaluation could be performed to attempt to discover why there are differences between Jistory and Together, but the TCC metric was deemed to be working accurately, so the evaluation moved on to the next metric.

WMC (Weighted Methods per Class)

This metric posed the most problems in discovering the correct definition and the proper way to calculate it in Java. Many tools claim to calculate WMC and, as discussed in the Feasibility Study chapter, many have different methods of calculating it. After much deliberation, a method for calculating this metric was decided upon and implemented in Jistory; see section '4.4 Metrics' for further information. The first stage of evaluation took place during the testing stage, to prove that the implementation correctly calculates WMC according to the definition set out previously. After testing was complete, the values that Jistory calculated were compared against 'Together' and Eclipse Metrics. Figure 6.1 is a graph summarising the comparison between the WMC values gathered by Jistory, 'Together' and Eclipse Metrics on the latest version of the HealthWatcher system; the data can be found in appendix B.

Figure 6.1 – WMC metric comparison of the HealthWatcher system

At a glance, Figure 6.1 shows that the WMC values calculated by Jistory and Eclipse Metrics are very similar, and the two tools clearly share a similar definition of how WMC should be calculated. 'Together', on the other hand, rises and falls with the other tools but on a lesser scale. For smaller files all of the tools produce very similar WMC values, but for larger files Jistory and Eclipse Metrics produce much higher values than 'Together'. Throughout the evaluation process 'Together' consistently gathered lower values than Jistory; this suggests that Eclipse Metrics is a better tool than 'Together' for comparing the WMC metric, and as such indicates that the values gathered by Jistory are accurate.

It is also observable around classes 57-61 in Figure 6.1 that 'Together' collects a WMC value where Jistory and Eclipse Metrics do not. This is because those objects are actually interfaces, and therefore should not have a WMC value calculated: WMC is defined as the weighted method count per class, not per interface.

LOCC (Lines of Code Changed) & CCON (Class Constancy)

The last metrics to evaluate are LOCC and CCON, which are unique to Jistory. Again, for more implementation-specific detail, refer to section '4.4 Metrics'. It is first prudent to point out that, to the author's knowledge, there are no tools available that implement any similar metrics; there are therefore no tools to compare against.

Firstly, LOCC was evaluated for its effectiveness in calculating the number of lines of code that have changed between two versions of a file. This metric, based on its description in previous sections, calculates how many lines of code have been added, modified and removed. Using the diff function of SVN, a file is returned that annotates each line that is in one version but not in the other, and vice versa. From this file the number of added and removed lines can easily be calculated and, with a little more effort, the lines that were modified can be determined. An example of three modified lines appears in Figure 6.2; a set of modified lines, when detected by Jistory, is defined as a group of lines prefixed with a minus followed by a group of lines prefixed with a plus.

Figure 6.2 – An example section of a diff file with three modified lines
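A minimal sketch of this counting step follows, assuming the unified-diff text has already been fetched from the repository; the class name and the sample hunk are illustrative. Runs of '-' lines followed by '+' lines are paired up as modifications, per the rule just described, with unpaired lines counted as pure removals or additions.

```java
import java.util.Arrays;
import java.util.List;

// A sketch of LOCC counting over unified-diff text: a run of '-' lines
// immediately followed by a run of '+' lines is paired up as modifications;
// unpaired lines count as pure removals or additions.
public class LoccSketch {

    private int added, modified, removed;
    private int minusRun, plusRun; // current '-' run and the '+' run following it

    static int locc(List<String> unifiedDiff) {
        LoccSketch s = new LoccSketch();
        for (String line : unifiedDiff) {
            if (line.startsWith("---") || line.startsWith("+++")) continue; // file headers
            if (line.startsWith("-")) {
                if (s.plusRun > 0) s.flush(); // a fresh '-' run starts a new group
                s.minusRun++;
            } else if (line.startsWith("+")) {
                s.plusRun++;
            } else {
                s.flush(); // a context line or hunk header ends the group
            }
        }
        s.flush();
        return s.added + s.modified + s.removed; // LOCC_n as defined in chapter 3
    }

    private void flush() {
        int paired = Math.min(minusRun, plusRun); // '-' lines matched by '+' lines
        modified += paired;
        removed += minusRun - paired;
        added += plusRun - paired;
        minusRun = plusRun = 0;
    }

    public static void main(String[] args) {
        List<String> diff = Arrays.asList(
                "@@ -1,4 +1,4 @@",
                " unchanged line",
                "-old line one",
                "-old line two",
                "+new line one",
                "+new line two",
                "+an extra line");
        System.out.println("LOCC = " + locc(diff)); // 2 modified + 1 added = 3
    }
}
```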
The downside of this method is that a line that was completely removed, together with a completely new line added in its place, is mistaken for a single modification, and so is not fully reflected in the LOCC metric. This could be improved with some sort of string-matching algorithm to determine whether a line is actually a modified version of a line from the previous version or a completely new line. For the purposes of this project the current implementation is sufficient, as it does determine how many lines have changed.

The next metric to be evaluated was CCON. This metric is relatively simple in comparison to LOCC: it takes a number of LOCC values, together with the number of lines of code in each of the respective file versions, and calculates what proportion of the lines written have changed. The metric does successfully determine how constant a class has been over time, as shown in the testing stage. However, one shortcoming has been observed with this metric; although it does not impact the current detection strategy, addressing it could improve the strategy at a later stage. CCON does not reflect when in its history a file changed: for example, if a file has not changed through most of its history but suddenly changes a lot in the final few versions, the CCON value will not reflect this and will still indicate high constancy, as the overall change is extremely low relative to the file's lifetime. In this example, the file could be considered a possibly harmful god class because of the amount of recent change.

All of the metrics implemented in Jistory have now been evaluated for their effectiveness against their textual descriptions and against other tools. In summary, all of the metrics calculate their values correctly when compared to their descriptions, and in certain cases are calculated to a similar accuracy as other tools.

6.3 Detection Strategy Evaluation

With the evaluation of the metrics in the previous section complete, the detection strategies could be evaluated. Jistory implements two main detection strategies: the first is the conventional detection strategy, which simply identifies god classes; the second is history-sensitive, identifying god classes and then analysing each flaw's history to determine whether it is a harmful or a harmless god class. The conventional detection strategy is evaluated first, as the history-sensitive strategy is based upon it.

Conventional Detection Strategy

Jistory's implementation of a conventional detection strategy is very straightforward, as it simply applies thresholds to metrics; however, the metric values it calculates differ from those of other tools, as mentioned in previous chapters. Jistory's metrics were nevertheless tested and evaluated as accurate, so the next logical step was to evaluate whether the thresholds applied in the detection strategies are feasible. The thresholds used, discussed in the Feasibility Study chapter, are based on one of Marinescu's god class detection strategies [7].
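The shape of that strategy, the AND gate of Figure 2.1, can be sketched as follows; the numeric thresholds here are illustrative placeholders only, not the exact values used by Jistory or given in [7].

```java
// A sketch of the conventional god class rule: three metric conditions
// combined by a logical AND, as in Figure 2.1. The thresholds below are
// illustrative placeholders, not Jistory's actual values.
public class GodClassStrategySketch {

    static final int FEW = 4;                 // placeholder AOFD threshold
    static final int VERY_HIGH = 47;          // placeholder WMC threshold
    static final double ONE_THIRD = 1.0 / 3;  // TCC threshold

    /** A class is flagged only when all three gates evaluate to true. */
    static boolean isGodClass(int aofd, int wmc, double tcc) {
        return aofd > FEW         // accesses plenty of foreign data
            && wmc >= VERY_HIGH   // is very complex overall
            && tcc < ONE_THIRD;   // and its methods are weakly related
    }

    public static void main(String[] args) {
        System.out.println(isGodClass(9, 112, 0.10)); // true: all three gates fire
        System.out.println(isGodClass(9, 112, 0.80)); // false: too cohesive to be flagged
    }
}
```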
All of the metrics implemented in Jistory have thus been evaluated for their effectiveness against their textual descriptions and against other tools. In summary, all of the metrics calculate their values correctly with respect to their descriptions and, in certain cases, to a similar accuracy as other tools.

6.3 Detection Strategy Evaluation

Once the evaluation of the metrics in the previous section was completed, the detection strategies could be evaluated. Jistory implements two main detection strategies: the first is the conventional detection strategy, which simply identifies god classes; the second is history-sensitive, identifying god classes and then analysing each flaw's history to determine whether it is a harmful or a harmless god class. The conventional detection strategy is evaluated first, as the history-sensitive strategy is built upon it.

Conventional Detection Strategy

Jistory's implementation of a conventional detection strategy is very straightforward, as it simply applies thresholds to metrics; however, the metrics, and the values they produce, differ from other tools, as mentioned in previous chapters. Jistory's metrics were tested and evaluated as accurate, so the next logical step was to evaluate whether the thresholds applied in the detection strategies are feasible. The thresholds used, discussed in the Feasibility Study chapter, are based on one of Marinescu's god class detection strategies [7]; a minimal sketch of such a threshold rule is given at the end of this subsection.

To evaluate the effectiveness of this detection strategy, a relative comparison is made between the output of Together and of Jistory when analysing the HealthWatcher system, which is then compared against the list of god classes that the original developers believed existed in their system. Two tables below show the god classes detected in versions 1 and 10 of HealthWatcher: Table 6.1 shows the god classes detected in version 1, and Table 6.2 those detected in version 10 (the latest version). In these tables the list of all detected god classes is in the far left column, and each detection method used to detect any of those suspects is given in the column headers. The columns mean the following:

Original Developers – This column represents classes that the original developers of the system suspected to be god classes.
Jistory – This column represents classes that were flagged as god classes by Jistory’s conventional detection strategy.
Together (audit) – This column represents all of the classes that were flagged as god classes by Together’s audit feature.
Together (metrics) – This column represents all of the classes that were flagged as god classes by applying Marinescu's god class detection strategy [7] to the metrics gathered by Together. This is the same strategy that is implemented in Jistory.

Table 6.1 – HealthWatcher (version 1) god classes

Classes v1                 Original Developers   Jistory   Together (audit)   Together (metrics)
HealthWatcherFacade        ✓                     -         -                  -
HealthWatcherFacadeInit    ✓                     -         -                  -
PersistenceMechanism       ✓                     ✓         -                  ✓
ComplaintRepositoryRDB     -                     -         -                  ✓
Date                       -                     -         -                  ✓

Table 6.2 – HealthWatcher (version 10) god classes

Classes v10                Original Developers   Jistory   Together (audit)   Together (metrics)
HealthWatcherFacade        ?                     -         -                  -
HealthWatcherFacadeInit    ×                     ×         ×                  ×
PersistenceMechanism       ?                     ✓         -                  ✓
ComplaintRepositoryRDB     ?                     -         -                  ✓
Date                       ?                     -         -                  ✓

The original developers state the classes they suspect are god classes, but only for version 1 and not for later versions (hence the '?' in Table 6.2 under the 'Original Developers' column). It is also sensible to point out that 'HealthWatcherFacadeInit' does not exist in version 10 of HealthWatcher, because that particular class was refactored into other classes in version 5; for this reason Table 6.2 marks each cell in that row with a cross.

In version 1 of HealthWatcher the original developers claimed there were three god classes. Two of these suspects, 'HealthWatcherFacade' and 'HealthWatcherFacadeInit', were not detected by any other means used in this evaluation. The third suspect, 'PersistenceMechanism', was detected by Jistory and by the metrics gathered by Together; given the number of positive hits across the tools, this class is clearly a god class. Two further classes were flagged as god classes from the metrics gathered by Together: 'ComplaintRepositoryRDB' and 'Date'. According to Jistory, these classes are too cohesive to be considered god classes (refer to appendices A and B for the metric values gathered). The evaluation of Jistory's metrics against Together showed that the gathered values varied greatly between the two tools; as discussed in the TCC metric evaluation, it is not definitive, but Together may be collecting the metric incorrectly or against a different definition of TCC. Lastly, the audit function in Together does not detect any god classes in any version of HealthWatcher, including versions 1 and 10, which is reflected in the tables above.

Slightly different behaviour is observed in MobileMedia's extreme versions (1 and 5), shown in Table 6.3 and Table 6.4.

Table 6.3 – MobileMedia (version 1) god classes

Classes v1        Jistory   Together (audit)   Together (metrics)
BaseController    -         ✓                  ✓

Table 6.4 – MobileMedia (version 5) god classes

Classes v5        Jistory   Together (audit)   Together (metrics)
BaseController    -         -                  -
PhotoController   ✓         -                  -

The 'BaseController' class was detected by both of Together's features; Jistory, on the other hand, came close to flagging it but found the class slightly too cohesive to be classed as a god class (this information can be found in appendices C and D). This class was refactored in later versions and by version 5 it was no longer a god class. In version 5 Jistory detected one god class, 'PhotoController', which no other tool detected; Together's metrics came very close to flagging it as a god class, but it was apparently not complex enough.

In summary, Jistory's conventional detection strategy is successful in detecting god classes. The suspect god classes that were not detected by Jistory are arguably false positives; this argument rests on the combined knowledge of all the tools, such as the metrics collected and the strategies applied, and on Together's ability to gather metrics accurately, which is discussed earlier in this chapter. These apparent false positives cannot be confirmed without a lengthier evaluation using more tools or other detection strategies.
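To make the threshold-based strategy concrete, the sketch below applies a Marinescu-style god class rule over the three metrics the appendices report (AOFD, TCC, WMC): a class that uses foreign data, is not cohesive, and is complex is flagged. It is a minimal sketch, not Jistory's implementation, and the threshold constants are illustrative assumptions rather than the values Jistory ships with.

// Minimal sketch of a conventional, threshold-based god class check.
// The threshold values below are illustrative assumptions, not Jistory's
// actual settings.
public final class GodClassSketch {

    static final int    AOFD_FEW      = 2;        // accesses of foreign data
    static final double TCC_ONE_THIRD = 1.0 / 3;  // tight class cohesion
    static final int    WMC_VERY_HIGH = 30;       // weighted methods per class

    // Marinescu-style rule: uses foreign data, lacks cohesion, is complex.
    public static boolean isGodClass(int aofd, double tcc, int wmc) {
        return aofd > AOFD_FEW
            && tcc < TCC_ONE_THIRD
            && wmc >= WMC_VERY_HIGH;
    }

    public static void main(String[] args) {
        // Values in the spirit of PersistenceMechanism's Jistory metrics
        // from appendix A (AOFD 3, TCC 0.03, WMC 46): flagged as a god class.
        System.out.println(isGodClass(3, 0.03, 46));
    }
}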
History-Sensitive Detection Strategy

The purpose of the history-sensitive detection strategy implemented in Jistory is to improve the detection of the god class design flaw. The strategy improves detection by adding the ability to determine whether a design flaw is harmful or not. To evaluate this new detection strategy two projects were required: one that stays highly consistent throughout its history (HealthWatcher), and one that varies throughout its history (MobileMedia). XPairtise is also included in the evaluation of the history-sensitive detection strategy, but is not its main focus because of Together's inability to analyse it, as explained at the beginning of this chapter; instead it serves as an extension of the analysis.

All 10 versions of HealthWatcher were analysed by Jistory's conventional detection strategy, and across all versions there was only one consistent god class, 'PersistenceMechanism'. The metrics gathered to determine that this class is a god class show that it has changed very little through its lifetime, and applying the history-sensitive detection strategy returned the anticipated result: "Harmless god class: PersistenceMechanism.java". A god class is considered "harmless" if it has changed very little over time; this change, or lack of it, is represented by the CCON metric, which must be above 0.9 for the class to be considered constant. 'PersistenceMechanism' has a CCON value of 0.977699530516432, as the file changes only once throughout the 10 versions. This result backs up the earlier findings of the manual analyses. As described in the Feasibility Study chapter, harmless god classes are not seen as currently dangerous to the system, and it is down to the developer to assess whether the class should be refactored.

To evaluate the history-sensitive detection strategy further, MobileMedia was also analysed. All 5 versions of the project were analysed by Jistory's conventional detection strategy, and again only one god class was found, 'PhotoController.java'; however, it only occurred in the last two versions (4 and 5), which makes it a relatively new god class. As with the HealthWatcher analysis, this god class was then analysed further. The class was created in version 2 of the project and became a god class in version 4; based on the metrics gathered by the conventional detection strategy, it was observed that this class changes enough to be a concern to current and future developers. To back up this theory the history-sensitive detection strategy was applied to MobileMedia, which once again returned the anticipated result, "Harmful god class: PhotoController.java", with a CCON value of 0.6646067559438354.

Harmful god classes are god classes that should be resolved, as they are detrimental to the development process: they are hard to understand, maintain and reuse. God classes labelled as harmful have changed a great deal previously and are therefore likely to change in the future too. The results of this analysis back up the findings from the manual analysis of MobileMedia and the theories made about the potential god class. A sketch of this harmful/harmless classification follows below.
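The 0.9 constancy threshold in the sketch below comes from the evaluation above; composing it with the conventional check in this particular way is a plausible reading of the strategy, not Jistory's exact control flow.

// Minimal sketch of the history-sensitive classification: a god class is
// "harmless" if its constancy (CCON) is above 0.9, otherwise "harmful".
public final class HistorySensitiveSketch {

    static final double CONSTANCY_THRESHOLD = 0.9;

    public enum Verdict { NOT_A_GOD_CLASS, HARMLESS_GOD_CLASS, HARMFUL_GOD_CLASS }

    public static Verdict classify(boolean conventionalGodClass, double ccon) {
        if (!conventionalGodClass) {
            return Verdict.NOT_A_GOD_CLASS;
        }
        return ccon > CONSTANCY_THRESHOLD
                ? Verdict.HARMLESS_GOD_CLASS   // e.g. PersistenceMechanism, CCON 0.977...
                : Verdict.HARMFUL_GOD_CLASS;   // e.g. PhotoController, CCON 0.664...
    }
}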
The final project used in the evaluation of the history-sensitive detection strategy is XPairtise. After this suitable project was found and its repository set up so that Jistory could analyse the history, Together failed to analyse it, so the project is used only for this part of the analysis. Using the same approach as the previous two analyses, the conventional detection strategy was applied to XPairtise and found 14 god classes. The next step was to inspect all 7 versions of the project and theorise whether each god class suspect from the conventional detection strategy is harmful or harmless. After the inspection was complete, the suspects were generally judged harmless, not having changed a great amount through their existence. It was not surprising, then, that after the history-sensitive detection strategy was applied, 12 of the 14 were considered harmless, while 2 were flagged as harmful. The average CCON value across all 14 god class suspects was 0.957750488, but the two classes flagged as harmful had CCON values of 0.8622687488666871 and 0.8541955313062749. Those two classes came very close to the 0.9 constancy threshold, which goes to show that manual inspection is not accurate enough for decisions such as whether a god class is harmful, especially when software history is involved: the amount of data and the number of files to analyse grow considerably with every new version. Three of the files flagged as god classes in XPairtise had no history (the files did not exist in previous versions), which raises the question of how to decide whether a class with no history is a harmful or harmless god class. This could be argued either way, but it was implemented in Jistory on the notion that a file is constant until it has been changed, and therefore harmless.

To summarise the evaluation of the detection strategies: both the conventional and the history-sensitive strategies return feasible and accurate results, especially when compared to another tool. The history-sensitive strategy successfully improves the detection of design flaws by adding the notion of harmful and harmless god classes, and it has also been successfully automated.

Chapter 7 – Conclusions

7.1 Review of Goals

At the beginning of this project two hypotheses were made. The first was that "history-sensitive detection strategies can improve the detection of design flaws"; the second that "history-sensitive detection strategies can be automated and provide accurate results".

After a large feasibility study, extensive research, substantial implementation, and thorough testing and evaluation, it is safe to say that the history-sensitive detection strategy created for this project does in fact improve the detection of design flaws.
The history-sensitive detection strategy, in comparison to the equivalent conventional detection strategy, has the additional ability to determine whether a suspect god class is harmful or harmless, allowing a developer to take more precise and informed action on that design flaw.

Jistory is proof supporting the second hypothesis, as it is a successful implementation of a detection strategy tool with an automated history-sensitive detection strategy. The Testing and Evaluation chapter also shows that Jistory provides accurate results across its analyses. However, further evaluation of the tool should take place, since the thresholds held in the detection strategies are absolute values and, as mentioned before, thresholds are usually based on developers' experience. Jistory is a very new tool and has not analysed nearly enough projects for the thresholds implemented within it to be considered experience-based.

During the Testing and Evaluation work there was difficulty in finding projects that fit certain criteria. The main criterion that was required (and not found) was a list of known design flaws within the project. There needs to be a paper or "online catalogue" discussing and evaluating a set of well-understood case studies or software projects and their design flaws. This would provide a set of agreed-upon metrics and qualified design flaws to use as a control when evaluating projects and tools such as Jistory.

7.2 Critical Assessment of Design & Implementation

Jistory was designed to be as flexible as possible while keeping the final working implementation focused on particular technologies (Eclipse, SVN, etc.). Flexibility was nevertheless built in: for example, Jistory is extensible and can have any number of metrics and detection strategies added to it. The high-level architecture was designed to be compartmentalised, allowing components to be swapped for others; this applies in particular to the network components, which currently support only SVN but could be extended with a CVS component.

Another good feature of Jistory is its database, in which all data collected from the history of a project is cached. The history of a project does not change, and the cache speeds up analysis tenfold. With data in the database, particular calculations and algorithms do not need to be re-run, and if the project repository is stored on a network or the internet, as repositories commonly are, network calls are reduced, possibly down to zero (a minimal sketch of this caching idea appears at the end of this section).

One minor known flaw in Jistory is that an Eclipse project analysed through its repository history must keep one particular path that does not change through that history. This means the common task of "tagging" a version of a project in an SVN repository can still be performed, but an analysis across those tagged versions, which sit on different repository paths, cannot. This could be fixed by associating a repository path with each project version in the database, rather than simply associating a revision with the project.
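The caching idea is simple enough to sketch: because committed history is immutable, a metric computed for a given file at a given repository revision can be stored once and reused forever. The sketch below is illustrative of that idea only; the key format and method names are assumptions, not Jistory's database schema.

import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Minimal sketch of a history cache keyed by (file, revision, metric).
// A hit avoids both recomputation and a repository/network round-trip;
// entries never need invalidating because committed history is immutable.
public final class MetricCacheSketch {

    private final Map<String, Double> cache = new HashMap<>();

    public double metricFor(String filePath, long revision, String metricName,
                            Supplier<Double> computeFromRepository) {
        String key = filePath + "@" + revision + "#" + metricName;
        // computeFromRepository runs only on a cache miss
        return cache.computeIfAbsent(key, k -> computeFromRepository.get());
    }
}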
7.3 Future Work

Through this project many new research areas have cropped up that could provide interesting research if combined with this project. Below are some topic areas that would be interesting to integrate into Jistory, together with some improvements to be made.

Detect Refactorings

Jistory uses history-sensitive approaches during its analyses to track how files (specifically classes) change over time, so there is an inherent problem when a developer refactors the system, for example by renaming a class or moving a method to another class. In the renaming case a developer may rename file 'A' to 'B', but according to the history-analysis techniques employed, the developer has removed file A and added file B, as the file named 'A' no longer exists. Essentially, if software is refactored between versions, its history cannot be compared accurately. There needs to be a system or detection method in place to automatically detect these refactorings, such as the algorithms implemented in the Eclipse plug-in RefactoringCrawler [12], or at least a way for a developer to define these changes manually, to be stored in the database. A minimal sketch of a content-similarity check for renames follows below.
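The sketch pairs a file removed in one version with a file added in the next when their contents are sufficiently similar, treating the pair as a probable rename. The Jaccard measure over distinct lines and the 0.8 cut-off are illustrative assumptions; RefactoringCrawler's actual algorithms are considerably more sophisticated.

import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Minimal sketch of rename detection by content similarity; the measure
// and threshold are assumptions, not RefactoringCrawler's method.
public final class RenameDetectionSketch {

    static final double SIMILARITY_THRESHOLD = 0.8;

    // Jaccard similarity of the two files' distinct lines.
    static double similarity(List<String> removedFile, List<String> addedFile) {
        Set<String> a = new HashSet<>(removedFile);
        Set<String> b = new HashSet<>(addedFile);
        Set<String> intersection = new HashSet<>(a);
        intersection.retainAll(b);
        Set<String> union = new HashSet<>(a);
        union.addAll(b);
        return union.isEmpty() ? 0.0 : (double) intersection.size() / union.size();
    }

    // A removed/added pair this similar is treated as a probable rename, so
    // the "two files" can be analysed as one continuous history.
    static boolean looksLikeRename(List<String> removedFile, List<String> addedFile) {
        return similarity(removedFile, addedFile) >= SIMILARITY_THRESHOLD;
    }
}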
Apply Automatic Refactorings

Design flaws are often corrected by refactoring the code. Jistory currently only detects design flaws, but an ability to apply automatic refactorings to resolve them could greatly improve its applicability to the development process.

Machine Learning

Detection strategies used to detect design flaws, and the thresholds applied in those strategies, are usually based on an individual's experience [6]. The history-sensitive approaches implemented in Jistory currently have only absolute threshold values (based on the developer's experience). To avoid these absolute values, and to help break down the barrier between experienced and novice developers, a machine learning technique could be incorporated. Such a technique would provide an adaptive approach to detecting design flaws that could be trained on examples of known design flaws [13]. This opens a new area focused on the evolution of design flaws, with a heightened ability to predict them.

Visualisation

Software visualisations are a great way to summarise and display a system, or an aspect of a system, to a developer, and Jistory could be extended to encompass such techniques. Moreover, visualisation techniques could be implemented that not only emphasise specific design flaws within a system but also visualise the history of those design flaws. Similar research has been conducted by Van Rysselberghe and Demeyer on visualising change history [14], but it does not focus on visualising design flaws and how they have evolved through the software's history.

Author's Future Work

Future efforts will be made by the author to add more detection strategies to Jistory for other design flaws. All of the implemented strategies will be continually evaluated to further support or refute whether the application of history-sensitive detection strategies enhances the detection of design flaws. The only true way to prove a detection strategy's effectiveness is through large-scale evaluation of multiple software systems.

Secondly, motivated mainly by the inaccuracy of metrics gathered by other tools, solid definitions of how to calculate metrics are needed. The definitions of what the metrics are and what they are used for are clear, and for the majority it is obvious how they should be calculated; however, there are important discrepancies in how some are calculated. Taking the Cyclomatic Complexity metric for Java as an example, there are many definitions of how a developer should go about calculating it, but, to the author's knowledge, there is no definitive approach; the example below illustrates the divergence.
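The method below receives different Cyclomatic Complexity values depending on which decision points a tool chooses to count. Both conventions shown are in common use, and neither is claimed here to be the correct one; the method itself is a made-up illustration.

// One method, two defensible Cyclomatic Complexity values, depending on
// whether short-circuit operators (&&, ||) count as decision points.
public final class CyclomaticExample {

    public static String grade(int score, boolean passedCoursework) {
        if (score >= 70 && passedCoursework) { // decision 1 (+1 more if && counts)
            return "first";
        }
        if (score >= 40) { // decision 2
            return "pass";
        }
        return "fail";
    }
    // Counting only branching statements: CC = 1 + 2 = 3.
    // Also counting the && operator:      CC = 1 + 3 = 4.
}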
References

[1] Gîrba, T., & Ducasse, S. (2006). Modeling History to Analyze Software Evolution. Journal of Software Maintenance and Evolution: Research and Practice, 207-236.

[2] Ratiu, D., Marinescu, R., Ducasse, S., & Gîrba, T. (2004). Evolution Enriched Detection of God Classes. Proceedings of the Computer Aided Verification of Information Systems Workshop (CAVIS 2004). Timisoara: eAustria Institute.

[3] Marinescu, R. (2002). Measurement and Quality in Object-Oriented Design. Ph.D. thesis, Department of Computer Science, "Politehnica" University of Timisoara.

[4] Meyer, B. (1997). Object-Oriented Software Construction. Upper Saddle River, N.J.: Prentice Hall PTR.

[5] Fowler, M. (n.d.). Refactoring Home. Retrieved from Refactoring: http://www.refactoring.com/

[6] Mihancea, P. F., & Marinescu, R. (2004). Improving the Automatic Detection of Design Flaws in Object-Oriented Software Systems.

[7] Marinescu, R. (2001). Detecting Design Flaws via Metrics in Object-Oriented Systems. Proceedings of the 39th International Conference and Exhibition on Technology of Object-Oriented Languages and Systems (TOOLS 39) (pp. 173-183). Washington, DC, USA: IEEE Computer Society.

[8] McCabe, T. J. (1976). A Complexity Measure. IEEE Transactions on Software Engineering, Vol. 2, No. 4, 308-320.

[9] Laplante, P. A. (2007). What Every Engineer Should Know about Software Engineering. CRC Press.

[10] Mens, T., & Demeyer, S. (2001). Future Trends in Software Evolution Metrics. Proceedings of IWPSE 2001 (International Workshop on Principles of Software Evolution) (pp. 83-86). ACM Press.

[11] Nguyen, H. T., Prasad, N. R., & Walker, E. A. (2003). A First Course in Fuzzy and Neural Control. CRC Press.

[12] Dig, D., Comertoglu, C., Marinov, D., & Johnson, R. (2006). Automated Detection of Refactorings in Evolving Components. Lecture Notes in Computer Science, 404-428.

[13] Kreimer, J. (2005). Adaptive Detection of Design Flaws. Electronic Notes in Theoretical Computer Science, 141, 117-136.

[14] Van Rysselberghe, F., & Demeyer, S. (2004). Studying Software Evolution Information by Visualizing the Change History. Proceedings of the 20th IEEE International Conference on Software Maintenance (pp. 328-337). Washington: IEEE Computer Society.

[15] Chidamber, S. R., & Kemerer, C. F. (1994). A Metrics Suite for Object Oriented Design. IEEE Transactions on Software Engineering, Vol. 20, No. 6.

Appendix

Appendix A – Metrics for HealthWatcher (version 1)
This appendix contains a table of metrics collected by Jistory, Together and Eclipse Metrics for version 1 of HealthWatcher. Metrics that are above or below the required threshold for the god class detection strategy are highlighted.

Appendix B – Metrics for HealthWatcher (version 10)
This appendix contains a table of metrics collected by Jistory and Together for version 10 of HealthWatcher. Metrics that are above or below the required threshold for the god class detection strategy are highlighted.

Appendix C – Metrics for MobileMedia (version 1)
This appendix contains a table of metrics collected by Jistory and Together for version 1 of MobileMedia. Metrics that are above or below the required threshold for the god class detection strategy are highlighted.

Appendix D – Metrics for MobileMedia (version 5)
This appendix contains a table of metrics collected by Jistory, Together and Eclipse Metrics for version 5 of MobileMedia. Metrics that are above or below the required threshold for the god class detection strategy are highlighted.

Appendix E – Final Year Project Proposal
This is the specification document that was produced at the end of the 2nd academic year, and outlines what this project originally intended to achieve.

Appendix A

Metrics for HealthWatcher (version 1)

This appendix contains a table of metrics collected by Jistory, Together and Eclipse Metrics for version 1 of HealthWatcher.
Metrics that are above or below the required threshold for the god class detection strategy are highlighted.

The god classes found in this table are:

ComplaintRepositoryRDB – detected by Together
Date – detected by Together
PersistenceMechanism – detected by Jistory and Together

ClassesTogetherEclipse MetricsJistoryAOFDTCCWMCWMCAOFDTCCWMCAddress00.1181800.1418AddressRepositoryRDB20.1101620.2016AnimalComplaint00.1101000.2710CommunicationException0 1100.001Complaint00.1252500.1225ComplaintRecord00.2101000.6710ComplaintRepositoryArray00.1181800.6719ComplaintRepositoryRDB40.16211140.71110ConcreteIterator00.27700.307ConcurrencyManager115711.007Constants001100.001Date30437630.4577DiseaseRecord013301.003DiseaseType00.1151500.1815DiseaseTypeRepositoryArray00.1181800.8019DiseaseTypeRepositoryRDB20.1142120.2121Employee00.28800.438EmployeeRecord10.55511.005EmployeeRepositoryArray00.1171700.8318EmployeeRepositoryRDB20.271520.6715ExceptionMessages0 0000.000FoodComplaint00.1161600.1816Functions00101000.0010HTMLCode10192210.0721HWServlet202520.005HealthUnit00.2111100.2811HealthUnitRecord00.27701.007HealthUnitRepositoryArray00.1262600.8527HealthUnitRepositoryRDB20.1214220.6242HealthWatcherFacade20283420.7034HealthWatcherFacadeInit00.1238000.2780IAddressRepository005-00.005IComplaintRepository006-00.006IDiseaseRepository005-00.005IEmployeeRepository005-00.005IFacade0016-00.0016IHealthUnitRepository008-00.008IIteratorRMITargetAdapter002-00.002IPersistenceMechanism007-00.007ISpecialityRepository006-00.006ISymptomRepository005-00.005InsertEntryException0 5500.005InvalidDateException0 2200.002InvalidSessionException0 2200.002IteratorDsk004-00.004IteratorRMISourceAdapter20.2151920.6720IteratorRMITargetAdapter014401.005Library00131400.0014LocalIterator004-00.004MedicalSpeciality00.36600.476MedicalSpecialityRecord002201.002ObjectAlreadyInsertedException0 1100.001ObjectNotFoundException0 1100.001ObjectNotValidException0 1100.001PersistenceMechanism30.1314230.0346PersistenceMechanismException0 1100.001PersistenceSoftException0 1100.001RepositoryException0 1100.001Schedule10.1213010.4229ServletConfigRMI001200.002ServletGetDataForSearchByDiseaseType103820.009ServletGetDataForSearchByHealthUnit103920.009ServletGetDataForSearchBySpeciality103920.009ServletInsertAnimalComplaint101820.008ServletInsertEmployee002710.007ServletInsertFoodComplaint101820.008ServletInsertSpecialComplaint101820.008ServletLogin114921.009ServletSearchComplaintData20172330.0023ServletSearchDiseaseData003910.009ServletSearchHealthUnitsBySpecialty002810.008ServletSearchSpecialtiesByHealthUnit002810.008ServletUpdateComplaintData402350.003ServletUpdateComplaintSearch20102030.0021ServletUpdateEmployeeData103420.004ServletUpdateEmployeeSearch202420.004ServletUpdateHealthUnitData102320.003ServletUpdateHealthUnitSearch1061520.0016ServletWebServer113311.003Situation003300.673SituationFacadeException0 1100.001SpecialComplaint00.1101000.2710SpecialityRepositoryArray00.1212100.8422SpecialityRepositoryRDB20.1112020.2920Symptom00.34400.334SymptomRepositoryArray00.1191900.8320TransactionException0 1100.001UpdateEntryException0 4400.004

Appendix B

Metrics for HealthWatcher (version 10)

This appendix contains a table of metrics collected by Jistory, Together and Eclipse Metrics for version 10 of HealthWatcher. Metrics that are above or below the required threshold for the god class detection strategy are highlighted.

The god classes found in this table are:

ComplaintRepositoryRDB – detected by Together
Date – detected by Together
PersistenceMechanism – detected by Jistory and Together
The constancy value is given for this class, as it was detected by Jistory as a god class.

ResourceTogetherEclipse MetricsJistoryAOFDTCCWMCWMCAOFDTCCWMCCCON (Constancy)AbstractFacadeFactory0020002 AbstractRepositoryFactory0060006 Address00.07181800.1437908518 AddressRepositoryRDB10.1101610.216 AnimalComplaint10.1114141114 AnimalComplaintState10.05171210.15384615417 AnimalComplaintStateClosed0077007 AnimalComplaintStateOpen0077107 ArrayRepositoryFactory0066006 Command0043004 CommandRequest006-000 CommandResponse001-000 CommunicationException0011001 Complaint10.04333810.80788177338 ComplaintRecord00.2101000.66666666710 ComplaintRepositoryArray00.1181800.66666666719 ComplaintRepositoryRDB40.056311240.714285714111 ComplaintState10.02291810.0829 ComplaintStateClosed1014141014 ComplaintStateOpen1013132013 ConcreteIterator00.177700.37 ConcurrencyManager1157117 ConfigRMI0023103 ConnectionPersistenceMechanismException1011101 Constants0011001 Date30.04437630.45238095277 DiseaseRecord10.555115 DiseaseType00.08151500.18095238115 DiseaseTypeRepositoryArray00.11181800.819 DiseaseTypeRepositoryRDB10.12152810.53571428628 Employee00.11121700.27272727317 EmployeeRecord10.555115 EmployeeRepositoryArray00.13171700.83333333318 EmployeeRepositoryRDB20.271520.66666666715 ExceptionMessages0000000 FacadeFactory0022002 FacadeUnavailableException0000000 FoodComplaint10.0720201120 FoodComplaintState10.04211410.11029411821 FoodComplaintStateClosed0099009 FoodComplaintStateOpen0099109 Functions0010100010 GetDataForSearchByDiseaseType1048209 GetDataForSearchByHealthUnit1049209 GetDataForSearchBySpeciality1049209 HTMLCode10.01192210.06593406621 HWServer1012102 HWServlet10.058131013 HealthUnit00.11152000.19696969720 HealthUnitRecord00.1788018 HealthUnitRepositoryArray00.09262600.84848484827 HealthUnitRepositoryRDB20.11214420.844 HealthWatcherFacade00.033513500.172413793135 IAddressRepository005-000 IComplaintRepository006-000 IDiseaseRepository005-000 IEmployeeRepository005-000 IFacade0025-000 IFacadeRMITargetAdapter0025-000 IHealthUnitRepository008-000 IIteratorRMITargetAdapter002-000 IPersistenceMechanism007-000 ISpecialityRepository006-000 ISymptomRepository006-000 InsertAnimalComplaint1028208 InsertDiseaseType003101010 InsertEmployee003101010 InsertEntryException0055005 InsertFoodComplaint1028208 InsertHealthUnit003101010 InsertMedicalSpeciality003101010 InsertSpecialComplaint1028208 InsertSymptom003101010 InvalidDateException0022002 InvalidSessionException0022002 IteratorDsk004-000 IteratorRMISourceAdapter20.17151920.66666666720 IteratorRMITargetAdapter0144015 Library0013140014 LocalIterator004-000 LogMechanism10.05121310.06666666713 Login1038208 LoginMenu1034204 MedicalSpeciality00.14101500.27777777815 MedicalSpecialityRecord00.3355015 ObjectAlreadyInsertedException0011001 ObjectNotFoundException0011001 ObjectNotValidException0011001 Observer001-000 PersistenceMechanism30.06354930.072727273530.977699530516432PersistenceMechanismException0011001 RDBRepositoryFactory00.277017 RMIFacadeAdapter20.04293120.92307692331 RMIFacadeFactory1023103 RMIServletAdapter10.03346310.8597883663 RepositoryException0011001 RepositoryFactory0033003 SQLPersistenceMechanismException1011101 Schedule10.07213010.42222222229 SearchComplaintData2018233023 SearchDiseaseData0049109 SearchHealthUnitsBySpecialty0038108 SearchSpecialtiesByHealthUnit0038108 ServletRequestAdapter00.210100110 ServletResponseAdapter0022012 ServletWebServer1137117 Situation003300.6666666673 SpecialComplaint10.1114141114 SpecialComplaintState10.07151110.16363636415 
SpecialComplaintStateClosed0066006 SpecialComplaintStateOpen0066106 SpecialityRepositoryArray00.11212100.84444444422 SpecialityRepositoryRDB10.17122610.71428571426 Subject003-000 Symptom00.1791400.2514 SymptomRecord10.3366116 SymptomRepositoryArray00.11202000.82222222221 SymptomRepositoryRDB10.17122610.71428571426 ThreadLogging1011101 TransactionException0011001 UpdateComplaintData4034504 UpdateComplaintList1058209 UpdateComplaintSearch207133013 UpdateEmployeeData1045205 UpdateEmployeeSearch2035305 UpdateEntryException0044004 UpdateHealthUnitData1034204 UpdateHealthUnitList105102011 UpdateHealthUnitSearch1038208 UpdateMedicalSpecialityData1034204 UpdateMedicalSpecialityList105102011 UpdateMedicalSpecialitySearch1038208 UpdateSymptomData1034204 UpdateSymptomList105102011 UpdateSymptomSearch1038208

Appendix C

Metrics for MobileMedia (version 1)

This appendix contains a table of metrics collected by Jistory and Together for version 1 of MobileMedia. Metrics that are above or below the required threshold for the god class detection strategy are highlighted.

The god classes found in this table are:

BaseController – detected by Together

ResourceTogetherJistoryAOFDTCCWMCAOFDTCCWMCAddPhotoToAlbum20320.66666673AlbumData00.111100.672727318AlbumListScreen209209BaseController50.065550.404411866Constants000000ControllerInterface002000ImageAccessor20.062720.428571437ImageData00.14900.27777789ImageNotFoundException00400.16666674ImagePathNotValidException00400.16666674ImageUtil00.331100.166666716InvalidArrayFormatException003003InvalidImageDataException00400.16666674InvalidImageFormatException003003InvalidPhotoAlbumNameException002002MainUIMidlet004004NewAlbumScreen20320.33333333NullAlbumDataReference00400.16666674PersistenceMechanismException00400.16666674PhotoListScreen204204PhotoViewScreen515517UnavailablePhotoAlbumException00400.16666674

Appendix D

Metrics for MobileMedia (version 5)

This appendix contains a table of metrics collected by Jistory, Together and Eclipse Metrics for version 5 of MobileMedia. Metrics that are above or below the required threshold for the god class detection strategy are highlighted.

The god classes found in this table are:

PhotoController – detected by Jistory
The CCON metric is given for this class, as Jistory detected it as a god class.

ResourceTogetherJistoryAOFDTCCWMCAOFDTCCWMCCCON (Constancy)AbstractController10.071510.16666666715 AddPhotoToAlbum20.2720.3333333337 AlbumController60187022 AlbumData00.071600.76666666723 AlbumListScreen209209 BaseController20103011 BaseMessaging003003 BaseThread102102 Constants000000 ControllerInterface002-10-1 ImageAccessor20.053120.41176470645 ImageData00.081500.15238095215 ImageNotFoundException00400.1666666674 ImagePathNotValidException00400.1666666674 ImageUtil00.331200.16666666718 InvalidArrayFormatException003003 InvalidImageDataException00400.1666666674 InvalidImageFormatException003003 InvalidPhotoAlbumNameException002002 MainUIMidlet104104 NetworkScreen20420.1666666674 NewLabelScreen20.33520.45 NullAlbumDataReference00400.1666666674 PersistenceMechanismException00400.1666666674 PhotoController40.073050.214285714480.664606755943835PhotoListController30164017 PhotoListScreen202202 PhotoViewController30114116 PhotoViewScreen50.25950.53333333311 ScreenSingleton00.2700.27 SmsMessaging20.11920.34848484825 SmsReceiverController21630.3333333336 SmsReceiverThread203215 SmsSenderController205318 SmsSenderThread10.2710.5714285717 SplashScreen00.16007 SplashScreen.CountDown001 UnavailablePhotoAlbumException00400.1666666674

Appendix E

Final Year Project Proposal

This is the specification document that was produced at the end of the 2nd academic year, and outlines what this project originally intended to achieve.
Christopher N. Bull History-Sensitive Detection of Design Flaws B ...
Christopher N. Bull History-Sensitive Detection of Design Flaws B ...
Christopher N. Bull History-Sensitive Detection of Design Flaws B ...
Christopher N. Bull History-Sensitive Detection of Design Flaws B ...
Christopher N. Bull History-Sensitive Detection of Design Flaws B ...
Christopher N. Bull History-Sensitive Detection of Design Flaws B ...
Christopher N. Bull History-Sensitive Detection of Design Flaws B ...
Christopher N. Bull History-Sensitive Detection of Design Flaws B ...
Christopher N. Bull History-Sensitive Detection of Design Flaws B ...
Christopher N. Bull History-Sensitive Detection of Design Flaws B ...
Christopher N. Bull History-Sensitive Detection of Design Flaws B ...
Christopher N. Bull History-Sensitive Detection of Design Flaws B ...
Christopher N. Bull History-Sensitive Detection of Design Flaws B ...
Christopher N. Bull History-Sensitive Detection of Design Flaws B ...
Christopher N. Bull History-Sensitive Detection of Design Flaws B ...
Christopher N. Bull History-Sensitive Detection of Design Flaws B ...
Christopher N. Bull History-Sensitive Detection of Design Flaws B ...
Christopher N. Bull History-Sensitive Detection of Design Flaws B ...
Christopher N. Bull History-Sensitive Detection of Design Flaws B ...
Christopher N. Bull History-Sensitive Detection of Design Flaws B ...
Christopher N. Bull History-Sensitive Detection of Design Flaws B ...
Christopher N. Bull History-Sensitive Detection of Design Flaws B ...
Christopher N. Bull History-Sensitive Detection of Design Flaws B ...
Christopher N. Bull History-Sensitive Detection of Design Flaws B ...
Christopher N. Bull History-Sensitive Detection of Design Flaws B ...
Christopher N. Bull History-Sensitive Detection of Design Flaws B ...

Mais conteúdo relacionado

Mais procurados

Promise 2011: "An Iterative Semi-supervised Approach to Software Fault Predic...
Promise 2011: "An Iterative Semi-supervised Approach to Software Fault Predic...Promise 2011: "An Iterative Semi-supervised Approach to Software Fault Predic...
Promise 2011: "An Iterative Semi-supervised Approach to Software Fault Predic...CS, NcState
 
A software fault localization technique based on program mutations
A software fault localization technique based on program mutationsA software fault localization technique based on program mutations
A software fault localization technique based on program mutationsTao He
 
A survey of fault prediction using machine learning algorithms
A survey of fault prediction using machine learning algorithmsA survey of fault prediction using machine learning algorithms
A survey of fault prediction using machine learning algorithmsAhmed Magdy Ezzeldin, MSc.
 
Software Defect Prediction Using Radial Basis and Probabilistic Neural Networks
Software Defect Prediction Using Radial Basis and Probabilistic Neural NetworksSoftware Defect Prediction Using Radial Basis and Probabilistic Neural Networks
Software Defect Prediction Using Radial Basis and Probabilistic Neural NetworksEditor IJCATR
 
EdgarDB - the simple, powerful database for scientific research
EdgarDB - the simple, powerful database for scientific researchEdgarDB - the simple, powerful database for scientific research
EdgarDB - the simple, powerful database for scientific researchMark Khoury
 
IEEE 2014 JAVA DATA MINING PROJECTS Security evaluation of pattern classifier...
IEEE 2014 JAVA DATA MINING PROJECTS Security evaluation of pattern classifier...IEEE 2014 JAVA DATA MINING PROJECTS Security evaluation of pattern classifier...
IEEE 2014 JAVA DATA MINING PROJECTS Security evaluation of pattern classifier...IEEEFINALYEARSTUDENTPROJECTS
 
The adoption of machine learning techniques for software defect prediction: A...
The adoption of machine learning techniques for software defect prediction: A...The adoption of machine learning techniques for software defect prediction: A...
The adoption of machine learning techniques for software defect prediction: A...RAKESH RANA
 
Applications of genetic algorithms to malware detection and creation
Applications of genetic algorithms to malware detection and creationApplications of genetic algorithms to malware detection and creation
Applications of genetic algorithms to malware detection and creationUltraUploader
 
A Survey of Security of Multimodal Biometric Systems
A Survey of Security of Multimodal Biometric SystemsA Survey of Security of Multimodal Biometric Systems
A Survey of Security of Multimodal Biometric SystemsIJERA Editor
 
Software testing defect prediction model a practical approach
Software testing defect prediction model   a practical approachSoftware testing defect prediction model   a practical approach
Software testing defect prediction model a practical approacheSAT Journals
 
A Simplified Model for Evaluating Software Reliability at the Developmental S...
A Simplified Model for Evaluating Software Reliability at the Developmental S...A Simplified Model for Evaluating Software Reliability at the Developmental S...
A Simplified Model for Evaluating Software Reliability at the Developmental S...Waqas Tariq
 
Towards a Better Understanding of the Impact of Experimental Components on De...
Towards a Better Understanding of the Impact of Experimental Components on De...Towards a Better Understanding of the Impact of Experimental Components on De...
Towards a Better Understanding of the Impact of Experimental Components on De...Chakkrit (Kla) Tantithamthavorn
 
Development of software defect prediction system using artificial neural network
Development of software defect prediction system using artificial neural networkDevelopment of software defect prediction system using artificial neural network
Development of software defect prediction system using artificial neural networkIJAAS Team
 
Using Developer Information as a Prediction Factor
Using Developer Information as a Prediction FactorUsing Developer Information as a Prediction Factor
Using Developer Information as a Prediction FactorTim Menzies
 

Mais procurados (19)

Software testing
Software testingSoftware testing
Software testing
 
Promise 2011: "An Iterative Semi-supervised Approach to Software Fault Predic...
Promise 2011: "An Iterative Semi-supervised Approach to Software Fault Predic...Promise 2011: "An Iterative Semi-supervised Approach to Software Fault Predic...
Promise 2011: "An Iterative Semi-supervised Approach to Software Fault Predic...
 
A software fault localization technique based on program mutations
A software fault localization technique based on program mutationsA software fault localization technique based on program mutations
A software fault localization technique based on program mutations
 
D0423022028
D0423022028D0423022028
D0423022028
 
@#$@#$@#$"""@#$@#$"""
@#$@#$@#$"""@#$@#$"""@#$@#$@#$"""@#$@#$"""
@#$@#$@#$"""@#$@#$"""
 
A survey of fault prediction using machine learning algorithms
A survey of fault prediction using machine learning algorithmsA survey of fault prediction using machine learning algorithms
A survey of fault prediction using machine learning algorithms
 
Software Defect Prediction Using Radial Basis and Probabilistic Neural Networks
Software Defect Prediction Using Radial Basis and Probabilistic Neural NetworksSoftware Defect Prediction Using Radial Basis and Probabilistic Neural Networks
Software Defect Prediction Using Radial Basis and Probabilistic Neural Networks
 
EdgarDB - the simple, powerful database for scientific research
EdgarDB - the simple, powerful database for scientific researchEdgarDB - the simple, powerful database for scientific research
EdgarDB - the simple, powerful database for scientific research
 
IEEE 2014 JAVA DATA MINING PROJECTS Security evaluation of pattern classifier...
IEEE 2014 JAVA DATA MINING PROJECTS Security evaluation of pattern classifier...IEEE 2014 JAVA DATA MINING PROJECTS Security evaluation of pattern classifier...
IEEE 2014 JAVA DATA MINING PROJECTS Security evaluation of pattern classifier...
 
The adoption of machine learning techniques for software defect prediction: A...
The adoption of machine learning techniques for software defect prediction: A...The adoption of machine learning techniques for software defect prediction: A...
The adoption of machine learning techniques for software defect prediction: A...
 
Applications of genetic algorithms to malware detection and creation
Applications of genetic algorithms to malware detection and creationApplications of genetic algorithms to malware detection and creation
Applications of genetic algorithms to malware detection and creation
 
Ssbse12b.ppt
Ssbse12b.pptSsbse12b.ppt
Ssbse12b.ppt
 
A Survey of Security of Multimodal Biometric Systems
A Survey of Security of Multimodal Biometric SystemsA Survey of Security of Multimodal Biometric Systems
A Survey of Security of Multimodal Biometric Systems
 
Software testing defect prediction model a practical approach
Software testing defect prediction model   a practical approachSoftware testing defect prediction model   a practical approach
Software testing defect prediction model a practical approach
 
A Simplified Model for Evaluating Software Reliability at the Developmental S...
A Simplified Model for Evaluating Software Reliability at the Developmental S...A Simplified Model for Evaluating Software Reliability at the Developmental S...
A Simplified Model for Evaluating Software Reliability at the Developmental S...
 
Towards a Better Understanding of the Impact of Experimental Components on De...
Towards a Better Understanding of the Impact of Experimental Components on De...Towards a Better Understanding of the Impact of Experimental Components on De...
Towards a Better Understanding of the Impact of Experimental Components on De...
 
Development of software defect prediction system using artificial neural network
Development of software defect prediction system using artificial neural networkDevelopment of software defect prediction system using artificial neural network
Development of software defect prediction system using artificial neural network
 
Using Developer Information as a Prediction Factor
Using Developer Information as a Prediction FactorUsing Developer Information as a Prediction Factor
Using Developer Information as a Prediction Factor
 
Dc35579583
Dc35579583Dc35579583
Dc35579583
 

Destaque

Techniques for integrating machine learning with knowledge ...
Techniques for integrating machine learning with knowledge ...Techniques for integrating machine learning with knowledge ...
Techniques for integrating machine learning with knowledge ...butest
 
SATANJEEV BANERJEE
SATANJEEV BANERJEESATANJEEV BANERJEE
SATANJEEV BANERJEEbutest
 
T2L3.doc
T2L3.docT2L3.doc
T2L3.docbutest
 
Teaching Machine Learning to Design Students
Teaching Machine Learning to Design StudentsTeaching Machine Learning to Design Students
Teaching Machine Learning to Design Studentsbutest
 
Advanced Web Design and Development - Spring 2005.doc
Advanced Web Design and Development - Spring 2005.docAdvanced Web Design and Development - Spring 2005.doc
Advanced Web Design and Development - Spring 2005.docbutest
 
BenMartine.doc
BenMartine.docBenMartine.doc
BenMartine.docbutest
 
Learning for Optimization: EDAs, probabilistic modelling, or ...
Learning for Optimization: EDAs, probabilistic modelling, or ...Learning for Optimization: EDAs, probabilistic modelling, or ...
Learning for Optimization: EDAs, probabilistic modelling, or ...butest
 
EL MODELO DE NEGOCIO DE YOUTUBE
EL MODELO DE NEGOCIO DE YOUTUBEEL MODELO DE NEGOCIO DE YOUTUBE
EL MODELO DE NEGOCIO DE YOUTUBEbutest
 
1. MPEG I.B.P frame之不同
1. MPEG I.B.P frame之不同1. MPEG I.B.P frame之不同
1. MPEG I.B.P frame之不同butest
 

Destaque (9)

Techniques for integrating machine learning with knowledge ...
Techniques for integrating machine learning with knowledge ...Techniques for integrating machine learning with knowledge ...
Techniques for integrating machine learning with knowledge ...
 
SATANJEEV BANERJEE
SATANJEEV BANERJEESATANJEEV BANERJEE
SATANJEEV BANERJEE
 
T2L3.doc
T2L3.docT2L3.doc
T2L3.doc
 
Teaching Machine Learning to Design Students
Teaching Machine Learning to Design StudentsTeaching Machine Learning to Design Students
Teaching Machine Learning to Design Students
 
Advanced Web Design and Development - Spring 2005.doc
Advanced Web Design and Development - Spring 2005.docAdvanced Web Design and Development - Spring 2005.doc
Advanced Web Design and Development - Spring 2005.doc
 
BenMartine.doc
BenMartine.docBenMartine.doc
BenMartine.doc
 
Learning for Optimization: EDAs, probabilistic modelling, or ...
Learning for Optimization: EDAs, probabilistic modelling, or ...Learning for Optimization: EDAs, probabilistic modelling, or ...
Learning for Optimization: EDAs, probabilistic modelling, or ...
 
EL MODELO DE NEGOCIO DE YOUTUBE
EL MODELO DE NEGOCIO DE YOUTUBEEL MODELO DE NEGOCIO DE YOUTUBE
EL MODELO DE NEGOCIO DE YOUTUBE
 
1. MPEG I.B.P frame之不同
1. MPEG I.B.P frame之不同1. MPEG I.B.P frame之不同
1. MPEG I.B.P frame之不同
 

Semelhante a Christopher N. Bull History-Sensitive Detection of Design Flaws B ...

Implementation of reducing features to improve code change based bug predicti...
Implementation of reducing features to improve code change based bug predicti...Implementation of reducing features to improve code change based bug predicti...
Implementation of reducing features to improve code change based bug predicti...eSAT Journals
 
Integrated Analysis of Traditional Requirements Engineering Process with Agil...
Integrated Analysis of Traditional Requirements Engineering Process with Agil...Integrated Analysis of Traditional Requirements Engineering Process with Agil...
Integrated Analysis of Traditional Requirements Engineering Process with Agil...zillesubhan
 
DESQA a Software Quality Assurance Framework
DESQA a Software Quality Assurance FrameworkDESQA a Software Quality Assurance Framework
DESQA a Software Quality Assurance FrameworkIJERA Editor
 
CRIME EXPLORATION AND FORECAST
CRIME EXPLORATION AND FORECASTCRIME EXPLORATION AND FORECAST
CRIME EXPLORATION AND FORECASTIRJET Journal
 
Sofware Engineering Important Past Paper 2019
Sofware Engineering Important Past Paper 2019Sofware Engineering Important Past Paper 2019
Sofware Engineering Important Past Paper 2019MuhammadTalha436
 
Software Engineering with Objects (M363) Final Revision By Kuwait10
Software Engineering with Objects (M363) Final Revision By Kuwait10Software Engineering with Objects (M363) Final Revision By Kuwait10
Software Engineering with Objects (M363) Final Revision By Kuwait10Kuwait10
 
Bt0081 software engineering
Bt0081 software engineeringBt0081 software engineering
Bt0081 software engineeringTechglyphs
 
Defect effort prediction models in software maintenance projects
Defect  effort prediction models in software maintenance projectsDefect  effort prediction models in software maintenance projects
Defect effort prediction models in software maintenance projectsiaemedu
 
STATE-OF-THE-ART IN EMPIRICAL VALIDATION OF SOFTWARE METRICS FOR FAULT PRONEN...
STATE-OF-THE-ART IN EMPIRICAL VALIDATION OF SOFTWARE METRICS FOR FAULT PRONEN...STATE-OF-THE-ART IN EMPIRICAL VALIDATION OF SOFTWARE METRICS FOR FAULT PRONEN...
STATE-OF-THE-ART IN EMPIRICAL VALIDATION OF SOFTWARE METRICS FOR FAULT PRONEN...IJCSES Journal
 
Using Fuzzy Clustering and Software Metrics to Predict Faults in large Indust...
Using Fuzzy Clustering and Software Metrics to Predict Faults in large Indust...Using Fuzzy Clustering and Software Metrics to Predict Faults in large Indust...
Using Fuzzy Clustering and Software Metrics to Predict Faults in large Indust...IOSR Journals
 
Software testing techniques - www.testersforum.com
Software testing techniques - www.testersforum.comSoftware testing techniques - www.testersforum.com
Software testing techniques - www.testersforum.comwww.testersforum.com
 
Quality Attribute: Testability
Quality Attribute: TestabilityQuality Attribute: Testability
Quality Attribute: TestabilityPranay Singh
 
A Review on Software Fault Detection and Prevention Mechanism in Software Dev...
A Review on Software Fault Detection and Prevention Mechanism in Software Dev...A Review on Software Fault Detection and Prevention Mechanism in Software Dev...
A Review on Software Fault Detection and Prevention Mechanism in Software Dev...iosrjce
 
Clone of an organization
Clone of an organizationClone of an organization
Clone of an organizationIRJET Journal
 
Defect effort prediction models in software
Defect effort prediction models in softwareDefect effort prediction models in software
Defect effort prediction models in softwareIAEME Publication
 

Semelhante a Christopher N. Bull History-Sensitive Detection of Design Flaws B ... (20)

Implementation of reducing features to improve code change based bug predicti...
Implementation of reducing features to improve code change based bug predicti...Implementation of reducing features to improve code change based bug predicti...
Implementation of reducing features to improve code change based bug predicti...
 
Integrated Analysis of Traditional Requirements Engineering Process with Agil...
Integrated Analysis of Traditional Requirements Engineering Process with Agil...Integrated Analysis of Traditional Requirements Engineering Process with Agil...
Integrated Analysis of Traditional Requirements Engineering Process with Agil...
 
Slcm sharbani bhattacharya
Slcm sharbani bhattacharyaSlcm sharbani bhattacharya
Slcm sharbani bhattacharya
 
Chapter three
Chapter threeChapter three
Chapter three
 
DESQA a Software Quality Assurance Framework
DESQA a Software Quality Assurance FrameworkDESQA a Software Quality Assurance Framework
DESQA a Software Quality Assurance Framework
 
CRIME EXPLORATION AND FORECAST
CRIME EXPLORATION AND FORECASTCRIME EXPLORATION AND FORECAST
CRIME EXPLORATION AND FORECAST
 
Sofware Engineering Important Past Paper 2019
Sofware Engineering Important Past Paper 2019Sofware Engineering Important Past Paper 2019
Sofware Engineering Important Past Paper 2019
 
Software Engineering with Objects (M363) Final Revision By Kuwait10
Software Engineering with Objects (M363) Final Revision By Kuwait10Software Engineering with Objects (M363) Final Revision By Kuwait10
Software Engineering with Objects (M363) Final Revision By Kuwait10
 
Bt0081 software engineering
Bt0081 software engineeringBt0081 software engineering
Bt0081 software engineering
 
Defect effort prediction models in software maintenance projects
Defect  effort prediction models in software maintenance projectsDefect  effort prediction models in software maintenance projects
Defect effort prediction models in software maintenance projects
 
STATE-OF-THE-ART IN EMPIRICAL VALIDATION OF SOFTWARE METRICS FOR FAULT PRONEN...
STATE-OF-THE-ART IN EMPIRICAL VALIDATION OF SOFTWARE METRICS FOR FAULT PRONEN...STATE-OF-THE-ART IN EMPIRICAL VALIDATION OF SOFTWARE METRICS FOR FAULT PRONEN...
STATE-OF-THE-ART IN EMPIRICAL VALIDATION OF SOFTWARE METRICS FOR FAULT PRONEN...
 
Using Fuzzy Clustering and Software Metrics to Predict Faults in large Indust...
Using Fuzzy Clustering and Software Metrics to Predict Faults in large Indust...Using Fuzzy Clustering and Software Metrics to Predict Faults in large Indust...
Using Fuzzy Clustering and Software Metrics to Predict Faults in large Indust...
 
Too many files
Too many filesToo many files
Too many files
 
Software testing techniques - www.testersforum.com
Software testing techniques - www.testersforum.comSoftware testing techniques - www.testersforum.com
Software testing techniques - www.testersforum.com
 
Quality Attribute: Testability
Quality Attribute: TestabilityQuality Attribute: Testability
Quality Attribute: Testability
 
A Review on Software Fault Detection and Prevention Mechanism in Software Dev...
– Metrics for HealthWatcher (version 10)
– Metrics for MobileMedia (version 1)
WMC (Weighted Method Count) is the sum of the complexities of all the methods in a class.
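To make the definition concrete, the sketch below sums a per-method complexity figure to give a class's WMC. It is illustrative only: it assumes the per-method complexities (for example, cyclomatic complexity) have already been computed, and the class and method names are hypothetical rather than Jistory's actual code.

    import java.util.Map;

    // Hypothetical illustration: WMC is the sum of the complexities of a
    // class's methods. If each method is assigned a complexity of 1, WMC
    // degenerates to a simple method count.
    public class WmcExample {
        static int wmc(Map<String, Integer> complexityPerMethod) {
            int total = 0;
            for (int c : complexityPerMethod.values()) {
                total += c; // accumulate each method's complexity
            }
            return total;
        }
    }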
Jistory – this column lists the classes that were flagged as god classes by Jistory's conventional detection strategy.
Together (audit) – this column lists all of the classes that were flagged as god classes by Together's audit feature.
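As context for these columns, detection strategies in the literature (such as Marinescu's god class strategy) combine exactly the kinds of metrics tabulated in this appendix with threshold values. The sketch below is an illustrative reconstruction in that style; the thresholds are assumptions made for illustration and are not taken from Jistory's configuration.

    // Illustrative Marinescu-style god class detection strategy using the
    // three metrics tabulated in this appendix. The thresholds (FEW,
    // VERY_HIGH, ONE_THIRD) are assumed for illustration only.
    public class GodClassStrategyExample {
        static final int FEW = 3;                  // assumed threshold for AOFD
        static final int VERY_HIGH = 47;           // assumed threshold for WMC
        static final double ONE_THIRD = 1.0 / 3.0; // threshold for TCC

        // A class is flagged when it uses much foreign data, is very
        // complex, and has low cohesion, all at the same time.
        static boolean isGodClass(int aofd, double tcc, int wmc) {
            return aofd > FEW && wmc >= VERY_HIGH && tcc < ONE_THIRD;
        }
    }

With these assumed thresholds, for example, PhotoController's Jistory values in the table below (AOFD 5, TCC ≈ 0.21, WMC 48) would be the only ones flagged.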
Date – detected by Together
PersistenceMechanism – detected by Jistory and Together
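Jistory's CCON (constancy) metric is computed from the project's version history, gathered through the SVN integration, and is defined in Chapter 3 of the report; the table below reports it only for the class that Jistory flagged. Purely as a hypothetical illustration of what a constancy-style measure can look like, the sketch below computes the fraction of revisions in which a class was left unchanged. This is an assumption made for illustration, not Jistory's actual CCON formula.

    import java.util.List;

    // Hypothetical constancy-style measure, NOT necessarily Jistory's
    // CCON formula: the fraction of analysed revisions in which the
    // class's source file was left unchanged. A value near 1 means the
    // class rarely changed; a value near 0 means it changed in almost
    // every revision.
    public class ConstancyExample {
        static double constancy(List<Boolean> changedPerRevision) {
            if (changedPerRevision.isEmpty()) {
                return 1.0; // no recorded history: treat as fully constant
            }
            long unchanged = changedPerRevision.stream()
                                               .filter(changed -> !changed)
                                               .count();
            return (double) unchanged / changedPerRevision.size();
        }
    }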
The CCON (constancy) metric is given for PhotoController, as Jistory detected this class as a god class.

Resource                         Together             Jistory
                                 AOFD  TCC   WMC      AOFD  TCC          WMC   CCON (Constancy)
AbstractController               1     0.07  15       1     0.166666667  15
AddPhotoToAlbum                  2     0.2   7        2     0.333333333  7
AlbumController                  6     0     18       7     0            22
AlbumData                        0     0.07  16       0     0.766666667  23
AlbumListScreen                  2     0     9        2     0            9
BaseController                   2     0     10       3     0            11
BaseMessaging                    0     0     3        0     0            3
BaseThread                       1     0     2        1     0            2
Constants                        0     0     0        0     0            0
ControllerInterface              0     0     2        -1    0            -1
ImageAccessor                    2     0.05  31       2     0.411764706  45
ImageData                        0     0.08  15       0     0.152380952  15
ImageNotFoundException           0     0     4        0     0.166666667  4
ImagePathNotValidException       0     0     4        0     0.166666667  4
ImageUtil                        0     0.33  12       0     0.166666667  18
InvalidArrayFormatException      0     0     3        0     0            3
InvalidImageDataException        0     0     4        0     0.166666667  4
InvalidImageFormatException      0     0     3        0     0            3
InvalidPhotoAlbumNameException   0     0     2        0     0            2
MainUIMidlet                     1     0     4        1     0            4
NetworkScreen                    2     0     4        2     0.166666667  4
NewLabelScreen                   2     0.33  5        2     0.4          5
NullAlbumDataReference           0     0     4        0     0.166666667  4
PersistenceMechanismException    0     0     4        0     0.166666667  4
PhotoController                  4     0.07  30       5     0.214285714  48    0.664606755943835
PhotoListController              3     0     16       4     0            17
PhotoListScreen                  2     0     2        2     0            2
PhotoViewController              3     0     11       4     1            16
PhotoViewScreen                  5     0.25  9        5     0.533333333  11
ScreenSingleton                  0     0.2   7        0     0.2          7
SmsMessaging                     2     0.1   19       2     0.348484848  25
SmsReceiverController            2     1     6        3     0.333333333  6
SmsReceiverThread                2     0     3        2     1            5
SmsSenderController              2     0     5        3     1            8
SmsSenderThread                  1     0.2   7        1     0.571428571  7
SplashScreen                     0     0.1   6        0     0            7
SplashScreen.CountDown           0     0     1
UnavailablePhotoAlbumException   0     0     4        0     0.166666667  4

Appendix E
Final Year Project Proposal
This is the specification document that was produced at the end of the 2nd academic year, and it outlines what this project originally intended to achieve.
  • 22. The CCON metric is given for this class as Jistory detected this class as being a god class.ResourceTogetherJistoryAOFDTCCWMCAOFDTCCWMCCCON (Constancy)AbstractController10.071510.16666666715 AddPhotoToAlbum20.2720.3333333337 AlbumController60187022 AlbumData00.071600.76666666723 AlbumListScreen209209 BaseController20103011 BaseMessaging003003 BaseThread102102 Constants000000 ControllerInterface002-10-1 ImageAccessor20.053120.41176470645 ImageData00.081500.15238095215 ImageNotFoundException00400.1666666674 ImagePathNotValidException00400.1666666674 ImageUtil00.331200.16666666718 InvalidArrayFormatException003003 InvalidImageDataException00400.1666666674 InvalidImageFormatException003003 InvalidPhotoAlbumNameException002002 MainUIMidlet104104 NetworkScreen20420.1666666674 NewLabelScreen20.33520.45 NullAlbumDataReference00400.1666666674 PersistenceMechanismException00400.1666666674 PhotoController40.073050.214285714480.664606755943835PhotoListController30164017 PhotoListScreen202202 PhotoViewController30114116 PhotoViewScreen50.25950.53333333311 ScreenSingleton00.2700.27 SmsMessaging20.11920.34848484825 SmsReceiverController21630.3333333336 SmsReceiverThread203215 SmsSenderController205318 SmsSenderThread10.2710.5714285717 SplashScreen00.16007 SplashScreen.CountDown001    UnavailablePhotoAlbumException00400.1666666674 <br />Appendix E<br />Final Year Project Proposal<br />This is the specification document that was produced at the end of the 2nd academic year, and outlines what this project originally intended to achieve.<br />