ICESCRUM
                 Application ICESCRUM2

                           Audit Report
                                2010-02-10



     This document is a sample audit report produced automatically
from the results of the analysis of the application on the Kalistick platform.
        It does not include any specific comments on the results.


        Its purpose is to serve as a model for building custom reports;
          it illustrates the platform's ability to render a clear
             and comprehensible view of an application's quality.




      This document is confidential and is the property of Kalistick.
        It should not be circulated or modified without permission.

                                  Kalistick
                             13 av Albert Einstein
                             F-69100 Villeurbanne
                             +33 (0) 486 68 89 42
                            contact@kalistick.com
                             www.kalistick.com
Code audit of IceScrum2 application                                                                      2010-02-10



        1 Executive Summary
        The Quality Cockpit uses static analysis techniques: it does not execute the application, but analyzes the
        elements that compose it (code, test results, architecture ...). The results are correlated, aggregated and
        compared within the project context to identify risks related to quality. This report presents the results.



                                                                        Variation compared to the objective

                                                                    This chart compares the current status of the project
                                                                    to the objectives set for each quality factor.

                                                                    The goal, set at the initialization of the audit,
                                                                    represents the importance of each quality factor. It
                                                                    is intended to define the rules to follow during
                                                                    development and the accepted tolerance.




                                                                           Rate of overall non-compliance

                                                                    This gauge shows the overall level of quality of the
                                                                    application compared to its objective. It displays the
                                                                    percentage of the application (source code)
                                                                    regarded as not-compliant.

                                                                    According to the adopted configuration, a rate
                                                                    higher than 15% indicates the need for further
                                                                    analysis.




                                                                              Origin of non-compliances

                                                                    This graph identifies the technical origin of
                                                                    detected non-compliances, and the main areas of
                                                                    improvement.

                                                                    Depending on the elements submitted for analysis,
                                                                    some quality domains may not be evaluated.




Confidential – This document is the property of Kalistick                                                         2/58




        Report Organization
        This report presents the concepts of Quality Cockpit, the goal and the associated technical requirements
        before proceeding with the summary results and detailed results for each technical area.




        1     Executive Summary ...................................................................................................................................... 2
        2     Introduction .................................................................................................................................................. 4
            2.1      The Quality Cockpit............................................................................................................................... 4
            2.2      The analysis approach ........................................................................................................ 4
        3     Quality objective........................................................................................................................................... 7
            3.1      The quality profile ................................................................................................................................ 7
            3.2      The technical requirements ................................................................................................................. 7
        4     Summary of results..................................................................................................................................... 10
            4.1      Project status ...................................................................................................................................... 10
            4.2      Benchmarking ..................................................................................................................................... 13
            4.3      Modeling application.......................................................................................................................... 17
        5     Detailed results........................................................................................................................................... 19
            5.1      Detail by quality factors...................................................................................................................... 19
            5.2      Implementation .................................................................................................................................. 20
            5.3      Structure ............................................................................................................................................. 24
            5.4      Test ..................................................................................................................................................... 31
            5.5      Architecture ........................................................................................................................................ 38
            5.6      Duplication ......................................................................................................................................... 39
            5.7      Documentation................................................................................................................................... 41
        6     Action Plan.................................................................................................................................................. 43
        7     Glossary ...................................................................................................................................................... 45
        8     Annex .......................................................................................................................................................... 47
            8.1      Cyclomatic complexity........................................................................................................................ 47
            8.2      The coupling ....................................................................................................................................... 49
            8.3      TRI and TEI .......................................................................................................................................... 50
            8.4      Technical Requirements ..................................................................................................................... 52







        2 Introduction

        2.1 The Quality Cockpit
        This audit is based on an industrialized code analysis process. This industrialization ensures results that are
        reliable and easily comparable with the results of other audits.

        The analysis process is based on the "Quality Cockpit" platform, available through a SaaS1 model
        (https://cockpit.kalistick.com). This platform has the advantage of providing a unique knowledge base that
        centralizes the results of the statistical analysis of millions of lines of code, continuously enriched with new
        analyses. It makes it possible to perform comparative analyses with other similar projects.

        2.2 The analysis approach
        The analysis focuses on the code of the application (source code and binary code), for Java (JEE) or C# (.NET)
        technologies. It is a static analysis (without runtime execution), supplemented by correlation with
        information from development tools already in place for the project: version control system, unit
        testing frameworks, code coverage tools.

        The results are presented through an analytical approach based on three main dimensions:

                   The quality factors, which determine the nature of the impact of detected non-compliances on the
                    quality of the application
                   The quality domains, which specify the technical origin of non-compliances
                   The severity levels, which position non-compliances on a severity scale to characterize their
                    priority




        1
            Software as a Service: application accessible remotely via the Internet (using a standard browser)



        2.2.1 The quality factors
        The quality factors standardize the set of quality attributes to which the application should conform,
        according to ISO 9126 (see notes 2 and 3 below):

                 Maintainability. Ability of software to be easily repaired, depending on the effort required to locate,
                  identify and correct errors.

                 Reliability. Ability of software to function properly, delivering the expected service under normal
                  operating conditions.

                 Changeability. Ability of software to evolve, depending on the effort required to add,
                  delete, and modify the functions of the system in operation.

                 Security. Ability of software to operate within the constraints of integrity, confidentiality and
                  traceability requirements.

                 Transferability. Ability to perform maintenance and evolution of software by a new team separate
                  from the one which developed the original software.

                 Efficiency. Relationship between the level of software performance and the number of resources
                  required to operate in nominal conditions.




        2.2.2 The quality domains
        The quality domains determine the nature of problems according to their technical origin. There are six of them:

                 Implementation. The problems inherent in coding: misuse of language, potential bugs, code hard to
                  understand ... These problems can affect one or more of the six quality factors.

                 Structure. Problems related to the code organization: methods too long, too complex, with too many
                  dependencies ... These issues impact maintainability and changeability of the application.

                 Test. Describes how the application is tested, based on the results of unit tests (failure rate, execution
                  time ...) but also on the nature of the code covered by test execution. The objective is to ensure
                  that the tests cover the critical parts of the application.




        2
          ISO/IEC 9126-1:2001 Software engineering — Product quality — Part 1: Quality model :
         http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=22749
        3
          The analysis focuses on a subset of ISO 9126 in order to concentrate on dimensions that can be checked automatically.



                 Architecture. Problems with the software architecture of the application. The platform allows the
                  definition of an architectural model to modularize the application into layers or components and
                  define communication constraints between them. The analysis identifies in the code all the calls
                  which do not satisfy these constraints, to detect the maintainability, changeability and security risk
                  levels.

                 Documentation. Problems related to lack of documentation in the code. This area primarily impacts
                  the transferability of code.

                 Duplication. Identification of all significant copy-pastes in the application. They impact reliability,
                  maintainability, transferability and changeability.



        2.2.3 Severity levels
        The severity levels are intended to characterize the priority of correction of non-compliances. This priority
        depends on the severity of the impact of non-compliance, but also on the effort required for correction:
        some moderately critical problems might be marked with a high level of severity because of the triviality of
        their resolution.

        To simplify interpretation, the severity levels are expressed using a four-level scale. The first is an error, the
        others are warnings, from most to least severe:

                 Forbidden

                 Highly inadvisable

                 Inadvisable

                 To be avoided



        Unlike the Forbidden level, the other severity levels are managed with a tolerance threshold, which
        increases as severity decreases.







        3 Quality objective
        One of the distinctive features of "Quality Cockpit" is that the analysis is performed according to the real
        quality needs of the project, in order to avoid unnecessary effort and to ensure greater relevance of the
        identified quality risks.

        These requirements are formalized by defining the "quality profile" of the application, which characterizes
        the quality levels expected on each of the six main quality factors. This profile is then translated as "technical
        requirements" which are technical rules to be followed by the developers.

        3.1 The quality profile
        For this audit, the profile is established as follows:




                                                            See the Quality Cockpit




        3.2 The technical requirements
        Based on the above quality profile, technical requirements have been selected from the “Quality Cockpit”
        knowledge base. These technical requirements cover the six quality domains (implementation, structure,
        testing, architecture, documentation, duplication) and are configured according to the quality profile
        (thresholds, levels of severity ...). The objective is to ensure a calibration of requirements that ensures the
        highest return on investment.






        Here are the details of these technical requirements:

        Domain: Implementation
        Rule: -
        Explanation: According to your profile, between 150 and 200 rules were selected. They are exhaustively
        presented in the appendix of the report (8.4.1 Implementation rules).
        Objective: avoid bad practices and apply best practices related to the technology used.

        Domain: Structure
        Rule: Size of methods
        Explanation: Number of statements. This measure is different from the number of lines of code: it does not
        include comment lines or blank lines, but only lines with at least one statement.
        Objective: avoid processing blocks that are difficult to understand.
        Threshold for the project: number of lines: 100

        Rule: Complexity of methods
        Explanation: Cyclomatic complexity of a method. It measures the complexity of the control flow of a method
        by counting the number of independent paths covering all possible cases. The higher the number, the harder
        the code is to maintain and test.
        Objective: avoid processing blocks that are difficult to understand, impossible to test, and which tend to
        have a significant failure rate.
        Threshold for the project: cyclomatic complexity: 20

        Rule: Complexity and coupling of methods
        Explanation: Identifies methods that are difficult to understand, test and maintain because of moderate
        complexity (cyclomatic complexity) combined with numerous references to other types (efferent coupling).
        Objective: avoid processing blocks that are difficult to understand and to test.
        Thresholds for the project: cyclomatic complexity: 15; efferent coupling: 20
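As a rough illustration of how cyclomatic complexity is counted (this is not the platform's actual algorithm, which works on the parsed control-flow graph), it can be approximated as 1 plus the number of decision points in a method:

```python
# Hypothetical sketch: approximate McCabe cyclomatic complexity as
# 1 + the number of decision points found in the method's source.
# Real analyzers parse the code; this keyword scan is illustrative only.
import re

BRANCH_PATTERN = r"\b(if|for|while|case|catch)\b|\&\&|\|\|"

def approx_cyclomatic_complexity(method_source: str) -> int:
    """Count decision points in a method body and add 1."""
    return 1 + len(re.findall(BRANCH_PATTERN, method_source))

method = """
    if (user == null) { return false; }
    for (Item i : items) {
        if (i.isActive() && i.size() > limit) { flag(i); }
    }
"""
print(approx_cyclomatic_complexity(method))  # 1 + if + for + if + && = 5
```

A method with no branches thus has complexity 1; each `if`, loop, `case`, `catch` or short-circuit operator adds an independent path to test.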








        Domain: Test
        Rule: Test coverage of methods
        Explanation: Rate of code coverage for a method. This metric is standardized by our platform based on the
        raw code coverage measures when they are provided in the project archive.
        This rule requires a minimum level of testing (code coverage) for each method of the application, according
        to its TRI (Test Relevancy Index); the TRI of each method assesses the risk that it contains bugs. Its
        calculation takes into account the business risks defined for the application.
        Objective: focus the test strategy and test efforts on the sensitive areas of the application and verify them.
        These sensitive areas are evaluated according to their propensity to contain bugs and according to the
        business risks defined for the application.
        Details of the thresholds are provided in the annex to the report (8.4.2 Code coverage).

        Domain: Architecture
        Rule: Rules defined specifically through the architecture model
        Explanation: See the architecture model defined for the application to check architecture constraints.
        Objective: ensure that developments follow the expected architecture model and do not introduce
        inconsistencies which could become security holes or maintenance and evolution issues.
        Note: architecture violations are not taken into account in the calculation of non-compliance.

        Domain: Documentation
        Rule: Header documentation of methods
        Explanation: Identifies methods of moderate complexity which have no documentation header. The methods
        considered are those whose cyclomatic complexity and number of statements exceed the thresholds defined
        specifically for the project.
        Objective: ensure that documentation is available in key processing blocks to facilitate any changes in the
        development team (transferability).
        Thresholds for the project: cyclomatic complexity: 10; number of lines: 50

        Domain: Duplication
        Rule: Detection of duplications
        Explanation: Duplicated blocks are flagged as non-compliant beyond 20 statements.
        Objective: detect identical blocks of code repeated in several places in the application, which often causes
        inconsistencies when making changes and increases testing and development costs.
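A duplication detector of this kind can be sketched as a sliding window over the sequence of normalized statements, flagging any window of 20 statements that appears in more than one place. This toy version illustrates the idea only; it is not Kalistick's implementation:

```python
# Toy duplication detector: collect every window of WINDOW consecutive
# statements; a window seen at two different positions is a duplicate.
from collections import defaultdict

WINDOW = 20  # threshold from the report: blocks beyond 20 statements

def find_duplicates(statements, window=WINDOW):
    """Return lists of start indices whose `window` statements match."""
    seen = defaultdict(list)
    for start in range(len(statements) - window + 1):
        key = tuple(statements[start:start + window])
        seen[key].append(start)
    return [locs for locs in seen.values() if len(locs) > 1]

# Synthetic input: 50 statements that repeat with period 25.
code = ["stmt%d" % (i % 25) for i in range(50)]
dups = find_duplicates(code)
print(dups)  # [[0, 25], [1, 26], [2, 27], [3, 28], [4, 29], [5, 30]]
```

Real detectors normalize identifiers and merge overlapping windows before reporting, so the six overlapping hits above would be reported as one duplicated block.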








        4 Summary of results
        This chapter summarizes the status of the project using global indicators. These indicators measure the
        intrinsic quality of the project, but also compare its situation to other projects using “Quality Cockpit”
        knowledge base.

        4.1 Project status
        The following indicators are related to the intrinsic situation of the project.


        4.1.1 Rate of overall non-compliance
        The rate of non-compliance measures the percentage of application code considered as non-compliant.




                                                            See the Quality Cockpit




                       Specifically, this represents the ratio between the number of statements in non-compliant
                 classes and the total number of statements. A class is considered non-compliant if at least
                 one of the following statements is true:

                   - A forbidden non-compliance is detected in the class
                   - A set of highly inadvisable, inadvisable, or to-be-avoided non-compliances is detected in
                 the class beyond a certain threshold. This threshold depends on the severity of each non-
                 compliance and on the quality profile, which adjusts the tolerance.
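One hypothetical reading of this decision rule can be sketched as follows (the weights and the tolerance threshold below are illustrative assumptions; in practice they come from the quality profile):

```python
# Hypothetical sketch of the class-compliance rule: any "forbidden"
# violation disqualifies the class outright; lesser severities
# accumulate a weighted score compared against a tolerance threshold.
SEVERITY_WEIGHT = {          # assumed weights, for illustration only
    "highly_inadvisable": 3,
    "inadvisable": 2,
    "to_be_avoided": 1,
}

def is_non_compliant(violations, tolerance=10):
    """violations: list of severity labels detected in one class."""
    if "forbidden" in violations:
        return True
    score = sum(SEVERITY_WEIGHT.get(v, 0) for v in violations)
    return score > tolerance

print(is_non_compliant(["forbidden"]))               # True
print(is_non_compliant(["to_be_avoided"] * 3))       # False (score 3)
print(is_non_compliant(["highly_inadvisable"] * 4))  # True (score 12)
```

The overall non-compliance rate is then the share of statements that live in classes for which this function returns true.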






        4.1.2 Deviation from target
        This chart summarizes the difference between the target as represented by the quality profile and the
        current status of the project. This difference is shown for each quality factor:




                                                            See the Quality Cockpit




                      The level of non-compliance is calculated for each quality factor, and then weighted by the
                level of requirements set for the related quality factor.




                          Quality theme           Classes       Significant non-compliances   % application
                          Changeability             27                       107                  29%
                          Efficiency                 7                        8                    5%
                          Maintainability           40                       216                  41%
                          Reliability               40                       136                  37%
                          Security                   0                        0                    0%
                          Transferability           32                       131                  38%
                          [Total]                   53                       264                 49.99%




                      Detailed results specify for each quality factor: the number of non-compliant classes, the
                number of violations for selected rules, and the percentage of application code involved in non-
                compliant classes.






        4.1.3 Origin of non-compliances
        The following chart shows the distribution of non-compliances according to their technical origin:




                                                            See the Quality Cockpit




                       This chart compares each domain according to the impact on the quality of the application
                 of the rules associated with it. The impact is measured from the number of statements in non-
                 compliant classes.




        4.1.4 Volume metrics
        The following table specifies the volume of the analyzed application:

                                         Metric                                 Value     Trend
                                         Line count                             47671   +14.93%
                                         Statement count                        24034   +18.36%
                                         Method count                           4384    +13.75%
                                         Class count                             230    +10.58%
                                         Package count                           43      +4.88%



                                                            See the Quality Cockpit




                      A "line" corresponds to a physical line of a source file; it may be a blank line or a
                 comment line. A "statement" is a primary unit of code: it can be written across multiple lines,
                 and a single line may contain multiple statements. For simplicity, a statement is delimited by a
                 semicolon (;) or a left brace ({).
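Under that simplification, statement counting reduces to scanning for `;` and `{`. A minimal sketch (illustrative only; a real parser must also handle strings, comments and `for`-loop headers):

```python
# Toy statement counter following the report's simplification:
# a statement ends at a semicolon (;) or at a left brace ({).
def count_statements(source: str) -> int:
    return source.count(";") + source.count("{")

snippet = "if (ok) { a = 1; b = 2; }"
print(count_statements(snippet))  # 3: the '{' plus two ';'
```

This also shows why line count and statement count diverge: the snippet above is one physical line but three statements.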






        4.2 Benchmarking
        The “Quality Cockpit" knowledge base allows a comparative analysis of the project with other projects
        reviewed on the platform. The objective is to measure its level of quality compared to an overall average.

        This benchmark compares the project with two categories of projects:

                 The “Intra-Cockpit” projects: projects analyzed continuously on the platform, therefore, with a
                  quality level above average (a priori)
                 The “Extra-Cockpit” projects: the projects reviewed from time to time on the platform in audit
                  mode, so with a highly heterogeneous quality.

        Note: since each project has its own specific quality profile, benchmarking does not take into account the
        project configuration, but instead uses raw measures.


        4.2.1 Comparison on implementation issues
        The chart below shows the implementation status of the project compared to the Extra-Cockpit projects,
        i.e. those analyzed on a one-off basis on the platform. For each level of severity, the quality of the project is
        positioned relative to the others:




                                                            See the Quality Cockpit







                      The project is positioned relative to the other projects according to its rate of violations
                for each rule. The distribution uses the quartile method, distinguishing three groups:
                "Better" (the 25% best projects), "On the average" (the middle 50% of projects), and "Worse"
                (the 25% worst projects). This information is then synthesized by severity level.
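The quartile grouping described above can be sketched as follows (the class and method names are hypothetical illustrations, not the platform's actual implementation):

```java
import java.util.Arrays;

public class QuartileGrouping {

    /** Classifies a project's violation rate against the rates of its peers. */
    static String classify(double rate, double[] peerRates) {
        double[] sorted = peerRates.clone();
        Arrays.sort(sorted);
        // First and third quartile boundaries of the peer distribution.
        double q1 = sorted[(int) Math.floor(sorted.length * 0.25)];
        double q3 = sorted[(int) Math.floor(sorted.length * 0.75)];
        if (rate <= q1) return "Better";         // among the 25% best projects
        if (rate >= q3) return "Worse";          // among the 25% worst projects
        return "On the average";                 // middle 50%
    }

    public static void main(String[] args) {
        double[] peers = {0.01, 0.02, 0.03, 0.05, 0.08, 0.10, 0.15, 0.20};
        System.out.println(classify(0.015, peers)); // a low rate lands in "Better"
    }
}
```

The synthesis by severity level then simply aggregates these per-rule groups.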




                     The implementation rules compared are not necessarily those of the quality profiles:
                rules are compared here according to the severity level set for each project.



        The following graph provides the same analysis, this time against the Intra-Cockpit projects, which are
        analyzed continuously on the platform and therefore normally show an above-average quality level, since
        detected violations are more likely to be corrected:




                                                            See the Quality Cockpit




                      A dominant red color indicates that the other projects tend to correct the violations
                detected on this project.






        4.2.2 Mapping the structure
        The following chart compares the size of the current project's methods with those of other projects,
        "Intra-Cockpit" and "Extra-Cockpit", by comparing the proportion of the application (as a percentage of
        statements) located in processing blocks (methods) with a high number of statements:




                                                            See the Quality Cockpit




                      A significant proportion of the application in the right-hand area is an indicator of
                higher maintenance and evolution costs.
                NB: the application analyzed is labeled "Release".






        A similar comparison is provided for the cyclomatic complexity⁴ of methods, comparing the proportion of the
        application (as a percentage of statements) located within complex methods:




                                                            See the Quality Cockpit




                      A significant proportion of the application in the right-hand area indicates not only
                higher maintenance and evolution costs, but also reliability problems, because this code is
                difficult to test.



        4.2.3 Comparison of main metrics
        The following table compares the project with the other projects, "Intra-Cockpit" and "Extra-Cockpit", on
        the main metrics related to the structure of the code. Recommended interval values are provided for
        information purposes.

             Metric                                            Project     Extra-Cockpit Intra-Cockpit   Recommended
                                                                                                            interval
             Classes per package                                5.35            7.57         50.68           6 - 26
             Methods per class                                  19.06           10.71        8.74            4 - 10
             Statements per method                              5.48            8.05         7.26            7 - 13
             Cyclomatic complexity per statement                0.34             0.3         0.29          0.16 - 0.24



                                                       See the Quality Cockpit




        4
         Cyclomatic complexity measures the complexity of the code, and thus how difficult it is to test;
        cf. http://classes.cecs.ucf.edu/eel6883/berrios/notes/Paper%204%20(Complexity%20Measure).pdf



        4.3 Modeling the application
        To facilitate understanding of the analysis results, the application is modeled in two ways: a functional
        perspective, to better identify the business features of the application and link them to the source code,
        and a technical perspective, to verify the technical architecture of the application.

        These models are built using the modeling wizard available in the Cockpit. You can modify these models
        on the Functional modeling           and Technical Architecture      pages (depending on your user rights).


        4.3.1 Functional model
        The functional model represents the business view of the application, which can be understood by all
        project members.




                                                            See the Quality Cockpit




                       The functional model is composed of modules, each one representing a business feature
                or a group of functionalities. These modules have been identified from a lexical corpus generated
                from the application code, which isolates the business vocabulary of the application.




        4.3.2 Technical model
        The technical model represents the technical architecture of the application code. The idea is to define a
        target architecture model, which identifies the layers and/or technical components within the application,
        and sets constraints allowing or prohibiting communications between these elements.





        The aim is threefold:

                 Homogenize the behavior of the application: for example, ensure that logging traces are
                  written through a specific API, that data accesses pass through a dedicated layer, or that some
                  third-party library is only used by specific components.

                 Ensure the isolation of some components, to facilitate their development, limit unintended
                  side effects, and make them shareable with other applications. Dependency cycles are, for
                  instance, forbidden.

                 Avoid security flaws, for example by ensuring that calls to the data layer always pass through
                  a business layer in charge of validation controls.

        Results of the architecture analysis are provided in chapter 5.5 Architecture.




                                                            See the Quality Cockpit




                       Green arrows formalize allowed communications between modules, while red arrows
                 formalize forbidden communications.







        5 Detailed results
        This chapter details the results, focusing on the non-compliant elements of each quality domain.

        5.1 Detail by quality factors
        The histogram below details the non-compliance rate for each quality factor, also displaying the number of
        non-compliant classes. As a reminder, the non-compliance rate is based on the number of statements
        defined in non-compliant classes compared to the total number of statements in the project.
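The definition above amounts to a simple ratio, sketched here with hypothetical names (the platform computes this internally):

```java
public class NonComplianceRate {

    /**
     * Non-compliance rate = statements defined in non-compliant classes,
     * divided by the total number of statements in the project.
     */
    static double rate(int[] statementsPerClass, boolean[] nonCompliant) {
        int total = 0;
        int inNonCompliantClasses = 0;
        for (int i = 0; i < statementsPerClass.length; i++) {
            total += statementsPerClass[i];
            if (nonCompliant[i]) {
                inNonCompliantClasses += statementsPerClass[i];
            }
        }
        return total == 0 ? 0.0 : (double) inNonCompliantClasses / total;
    }
}
```

A single large non-compliant class can therefore weigh much more in the rate than several small ones.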

        These rates of non-compliance directly depend on the quality profile and on the level of requirements that
        have been selected:




                                                            See the Quality Cockpit




                      The same class may be non-compliant on several factors, so the total does not
                necessarily correspond to the sum of the factors.






        5.2 Implementation
        The Implementation domain covers rules related to coding techniques. Unlike other domains, these rules are
        often specific to the characteristics of a language (Java / C#). They identify, for example:

                 Potential bugs: uninitialized variables, concurrency issues, recursive calls ...
                 Optimizations in terms of memory or CPU
                 Security vulnerabilities
                 Obsolete code
                 Code deviating from recommended standards
                 ...

        Implementation rules are the most numerous of the technical requirements. They are called "practices".


        5.2.1 Breakdown by severity
        The objective of this indicator is to identify the severity of the practices that led to the invalidation of the
        classes. Here, severity is divided into two levels: forbidden practices (Forbidden severity level) and inadvisable
        practices (Highly inadvisable, Inadvisable and To be avoided severity levels).

        The following pie chart compares the number of classes that are non-compliant in implementation, according
        to the practices that participated in this invalidation:

                 When a class only violates forbidden practices, it is in the group “Forbidden practices”
                 When a class only violates inadvisable practices, it is in the group “Inadvisable practices”
                 Otherwise, the class violates practices of both categories and is in the group “Inadvisable and
                  forbidden practices”
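The grouping rule described by the three bullets above can be sketched as follows (illustrative code, not the platform's implementation):

```java
import java.util.List;

public class SeverityGrouping {

    enum Severity { FORBIDDEN, HIGHLY_INADVISABLE, INADVISABLE, TO_BE_AVOIDED }

    /** Assigns a non-compliant class to one of the three pie-chart groups. */
    static String group(List<Severity> violations) {
        boolean hasForbidden = violations.contains(Severity.FORBIDDEN);
        boolean hasInadvisable = violations.stream()
                .anyMatch(s -> s != Severity.FORBIDDEN);
        if (hasForbidden && hasInadvisable) {
            return "Inadvisable and forbidden practices";
        }
        return hasForbidden ? "Forbidden practices" : "Inadvisable practices";
    }
}
```

A class therefore appears in exactly one slice of the pie, whatever the number of its violations.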




                                                            See the Quality Cockpit







                      The correction effort related to forbidden practices is generally smaller than for
                lower severities: a single violation is enough to make a class non-compliant for a forbidden
                practice, whereas several violations of inadvisable practices are needed to cause
                non-compliance, depending on tolerance thresholds.



        The table below completes the previous graph by introducing the concept of "significant non-compliance". A
        significant violation is a violation whose correction can fully or partially fix the non-compliance of a class.
        Indeed, due to the tolerance thresholds associated with severity levels, correcting some violations has
        no impact on the non-compliance of the class.

             Severity                     Significant non-    New non-        Corrected non-         Other non-
                                            compliances      compliances       compliances          compliances
             Forbidden                           14              3                  2                    0
             Highly inadvisable                  71              29                 7                   70
             Inadvisable                         44              48                 4                   29
             To be avoided                       35              26                 10                  107




                     The columns "New non-compliances" and "Corrected non-compliances" are only relevant if
                the audit follows a previous audit.






        5.2.2 Practices to fix in priority
        The two following tables provide a list of forbidden practices and highly inadvisable practices detected in the
        application. These are generally the rules to correct first.

        These tables provide, for each practice, the number of new non-compliances (if a previous audit has been
        done), the total number of non-compliances for this practice, the number of non-compliant classes where
        this practice has been detected, and the percentage of statements of these classes compared to the overall
        number of statements in the project.

        These figures help to set up an action plan based on the impact associated with each practice.

        5.2.2.1 Forbidden practices


           Practice                                                            New       Non-          NC            %
                                                                                      compliances    classes    application
           DontUseNewToInstantiateIntegers                                       0        6             5         2.33%
           AlwaysDeclareCloneableInterfaceWhenImplementingCloneMethod            3        3             3         2.01%
           AlwaysSynchronizeDateFormatter                                        0        1            1            1%
           DontUseNewToInstantiateStrings                                        0        1            1           2.61%
           MisplacedNullCheck                                                    0        1            1            1%
           NPEAlwaysThrown                                                       0        1            1            1%
           UseAppendMethodForStringBuffer                                        0        1            1            1%
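For illustration, here is how some of the practices listed above are typically fixed. These are hedged sketches of the usual corrections, not code taken from the audited application:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class ForbiddenPracticeFixes {

    // DontUseNewToInstantiateIntegers / DontUseNewToInstantiateStrings:
    // prefer valueOf and literals, which reuse cached or pooled instances.
    static final Integer COUNT = Integer.valueOf(42); // not: new Integer(42)
    static final String NAME = "icescrum";            // not: new String("icescrum")

    // AlwaysSynchronizeDateFormatter: SimpleDateFormat is not thread-safe,
    // so confine each instance to a single thread instead of sharing one.
    private static final ThreadLocal<SimpleDateFormat> FORMATTER =
            ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

    static String format(Date date) {
        return FORMATTER.get().format(date);
    }
}
```

Alternatives exist for the formatter (e.g. an explicit synchronized block around a shared instance); thread confinement is simply the least error-prone option.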



                                                            See the Quality Cockpit




        5.2.2.2 Highly inadvisable practices


        Practice                                                               New        Non-        NC classes        %
                                                                                       compliances                 application
        TraceErrorsWithLogger                                                   20         80              28        33.98%
        NeverMakeCtorCallInnerMethod                                            3          27              18        26.38%
        UseLoggerRatherThanPrintMethods                                         4          27              7          9.47%
        DontAssignVariablesInOperands                                           2          5               2          3.04%
        DontIgnoreMethodsReturnValue                                            0          1               1          1.56%
        OverrideEqualsWhenImplementingCompareTo                                 0          1               0           1%
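The two most frequent practices above (TraceErrorsWithLogger and UseLoggerRatherThanPrintMethods) are typically fixed by routing errors through a logger. A minimal sketch, assuming java.util.logging (the audited application may use a different logging API):

```java
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;

public class LoggingFixes {

    private static final Logger LOG =
            Logger.getLogger(LoggingFixes.class.getName());

    /** Returns false on failure; the error is reported through the logger. */
    static boolean load(String path) {
        try {
            throw new IOException("unreadable: " + path); // placeholder failure
        } catch (IOException e) {
            // Instead of e.printStackTrace() or System.out.println(...):
            // the logger keeps the message and the stack trace together,
            // and the output destination stays configurable.
            LOG.log(Level.SEVERE, "Failed to load " + path, e);
            return false;
        }
    }
}
```

The same pattern applies with Log4j or Commons Logging, which were common choices at the time.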



                                                            See the Quality Cockpit



        5.2.3 Classes to fix in priority on the implementation issues
        The two following tables provide an additional view of the impact of implementation issues by listing the
        main classes involved in forbidden or highly inadvisable practices.



        Each class is associated with the number of existing violations (forbidden or highly inadvisable practices),
        the number of new violations (if a previous audit has been done), and the compliance status of the class.

        5.2.3.1 Classes with forbidden practices


         Class                                                                        NC    New      Non-       Instructions
                                                                                                  compliances
         icescrum2.dao.model.impl.RemainingEstimationArray                            Yes    0        2             74
         icescrum2.dao.model.impl.Sprint                                              Yes    1        2             218
         icescrum2.service.chart.PointBurnupChartProduct                              Yes    0        2             121
         icescrum2.presentation.app.chat.PrivateChat                                  Yes    0        1             184
         icescrum2.service.impl.HibernateManagerImpl                                  Yes    0        1             76
         icescrum2.dao.impl.ProductDao                                                Yes    0        1             69
         icescrum2.dao.impl.UserDao                                                   Yes    0        1             76
         icescrum2.dao.model.ISprint                                                  Yes    1        1             56
         icescrum2.presentation.model.SprintImpl                                      Yes    1        1             208
         icescrum2.service.impl.ExportXMLServiceImpl                                  Yes    0        1             628



                                                            See the Quality Cockpit






        5.2.3.2 Classes with highly inadvisable practices


      Class                                                                           NC    New      Non-     Instructions
                                                                                                  compliances
      icescrum2.service.impl.RepositoryServiceImpl                                    Yes    0        13           51
      icescrum2.service.impl.ExportPDFServiceImpl                                     Yes    1         9          455
      icescrum2.service.impl.HibernateManagerImpl                                     Yes    0         5           76
      icescrum2.listeners.IS2ServletListener                                          Yes    0         4           20
      icescrum2.presentation.broadcast.RenderableSession                              Yes    0         4          110
      icescrum2.service.chart.BurndownChartProduct                                    Yes    0         4          172
      icescrum2.service.chart.GlobalChartTest                                         Yes    0         4          122
      icescrum2.service.impl.ConfigurationServiceImpl                                 Yes    1         4          103
      icescrum2.service.impl.ExportXMLServiceImpl                                     Yes    0         4          628
      icescrum2.service.impl.ImportXMLServiceImpl                                     Yes    0         4          482
      icescrum2.service.impl.UserServiceImpl                                          Yes    0         4           75
      icescrum2.presentation.app.productbacklog.ProductBacklogUI                      Yes    2         3          817
      icescrum2.service.chart.BurnupChartProduct                                      Yes    0         3          184
      icescrum2.service.chart.PointBurnupChartProduct                                 Yes    0         3          121
      icescrum2.service.impl.ExportPDFSprint                                          Yes    3         3           97
      icescrum2.presentation.app.product.ProductUI                                    Yes    0         2          375
      icescrum2.dao.impl.ExceptionManager                                             No     0         2          103
      icescrum2.dao.impl.ProblemDao                                                   No     0         2           51
      icescrum2.filters.OpenSessionInViewPhaseListener                                No     0         2           97



                                                            See the Quality Cockpit




        5.3 Structure
        The Structure domain targets rules related to the code structure, for example:

                 The size of methods
                 The cyclomatic complexity of methods
                 Coupling, or the dependencies of methods towards other classes

        The objective is to ensure that the code is structured in such a way that it can be easily maintained, tested,
        and can evolve.

        These rules are "metrics". They measure values (e.g. a number of statements) and are conditioned by
        thresholds (e.g. 100 statements per method). Only metrics on which developers are able to act are presented
        here. They apply to all methods.






        5.3.1 Typology of structural problems
        This histogram shows, for each rule of the Structure domain, the number of non-compliances (i.e. methods)
        and the percentage of related statements compared to the total number of statements in the application:




                                                            See the Quality Cockpit




                     The percentage of statements shown is interesting because a few methods often
                concentrate a large part of the application code.




                      When some rules have been configured to be excluded from the analysis, they are
                displayed in this graph but without any results.




                      One method may be affected by several rules; therefore, the total does not necessarily
                correspond to the sum of the numbers.



        The following table completes this view by introducing the number of new violations and the number of
        violations corrected in the case where a previous audit was conducted:

                               Anomaly                              Significant non-    New non-     Corrected non-    NC
                                                                      compliances      compliances    compliances     rate
         Statement count higher than 100                                    2              1               0           1%
         Cyclomatic complexity higher than 20                              14              4               0           3%






                                                            See the Quality Cockpit




        5.3.2 Mapping methods by size
        The histogram below shows a mapping of methods according to their size. The size is expressed as a number
        of statements, in order to ignore coding style conventions.

        The last interval identifies the methods with a number of statements which exceeds the threshold. These
        methods are considered non-compliant because they are generally difficult to maintain and extend, and also
        show a high propensity to reveal bugs because they are difficult to test.

        The percentage of statements is provided because the largest methods usually concentrate a significant part
        of the application:




                                                            See the Quality Cockpit




        The following table details the main non-compliant methods identified in the last interval of the previous
        graph:






            Method                                                           Instructions   Lines   Complexity     New
                                                                                                                 violation
            icescrum2.service.impl.ExportPDFServiceImpl.addRelea                  131       227        38          New
            sePlan ( icescrum2.dao.model.IUser, int[], int[],
            icescrum2.dao.model.IProduct)
            icescrum2.service.impl.ClicheServiceImpl.createCliche (               208       343        42
            icescrum2.dao.model.IProduct, java.util.Date)



        5.3.3 Mapping methods by complexity
        The histogram below shows a mapping of methods according to their cyclomatic complexity (see 8.1
        Cyclomatic complexity).

        Cyclomatic complexity is a measure that characterizes the complexity of a block of code by identifying
        all possible execution paths. The concept was standardized by McCabe⁵, but several calculation
        methods exist. The one used here is the most popular and the simplest: it counts the number of branching
        operators (if, for, while, ? ...) and conditions (??, && ...).
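As a concrete illustration of this counting rule, here is a small hand-worked example (not produced by the platform):

```java
public class ComplexityExample {

    // Cyclomatic complexity = 1 + number of branching operators/conditions.
    // Below: if (+1), || (+1), for (+1), if (+1)  ->  complexity 5.
    static int firstNegativeOrDefault(int[] values, int fallback) {
        if (values == null || values.length == 0) { // if +1, || +1
            return fallback;
        }
        for (int v : values) {                      // for +1
            if (v < 0) {                            // if +1
                return v;
            }
        }
        return fallback;
    }
}
```

Each added branch multiplies the number of execution paths to cover, which is why complexity correlates with test effort.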

        The last interval identifies methods whose complexity exceeds the threshold. These methods are considered
        non-compliant for the same reasons as for the long methods: they are generally difficult to maintain and
        extend, and also show a high propensity to reveal bugs.

        The percentage of statements and the percentage of complexity are provided because the most complex
        methods generally concentrate a significant part of the application.




                                                            See the Quality Cockpit




        5
         1976, IEEE Transactions on Software Engineering: 308–320.
        http://classes.cecs.ucf.edu/eel6883/berrios/notes/Paper%204%20(Complexity%20Measure).pdf.



        The following table details the main non-compliant methods identified in the last interval of the previous
        graph:

          Method                                                    Instructions   Lines   Complexity      New
                                                                                                         violation
          icescrum2.service.impl.ExportPDFServiceImpl.addRelea          131        227          38         New
          sePlan ( icescrum2.dao.model.IUser, int[], int[],
          icescrum2.dao.model.IProduct)
          icescrum2.service.impl.ReleaseServiceImpl.saveRelease         59          84          30         New
          ( icescrum2.dao.model.IRelease,
          icescrum2.dao.model.IProduct, boolean,
          icescrum2.dao.model.IUser)
          icescrum2.service.impl.SprintServiceImpl.closeSprint (        70         120          21         New
          icescrum2.dao.model.IRelease,
          icescrum2.dao.model.ISprint,
          icescrum2.dao.model.IUser,
          icescrum2.dao.model.IProduct)
          icescrum2.service.impl.SprintServiceImpl.saveSprint (         43          69          20         New
          icescrum2.dao.model.ISprint,
          icescrum2.dao.model.IRelease, java.lang.Integer,
          icescrum2.dao.model.IUser,
          icescrum2.dao.model.IProduct)
          icescrum2.dao.model.impl.Sprint.equals (                      47          70          84
          java.lang.Object)
          icescrum2.service.impl.ClicheServiceImpl.createCliche (       208        343          42
          icescrum2.dao.model.IProduct, java.util.Date)
          icescrum2.service.impl.ImportXMLServiceImpl.parsePro          73         169          41
          duct ( org.w3c.dom.Element)
          icescrum2.dao.model.impl.ProductBacklogItem.equals (          45          40          37
          java.lang.Object)
          icescrum2.dao.model.impl.Build.equals (                       29          27          24
          java.lang.Object)
          icescrum2.dao.model.impl.CustomRole.equals (                  27          27          24
          java.lang.Object)
          icescrum2.service.impl.UserServiceImpl.saveUser (             24          36          23
          icescrum2.dao.model.IUser)
          icescrum2.dao.model.impl.ExecTest.equals (                    27          25          22
          java.lang.Object)
          icescrum2.dao.model.impl.Task.equals (                        27          26          22
          java.lang.Object)
          icescrum2.dao.model.impl.Test.equals (                        27          25          22
          java.lang.Object)



        5.3.4 Mapping methods by their complexity and efferent coupling
        This rule is intended to identify methods whose code has many dependencies on other classes. The concept
        of "efferent coupling" refers to these outgoing dependencies.

        The principle is that a method with strong efferent coupling is difficult to understand, maintain and test:
        first because it requires knowledge of the different types it depends on, and then because these
        dependencies increase the risk of destabilization.






        This rule is crossed with cyclomatic complexity in order to ignore trivial methods, such as the
        initialization methods of graphical interfaces, which call many widget classes without presenting any real
        complexity.

        This rule considers a method non-compliant if it exceeds both a threshold of efferent coupling and a
        threshold of cyclomatic complexity.

        The chart below shows a mapping of methods according to their complexity and their efferent coupling. Each
        dot represents one or more methods with the same values of complexity and coupling. They are divided into
        four zones according to their status in relation to both thresholds:

                 The area on the lower left (green dots) contains compliant methods, below both thresholds
                 The area on the lower right (gray dots) contains compliant methods; they have reached the
                  complexity threshold but remain below the coupling threshold
                 The area on the upper left (gray dots) contains compliant methods; they have reached the
                  coupling threshold but remain below the complexity threshold
                 The area on the upper right (red dots) contains non-compliant methods, above both thresholds
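The four-zone classification can be sketched as follows. The threshold values here are purely illustrative; the actual values come from the project's quality profile:

```java
public class CouplingZones {

    // Hypothetical thresholds, for illustration only.
    static final int COMPLEXITY_THRESHOLD = 20;
    static final int COUPLING_THRESHOLD = 20;

    /** A method is non-compliant only when it reaches BOTH thresholds. */
    static String zone(int complexity, int efferentCoupling) {
        boolean complex = complexity >= COMPLEXITY_THRESHOLD;
        boolean coupled = efferentCoupling >= COUPLING_THRESHOLD;
        if (complex && coupled) return "non-compliant (red)";
        if (complex || coupled) return "compliant, one threshold reached (gray)";
        return "compliant (green)";
    }
}
```

Crossing the two measures is what filters out, for instance, GUI initialization methods: high coupling, but low complexity, so they stay in a gray zone.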




                                                            See the Quality Cockpit




                       The intensity of the color of the dots depends on the number of methods sharing the
                 same complexity and coupling values: the darker a dot, the more methods it represents.






        The histogram below provides an additional view of this mapping, with precise figures for the four zones in
        terms of percentage of the application's methods and statements. The last bars indicate the non-compliance
        area:




                                                            See the Quality Cockpit




        The following table details the main non-compliant methods:

         Method                                                                       Efferent   Complexity     New
                                                                                      Coupling                violation
         icescrum2.service.impl.ExportPDFServiceImpl.addReleasePlan (                    35         38          New
         icescrum2.dao.model.IUser, int[], int[],
         icescrum2.dao.model.IProduct)
         icescrum2.service.impl.SprintServiceImpl.closeSprint (                         29          21
         icescrum2.dao.model.IRelease, icescrum2.dao.model.ISprint,
         icescrum2.dao.model.IUser, icescrum2.dao.model.IProduct)
         icescrum2.service.impl.ImportXMLServiceImpl.parseProduct (                     23          41
         org.w3c.dom.Element)
         icescrum2.service.impl.SprintServiceImpl.autoSaveSprint (                      21          16
         icescrum2.dao.model.IRelease, icescrum2.dao.model.IUser,
         icescrum2.dao.model.IProduct)
         icescrum2.service.impl.SprintServiceImpl.saveSprint (                          20          20
         icescrum2.dao.model.ISprint, icescrum2.dao.model.IRelease,
         java.lang.Integer, icescrum2.dao.model.IUser,
         icescrum2.dao.model.IProduct)
         icescrum2.service.impl.ImportXMLServiceImpl.importProduct (                    20          18          New
         java.io.InputStream, icescrum2.dao.model.IUser,
         icescrum2.service.beans.ProgressObject)
                                                            See the Quality Cockpit






        5.4 Test
        The Test domain provides rules to ensure that the application is sufficiently tested, both quantitatively
        and qualitatively, i.e. tests should target risk areas.


        5.4.1 Issues
        To understand the analysis results for this domain, it is important to be aware of the problems inherent
        in managing tests.

        5.4.1.1 Unit testing and code coverage
        The results of this domain depend on the testing process applied to the project: if automated unit testing
        and/or code coverage measurement are implemented on the project, the analysis uses their results.

        As a reminder, we must distinguish unit testing from code coverage:

                      A unit test is an automated test that usually focuses on a single method of the source code. But
                       since this method generally has dependencies on other methods or classes, a unit test may exercise a
                       more or less important part of the application (the larger this part, the less relevant the test)

                      Code coverage measures the amount of code executed by the tests, by identifying each element
                       actually executed at runtime (statements, conditional branches, methods ...). These tests can be
                       unit tests (automated) or integration / functional tests (manual or automated)
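        To make the distinction concrete, here is a minimal unit-test sketch in plain Java. Plain asserts are used
        instead of a framework such as JUnit to keep the example self-contained, and the velocity method is a
        hypothetical example, not taken from the IceScrum2 code:

```java
// A minimal unit-test sketch: the test exercises one method in
// isolation, covering both the nominal and the error path.
public class VelocityTest {
    /** Method under test (hypothetical): average story points per sprint. */
    static double velocity(int totalPoints, int sprintCount) {
        if (sprintCount <= 0) throw new IllegalArgumentException("no sprints");
        return (double) totalPoints / sprintCount;
    }

    public static void testVelocity() {
        assert velocity(60, 3) == 20.0;   // nominal case
        boolean thrown = false;
        try { velocity(10, 0); } catch (IllegalArgumentException e) { thrown = true; }
        assert thrown;                    // error case is also exercised
    }
}
```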

        Code coverage is valuable to combine with unit tests because it is the only way to measure the code
        actually tested. However, many projects still do not measure code coverage, which prevents this type of
        analysis from verifying the quality of the testing.

        The indicators presented next address both cases; they are useful for projects with unit tests and/or code
        coverage but also for other projects.

        5.4.1.2 Relevance of code coverage
        Code coverage provides figures indicating the proportion of code executed by the tests, for example 68%
        of the statements of a method are covered, or 57% of the project statements...

        The problem is that these figures do not take into account the relevance of testing the code. For example,
        70% coverage of the application is a good figure, but the covered code could be trivial and of no real
        interest for the tests (e.g. accessors or generated code), whereas the critical code may be located in the
        remaining 30%.

        The analysis performed here assesses the relevance of testing each method, which is used to calibrate the
        code coverage requirements and to set appropriate thresholds to better target the testing effort towards
        risk areas.






        5.4.2 TestRelevancyIndex (TRI) and TestEffortIndex (TEI) metrics
        To refine the analysis of tests, two new metrics were designed by the Centre of Excellence in Information and
        Communication Technologies (CETIC), based on research conducted over the past 20 years and on the
        “Quality Cockpit” knowledge base6.

        The TestRelevancyIndex (TRI) measures the relevancy of testing a method according to its technical
        risk and its business risk.

        Technical risk assesses the probability of finding a defect; it is based on various metrics such as cyclomatic
        complexity, number of variables, number of parameters, efferent coupling, cumulative number of
        non-compliances...

        The business risk associates a risk factor with business features that should be tested first (higher risk)
        or, conversely, that need not be tested (minor risk). It must be determined at the initialization of the audit
        to be taken into account in the TRI calculation. The objective is to guide the testing effort towards the
        important features.

        The TRI is used to classify the methods on a testing-priority scale, and thus to distinguish the methods
        truly relevant to test from trivial and irrelevant ones. For each level of the scale, a specific code coverage
        threshold can be set. This allows setting a high threshold for critical methods and a low threshold for
        low-priority methods.

        The TestEffortIndex (TEI) complements the TRI by measuring the level of effort required to test a method.
        Like the TRI, it is based on a set of unit metrics characterizing a method. It helps to refine the selection of
        the code to be tested by balancing the effort against the relevance of the test.

        The details of the calculation of these two indexes are provided in annex (8.2 The coupling).
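        As an illustration only, a TRI-like index could be sketched as below. The weights and the combination
        formula are assumptions made for the example, not the statistically calibrated model designed by CETIC:

```java
// Hedged sketch of how a TestRelevancyIndex-style score could combine
// technical risk metrics with a business risk factor. The weights and
// formula are illustrative assumptions only.
public class TestRelevancyIndex {
    /** Technical risk from a few unit metrics (hypothetical weights). */
    public static double technicalRisk(int complexity, int variables,
                                       int parameters, int efferentCoupling) {
        return 0.5 * complexity + 0.2 * variables
             + 0.1 * parameters + 0.2 * efferentCoupling;
    }

    /** TRI = technical risk scaled by the business risk factor. */
    public static double tri(double technicalRisk, double businessRiskFactor) {
        return technicalRisk * businessRiskFactor;
    }
}
```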


        5.4.3 Mapping methods by testing priority
        The histogram below shows a mapping of methods according to their testing priority, using a four-level
        scale based on the TRI of the methods (each level corresponding to a range of TRI).

        This mapping uses the code coverage information only if it was supplied for analysis. For each priority
        level are indicated:

                  The average coverage rate (0 if coverage information was not provided)
                  The number of methods not covered (no coverage)
                  The number of methods insufficiently covered (coverage rate below the target rate set for this level
                   of priority)
                  The number of methods sufficiently covered (coverage greater than or equal to the target rate set
                   for this level of priority)
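        The classification above can be sketched as a simple decision. The target rate for each priority level is
        set at the initialization of the audit; the values used here are hypothetical:

```java
// Sketch of the per-priority coverage classification used by the
// mapping: each priority level has its own target coverage rate.
public class CoverageStatus {
    /** Classifies a method's coverage against the target rate of its priority level. */
    public static String classify(double coverageRate, double targetRate) {
        if (coverageRate == 0.0) return "not covered";
        if (coverageRate < targetRate) return "insufficiently covered";
        return "sufficiently covered";
    }
}
```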




        6
            CETIC, Kalistick. Statistically Calibrated Indexes for Unit Test Relevancy and Unit Test Writing Effort, 2010





        The table below shows these figures for each priority level, also adding a fifth level corresponding to the
        methods without test priority:

                                  Test priority        Covered Uncovered              Insufficiently covered
                                  Critical                   0             3                  2
                                  High                       4            13                  5
                                  Medium                     6            46                  2
                                  Low                       18            96                  3
                                  None                      115          3093                 0
                                  [Total]                   143          3251                 12
                                                            See the Quality Cockpit




        5.4.4 Coverage of application by tests
        This graph, called a “TreeMap”, shows the code coverage of the application against the test objectives. It
        helps to identify the parts of the application that are not sufficiently tested with regard to the identified
        risks. It groups the classes of the project into technical subsets and characterizes them along two
        dimensions:

                 size, which depends on the number of statements
                 color, which represents the deviation from the test objective set for the classes: the color red
                  indicates that the current coverage is far from the goal, whereas the green color indicates that the
                  goal is reached








                                                            See the Quality Cockpit




                        A class can be green even if it is untested or barely tested: for example, classes with a low
                 probability of technical defects or without business risk. Conversely, an already tested class can be
                 flagged as insufficient (red / brown) if its objective is very demanding.




                      An effective strategy to improve coverage is to focus on large classes close to the goal.




        5.4.5 Most important classes to test (Top Risks)
        The following chart allows quickly identifying the most relevant classes to test, the “Top Risks”. It is a
        representation known as a "cloud" that displays the classes using two dimensions:

                 The size of the class name depends on its relevancy in being tested (TRI cumulated for all methods of
                  this class)
                 The color represents the deviation from the coverage goal set for the class, just as in the previous
                  TreeMap








                                                            See the Quality Cockpit




                        This representation identifies the critical elements, but to take the test-writing effort into
                 account, refer to the following representation to select the items to address.



        5.4.6 Most important classes to test that require the least effort (Quick Wins)
        The “Quick Wins” view complements the “Top Risks” by taking into account the effort required to test each
        class (TEI):

                 The size of the class name depends on its interest in being tested (TRI), weighted by the effort
                  required (TEI accumulated over all methods): a class with a high TRI and a high TEI (therefore difficult
                  to test) appears smaller than a class with an average TRI but a low TEI
                 The color represents the deviation from the coverage goal set for the class, just as in the previous
                  representations




                                                            See the Quality Cockpit




        5.4.7 Methods to test in priority



        The following table details the main methods to test first. Each method is associated with its current
        coverage rate, the raw value of its TRI and its TEI level:






         Method                                             Coverage   Relevancy   Priority    Effort   New
                                                                         (TRI)                        violation
         icescrum2.service.impl.ExportPDFServiceImpl.         0%          39.0     Critical Very high   New
         addReleasePlan ( icescrum2.dao.model.IUser,
         int[], int[], icescrum2.dao.model.IProduct)
         icescrum2.service.impl.ImportXMLServiceImpl.         0%         39.0      Critical Very high
         parseProduct ( org.w3c.dom.Element)
         icescrum2.service.impl.ClicheServiceImpl.creat       0%         37.0      Critical Very high
         eCliche ( icescrum2.dao.model.IProduct,
         java.util.Date)
         icescrum2.service.impl.ProductBacklogServiceI        76%        36.0      Critical     High
         mpl.saveProductBacklogitem (
         icescrum2.dao.model.IStory,
         icescrum2.dao.model.IProduct,
         icescrum2.dao.model.ISprint,
         icescrum2.dao.model.IUser,
         icescrum2.dao.model.ICustomRole)
         icescrum2.service.impl.TaskServiceImpl.updat         51%        35.0      Critical     High
         eTask ( icescrum2.dao.model.ITask,
         icescrum2.dao.model.IUser,
         icescrum2.dao.model.IProduct,
         java.lang.String)
         icescrum2.service.impl.ProductBacklogServiceI        60%        34.0       High        High
         mpl.associateItem (
         icescrum2.dao.model.ISprint,
         icescrum2.dao.model.IStory,
         icescrum2.dao.model.IProduct,
         icescrum2.dao.model.ISprint,
         icescrum2.dao.model.IUser)
         icescrum2.dao.model.impl.Sprint.equals (             0%         33.0       High      Very high
         java.lang.Object)
         icescrum2.service.impl.ExportPDFServiceImpl.         0%         33.0       High        High         New
         addProject ( java.util.HashMap,
         icescrum2.dao.model.IProduct,
         icescrum2.dao.model.IUser)
         icescrum2.service.chart.VelocityChartSprint.ini      0%         33.0       High        High
         t()
         icescrum2.service.chart.BurndownChartReleas          0%         33.0       High        High
         e.init ( )
         icescrum2.service.impl.ReleaseServiceImpl.up         45%        32.0       High        High
         dateRelease ( icescrum2.dao.model.IRelease,
         icescrum2.dao.model.IProduct)
         icescrum2.service.impl.TestServiceImpl.saveTe        79%        32.0       High        High
         st ( icescrum2.dao.model.ITest,
         icescrum2.dao.model.IStory,
         icescrum2.dao.model.IUser)
         icescrum2.service.impl.SprintServiceImpl.auto        0%         32.0       High      Very high
         SaveSprint ( icescrum2.dao.model.IRelease,
         icescrum2.dao.model.IUser,
         icescrum2.dao.model.IProduct)




         icescrum2.service.impl.SprintServiceImpl.calcu               47%             32.0   High     High
         lateDailyHours ( icescrum2.dao.model.ISprint,
         int)
         icescrum2.service.impl.SprintServiceImpl.close               61%             32.0   High   Very high
         Sprint ( icescrum2.dao.model.IRelease,
         icescrum2.dao.model.ISprint,
         icescrum2.dao.model.IUser,
         icescrum2.dao.model.IProduct)
         icescrum2.dao.model.impl.ProductBacklogIte                   0%              31.0   High     High
         m.equals ( java.lang.Object)
         icescrum2.service.impl.ProductBacklogServiceI                0%              31.0   High     High
         mpl.changeRank (
         icescrum2.dao.model.IProduct,
         icescrum2.dao.model.IStory,
         icescrum2.dao.model.IStory,
         icescrum2.dao.model.IUser)
         icescrum2.service.impl.ProductBacklogServiceI                0%              30.0   High     High
         mpl.getStory ( org.w3c.dom.Element,
         java.util.Map)
         icescrum2.service.impl.ProductBacklogServiceI                0%              30.0   High     High
         mpl.updateProductBacklogItem (
         icescrum2.dao.model.IStory,
         icescrum2.dao.model.IUser,
         icescrum2.dao.model.IProduct,
         icescrum2.dao.model.ISprint,
         icescrum2.dao.model.ICustomRole)
                                                            See the Quality Cockpit




        5.5 Architecture
        The Architecture domain aims to monitor compliance of a software architecture model. The target
        architecture model has been presented in Chapter 4.3.2 Technical model. The following diagram shows the
        results of architecture analysis by comparing this target model with current application code.


                      Currently, architecture non-compliances are not taken into account in the calculation of the
                non-compliance rate of the application.








                                                            See the Quality Cockpit




                       Non-compliances related to communication constraints between two elements are
                represented by arrows. The starting point is the calling element; the destination is the called
                element. Orange arrows indicate direct communication between a top layer and a non-adjacent
                bottom layer (sometimes acceptable). Black arrows indicate communications that are strictly
                prohibited.



        5.6 Duplication
        The Duplication domain is related to the “copy-and-paste” identified in the application. To avoid many false
        positives in this area, a threshold is defined to ignore blocks with few statements.

        Duplications should be avoided for several reasons: maintenance and changeability issues, testing costs, lack
        of reliability...


        5.6.1 Mapping of duplication
        The chart below shows a mapping of the duplications within the application. It does not take into account
        duplications involving a number of statements below the threshold, because they are numerous and mostly
        irrelevant (e.g. duplication of accessors between different classes sharing similar properties).
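        A common way to implement this kind of duplication detection is to hash sliding windows of normalized
        statements; the sketch below is illustrative, not the algorithm actually used by the platform. The window
        size plays the role of the minimum-statements threshold:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative copy-paste detection: statements are whitespace-normalized,
// consecutive windows of minBlockSize statements are keyed, and any key
// occurring at more than one position is a duplication candidate.
public class DuplicationScan {
    public static Map<String, List<Integer>> findCandidates(List<String> statements, int minBlockSize) {
        Map<String, List<Integer>> candidates = new HashMap<>();
        for (int i = 0; i + minBlockSize <= statements.size(); i++) {
            // Normalize whitespace so pure formatting differences are ignored.
            StringBuilder key = new StringBuilder();
            for (int j = i; j < i + minBlockSize; j++) {
                key.append(statements.get(j).trim().replaceAll("\\s+", " ")).append('\n');
            }
            candidates.computeIfAbsent(key.toString(), k -> new ArrayList<>()).add(i);
        }
        // Keep only windows that occur more than once.
        candidates.values().removeIf(positions -> positions.size() < 2);
        return candidates;
    }
}
```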






        Duplicates are categorized by ranges of duplicated statements. For each range, the following are presented:

                 The number of different duplicated blocks (each duplicated at least once)
                 The maximum number of duplications of the same block




                                                            See the Quality Cockpit




        5.6.2 Duplications to fix in priority
        The following table lists the main duplicates to fix in priority. Each block is identified by a unique identifier,
        and each duplication is located in the source code. If a previous audit was completed, a flag indicates
        whether the duplication is new.






         Duplication       Duplicated Class involved                                            Lines         New
          number           blocks size                                                                      violation
              239             111      icescrum2.presentation.app.roadmap.RoadmapUI          858:1003         New
              239             111      icescrum2.presentation.app.releasebrowser.ReleaseBr   1045:1190        New
                                       owserUI
              238              69      icescrum2.presentation.app.roadmap.RoadmapUI           590:688         New
              238              69      icescrum2.presentation.app.releasebrowser.ReleaseBr    731:830         New
                                       owserUI
              237              56      icescrum2.service.impl.ClicheServiceImpl               309:373
              237              56      icescrum2.service.impl.ClicheServiceImpl               201:263
              236              52      icescrum2.service.chart.GlobalChartTest                243:316
              236              52      icescrum2.service.chart.VelocityChartSprint            219:292
              236              52      icescrum2.service.chart.ExecChartTest                  156:229
              235              50      icescrum2.service.chart.VelocityChartSprint            221:290
              235              50      icescrum2.service.chart.ExecChartTest                  158:227
              235              50      icescrum2.service.chart.BurndownChartProduct           322:391
              235              50      icescrum2.service.chart.GlobalChartTest                245:314
              234              49      icescrum2.presentation.app.releasebrowser.ReleaseBr    877:944         New
                                       owserUI
              234              49      icescrum2.presentation.app.roadmap.RoadmapUI           698:765         New
              233              48      icescrum2.service.chart.GlobalChartTest                249:316
              233              48      icescrum2.service.chart.ExecChartTest                  162:229
              233              48      icescrum2.service.chart.BurndownChartRelease           202:268
              233              48      icescrum2.service.chart.VelocityChartSprint            225:292
                                                            See the Quality Cockpit




        5.7 Documentation
        The Documentation domain aims to control the level of technical documentation of the code. Only the
        standard method comment headers are verified: Javadoc for Java, XmlDoc for C#. Inline comments (in
        method bodies) are not evaluated because of the difficulty of verifying their relevance (they are often
        commented-out code or generated comments).

        In addition, header documentation is verified only for methods considered sufficiently long and complex,
        because the effort to document trivial methods is rarely justified. For this, a threshold on cyclomatic
        complexity and a threshold on the number of statements are defined to filter the methods to check.
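        The filtering rule described above can be sketched as follows; the threshold values are hypothetical
        examples, not those configured for this audit:

```java
// Sketch of the documentation rule: a header comment is required only
// for methods that are both complex and long enough. Thresholds are
// hypothetical.
public class DocRule {
    static final int COMPLEXITY_THRESHOLD = 8;  // hypothetical
    static final int STATEMENTS_THRESHOLD = 30; // hypothetical

    /** Trivial methods are filtered out and never checked. */
    public static boolean requiresHeaderDoc(int complexity, int statements) {
        return complexity > COMPLEXITY_THRESHOLD && statements > STATEMENTS_THRESHOLD;
    }

    /** Non-compliant = required to be documented, but has no header doc. */
    public static boolean isNonCompliant(int complexity, int statements, boolean hasHeaderDoc) {
        return requiresHeaderDoc(complexity, statements) && !hasHeaderDoc;
    }
}
```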


        5.7.1 Mapping documentation issues
        The chart below shows the status of header documentation for all methods with a complexity greater than
        the threshold. The methods are grouped by ranges of size (number of statements). For each range, the
        number of methods with header documentation and the number without are given. The red area in the
        last range corresponds to the methods that are not documented and therefore non-compliant.








        5.7.2 Methods to document in priority
        The following table lists the main methods to document in priority:

                                           Method                                     Statements Complexity      New
                                                                                                               violation
             icescrum2.service.impl.ExportXMLServiceImpl.exportSprint                    81           10           New
             icescrum2.service.impl.ReleaseServiceImpl.saveRelease                       59           30           New
             icescrum2.service.impl.ClicheServiceImpl.createCliche                       208          42
             icescrum2.service.impl.SprintServiceImpl.autoSaveSprint                     89           16
             icescrum2.service.chart.VelocityChartSprint.init                            78           13
             icescrum2.service.impl.SprintServiceImpl.closeSprint                        70           21
             icescrum2.service.impl.ExportXMLServiceImpl.exportItem                      66           10
             icescrum2.service.chart.BurndownChartRelease.init                           66           14
                                                            See the Quality Cockpit







        6 Action Plan
        For each domain, a recommendation of corrections was established on the basis of the tables detailing the
        rules and code elements to correct. The following graph provides an overall strategy for establishing a
        correction plan as a list of actions. This list is prioritized according to the expected return on
        investment: the actions recommended first are those with the best ratio between the effort required and
        the gain on the overall non-compliance rate.




        Here is the explanation of each step:

             1. Correction of forbidden practices
                These practices are often easy to correct, and because they directly invalidate the classes, their
                correction generally leads to a significant improvement of the overall non-compliance rate (if the
                classes are not invalidated by other rules).

             2. Splitting long methods
                With some IDEs, it is often easy to split an overly long method into several unit methods. This is
                achieved with automated refactoring operations, avoiding the risk of regression associated with
                manual intervention.

             3. Documentation of complex methods
                This step aims to document the methods identified as non-compliant in the Documentation domain;
                this is a simple but potentially tedious operation.

             4. Correction of inadvisable practices
                This corresponds to all the practices remaining after the correction of forbidden practices: highly
                inadvisable practices, inadvisable practices, and practices to be avoided.






             5. Removal of duplications
                This operation is more or less difficult depending on the case: you first have to determine whether
                the duplication should really be factorized, because two components may share the same code base
                yet be independent. Note that the operation can be automated by some IDEs, depending on the
                type of duplication.

             6. Modularization of complex operations
                This operation is similar to splitting long methods, but is often more difficult to achieve due to the
                complexity of the code.
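        Steps 2 and 6 rely on the "extract method" refactoring. The schematic sketch below shows the result of
        splitting a long computation into small, individually testable unit methods; the example is invented, not
        taken from the IceScrum2 code base:

```java
// Schematic result of an "extract method" refactoring: the long body
// is reduced to intention-revealing calls to small helper methods.
public class SprintReport {
    /** After refactoring: the method reads as a summary of its steps. */
    public static int remainingHours(int[] taskEstimates, int[] taskDone) {
        return totalHours(taskEstimates) - totalHours(taskDone);
    }

    /** Extracted helper: one small, individually testable unit. */
    static int totalHours(int[] hours) {
        int total = 0;
        for (int h : hours) total += h;
        return total;
    }
}
```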


                      The action plan can be refined on the Quality Cockpit using the mechanism of "tags". Tags
                allow labeling the analysis results to facilitate operations such as prioritizing corrections,
                assigning them to developers, or targeting their fix version.








        7 Glossary


        Block coverage
        Block coverage measures the rate of code blocks executed during testing compared to total blocks. A code
        block is a code path with a single entry point, a single exit point and a set of statements executed in
        sequence. It ends when it reaches a conditional statement, a function call, an exception, or a try / catch.

        Branch coverage
        Branch coverage measures the rate of branches executed during tests against the total number of branches.

        if (value)
        {
          //
        }
        This code reaches 100% branch coverage only if the if condition has been tested both true and false.

        Line coverage
        Line (or statement) coverage measures the rate of lines executed during testing against the total number
        of lines. This measure is insensitive to conditional statements: line coverage can reach 100% even though
        not all conditions are executed.

        Line of code
        A physical line of source code in a text file. Blank lines and comment lines are counted as lines of code.

        Non-compliance
        A test result that does not satisfy the technical requirements defined for the project. Non-compliance is
        related to a quality factor and a quality domain.

        Synonym (s): violation

        Quality domain
        The analysis results are broken down into six quality domains depending on the technical origin of the
        non-compliances:

                 Implementation: Issues related to the use of the language or to algorithms
                 Structure: Issues related to the organization of the source code: method size, cyclomatic complexity
                  ...
                 Test: Issues related to unit testing and code coverage
                 Architecture: Issues related to the software architecture
                 Documentation: Issues related to the code documentation: comment headers, inline comments ...
                 Duplication: The “copy-pastes” found in the source code

        Quality factor
        The analysis results are broken down into six quality factors according to the application's quality needs:

                 Efficiency: Does the application ensure the required execution performance?
                 Changeability: Do code changes require increasing development costs?




                 Reliability: Does the application contain bugs that affect its expected behavior?
                 Maintainability: Can maintenance updates be performed at a constant development cost?
                 Security: Does the application have security flaws?
                 Transferability: Can the application be transferred to a new development team without difficulty?

        Statement
        A statement is a primary code unit. For simplicity, a statement is delimited by a semicolon (;) or by a left
        brace ({). Examples of statements in Java:

                 int i = 0;
                 if (i == 0) { } else {}
                 public final class SomeClass {
                 import com.project.SomeClass;
                 package com.project;
        Unlike lines of code, statements do not include blank lines and comment lines. In addition, a line can contain
        multiple statements.
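        The counting rule above can be sketched as a naive counter (an illustration only, not the platform's
        implementation: it ignores string literals and comments, and the class name StatementCounter is
        hypothetical):

```java
// Naive statement counter following the definition above: a statement is
// delimited by a semicolon (;) or a left brace ({).
public class StatementCounter {

    static int countStatements(String code) {
        int count = 0;
        for (char c : code.toCharArray()) {
            if (c == ';' || c == '{') {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // One physical line, two statements:
        System.out.println(countStatements("int a = 1; int b = 2;")); // 2
        // Blank lines and comment lines contain no statements:
        System.out.println(countStatements("// a comment\n\n")); // 0
    }
}
```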







        8 Annex

        8.1 Cyclomatic complexity
        Cyclomatic complexity is an indicator of the number of possible paths of execution.




        A high value indicates that the source code will be hard to understand, test, validate, maintain and
        evolve.

        8.1.1     Definition
        Consider the control-flow graph representing the code whose complexity you want to measure, then count
        the number of faces (regions) of this graph. This count gives the structural complexity of the code, also
        called cyclomatic complexity.
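        For a connected control-flow graph, this face count equals E - N + 2, where E is the number of edges and
        N the number of nodes (Euler's formula). A minimal sketch of this equivalent computation (the class and
        method names are illustrative, not from the report):

```java
// Cyclomatic complexity via Euler's formula: M = E - N + 2 for a
// connected control-flow graph with E edges and N nodes.
public class CyclomaticComplexity {

    static int complexity(int edges, int nodes) {
        return edges - nodes + 2;
    }

    public static void main(String[] args) {
        // A straight-line method: 5 nodes chained by 4 edges -> M = 1.
        System.out.println(complexity(4, 5));
        // A method with a single 'if': 4 nodes and 4 edges -> M = 2,
        // i.e. two independent execution paths to test.
        System.out.println(complexity(4, 4));
    }
}
```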




Java source code analysis for better testing

Java source code analysis for better testing

  • 1. ICESCRUM Application ICESCRUM2 Audit Report 2010-02-10 This document is a sample audit report produced automatically from the results of the analysis of the application on the Kalistick platform. It does not include any specific comments on the results. Its purpose is to serve as a model to build custom reports, it illustrates the ability of the platform to render a clear and comprehensible quality of an application. This document is confidential and is the property of Kalistick. It should not be circulated or modified without permission. Kalistick 13 av Albert Einstein F-69100 Villeurbanne +33 (0) 486 68 89 42 contact@kalistick.com www.kalistick.com
  • 2. Code audit of IceScrum2 application 2010-02-10 1 Executive Summary The Quality Cockpit uses static analysis techniques: it does not execute the application, but analyzes the elements that compose it (code, test results, architecture ...). The results are correlated, aggregated and compared within the project context to identify risks related to quality. This report presents the results. Variation compared to the objective This chart compares the current status of the project to the objectives set for each quality factor. The goal, set at the initialization of the audit, represents the importance of each quality factor. It is intended to define the rules to follow during development and the accepted tolerance. Rate of overall non-compliance This gauge shows the overall level of quality of the application compared to its objective. It displays the percentage of the application (source code) regarded as not-compliant. According to the adopted configuration, a rate higher than 15% indicates the need for further analysis. Origin of non-compliances This graph identifies the technical origin of detected non-compliances, and the main areas of improvement. According to elements submitted for the analysis, some quality domains may not be evaluated. Confidential – This document is the property of Kalistick 2/58
  • 3. Code audit of IceScrum2 application 2010-02-10 Report Organization This report presents the concepts of Quality Cockpit, the goal and the associated technical requirements before proceeding with the summary results and detailed results for each technical area. 1 Executive Summary ...................................................................................................................................... 2 2 Introduction .................................................................................................................................................. 4 2.1 The Quality Cockpit............................................................................................................................... 4 2.2 The analytical........................................................................................................................................ 4 3 Quality objective........................................................................................................................................... 7 3.1 The quality profile ................................................................................................................................ 7 3.2 The technical requirements ................................................................................................................. 7 4 Summary of results..................................................................................................................................... 10 4.1 Project status ...................................................................................................................................... 10 4.2 Benchmarking ..................................................................................................................................... 13 4.3 Modeling application.......................................................................................................................... 
17 5 Detailed results........................................................................................................................................... 19 5.1 Detail by quality factors...................................................................................................................... 19 5.2 Implementation .................................................................................................................................. 20 5.3 Structure ............................................................................................................................................. 24 5.4 Test ..................................................................................................................................................... 31 5.5 Architecture ........................................................................................................................................ 38 5.6 Duplication ......................................................................................................................................... 39 5.7 Documentation................................................................................................................................... 41 6 Action Plan.................................................................................................................................................. 43 7 Glossary ...................................................................................................................................................... 45 8 Annex .......................................................................................................................................................... 47 8.1 Cyclomatic complexity........................................................................................................................ 
47 8.2 The coupling ....................................................................................................................................... 49 8.3 TRI and TEI .......................................................................................................................................... 50 8.4 Technical Requirements ..................................................................................................................... 52 Confidential – This document is the property of Kalistick 3/58
  • 4. Code audit of IceScrum2 application 2010-02-10 2 Introduction 2.1 The Quality Cockpit This audit is based on an industrialized process of code analysis. This industrialization ensures reliable results and easily comparable with the results of other audits. The analysis process is based on the "Quality Cockpit" platform, available through SaaS1 model (https://cockpit.kalistick.com). This platform has the advantage of providing a knowledge base unique in that it centralizes the results from statistical analysis of millions code lines, enriched continuously with new analyses. It allows performing comparative analysis with other similar projects. 2.2 The analytical The analysis focuses on the code of the application (source code and binary code), for Java (JEE) or C# (. Net) technologies. It is a static analysis (without runtime execution), supplemented by correlation with information from development tools already implemented for the project: version control system, unit testing frameworks, code coverage tools. The results are given through an analytical approch based around three main dimensions:  The quality factors, which determine the nature of the impact of non-compliances detected, and the impact on the quality of the application  The quality domains, which specify the technical origin of non-compliances  The severity levels, which positions the non-compliances on a severity scale to characterize their priority 1 Software as a Service: application accessible remotely via Internet (using a standard browser) Confidential – This document is the property of Kalistick 4/58
  • 5. Code audit of IceScrum2 application 2010-02-10 2.2.1 The quality factors The quality factors standardize a set of quality attributes which should claim the application according to ISO 912623:  Maintainability. Ability of software to be easily repaired, depending on the effort required to locate, identify and correct errors.  Reliability. Ability of software to function properly in making the service expected in normal operation.  Changeability. Ability of software to be able to evolve, depending on the effort required to add, delete, and modify the functions of an operating system.  Security. Ability of software to operate within the constraints of integrity, confidentiality and traceability requirements.  Transferability. Ability to perform maintenance and evolution of software by a new team separate from the one which developed the original software.  Efficiency. Relationship between the level of software performance and the number of resources required to operate in nominal conditions. 2.2.2 The quality domains The quality domains determine the nature of problems according to their technical origin. There is six of it:  Implementation. The problems inherent in coding: misuse of language, potential bugs, code hard to understand ... These problems can affect one or more of the six quality factors.  Structure. Problems related to the code organization: methods too long, too complex, with too many dependencies ... These issues impact maintainability and changeability of the application.  Test. Describes how the application is tested based on results of unit tests (failure rate, execution time ...) but also of the nature of the code covered by the test execution. The objective is to ensure that the tests cover the critical parts of the application. 
(2) ISO/IEC 9126-1:2001 Software engineering — Product quality — Part 1: Quality model: http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=22749
(3) The analysis focuses on a subset of ISO 9126 in order to concentrate on dimensions that can be checked automatically.
- Architecture. Problems with the software architecture of the application. The platform allows the definition of an architectural model that modularizes the application into layers or components and defines communication constraints between them. The analysis identifies in the code all the calls which do not satisfy these constraints, in order to detect risks to maintainability, changeability and security.
- Documentation. Problems related to a lack of documentation in the code. This domain primarily impacts the transferability of the code.
- Duplication. Identification of all significant copy-pastes in the application. They impact reliability, maintainability, transferability and changeability.

2.2.3 Severity levels

The severity levels are intended to characterize the priority with which non-compliances should be corrected. This priority depends on the severity of the impact of a non-compliance, but also on the effort required for correction: some moderately critical problems might be marked with a high level of severity because their resolution is trivial. To simplify interpretation, the severity levels are expressed using a four-level scale. The first is an error, the others are warnings, from most to least severe:

- Forbidden
- Highly inadvisable
- Inadvisable
- To be avoided

Compared to the Forbidden level, the other severity levels are managed with a tolerance threshold, which increases as severity decreases.
3 Quality objective

One of the distinctive features of the "Quality Cockpit" is that it performs the analysis according to the real quality needs of the project, in order to avoid unnecessary effort and to ensure that the identified quality risks are relevant. These requirements are formalized by defining the "quality profile" of the application, which characterizes the quality levels expected for each of the six main quality factors. This profile is then translated into "technical requirements", the technical rules to be followed by the developers.

3.1 The quality profile

For this audit, the profile is established as follows:

See the Quality Cockpit

3.2 The technical requirements

Based on the above quality profile, technical requirements have been selected from the "Quality Cockpit" knowledge base. These technical requirements cover the six quality domains (implementation, structure, test, architecture, documentation, duplication) and are configured according to the quality profile (thresholds, severity levels ...). The objective is a calibration of requirements that ensures the highest return on investment.
Here are the details of these technical requirements:

Implementation
  Rules: according to your profile, between 150 and 200 rules were selected. They are exhaustively presented in the appendix of the report (8.4.1 Implementation rules).
  Objective: avoid bad practices and apply the best practices related to the technology used.

Structure
  Size of methods: number of statements. This measure is different from the number of lines of code: it does not include comment lines or blank lines, but only lines with at least one statement.
    Objective: avoid processing blocks that are difficult to understand.
    Threshold for the project: number of lines: 100
  Complexity of methods: cyclomatic complexity of a method. It measures the complexity of the control flow of a method by counting the number of independent paths covering all possible cases. The higher the number, the harder the code is to maintain and test.
    Objective: avoid processing blocks that are difficult to understand, cannot be tested, and tend to have a significant failure rate.
    Threshold for the project: cyclomatic complexity: 20
  Complexity and coupling of methods: identifies methods that are difficult to understand, test and maintain because of moderate complexity (cyclomatic complexity) combined with numerous references to other types (efferent coupling).
    Objective: avoid processing blocks that are difficult to understand and cannot be tested.
    Thresholds for the project: cyclomatic complexity: 15; efferent coupling: 20
Test
  Test coverage of methods: rate of code coverage for a method. This metric is standardized by our platform based on raw code coverage measures when they are provided in the project archive. This rule requires a minimum level of testing (code coverage) for each method of the application according to its TRI (Test Relevancy Index); the TRI of each method assesses the risk that it contains bugs. Its calculation takes into account the business risks defined for the application.
    Objective: focus the test strategy and test efforts on the sensitive areas of the application and check them. These sensitive areas are evaluated according to their propensity to contain bugs and according to the business risks defined for the application. Details of the thresholds are provided in the annex to the report (8.4.2 Code coverage).

Architecture
  Rules defined specifically through the architecture model: see the architecture model defined for the application to check architecture constraints.
    Objective: ensure that developments follow the expected architecture model and do not introduce inconsistencies which could become security holes or maintenance and evolution issues.
    Note: violations of the architecture are not taken into account in the calculation of non-compliance.

Documentation
  Header documentation of methods: identifies methods of moderate complexity which have no documentation header. The methods considered are those whose cyclomatic complexity and number of statements exceed the thresholds defined specifically for the project.
    Objective: ensure that documentation is available for key processing blocks to facilitate changes in the development team (transferability).
    Thresholds for the project: cyclomatic complexity: 10; number of lines: 50

Duplication
  Detection of duplications: duplicated blocks are non-compliant beyond 20 statements.
    Objective: detect identical blocks of code in several places in the application, which often cause inconsistencies when changes are made, and which are a factor of increased testing and development costs.
4 Summary of results

This chapter summarizes the status of the project using global indicators. These indicators measure the intrinsic quality of the project, but also compare its situation to that of other projects using the "Quality Cockpit" knowledge base.

4.1 Project status

The following indicators relate to the intrinsic situation of the project.

4.1.1 Overall non-compliance rate

The non-compliance rate measures the percentage of application code considered non-compliant.

See the Quality Cockpit

Specifically, this is the ratio between the number of statements in non-compliant classes and the total number of statements. A class is considered non-compliant if at least one of the following is true:

- A forbidden non-compliance is detected in the class
- A set of highly inadvisable, inadvisable, or to-be-avoided non-compliances is detected in the class, beyond a certain threshold. This calculation depends on the severity of each non-compliance and on the quality profile, which adjusts the tolerance threshold.
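The rate described above can be reproduced from per-class figures. Here is a minimal sketch, assuming a simplified tolerance model (a single warning threshold per class; the class data are illustrative, not taken from the actual platform):

```python
def non_compliance_rate(classes, tolerance=3):
    """Percentage of statements located in non-compliant classes.

    A class is non-compliant if it has at least one 'forbidden'
    violation, or if its lower-severity violations exceed a
    tolerance threshold (simplified here to a single number).
    """
    total = sum(c["statements"] for c in classes)
    non_compliant = sum(
        c["statements"] for c in classes
        if c["forbidden"] > 0 or c["warnings"] > tolerance
    )
    return 100.0 * non_compliant / total

# Illustrative data: three classes with statement counts and violations
sample = [
    {"statements": 200, "forbidden": 1, "warnings": 0},  # non-compliant
    {"statements": 300, "forbidden": 0, "warnings": 5},  # over tolerance
    {"statements": 500, "forbidden": 0, "warnings": 2},  # compliant
]
print(non_compliance_rate(sample))  # 50.0
```

Note that the rate is weighted by class size: a single non-compliant class of 500 statements weighs more than several small ones.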
4.1.2 Deviation from target

This chart summarizes the difference between the target represented by the quality profile and the current status of the project. This difference is shown for each quality factor:

See the Quality Cockpit

The level of non-compliance is calculated for each quality factor, then weighted by the level of requirements set for that quality factor.

Quality theme      Classes   Significant non-compliances   % application
Changeability      27        107                           29%
Efficiency         7         8                             5%
Maintainability    40        216                           41%
Reliability        40        136                           37%
Security           0         0                             0%
Transferability    32        131                           38%
[Total]            53        264                           49.99%

The detailed results specify, for each quality factor: the number of non-compliant classes, the number of violations of the selected rules, and the percentage of application code located in non-compliant classes.
4.1.3 Origin of non-compliances

The following chart shows the distribution of non-compliances according to their technical origin:

See the Quality Cockpit

This chart compares the quality domains according to the impact on application quality of the rules associated with them. The impact is measured from the number of statements in non-compliant classes.

4.1.4 Volumetry

The following table specifies the volume of the analyzed application:

Metric            Value   Trend
Line count        47671   +14.93%
Statement count   24034   +18.36%
Method count      4384    +13.75%
Class count       230     +10.58%
Package count     43      +4.88%

See the Quality Cockpit

A "line" corresponds to a physical line of a source file. It may be a blank line or a comment line. A "statement" is a primary unit of code: it can be written across multiple lines, and a single line may contain multiple statements. For simplicity, a statement is delimited by a semicolon (;) or a left brace ({).
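The simplified statement definition above can be sketched as a simple counter. This naive version deliberately ignores string literals and comments, which a real analyzer would have to handle:

```python
def count_lines_and_statements(source: str):
    """Count physical lines and statements in a code fragment.

    Per the report's simplified definition, a statement ends at a
    semicolon (;) or a left brace ({). String literals and comments
    are deliberately not handled in this sketch.
    """
    lines = source.count("\n") + 1 if source else 0
    statements = source.count(";") + source.count("{")
    return lines, statements

# A statement spread over two lines still counts once:
java_snippet = """public int add(int a,
               int b) {
    int sum = a + b;
    return sum;
}"""
print(count_lines_and_statements(java_snippet))  # (5, 3)
```

The example shows why line count and statement count diverge: 5 physical lines, but only 3 statements.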
4.2 Benchmarking

The "Quality Cockpit" knowledge base allows a comparative analysis of the project with the other projects reviewed on the platform. The objective is to measure its level of quality against an overall average. This benchmarking is proposed against two categories of projects:

- The "Intra-Cockpit" projects: projects analyzed continuously on the platform, and therefore (a priori) with a quality level above average
- The "Extra-Cockpit" projects: projects reviewed occasionally on the platform in audit mode, and therefore with highly heterogeneous quality

Note: since each project has its own specific quality profile, benchmarking does not take project configuration into account, but uses raw measures instead.

4.2.1 Comparison on implementation issues

The chart below shows the implementation status of the project compared to the Extra-Cockpit projects, i.e. those analyzed on a one-off basis on the platform. For each severity level, the quality of the project is positioned relative to the others:

See the Quality Cockpit
The project is positioned relative to other projects according to the violation rate for each rule. The distribution is based on the quartile method; three groups are distinguished: "Better", the 25% best projects; "On the average", the 50% average projects; "Worse", the 25% worst projects. This information is then synthesized by severity level. The implementation rules compared are not necessarily the same from one quality profile to another; rules are compared according to the severity level set for each project.

The following chart provides the same analysis, but this time against the Intra-Cockpit projects, which are analyzed continuously on the platform and therefore normally have an above-average level of quality, since detected violations are more likely to be corrected:

See the Quality Cockpit

A dominant red color indicates that the other projects tend to correct the violations detected on this project.
4.2.2 Mapping the structure

The following chart compares the size of the methods of the current project with those of the other projects, "Intra-Cockpit" and "Extra-Cockpit", by comparing the proportion of the application (as a percentage of statements) located in processing blocks (methods) with a high number of statements:

See the Quality Cockpit

A significant proportion of the application in the right-hand area is an indicator of greater maintenance and evolution costs.

NB: the application analyzed is indicated by the term "Release".
A similar comparison is provided for the cyclomatic complexity(4) of methods, comparing the proportion of the application (as a percentage of statements) located within complex methods:

See the Quality Cockpit

A significant proportion of the application in the right-hand area indicates not only greater maintenance and evolution costs, but also reliability problems, because this code is difficult to test.

4.2.3 Comparison of main metrics

The following table compares the project with the other projects, "Intra-Cockpit" and "Extra-Cockpit", on the main metrics related to the structure of the code. Recommended intervals are provided for information purposes.

Metric                                Project   Extra-Cockpit   Intra-Cockpit   Recommended interval
Classes per package                   5.35      7.57            50.68           6 - 26
Methods per class                     19.06     10.71           8.74            4 - 10
Statements per method                 5.48      8.05            7.26            7 - 13
Cyclomatic complexity per statement   0.34      0.3             0.29            0.16 - 0.24

See the Quality Cockpit

(4) Cyclomatic complexity measures the complexity of the code, and thus how testable it is, cf. http://classes.cecs.ucf.edu/eel6883/berrios/notes/Paper%204%20(Complexity%20Measure).pdf
4.3 Modeling the application

To facilitate understanding of the analysis results, the application is modeled in two ways: a functional perspective, to better identify the business features of the application and link them to the source code, and a technical perspective, to verify the technical architecture of the application. These models are built using the modeling wizard available in the Cockpit. You can modify them on the Functional modelization and Technical Architecture pages (depending on your user rights).

4.3.1 Functional model

The functional model represents the business view of the application, which can be understood by all project members.

See the Quality Cockpit

The functional model is composed of modules, each one representing a business feature or a group of features. These modules have been identified from a lexical corpus generated from the application code, which makes it possible to isolate the business vocabulary of the application.

4.3.2 Technical model

The technical model represents the technical architecture of the application code. The idea is to define a target architecture model, which identifies the layers and/or technical components within the application, and sets constraints allowing or prohibiting communications between these elements.
The aim is threefold:

- Homogenize the behavior of the application. For example, ensure that logging traces are written through a specific API, that data accesses pass through a dedicated layer, that some third-party library is only used by specific components ...
- Ensure the isolation of some components to facilitate their development and limit unintended consequences, but also to make them shareable with other applications. Dependency cycles, for instance, are forbidden.
- Avoid security flaws, for example by ensuring that calls to the data layer always pass through a business layer in charge of validation controls.

The results of the architecture analysis are provided in chapter 5.5 Architecture.

See the Quality Cockpit

Green arrows formalize allowed communications between modules, while red arrows formalize forbidden communications.
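Such communication constraints can be checked mechanically: each call found in the code is reduced to a (caller layer, callee layer) pair and looked up against the allowed pairs. A minimal sketch, assuming a hypothetical three-layer model (the layer names are illustrative, not the actual IceScrum2 model):

```python
# Hypothetical layered model: presentation -> service -> dao.
# Only the listed (caller, callee) pairs are allowed; a layer may
# always call itself.
ALLOWED = {
    ("presentation", "service"),
    ("service", "dao"),
}

def check_call(caller_layer: str, callee_layer: str) -> bool:
    """Return True if the dependency respects the architecture model."""
    if caller_layer == callee_layer:
        return True
    return (caller_layer, callee_layer) in ALLOWED

# A presentation class calling the DAO layer directly would be flagged
# as an architecture violation:
print(check_call("presentation", "service"))  # True
print(check_call("presentation", "dao"))      # False: bypasses the service layer
```

A whitelist model like this is what makes the "calls to the data layer always pass through a business layer" rule enforceable by static analysis.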
5 Detailed results

This chapter details the results by focusing, for each quality domain, on the non-compliant elements.

5.1 Detail by quality factor

The histogram below details the non-compliance rate for each quality factor, also displaying the number of non-compliant classes. As a reminder, the non-compliance rate is the number of statements defined in non-compliant classes compared to the total number of statements in the project. These non-compliance rates depend directly on the quality profile and on the level of requirements that has been selected:

See the Quality Cockpit

The same class may be non-compliant on several factors, so the total does not necessarily correspond to the sum of the factors.
5.2 Implementation

The Implementation domain covers the rules related to coding techniques. Unlike the other domains, these rules are often specific to the characteristics of a language (Java / C#). They identify, for example:

- Potential bugs: uninitialized variables, concurrency issues, recursive calls ...
- Optimizations in terms of memory or CPU
- Security vulnerabilities
- Obsolete code
- Code deviating from recommended standards
- ...

Implementation rules are the most numerous of the technical requirements. They are called "practices".

5.2.1 Breakdown by severity

The objective of this indicator is to identify the severity of the practices that led to the invalidation of the classes. Here, severity is divided into two levels: forbidden practices (Forbidden severity level) and inadvisable practices (Highly inadvisable, Inadvisable and To be avoided severity levels). The following pie chart compares the number of classes that are non-compliant in implementation, according to the practices that participated in this invalidation:

- When a class only violates forbidden practices, it is in the group "Forbidden practices"
- When a class only violates inadvisable practices, it is in the group "Inadvisable practices"
- Otherwise, the class violates practices of both categories and is in the group "Inadvisable and forbidden practices"

See the Quality Cockpit
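The three-group classification above can be sketched as a small decision function over per-class violation counts (the counts are made up for illustration; only classes already known to be non-compliant are classified):

```python
def pie_group(forbidden: int, inadvisable: int) -> str:
    """Classify a non-compliant class into one of the three pie-chart
    groups, from its counts of forbidden and inadvisable violations."""
    if forbidden and inadvisable:
        return "Inadvisable and forbidden practices"
    if forbidden:
        return "Forbidden practices"
    return "Inadvisable practices"

print(pie_group(2, 0))  # Forbidden practices
print(pie_group(1, 3))  # Inadvisable and forbidden practices
print(pie_group(0, 4))  # Inadvisable practices
```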
The correction effort for forbidden practices is generally lower than for the lower severities: a single violation is sufficient to cause a forbidden non-compliance, whereas several inadvisable practices are needed to cause a non-compliance, depending on the tolerance thresholds.

The table below completes the previous chart by introducing the concept of "significant non-compliance". A significant violation is a violation whose correction can fully or partially fix the non-compliance of a class. Indeed, due to the tolerance thresholds associated with severity levels, the correction of some violations has no impact on the non-compliance of the class.

Severity             Significant non-compliances   New non-compliances   Corrected non-compliances   Other non-compliances
Forbidden            14                            3                     2                           0
Highly inadvisable   71                            29                    7                           70
Inadvisable          44                            48                    4                           29
To be avoided        35                            26                    10                          107

The columns "New non-compliances" and "Corrected non-compliances" are only relevant if the audit follows a previous audit.
5.2.2 Practices to fix as a priority

The two following tables list the forbidden practices and highly inadvisable practices detected in the application. These are generally the rules to correct first. For each practice, the tables provide the number of new non-compliances (if a previous audit has been done), the total number of non-compliances for this practice, the number of non-compliant classes where this practice has been detected, and the percentage of statements of these classes compared to the overall number of statements in the project. These figures help to set up an action plan based on the impact associated with each practice.

5.2.2.1 Forbidden practices

Practice                                                     New   Non-compliances   NC classes   % application
DontUseNewToInstantiateIntegers                              0     6                 5            2.33%
AlwaysDeclareCloneableInterfaceWhenImplementingCloneMethod   3     3                 3            2.01%
AlwaysSynchronizeDateFormatter                               0     1                 1            1%
DontUseNewToInstantiateStrings                               0     1                 1            2.61%
MisplacedNullCheck                                           0     1                 1            1%
NPEAlwaysThrown                                              0     1                 1            1%
UseAppendMethodForStringBuffer                               0     1                 1            1%

See the Quality Cockpit

5.2.2.2 Highly inadvisable practices

Practice                                  New   Non-compliances   NC classes   % application
TraceErrorsWithLogger                     20    80                28           33.98%
NeverMakeCtorCallInnerMethod              3     27                18           26.38%
UseLoggerRatherThanPrintMethods           4     27                7            9.47%
DontAssignVariablesInOperands             2     5                 2            3.04%
DontIgnoreMethodsReturnValue              0     1                 1            1.56%
OverrideEqualsWhenImplementingCompareTo   0     1                 0            1%

See the Quality Cockpit

5.2.3 Classes to fix as a priority on implementation issues

The two following tables provide an additional view of the impact of implementation issues by listing the main classes involved in forbidden or highly inadvisable practices.
For each class, the table gives the compliance status of the class, the number of new violations (if a previous audit has been done), the number of existing violations (of forbidden or highly inadvisable practices), and the number of statements.

5.2.3.1 Classes with forbidden practices

Class                                               NC    New   Non-compliances   Statements
icescrum2.dao.model.impl.RemainingEstimationArray   Yes   0     2                 74
icescrum2.dao.model.impl.Sprint                     Yes   1     2                 218
icescrum2.service.chart.PointBurnupChartProduct     Yes   0     2                 121
icescrum2.presentation.app.chat.PrivateChat         Yes   0     1                 184
icescrum2.service.impl.HibernateManagerImpl         Yes   0     1                 76
icescrum2.dao.impl.ProductDao                       Yes   0     1                 69
icescrum2.dao.impl.UserDao                          Yes   0     1                 76
icescrum2.dao.model.ISprint                         Yes   1     1                 56
icescrum2.presentation.model.SprintImpl             Yes   1     1                 208
icescrum2.service.impl.ExportXMLServiceImpl         Yes   0     1                 628

See the Quality Cockpit
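Rules such as DontUseNewToInstantiateIntegers and DontUseNewToInstantiateStrings can be illustrated with a crude textual check over Java source. A real analyzer works on the syntax tree or bytecode; this regex sketch is only for illustration and will miss or over-match edge cases:

```python
import re

# Flag explicit constructor calls that allocate a new wrapper object
# where a literal or a factory method would do, e.g. `new Integer(42)`.
PATTERN = re.compile(r"new\s+(Integer|String)\s*\(")

def find_violations(java_source: str):
    """Return the wrapper types instantiated with `new` in the source."""
    return [m.group(1) for m in PATTERN.finditer(java_source)]

bad = 'Integer count = new Integer(42); String s = new String("x");'
good = 'Integer count = Integer.valueOf(42); String s = "x";'
print(find_violations(bad))   # ['Integer', 'String']
print(find_violations(good))  # []
```

The compliant version avoids needless object allocation, which is exactly the efficiency concern these practices target.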
5.2.3.2 Classes with highly inadvisable practices

Class                                                        NC    New   Non-compliances   Statements
icescrum2.service.impl.RepositoryServiceImpl                 Yes   0     13                51
icescrum2.service.impl.ExportPDFServiceImpl                  Yes   1     9                 455
icescrum2.service.impl.HibernateManagerImpl                  Yes   0     5                 76
icescrum2.listeners.IS2ServletListener                       Yes   0     4                 20
icescrum2.presentation.broadcast.RenderableSession           Yes   0     4                 110
icescrum2.service.chart.BurndownChartProduct                 Yes   0     4                 172
icescrum2.service.chart.GlobalChartTest                      Yes   0     4                 122
icescrum2.service.impl.ConfigurationServiceImpl              Yes   1     4                 103
icescrum2.service.impl.ExportXMLServiceImpl                  Yes   0     4                 628
icescrum2.service.impl.ImportXMLServiceImpl                  Yes   0     4                 482
icescrum2.service.impl.UserServiceImpl                       Yes   0     4                 75
icescrum2.presentation.app.productbacklog.ProductBacklogUI   Yes   2     3                 817
icescrum2.service.chart.BurnupChartProduct                   Yes   0     3                 184
icescrum2.service.chart.PointBurnupChartProduct              Yes   0     3                 121
icescrum2.service.impl.ExportPDFSprint                       Yes   3     3                 97
icescrum2.presentation.app.product.ProductUI                 Yes   0     2                 375
icescrum2.dao.impl.ExceptionManager                          No    0     2                 103
icescrum2.dao.impl.ProblemDao                                No    0     2                 51
icescrum2.filters.OpenSessionInViewPhaseListener             No    0     2                 97

See the Quality Cockpit

5.3 Structure

The Structure domain targets rules related to the code structure, for example:

- The size of methods
- The cyclomatic complexity of methods
- Coupling, i.e. the dependencies of methods on other classes

The objective is to ensure that the code is structured in such a way that it can be easily maintained, tested, and evolved. These rules are "metrics": they measure values (e.g. a number of statements) and are conditioned by thresholds (e.g. 100 statements per method). Only metrics on which developers are able to act are presented here. They apply to all methods.
5.3.1 Typology of structural problems

This histogram shows, for each rule of the Structure domain, the number of non-compliances (thus methods) and the percentage of related statements compared to the total number of statements in the application:

See the Quality Cockpit

The percentage of statements shown is interesting because there are often only a few methods concentrating a large part of the application code. When some rules have been configured to be excluded from the analysis, they are displayed in this chart but without any results. One method may be affected by several rules; therefore, the total does not correspond to the sum of the numbers.

The following table completes this view by introducing the number of new violations and the number of corrected violations, in the case where a previous audit was conducted:

Anomaly                                Significant non-compliances   New non-compliances   Corrected non-compliances   NC rate
Statement count higher than 100        2                             1                     0                           1%
Cyclomatic complexity higher than 20   14                            4                     0                           3%
See the Quality Cockpit

5.3.2 Mapping methods by size

The histogram below shows a mapping of methods according to their size. The size is expressed in number of statements, in order to ignore code-writing style conventions. The last interval identifies the methods whose number of statements exceeds the threshold. These methods are considered non-compliant because they are generally difficult to maintain and extend, and also show a high propensity to reveal bugs because they are difficult to test. The percentage of statements is provided because the largest methods usually concentrate a significant part of the application:

See the Quality Cockpit

The following table details the main non-compliant methods identified in the last interval of the previous chart:
Method                                                       Statements   Lines   Complexity   New violation
icescrum2.service.impl.ExportPDFServiceImpl.addReleasePlan(  131          227     38           New
  icescrum2.dao.model.IUser, int[], int[],
  icescrum2.dao.model.IProduct)
icescrum2.service.impl.ClicheServiceImpl.createCliche(       208          343     42
  icescrum2.dao.model.IProduct, java.util.Date)

5.3.3 Mapping methods by complexity

The histogram below shows a mapping of methods according to their cyclomatic complexity (see 8.1 Cyclomatic complexity). Cyclomatic complexity is a measure that characterizes the complexity of a block of code by identifying all its possible execution paths. This concept was standardized by McCabe(5), but several calculation methods exist. The one used here is the most popular and the simplest: it counts the number of branching operators (if, for, while, ? ...) and condition operators (||, && ...). The last interval identifies the methods whose complexity exceeds the threshold. These methods are considered non-compliant for the same reasons as long methods: they are generally difficult to maintain and extend, and also show a high propensity to reveal bugs. The percentage of statements and the percentage of complexity are provided because the most complex methods generally concentrate a significant part of the application.

See the Quality Cockpit

(5) 1976, IEEE Transactions on Software Engineering: 308–320. http://classes.cecs.ucf.edu/eel6883/berrios/notes/Paper%204%20(Complexity%20Measure).pdf
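The counting method described above (1 plus the number of decision points) can be sketched as a token count over a method body. This naive version ignores comments, string literals, and other subtleties a real analyzer handles:

```python
import re

# Decision points per the report's definition: branching keywords
# and short-circuit condition operators, plus the ternary operator.
BRANCH_KEYWORDS = r"\b(if|for|while|case|catch)\b"
CONDITION_OPS = r"(&&|\|\|)"
TERNARY = r"\?"

def cyclomatic_complexity(method_body: str) -> int:
    """Approximate McCabe complexity: 1 + number of decision points."""
    decisions = (
        len(re.findall(BRANCH_KEYWORDS, method_body))
        + len(re.findall(CONDITION_OPS, method_body))
        + len(re.findall(TERNARY, method_body))
    )
    return 1 + decisions

body = """
if (sprint != null && sprint.isActive()) {
    for (Task t : sprint.getTasks()) {
        if (t.isDone()) { done++; }
    }
}
"""
print(cyclomatic_complexity(body))  # 5: two if, one for, one &&, plus 1
```

A straight-line method with no branches scores 1, the minimum.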
The following table details the main non-compliant methods identified in the last interval of the previous chart:

Method                                                       Statements   Lines   Complexity   New violation
icescrum2.service.impl.ExportPDFServiceImpl.addReleasePlan(  131          227     38           New
  icescrum2.dao.model.IUser, int[], int[],
  icescrum2.dao.model.IProduct)
icescrum2.service.impl.ReleaseServiceImpl.saveRelease(       59           84      30           New
  icescrum2.dao.model.IRelease, icescrum2.dao.model.IProduct,
  boolean, icescrum2.dao.model.IUser)
icescrum2.service.impl.SprintServiceImpl.closeSprint(        70           120     21           New
  icescrum2.dao.model.IRelease, icescrum2.dao.model.ISprint,
  icescrum2.dao.model.IUser, icescrum2.dao.model.IProduct)
icescrum2.service.impl.SprintServiceImpl.saveSprint(         43           69      20           New
  icescrum2.dao.model.ISprint, icescrum2.dao.model.IRelease,
  java.lang.Integer, icescrum2.dao.model.IUser,
  icescrum2.dao.model.IProduct)
icescrum2.dao.model.impl.Sprint.equals(                      47           70      84
  java.lang.Object)
icescrum2.service.impl.ClicheServiceImpl.createCliche(       208          343     42
  icescrum2.dao.model.IProduct, java.util.Date)
icescrum2.service.impl.ImportXMLServiceImpl.parseProduct(    73           169     41
  org.w3c.dom.Element)
icescrum2.dao.model.impl.ProductBacklogItem.equals(          45           40      37
  java.lang.Object)
icescrum2.dao.model.impl.Build.equals(                       29           27      24
  java.lang.Object)
icescrum2.dao.model.impl.CustomRole.equals(                  27           27      24
  java.lang.Object)
icescrum2.service.impl.UserServiceImpl.saveUser(             24           36      23
  icescrum2.dao.model.IUser)
icescrum2.dao.model.impl.ExecTest.equals(                    27           25      22
  java.lang.Object)
icescrum2.dao.model.impl.Task.equals(                        27           26      22
  java.lang.Object)
icescrum2.dao.model.impl.Test.equals(                        27           25      22
  java.lang.Object)

5.3.4 Mapping methods by complexity and efferent coupling

This rule is intended to identify methods whose code has many dependencies on other classes. The concept of "efferent coupling" refers to those outgoing dependencies.
The principle is that a method with strong efferent coupling is difficult to understand, maintain and test: first because understanding it requires knowledge of the different types it depends on, and second because these dependencies increase the risk of destabilization.
This rule is crossed with cyclomatic complexity in order to ignore some trivial methods, such as initialization methods of graphical interfaces, which call many widget classes without presenting any real complexity. This rule considers that a method is non-compliant if it exceeds both an efferent coupling threshold and a cyclomatic complexity threshold. The chart below shows a mapping of methods according to their complexity and their efferent coupling. Each dot represents one or more methods with the same values of complexity and coupling. They are divided into four zones according to their status in relation to both thresholds:

- The lower-left area (green dots) contains compliant methods, below both thresholds
- The lower-right area (gray dots) contains compliant methods; they have reached the complexity threshold but remain below the coupling threshold
- The upper-left area (gray dots) contains compliant methods; they have reached the coupling threshold but remain below the complexity threshold
- The upper-right area (red dots) contains non-compliant methods, above both thresholds

See the Quality Cockpit

The intensity of the color of a dot depends on the number of methods that share the same values of complexity and coupling: the darker the dot, the more methods are involved.
The histogram below provides an additional view of this mapping, with precise figures for the four zones in terms of percentage of methods and statements of the application. The last bars indicate the non-compliance area:

See the Quality Cockpit

The following table details the main non-compliant methods:

Method                                                       Efferent coupling   Complexity   New violation
icescrum2.service.impl.ExportPDFServiceImpl.addReleasePlan(  35                  38           New
  icescrum2.dao.model.IUser, int[], int[],
  icescrum2.dao.model.IProduct)
icescrum2.service.impl.SprintServiceImpl.closeSprint(        29                  21
  icescrum2.dao.model.IRelease, icescrum2.dao.model.ISprint,
  icescrum2.dao.model.IUser, icescrum2.dao.model.IProduct)
icescrum2.service.impl.ImportXMLServiceImpl.parseProduct(    23                  41
  org.w3c.dom.Element)
icescrum2.service.impl.SprintServiceImpl.autoSaveSprint(     21                  16
  icescrum2.dao.model.IRelease, icescrum2.dao.model.IUser,
  icescrum2.dao.model.IProduct)
icescrum2.service.impl.SprintServiceImpl.saveSprint(         20                  20
  icescrum2.dao.model.ISprint, icescrum2.dao.model.IRelease,
  java.lang.Integer, icescrum2.dao.model.IUser,
  icescrum2.dao.model.IProduct)
icescrum2.service.impl.ImportXMLServiceImpl.importProduct(   20                  18           New
  java.io.InputStream, icescrum2.dao.model.IUser,
  icescrum2.service.beans.ProgressObject)

See the Quality Cockpit
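Method-level efferent coupling can be approximated as the number of distinct external types a method references. A sketch over a prepared list of referenced type names (illustrative input; a real analyzer extracts these references from the compiled code):

```python
def efferent_coupling(own_prefix: str, referenced_types: list[str]) -> int:
    """Number of distinct types referenced outside the method's own class.

    `referenced_types` is assumed to hold the fully qualified names of
    the types the method body refers to; names starting with the
    method's own class prefix are not counted as outgoing dependencies.
    """
    return len({t for t in referenced_types if not t.startswith(own_prefix)})

# Hypothetical references for a method of SprintServiceImpl:
refs = [
    "icescrum2.dao.model.ISprint",
    "icescrum2.dao.model.IRelease",
    "java.util.Date",
    "java.util.Date",  # duplicates count once
    "icescrum2.service.impl.SprintServiceImpl.Helper",  # internal, excluded
]
print(efferent_coupling("icescrum2.service.impl.SprintServiceImpl", refs))  # 3
```

Deduplicating matters: calling `java.util.Date` ten times is still a single outgoing dependency.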
5.4 Test

The Test domain provides rules to ensure that the application is sufficiently tested, quantitatively but also qualitatively, i.e. that the tests target risk areas.

5.4.1 Issues

To understand the analysis results for this domain, it is important to understand the problems inherent in managing tests.

5.4.1.1 Unit testing and code coverage

The results of this domain depend on the testing process applied to the project: if automated unit testing and/or code coverage measurement are implemented on the project, the analysis uses their results. As a reminder, unit testing and code coverage must be distinguished:

- A unit test is an automated test, which usually focuses on a single method of the source code. But since this method generally has dependencies on other methods or classes, a unit test may exercise a more or less significant part of the application (the larger this part, the less focused, and therefore less relevant, the test).
- Code coverage measures the amount of code executed by the tests, by identifying each element actually executed at runtime (statements, conditional branches, methods...). These tests can be unit tests (automated) or integration/functional tests (manual or automated).

Code coverage is worth combining with unit tests because it is the only way to measure which code is actually tested. However, many projects still do not measure code coverage, which prevents this type of analysis from verifying the quality of their testing. The indicators presented next address both cases: they are useful for projects with unit tests and/or code coverage, but also for other projects.

5.4.1.2 Relevance of code coverage

Code coverage produces figures indicating the proportion of code executed by the tests, for example "68% of the statements of a method are covered" or "57% of the project statements are covered". The problem is that these figures do not take into account how relevant the covered code is to test. For example, 70% application coverage is a good figure, but the covered code could be trivial and of no real interest for testing (e.g. accessors or generated code), while the critical code may be located in the remaining 30%. The analysis performed here assesses the relevance of testing each method, which is used to calibrate the code coverage requirements and to set appropriate thresholds that target the testing effort towards risk areas.
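The difference between a raw coverage figure and a relevance-aware one can be illustrated with a small sketch. The weighting scheme below is an invention for illustration; the platform derives its own weighting from the indexes described in the next section.

```java
// Illustrative only: raw statement coverage vs. risk-weighted coverage.
// The per-statement weights are hypothetical, standing in for "relevance to test".
public class CoverageRelevance {
    /** Raw coverage: covered statements divided by total statements. */
    public static double rawCoverage(int covered, int total) {
        return (double) covered / total;
    }

    /** Weighted coverage: each statement counts proportionally to its risk weight. */
    public static double weightedCoverage(double[] weights, boolean[] covered) {
        double coveredWeight = 0, totalWeight = 0;
        for (int i = 0; i < weights.length; i++) {
            totalWeight += weights[i];
            if (covered[i]) coveredWeight += weights[i];
        }
        return coveredWeight / totalWeight;
    }

    public static void main(String[] args) {
        // 7 of 10 statements covered: 70% raw coverage...
        System.out.println(rawCoverage(7, 10));
        // ...but if the 3 uncovered statements carry most of the risk,
        // the weighted figure tells a very different story (about 21%).
        double[] weights = {1, 1, 1, 1, 1, 1, 1, 9, 9, 9};
        boolean[] cov    = {true, true, true, true, true, true, true, false, false, false};
        System.out.println(weightedCoverage(weights, cov));
    }
}
```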
5.4.2 TestRelevancyIndex (TRI) and TestEffortIndex (TEI) metrics

To refine the analysis of tests, two new metrics were designed by the Centre of Excellence in Information and Communication Technologies (CETIC), based on research conducted over the past 20 years and on the Quality Cockpit knowledge base [6].

The TestRelevancyIndex (TRI) measures the relevancy of testing a method according to its technical risk and its business risk. The technical risk assesses the probability of finding a defect; it is based on various metrics such as cyclomatic complexity, number of variables, number of parameters, efferent coupling, cumulative number of non-compliances... The business risk associates a risk factor with business features that should be tested in priority (higher risk) or, on the contrary, that need not be tested (minor risk). It must be determined at the initialization of the audit in order to be included in the TRI calculation. The objective is to guide the testing effort towards the important features.

The TRI is used to rank methods on a testing-priority scale, and thus to distinguish the methods that are truly relevant to test from trivial or irrelevant ones. For each level of the scale, a specific code-coverage threshold can be set: a high threshold for critical methods, a low threshold for low-priority methods.

The TestEffortIndex (TEI) complements the TRI by measuring the level of effort required to test a method. Like the TRI, it is based on a set of unit metrics characterizing the method. It helps refine the selection of the code to be tested by balancing the effort against the relevance of the test. The details of the calculation of these two indexes are provided in the annex (8.2 The coupling).
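A TRI-style score could be sketched as below. The real TRI and TEI formulas are statistically calibrated by CETIC and Kalistick and are not reproduced here; every weight in this sketch is an assumption made for illustration only.

```java
// Hypothetical sketch of a TRI-style score: a technical-risk figure built from
// per-method metrics, scaled by a business-risk factor set at audit initialization.
// All weights are invented; the calibrated formulas are not public.
public class TestRelevancyIndexSketch {
    /** Technical risk: a weighted sum of per-method metrics (weights are assumptions). */
    public static double technicalRisk(int complexity, int variables, int parameters,
                                       int efferentCoupling, int nonCompliances) {
        return 0.5 * complexity + 0.2 * variables + 0.3 * parameters
             + 0.4 * efferentCoupling + 1.0 * nonCompliances;
    }

    /** TRI-style score: technical risk scaled by a business-risk factor (e.g. 0.5 to 2.0). */
    public static double tri(double technicalRisk, double businessRiskFactor) {
        return technicalRisk * businessRiskFactor;
    }

    public static void main(String[] args) {
        // A complex, highly coupled method behind a high-priority business feature
        double risk = technicalRisk(38, 12, 4, 35, 2);
        System.out.println(tri(risk, 1.5));
    }
}
```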
5.4.3 Mapping methods by testing priority

The histogram below maps methods according to their testing priority, using a four-level scale based on the TRI of each method (each level corresponding to a range of TRI values). This mapping uses code coverage information only if it was supplied for the analysis. For each priority level are indicated:

- the average coverage rate (0 if coverage information was not provided)
- the number of methods not covered (no coverage)
- the number of methods insufficiently covered (coverage rate below the target rate set for this priority level)
- the number of methods sufficiently covered (coverage greater than or equal to the target rate set for this priority level)

[6] CETIC, Kalistick. Statistically Calibrated Indexes for Unit Test Relevancy and Unit Test Writing Effort, 2010
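The priority scale can be sketched as a pair of lookup functions: one mapping a TRI value to a priority level, one mapping a priority level to its coverage target. The TRI ranges below were chosen to be consistent with the examples in this report (e.g. TRI 39.0 is Critical, TRI 33.0 is High), and the coverage targets are assumptions, not the platform's calibrated values.

```java
// Sketch of the TRI-based priority scale and per-level coverage targets.
// Range boundaries and target rates are hypothetical.
public class TestPriorityScale {
    public static String priority(double tri) {
        if (tri >= 35) return "Critical";
        if (tri >= 30) return "High";
        if (tri >= 20) return "Medium";
        if (tri >= 10) return "Low";
        return "None";
    }

    /** Target coverage rate (percent) for a given priority level (assumed values). */
    public static int targetCoverage(String priority) {
        switch (priority) {
            case "Critical": return 90;
            case "High":     return 75;
            case "Medium":   return 50;
            case "Low":      return 25;
            default:         return 0;
        }
    }

    public static void main(String[] args) {
        // parseProduct: TRI 39.0 and 0% coverage -> Critical and far below target
        String p = priority(39.0);
        System.out.println(p + ", target " + targetCoverage(p) + "%");
    }
}
```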
The table below shows these figures for each priority level, adding a fifth level for methods without test priority:

Test priority    Covered    Uncovered    Insufficiently covered
Critical              0            3             2
High                  4           13             5
Medium                6           46             2
Low                  18           96             3
None                115         3093             0
[Total]             143         3251            12

See the Quality Cockpit

5.4.4 Coverage of the application by tests

This graph, called a "TreeMap", shows the code coverage of the application against the test objectives. It helps identify parts of the application that are not sufficiently tested with respect to the identified risks. It groups the classes of the project into technical subsets and characterizes them along two dimensions:

- size, which depends on the number of statements
- color, which represents the deviation from the test objective set for the classes: red indicates that the current coverage is far from the goal, whereas green indicates that the goal is reached
See the Quality Cockpit

A class can be green even if it is untested or only lightly tested: for example, classes with a low probability of technical defects or without business risk. Conversely, an already-tested class can be flagged as insufficient (red/brown) if its objective is very demanding. An effective strategy to improve coverage is to focus on large classes close to their goal.

5.4.5 Most important classes to test (Top Risks)

The following chart makes it possible to quickly identify the most relevant classes to test, the "Top Risks". It is a representation known as a "cloud", which displays the classes along two dimensions:

- the size of the class name depends on its relevancy to testing (the TRI accumulated over all methods of the class)
- the color represents the deviation from the coverage goal set for the class, just as in the previous TreeMap
See the Quality Cockpit

This representation identifies the critical elements, but to take the effort of writing tests into account, you should turn to the following representation when selecting the items to address.

5.4.6 Most important classes to test requiring the least effort (Quick Wins)

The "Quick Wins" view complements the "Top Risks" by taking into account the effort required to test each class (TEI):

- the size of the class name depends on its relevancy to testing (TRI), weighted by the required effort (TEI accumulated over all methods): a class with a high TRI and a high TEI (therefore difficult to test) appears smaller than a class with an average TRI but a low TEI
- the color represents the deviation from the coverage goal set for the class, just as in the previous representations

See the Quality Cockpit

5.4.7 Methods to test in priority
The following table details the main methods to be tested first. Each method is associated with its current coverage rate, the raw value of its TRI and its TEI level:
For each method: current coverage, TRI (with its priority level), test effort, and whether the violation is new since the previous audit:

- icescrum2.service.impl.ExportPDFServiceImpl.addReleasePlan(icescrum2.dao.model.IUser, int[], int[], icescrum2.dao.model.IProduct): coverage 0%, TRI 39.0, priority Critical, effort Very high (new violation)
- icescrum2.service.impl.ImportXMLServiceImpl.parseProduct(org.w3c.dom.Element): coverage 0%, TRI 39.0, priority Critical, effort Very high
- icescrum2.service.impl.ClicheServiceImpl.createCliche(icescrum2.dao.model.IProduct, java.util.Date): coverage 0%, TRI 37.0, priority Critical, effort Very high
- icescrum2.service.impl.ProductBacklogServiceImpl.saveProductBacklogitem(icescrum2.dao.model.IStory, icescrum2.dao.model.IProduct, icescrum2.dao.model.ISprint, icescrum2.dao.model.IUser, icescrum2.dao.model.ICustomRole): coverage 76%, TRI 36.0, priority Critical, effort High
- icescrum2.service.impl.TaskServiceImpl.updateTask(icescrum2.dao.model.ITask, icescrum2.dao.model.IUser, icescrum2.dao.model.IProduct, java.lang.String): coverage 51%, TRI 35.0, priority Critical, effort High
- icescrum2.service.impl.ProductBacklogServiceImpl.associateItem(icescrum2.dao.model.ISprint, icescrum2.dao.model.IStory, icescrum2.dao.model.IProduct, icescrum2.dao.model.ISprint, icescrum2.dao.model.IUser): coverage 60%, TRI 34.0, priority High, effort High
- icescrum2.dao.model.impl.Sprint.equals(java.lang.Object): coverage 0%, TRI 33.0, priority High, effort Very high
- icescrum2.service.impl.ExportPDFServiceImpl.addProject(java.util.HashMap, icescrum2.dao.model.IProduct, icescrum2.dao.model.IUser): coverage 0%, TRI 33.0, priority High, effort High (new violation)
- icescrum2.service.chart.VelocityChartSprint.init(): coverage 0%, TRI 33.0, priority High, effort High
- icescrum2.service.chart.BurndownChartRelease.init(): coverage 0%, TRI 33.0, priority High, effort High
- icescrum2.service.impl.ReleaseServiceImpl.updateRelease(icescrum2.dao.model.IRelease, icescrum2.dao.model.IProduct): coverage 45%, TRI 32.0, priority High, effort High
- icescrum2.service.impl.TestServiceImpl.saveTest(icescrum2.dao.model.ITest, icescrum2.dao.model.IStory, icescrum2.dao.model.IUser): coverage 79%, TRI 32.0, priority High, effort High
- icescrum2.service.impl.SprintServiceImpl.autoSaveSprint(icescrum2.dao.model.IRelease, icescrum2.dao.model.IUser, icescrum2.dao.model.IProduct): coverage 0%, TRI 32.0, priority High, effort Very high
- icescrum2.service.impl.SprintServiceImpl.calculateDailyHours(icescrum2.dao.model.ISprint, int): coverage 47%, TRI 32.0, priority High, effort High
- icescrum2.service.impl.SprintServiceImpl.closeSprint(icescrum2.dao.model.IRelease, icescrum2.dao.model.ISprint, icescrum2.dao.model.IUser, icescrum2.dao.model.IProduct): coverage 61%, TRI 32.0, priority High, effort Very high
- icescrum2.dao.model.impl.ProductBacklogItem.equals(java.lang.Object): coverage 0%, TRI 31.0, priority High, effort High
- icescrum2.service.impl.ProductBacklogServiceImpl.changeRank(icescrum2.dao.model.IProduct, icescrum2.dao.model.IStory, icescrum2.dao.model.IStory, icescrum2.dao.model.IUser): coverage 0%, TRI 31.0, priority High, effort High
- icescrum2.service.impl.ProductBacklogServiceImpl.getStory(org.w3c.dom.Element, java.util.Map): coverage 0%, TRI 30.0, priority High, effort High
- icescrum2.service.impl.ProductBacklogServiceImpl.updateProductBacklogItem(icescrum2.dao.model.IStory, icescrum2.dao.model.IUser, icescrum2.dao.model.IProduct, icescrum2.dao.model.ISprint, icescrum2.dao.model.ICustomRole): coverage 0%, TRI 30.0, priority High, effort High

See the Quality Cockpit

5.5 Architecture

The Architecture domain aims to monitor compliance with a software architecture model. The target architecture model was presented in chapter 4.3.2 Technical model. The following diagram shows the results of the architecture analysis by comparing this target model with the current application code. Currently, architecture non-compliances are not taken into account in the calculation of the non-compliance rate of the application.
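The layered-communication constraints checked here can be sketched as a simple rule over layer indices. The layer names and the adjacency policy below are assumptions chosen to mirror the report's conventions (non-adjacent downward calls are tolerated with a warning, upward calls are prohibited), not the actual target model of chapter 4.3.2.

```java
import java.util.List;

// Sketch of a layered-architecture check: calls may go to the same layer or to
// the layer directly below; skipping a layer yields a warning ("orange arrow"),
// and calling upward is forbidden ("black arrow"). Layer names are hypothetical.
public class LayerRuleSketch {
    static final List<String> LAYERS = List.of("presentation", "service", "dao");

    /** Returns "ok", "warning" (non-adjacent downward call) or "forbidden" (upward call). */
    public static String check(String callerLayer, String calleeLayer) {
        int from = LAYERS.indexOf(callerLayer);
        int to = LAYERS.indexOf(calleeLayer);
        if (to == from || to == from + 1) return "ok"; // same or adjacent lower layer
        if (to > from + 1) return "warning";           // skips at least one layer
        return "forbidden";                            // bottom-up call
    }

    public static void main(String[] args) {
        System.out.println(check("presentation", "service")); // ok
        System.out.println(check("presentation", "dao"));     // warning
        System.out.println(check("dao", "service"));          // forbidden
    }
}
```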
See the Quality Cockpit

Non-compliances related to communication constraints between two elements are represented by arrows. The starting point is the calling element; the destination is the called one. Orange arrows indicate direct communication between a top layer and a non-adjacent bottom layer (sometimes acceptable). Black arrows indicate communications that are strictly prohibited.

5.6 Duplication

The Duplication domain covers the "copy-and-paste" code identified in the application. To avoid many false positives in this domain, a threshold is defined to ignore blocks with few statements. Duplications should be avoided for several reasons: maintenance and changeability issues, testing costs, lack of reliability...

5.6.1 Mapping of duplications

The chart below shows a mapping of the duplications found in the application. It does not take into account duplications involving a number of statements below the threshold, because they are numerous and mostly irrelevant (e.g. duplication of accessors between classes sharing similar properties).
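The core of duplicate-block detection with a minimum-size threshold can be sketched as follows. This is a deliberately naive illustration (it compares raw statement text, whereas a real analysis typically normalizes identifiers and literals); the class name and the window-based approach are assumptions, not the platform's actual algorithm.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Naive duplicate detection: index every window of n consecutive statements and
// keep only the windows that occur at more than one position. The parameter n
// plays the role of the minimum-block-size threshold mentioned above.
public class DuplicationFinderSketch {
    public static Map<String, List<Integer>> findDuplicates(List<String> statements, int n) {
        Map<String, List<Integer>> blocks = new HashMap<>();
        for (int i = 0; i + n <= statements.size(); i++) {
            String key = String.join(";", statements.subList(i, i + n));
            blocks.computeIfAbsent(key, k -> new ArrayList<>()).add(i);
        }
        blocks.values().removeIf(positions -> positions.size() < 2); // keep real duplicates
        return blocks;
    }

    public static void main(String[] args) {
        List<String> code = List.of("a=1", "b=2", "c=a+b", "x=9", "a=1", "b=2", "c=a+b");
        // One duplicated 3-statement block, at positions 0 and 4
        System.out.println(findDuplicates(code, 3));
    }
}
```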
Duplicates are categorized by ranges of duplicated statements. For each range are presented:

- the number of distinct duplicated blocks (each duplicated at least once)
- the maximum number of duplications of the same block

See the Quality Cockpit

5.6.2 Duplications to fix in priority

The following table lists the main duplications to fix in priority. Each block is identified by a unique identifier, and each duplication is located in the source code. If a previous audit was completed, a flag indicates whether the duplication is new or not.
Duplication   Block size   Class involved                                               Lines       New violation
239           111          icescrum2.presentation.app.roadmap.RoadmapUI                 858:1003    New
239           111          icescrum2.presentation.app.releasebrowser.ReleaseBrowserUI   1045:1190   New
238           69           icescrum2.presentation.app.roadmap.RoadmapUI                 590:688     New
238           69           icescrum2.presentation.app.releasebrowser.ReleaseBrowserUI   731:830     New
237           56           icescrum2.service.impl.ClicheServiceImpl                     309:373
237           56           icescrum2.service.impl.ClicheServiceImpl                     201:263
236           52           icescrum2.service.chart.GlobalChartTest                      243:316
236           52           icescrum2.service.chart.VelocityChartSprint                  219:292
236           52           icescrum2.service.chart.ExecChartTest                        156:229
235           50           icescrum2.service.chart.VelocityChartSprint                  221:290
235           50           icescrum2.service.chart.ExecChartTest                        158:227
235           50           icescrum2.service.chart.BurndownChartProduct                 322:391
235           50           icescrum2.service.chart.GlobalChartTest                      245:314
234           49           icescrum2.presentation.app.releasebrowser.ReleaseBrowserUI   877:944     New
234           49           icescrum2.presentation.app.roadmap.RoadmapUI                 698:765     New
233           48           icescrum2.service.chart.GlobalChartTest                      249:316
233           48           icescrum2.service.chart.ExecChartTest                        162:229
233           48           icescrum2.service.chart.BurndownChartRelease                 202:268
233           48           icescrum2.service.chart.VelocityChartSprint                  225:292

See the Quality Cockpit

5.7 Documentation

The Documentation domain aims to control the level of technical documentation of the code. Only the presence of a standard comment header on methods is verified: Javadoc for Java, XmlDoc for C#. Inline comments (in method bodies) are not evaluated because of the difficulty of verifying their relevance (they are often commented-out code or generated comments). In addition, header documentation is only verified for methods considered sufficiently long and complex, because the effort of documenting trivial methods is rarely justified. For this, a threshold on cyclomatic complexity and a threshold on the number of statements are defined to filter the methods to check.

5.7.1 Mapping of documentation issues

The chart below shows the status of header documentation for all methods with a complexity greater than the threshold. The methods are grouped by ranges of size (number of statements). For each range, the number of methods with header documentation and the number of methods without it are given. The red area in the last range corresponds to the undocumented, therefore non-compliant, methods.
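The filtering rule just described amounts to a simple predicate. The threshold values below are assumptions for illustration; the actual values are set at audit initialization.

```java
// Sketch of the documentation rule: only methods exceeding BOTH a complexity
// threshold and a size threshold must carry a header comment (Javadoc).
// Threshold values are hypothetical.
public class DocumentationRuleSketch {
    static final int COMPLEXITY_THRESHOLD = 10; // assumption
    static final int STATEMENT_THRESHOLD = 30;  // assumption

    public static boolean mustBeDocumented(int complexity, int statements) {
        return complexity > COMPLEXITY_THRESHOLD && statements > STATEMENT_THRESHOLD;
    }

    public static boolean isNonCompliant(int complexity, int statements, boolean hasJavadoc) {
        return mustBeDocumented(complexity, statements) && !hasJavadoc;
    }

    public static void main(String[] args) {
        // createCliche: complexity 42, 208 statements, undocumented -> non-compliant
        System.out.println(isNonCompliant(42, 208, false)); // true
        // a trivial accessor is never flagged, documented or not
        System.out.println(isNonCompliant(1, 2, false));    // false
    }
}
```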
5.7.2 Methods to document in priority

The following table lists the main methods to document in priority:

Method                                                      Statements   Complexity   New violation
icescrum2.service.impl.ExportXMLServiceImpl.exportSprint        81           10       New
icescrum2.service.impl.ReleaseServiceImpl.saveRelease           59           30       New
icescrum2.service.impl.ClicheServiceImpl.createCliche          208           42
icescrum2.service.impl.SprintServiceImpl.autoSaveSprint         89           16
icescrum2.service.chart.VelocityChartSprint.init                78           13
icescrum2.service.impl.SprintServiceImpl.closeSprint            70           21
icescrum2.service.impl.ExportXMLServiceImpl.exportItem          66           10
icescrum2.service.chart.BurndownChartRelease.init               66           14

See the Quality Cockpit
6 Action Plan

For each domain, a recommendation of corrections was established on the basis of the tables detailing the rules and code elements to correct. The following graph provides a comprehensive strategy for establishing a correction plan as a list of actions. The list is prioritized according to the expected return on investment: the actions recommended first are those with the best ratio between the effort required and the gain on the overall non-compliance rate.

Here is the explanation of each step:

1. Correction of forbidden practices. These practices are often easy to correct, and because they directly invalidate classes, their correction generally improves the overall non-compliance rate significantly (provided the classes are not invalidated by other rules).

2. Splitting long methods. With some IDEs, it is often easy to split an overly long method into several smaller methods, using automated refactoring operations that avoid the regression risks associated with manual changes.

3. Documentation of complex methods. This step aims to document the methods identified as non-compliant in the Documentation domain; it is a simple but potentially tedious operation.

4. Correction of inadvisable practices. This covers all the practices remaining after the correction of forbidden practices: practices that are highly inadvisable, inadvisable, or to be avoided.
5. Removal of duplications. This operation is more or less difficult depending on the case: you first have to determine whether the duplication should really be factored out, because two components may share the same code base yet be independent. Note that the operation can be automated by some IDEs, depending on the type of duplication.

6. Modularization of complex operations. This operation is similar to splitting long methods, but is often more difficult to achieve because of the complexity of the code.

The action plan can be refined on the Quality Cockpit using the mechanism of "tags". Tags allow labeling the analysis results to facilitate operations such as prioritizing corrections, assigning them to developers, or targeting their fix version.
7 Glossary

Block coverage
Block coverage measures the rate of code blocks executed during testing compared to the total number of blocks. A code block is a code path with a single entry point, a single exit point and a set of statements executed in sequence. It ends when it reaches a conditional statement, a function call, an exception, or a try/catch.

Branch coverage
Branch coverage measures the rate of branches executed during tests against the total number of branches.

    if (value) {
        // ...
    }

This code reaches 100% branch coverage only if the if condition has been tested with both the true and the false case.

Line coverage
Line (or statement) coverage measures the rate of lines executed during testing against the total number of lines. This measure is insensitive to conditional statements: line coverage can reach 100% even though not all conditions have been executed.

Line of code
A physical line of source code in a text file. Blank lines and comment lines are counted as lines of code.

Non-compliance
A test result that does not satisfy the technical requirements defined for the project. A non-compliance is related to a quality factor and a quality domain. Synonym(s): violation.

Quality domain
The test results are broken down into six domains depending on the technical origin of the non-compliances:

- Implementation: issues related to the use of the language or to algorithms
- Structure: issues related to the organization of the source code: method size, cyclomatic complexity...
- Test: issues related to unit testing and code coverage
- Architecture: issues related to the software architecture
- Documentation: issues related to the code documentation: comment headers, inline comments...
- Duplication: the "copy-pastes" found in the source code

Quality factor
The test results are broken down into six quality factors reflecting the application's needs in terms of quality:

- Efficiency: does the application ensure the required execution performance?
- Changeability: do code changes require higher development costs?
- Reliability: does the application contain bugs that affect its expected behavior?
- Maintainability: do maintenance updates require a constant development cost?
- Security: does the application have security flaws?
- Transferability: would transferring the application to a new development team be a problem?

Statement
A statement is a primary code unit. For simplicity, a statement is delimited by a semicolon (;) or by a left brace ({). Examples of statements in Java:

- int i = 0;
- if (i == 0) {
- } else {}
- public final class SomeClass {
- import com.project.SomeClass;
- package com.project;

Unlike lines of code, statements do not include blank lines or comment lines. In addition, one line can contain multiple statements.
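The statement-counting convention above can be sketched as a small counter. This is a simplification for illustration (it does not handle string literals containing ';' or '{', for instance); the class name is hypothetical.

```java
// Sketch of the glossary's statement definition: one statement per semicolon or
// left brace, with blank lines and comment lines excluded from the count.
public class StatementCounterSketch {
    public static int count(String code) {
        int statements = 0;
        for (String line : code.split("\n")) {
            String trimmed = line.trim();
            if (trimmed.isEmpty() || trimmed.startsWith("//")) continue; // skip blank/comment lines
            for (char c : trimmed.toCharArray()) {
                if (c == ';' || c == '{') statements++;
            }
        }
        return statements;
    }

    public static void main(String[] args) {
        String code = "int i = 0;\n// comment\nif (i == 0) {\n} else {}\n";
        // "int i = 0;", "if (i == 0) {" and "} else {}" each count once
        System.out.println(count(code)); // 3
    }
}
```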
8 Annex

8.1 Cyclomatic complexity

Cyclomatic complexity is an indicator of the number of possible execution paths. A high value is a sign that the source code will be hard to understand, test, validate, maintain and evolve.

8.1.1 Definition

Consider the control-flow graph representing the code whose complexity you want to measure, then count the number of regions (faces) of this planar graph. This gives the structural complexity of the code, also called cyclomatic complexity.
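The face-counting definition above is equivalent to two formulas that are easier to apply in practice; the small sketch below shows both. This equivalence (V(G) = E - N + 2 for a connected graph, or one plus the number of decision points for structured code) is standard graph theory, but the class name here is hypothetical.

```java
// Two equivalent ways to compute cyclomatic complexity V(G) without drawing
// the control-flow graph and counting its faces by hand.
public class CyclomaticComplexitySketch {
    /** V(G) = E - N + 2 for a connected control-flow graph with E edges and N nodes. */
    public static int fromGraph(int edges, int nodes) {
        return edges - nodes + 2;
    }

    /** Shortcut for structured code: one plus the number of decision points (if, while, case...). */
    public static int fromDecisions(int decisionPoints) {
        return decisionPoints + 1;
    }

    public static void main(String[] args) {
        // A method with a single if/else: 4 nodes, 4 edges -> V(G) = 2,
        // matching the one-decision shortcut.
        System.out.println(fromGraph(4, 4));  // 2
        System.out.println(fromDecisions(1)); // 2
    }
}
```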