Software Testing Framework
                       Document version: 2.0




Harinath V Pudipeddi
hari.nath@sqae.com
http://www.sqae.com
Table of Contents

Table of Contents
Revision History
Testing Framework

1.0 Introduction
    1.2 Traditional Testing Cycle

2.0 Verification and Validation Testing Strategies
    2.1 Verification Strategies
        2.1.1 Reviews
        2.1.2 Inspections
        2.1.3 Walkthroughs
    2.2 Validation Strategies

3.0 Testing Types
    3.1 White Box Testing
        White Box Testing Types
            3.1.1 Basis Path Testing
            3.1.2 Flow Graph Notation
            3.1.3 Cyclomatic Complexity
            3.1.4 Graph Matrices
            3.1.5 Control Structure Testing
                3.1.5.1 Condition Testing
                3.1.5.2 Data Flow Testing
            3.1.6 Loop Testing
                3.1.6.1 Simple Loops
                3.1.6.2 Nested Loops
                3.1.6.3 Concatenated Loops
                3.1.6.4 Unstructured Loops
    3.2 Black Box Testing
        Black Box Testing Types
            3.2.1 Graph Based Testing Methods
            3.2.2 Equivalence Partitioning
            3.2.3 Boundary Value Analysis
            3.2.4 Comparison Testing
            3.2.5 Orthogonal Array Testing
    3.3 Scenario Based Testing (SBT)
    3.4 Exploratory Testing
    3.5 Structural System Testing Techniques
    3.6 Functional System Testing Techniques

4.0 Testing Phases
    4.2 Unit Testing
    4.3 Integration Testing
        4.3.1 Top-Down Integration
        4.3.2 Bottom-Up Integration
    4.4 Smoke Testing
    4.5 System Testing
        4.5.1 Recovery Testing
        4.5.2 Security Testing
        4.5.3 Stress Testing
        4.5.4 Performance Testing
        4.5.5 Regression Testing
    4.6 Alpha Testing
    4.7 User Acceptance Testing
    4.8 Beta Testing

5.0 Metrics

6.0 Test Models
    6.1 The 'V' Model
    6.2 The 'W' Model
    6.3 The Butterfly Model

7.0 Defect Tracking Process

8.0 Test Process for a Project

9.0 Deliverables
Revision History


Version No.      Date               Author     Notes
1.0              August 6, 2003     Harinath   Initial Document Creation and Posting on
                                               web site.
2.0              December 15, 2003  Harinath   Renamed the document to Software
                                               Testing Framework V2.0.
                                               Modified the structure of the document.
                                               Added Testing Models section.
                                               Added SBT, ET testing types.




The next version of this framework will include Test Estimation Procedures and more
Metrics.




Testing Framework

Through experience, software teams have determined that there should be around 30
defects per 1000 lines of code. If testing does not uncover close to 30 defects,
the logical conclusion is that the test process was not effective.


1.0 Introduction

Testing plays an important role in today's System Development Life Cycle. During
Testing, we follow a systematic procedure to uncover defects at various stages of
the life cycle.

This framework is aimed at providing the reader with the various Test Types, Test
Phases, Test Models and Test Metrics, and at guiding how to perform effective
Testing in the project.

All the definitions and standards mentioned in this framework are existing ones. I
have not altered any definitions, but wherever possible I have tried to explain
them in simple words. The framework, approach and suggestions, however, are my own
experiences. My intention with this framework is to help Test Engineers understand
the concepts of testing and the various techniques, and to apply them effectively
in their daily work. This framework is not for publication or for monetary
distribution.

If you have any queries, suggestions for improvements or any points found missing,
kindly write back to me.


1.2 Traditional Testing Cycle

Let us look at the traditional Software Development life cycle. The figure below
depicts the same.

       Fig A (traditional):  Requirements -> Design -> Code -> Test -> Maintenance

       Fig B (recommended):  Requirements -> Design -> Code -> Maintenance,
                             with Test running alongside every phase

In the above diagram (Fig A), the Testing phase comes after the Coding is complete
and before the product is launched and goes into maintenance.


But the recommended test process involves testing in every phase of the life cycle
(Fig B). During the requirements phase, the emphasis is upon validation to
determine that the defined requirements meet the needs of the project. During the
design and program phases, the emphasis is on verification to ensure that the
design and programs accomplish the defined requirements. During the test and
installation phases, the emphasis is on inspection to determine that the
implemented system meets the system specification.

The chart below describes the Life Cycle verification activities.

Life Cycle Phase         Verification Activities
Requirements                • Determine verification approach.
                            • Determine adequacy of requirements.
                            • Generate functional test data.
                            • Determine consistency of design with requirements.
Design                      • Determine adequacy of design.
                            • Generate structural and functional test data.
                            • Determine consistency with design
Program (Build)             • Determine adequacy of implementation.
                            • Generate structural and functional test data for
                                programs.
Test                        • Test application system.
Installation                • Place tested system into production.
Maintenance                 • Modify and retest.

Throughout the entire lifecycle, neither development nor verification is a
straight-line activity. Modifications or corrections to a structure at one phase
will require modifications or re-verification of structures produced during
previous phases.


2.0 Verification and Validation Testing Strategies

2.1 Verification Strategies

The Verification Strategies, the persons/teams involved in the testing, and the
deliverables of each phase of testing are briefed below:

Verification             Performed By           Explanation             Deliverable
Strategy
Requirements             Users, Developers,     Requirements Reviews    Reviewed and
Reviews                  Test Engineers.        help in baselining      approved
                                                desired requirements    statement of
                                                to build a system.      requirements.
Design Reviews           Designers, Test        Design Reviews help     System Design
                         Engineers.             in validating that      Document,
                                                the design meets the    Hardware Design
                                                requirements and        Document.
                                                builds an effective
                                                system.
Code Walkthroughs        Developers,            Code Walkthroughs       Software ready for
                         Subject Specialists,   help in analyzing the   initial testing by
                         Test Engineers.        coding techniques       the developer.
                                                and whether the code
                                                meets the coding
                                                standards.
Code Inspections         Developers,             Formal analysis of        Software ready for
                         Subject Specialists,    the program source        testing by the
                         Test Engineers.         code to find defects      testing team.
                                                 as defined by
                                                 meeting system
                                                 design specification.


2.1.1 Reviews

The focus of a Review is on a work product (e.g. Requirements document, Code,
etc.). After the work product is developed, the Project Leader calls for a Review.
The work product is distributed to the personnel involved in the review. The main
audience for the review should be the Project Manager, Project Leader and the
Producer of the work product.

Major reviews include the following:
1. In Process Reviews
2. Decision Point or Phase End Reviews
3. Post Implementation Reviews

Let us discuss the above mentioned reviews in brief. Statistics show that Reviews
uncover over 65% of the defects, while testing uncovers around 30%. So, it is very
important to maintain reviews as part of the V&V strategies.

In-Process Review
In-Process Reviews look at the product during a specific time period of the life
cycle, such as an activity. They are usually limited to a segment of a project,
with the goal of identifying defects as work progresses, rather than at the close
of a phase or even later, when they are more costly to correct.

Decision-Point or Phase-End Review
This review looks at the product for the main purpose of determining whether to
continue with planned activities. They are held at the end of each phase, in a
semiformal or formal way. Defects found are tracked through resolution, usually by
way of the existing defect tracking system. The common phase-end reviews are the
Software Requirements Review, Critical Design Review and Test Readiness Review.

        •   The Software Requirements Review is aimed at validating and
            approving the documented software requirements for the purpose of
            establishing a baseline and identifying analysis packages. The
            Development Plan, Software Test Plan and Configuration Management Plan
            are some of the documents reviewed during this phase.

        •   The Critical Design Review baselines the detailed design specification.
            Test cases are reviewed and approved.

        •   The Test Readiness Review is performed when the appropriate
            application components are nearing completion. This review determines
            the readiness of the application for system and acceptance testing.

Post Implementation Review
These reviews are held after implementation is complete to audit the process based
on actual results. Post-Implementation Reviews are also known as Postmortems and
are held to assess the success of the overall process after release and to identify
any opportunities for process improvement. They can be held up to three to six
months after implementation, and are conducted in a formal format.
There are three general classes of reviews:
1. Informal or Peer Review
2. Semiformal or Walk-Through
3. Formal or Inspections

Peer Review is generally a one-to-one meeting between the author of a work
product and a peer, initiated as a request for input regarding a particular
artifact or problem. There is no agenda, and results are not formally reported.
These reviews occur on an as-needed basis throughout each phase of a project.

2.1.2 Inspections
A knowledgeable individual called a moderator, who is not a member of the team or
the author of the product under review, facilitates inspections. A recorder who
records the defects found and actions assigned assists the moderator. The meeting
is planned in advance and material is distributed to all the participants, who are
expected to attend the meeting well prepared. The issues raised during the meeting
are documented and circulated among the members present and the management.


2.1.3 Walkthroughs
The author of the material being reviewed facilitates the walk-through. The
participants are led through the material in one of two formats: either the
presentation is made without interruptions and comments are made at the end, or
comments are made throughout. In either case, the issues raised are captured and
published in a report distributed to the participants. Possible solutions for
uncovered defects are not discussed during the review.


2.2 Validation Strategies

The Validation Strategies, the persons/teams involved in the testing, and the
deliverables of each phase of testing are briefed below:

Validation               Performed By            Explanation               Deliverable
Strategy
Unit Testing.            Developers / Test       Testing of a single       Software unit
                         Engineers.              program, module,          ready for testing
                                                 or unit of code.          with other system
                                                                           components.
Integration Testing.     Test Engineers.         Testing of integrated     Portions of the
                                                 programs, modules,        system ready for
                                                 or units of code.         testing with other
                                                                           portions of the
                                                                           system.
System Testing.          Test Engineers.         Testing of the entire     Tested computer
                                                 computer system.          system, based on
                                                 This kind of testing      what was specified
                                                 usually includes          to be developed.
                                                 functional and
                                                 structural testing.
Production               Developers, Test        Testing of the whole      Stable application.
Environment              Engineers.              computer system
Testing.                                         before rolling out to
                                                 the UAT.
User Acceptance          Users.                  Testing of the            Tested and
Testing.                                         computer system to        accepted system
                                                 make sure it will work    based on the user
                                                 in the user               needs.
                                                 environment
                                                 regardless of what
                                                 the system
                                                 requirements indicate.
Installation             Test Engineers.         Testing of the            Successfully
Testing.                                         computer system           installed
                                                 during installation       application.
                                                 at the user place.
Beta Testing             Users.                  Testing of the            Successfully
                                                 application after the     installed and
                                                 installation at the       running
                                                 client place.             application.

3.0 Testing Types
There are two types of testing:

    1. Functional or Black Box Testing,
    2. Structural or White Box Testing.

Before the Project Management decides on the testing activities to be performed,
it should have decided the test type that it is going to follow. If it is the
Black Box, then the test cases should be written addressing the functionality of
the application. If it is the White Box, then the Test Cases should be written for
the internal and functional behavior of the system.

Functional testing ensures that the requirements are properly satisfied by the
application system. The functions are those tasks that the system is designed to
accomplish.

Structural testing ensures sufficient testing of the implementation of a function.

3.1 White Box Testing
White Box Testing, also known as glass box testing, is a testing method where the
tester is involved in testing the individual software programs using tools,
standards, etc.

Using white box testing methods, we can derive test cases that:
1) Guarantee that all independent paths within a module have been exercised at
least once,
2) Exercise all logical decisions on their true and false sides,
3) Execute all loops at their boundaries and within their operational bounds, and
4) Exercise internal data structures to ensure their validity.

Advantages of White box testing:
1) Logic errors and incorrect assumptions are inversely proportional to the
probability that a program path will be executed.
2) We often believe that a logical path is not likely to be executed when, in
fact, it may be executed on a regular basis.
3) Typographical errors are random.




White Box Testing Types
There are various types of White Box Testing. Here in this framework I will
address the most common and important types.

3.1.1 Basis Path Testing
Basis path testing is a white box testing technique first proposed by Tom McCabe.
The basis path method enables the test case designer to derive a logical
complexity measure of a procedural design and use this measure as a guide for
defining a basis set of execution paths. Test Cases derived to exercise the basis
set are guaranteed to execute every statement in the program at least one time
during testing.

3.1.2 Flow Graph Notation
The flow graph depicts logical control flow using a diagrammatic notation. Each
structured construct has a corresponding flow graph symbol.

3.1.3 Cyclomatic Complexity
Cyclomatic complexity is a software metric that provides a quantitative measure of
the logical complexity of a program. When used in the context of the basis path
testing method, the value computed for Cyclomatic complexity defines the number of
independent paths in the basis set of a program and provides us with an upper
bound for the number of tests that must be conducted to ensure that all statements
have been executed at least once.
An independent path is any path through the program that introduces at least one
new set of processing statements or a new condition.

Computing Cyclomatic Complexity
Cyclomatic complexity has a foundation in graph theory and provides us with an
extremely useful software metric. Complexity is computed in one of three ways:
1. The number of regions of the flow graph corresponds to the Cyclomatic
complexity.
2. Cyclomatic complexity, V(G), for a flow graph G is defined as
       V(G) = E - N + 2
where E is the number of flow graph edges and N is the number of flow graph nodes.
3. Cyclomatic complexity, V(G), for a flow graph G is also defined as
       V(G) = P + 1
where P is the number of predicate nodes contained in the flow graph G.
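
As a quick illustration, the short Python sketch below computes V(G) both ways for
a small, hypothetical flow graph (an if/else followed by a while loop); the graph
itself is invented for this example:

# Hypothetical flow graph: nodes 1..7, edges as (from, to) pairs.
edges = [
    (1, 2), (1, 3),  # node 1 is an if/else decision
    (2, 4), (3, 4),  # the two branches rejoin at node 4
    (4, 5),          # node 5 is the while-loop test (a predicate)
    (5, 6), (6, 5),  # loop body and its back-edge
    (5, 7),          # loop exit
]
nodes = {n for edge in edges for n in edge}
predicate_nodes = {1, 5}  # nodes with more than one outgoing edge

v_by_edges = len(edges) - len(nodes) + 2    # V(G) = E - N + 2 = 8 - 7 + 2
v_by_predicates = len(predicate_nodes) + 1  # V(G) = P + 1 = 2 + 1
print(v_by_edges, v_by_predicates)          # both print 3

Both formulas agree: the basis set for this graph contains three independent
paths, so at least three test cases are needed to cover it.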

3.1.4 Graph Matrices
The procedure for deriving the flow graph and even determining a set of basis
paths is amenable to mechanization. To develop a software tool that assists in
basis path testing, a data structure called a graph matrix can be quite useful.
A graph matrix is a square matrix whose size is equal to the number of nodes in
the flow graph. Each row and column corresponds to an identified node, and matrix
entries correspond to connections between nodes.
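
A minimal Python sketch of such a graph matrix, using the same hypothetical
seven-node flow graph as in the cyclomatic complexity example above:

# Hypothetical flow graph, repeated here so the sketch is self-contained.
edges = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5), (5, 6), (6, 5), (5, 7)]

n = 7  # the matrix is square: one row and one column per node
matrix = [[0] * n for _ in range(n)]
for src, dst in edges:
    matrix[src - 1][dst - 1] = 1  # entry marks a connection src -> dst

# A row containing two or more connections identifies a predicate node.
predicates = [i + 1 for i, row in enumerate(matrix) if sum(row) >= 2]
print(predicates)  # [1, 5]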

3.1.5 Control Structure Testing
Described below are some of the variations of Control Structure Testing.

    3.1.5.1 Condition Testing
    Condition testing is a test case design method that exercises the logical
    conditions contained in a program module.

    3.1.5.2 Data Flow Testing
    The data flow testing method selects test paths of a program according to the
    locations of definitions and uses of variables in the program.

3.1.6 Loop Testing
Loop Testing is a white box testing technique that focuses exclusively on the
validity of loop constructs. Four classes of loops can be defined: simple loops,
concatenated loops, nested loops, and unstructured loops.

    3.1.6.1 Simple Loops
    The following set of tests can be applied to simple loops, where 'n' is the
    maximum number of allowable passes through the loop.
    1. Skip the loop entirely.
    2. Only one pass through the loop.
    3. Two passes through the loop.
    4. 'm' passes through the loop, where m < n.
    5. n-1, n, n+1 passes through the loop.
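
    A tiny sketch of how these five rules translate into concrete pass counts,
    assuming a hypothetical loop with a maximum of n = 10 allowable passes:

    def simple_loop_pass_counts(n, m=None):
        # Pass counts to exercise for a simple loop with at most n passes.
        if m is None:
            m = n // 2  # a typical interior value, with m < n
        return [0, 1, 2, m, n - 1, n, n + 1]

    print(simple_loop_pass_counts(10))  # [0, 1, 2, 5, 9, 10, 11]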

    3.1.6.2 Nested Loops
    If we extend the test approach for simple loops to nested loops, the number of
    possible tests would grow geometrically as the level of nesting increases.
    1. Start at the innermost loop. Set all other loops to minimum values.
    2. Conduct simple loop tests for the innermost loop while holding the outer
    loops at their minimum iteration parameter values. Add other tests for
    out-of-range or excluded values.
    3. Work outward, conducting tests for the next loop, but keeping all other
    outer loops at minimum values and other nested loops to “typical” values.
    4. Continue until all loops have been tested.

    3.1.6.3 Concatenated Loops
    Concatenated loops can be tested using the approach defined for simple loops,
    if each of the loops is independent of the other. However, if two loops are
    concatenated and the loop counter for loop 1 is used as the initial value for
    loop 2, then the loops are not independent.

    3.1.6.4 Unstructured Loops
    Whenever possible, this class of loops should be redesigned to reflect the use
    of the structured programming constructs.


3.2 Black Box Testing
Black box testing, also known as behavioral testing, focuses on the functional
requirements of the software. All the functional requirements of the program will
be used to derive sets of input conditions for testing.


Black Box Testing Types
The following are the most common and frequently used Black Box Testing types.

      3.2.1 Graph Based Testing Methods
      Software testing begins by creating a graph of important objects and their
      relationships, and then devising a series of tests that will cover the graph
      so that each object and relationship is exercised and errors are uncovered.

      3.2.2 Equivalence Partitioning
      Equivalence partitioning is a black box testing method that divides the
      input domain of a program into classes of data from which test cases can be
      derived. EP can be defined according to the following guidelines:
      1. If an input condition specifies a range, one valid and two invalid
      classes are defined.
      2. If an input condition requires a specific value, one valid and two
      invalid equivalence classes are defined.
      3. If an input condition specifies a member of a set, one valid and one
      invalid equivalence class are defined.
      4. If an input condition is Boolean, one valid and one invalid class are
      defined.
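
      As a small illustration of guideline 1, assume a hypothetical input field
      that accepts integers in the range 1..100; one valid and two invalid
      classes fall out directly:

      def equivalence_class(value, low=1, high=100):
          # Classify a value against a hypothetical range condition (1..100).
          if value < low:
              return "invalid: below range"
          if value > high:
              return "invalid: above range"
          return "valid: within range"

      # One representative test value per equivalence class.
      for value in (-5, 50, 250):
          print(value, "->", equivalence_class(value))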

      3.2.3 Boundary Value Analysis
      BVA is a test case design technique that complements equivalence
      partitioning. Rather than selecting any element of an equivalence class,
      BVA leads to the selection of test cases at the “edges” of the class.
      Rather than focusing solely on input conditions, BVA derives test cases
      from the output domain as well. Guidelines for BVA are similar in many
      respects to those provided for equivalence partitioning.
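
      Continuing the hypothetical 1..100 range from the equivalence partitioning
      example, a BVA sketch picks values at and immediately around the edges of
      the class:

      def bva_values(low=1, high=100):
          # Boundary values for a range condition: the edges and their neighbors.
          return [low - 1, low, low + 1, high - 1, high, high + 1]

      print(bva_values())  # [0, 1, 2, 99, 100, 101]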

      3.2.4 Comparison Testing
      There are situations in which independent versions of the software are
      developed for critical applications, even when only a single version will
      be used in the delivered computer-based system. These independent versions
      form the basis of a black box testing technique called Comparison testing
      or back-to-back testing.

      3.2.5 Orthogonal Array Testing
      The orthogonal array testing method is particularly useful in finding errors
      associated with region faults – an error category associated with faulty logic
      within a software component.


3.3 Scenario Based Testing (SBT)
Dr. Cem Kaner, in “A Pattern for Scenario Testing”, has explained Scenario Based
Testing in great detail; the article can be found at www.testing.com.

What Scenario Based Testing is and how/where it is useful is an interesting
question. I shall explain the above two mentioned points in brief.

Scenario Based Testing is categorized under Black Box Tests and is most helpful
when the testing is concentrated on the business logic and functional behavior of
the application. Adopting SBT is effective when testing complex applications.
Since almost every application is complex to some degree, it is the team's call
whether or not to implement SBT. I would personally suggest using SBT when the
functionality to test includes various features and functions. A good example
would be testing a banking application. As banking applications require utmost
care while testing, handling various functions in a single scenario produces
effective results.
A sample transaction (scenario) can be: a customer logging into the application,
checking his balance, transferring an amount to another account, paying his
bills, checking his balance again and logging out.

In brief, use Scenario Based Tests when:
    1. Testing complex applications.
    2. Testing Business functionality.

When   designing scenarios, keep in mind:
  1.   The scenario should be close to the real life scenario.
  2.   Scenarios should be realistic.
  3.   Scenarios should be traceable to any/combination of functionality.
  4.   Scenarios should be supported by sufficient data.
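
To make the banking example above concrete, here is a minimal sketch of that
transaction written as a single scenario test. The BankingApp client and its
methods are hypothetical stand-ins for the application under test:

import unittest

class BankingApp:
    # Hypothetical stand-in for the application under test.
    def login(self, user, password):
        return _Session(balance=1000)

class _Session:
    def __init__(self, balance):
        self.balance = balance
    def check_balance(self):
        return self.balance
    def transfer(self, to_account, amount):
        self.balance -= amount
    def pay_bill(self, biller, amount):
        self.balance -= amount
    def logout(self):
        pass

class BankingScenarioTest(unittest.TestCase):
    def test_customer_transaction_scenario(self):
        # One scenario chaining several functions, as described above: log in,
        # check balance, transfer, pay a bill, re-check balance, log out.
        session = BankingApp().login("customer1", "secret")
        opening = session.check_balance()
        session.transfer(to_account="ACC-2", amount=100)
        session.pay_bill(biller="ELECTRIC", amount=50)
        self.assertEqual(session.check_balance(), opening - 150)
        session.logout()

if __name__ == "__main__":
    unittest.main()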
3.4 Exploratory Testing
Exploratory Tests are categorized under Black Box Tests and are aimed at testing
in conditions where sufficient time is not available for testing or proper
documentation is not available.

Exploratory testing is ‘Testing while Exploring’. When you have no idea how the
application works, exploring the application with the intent of finding errors can
be termed Exploratory Testing.

Performing Exploratory Testing
This is one big question for many people. The following can be used to perform
Exploratory Testing:
   • Learn the Application.
   • Learn the business that the application addresses.
   • Learn, to the maximum extent possible, the technology on which the
        application has been designed.
   • Learn how to test.
   • Plan and Design tests as per the learning.


3.5 Structural System Testing Techniques

The following are the structural system testing techniques.

Technique                Description                            Example
Stress                   Determine system performance           Sufficient disk space
                         with expected volumes.                 allocated.
Execution                System achieves desired level of       Transaction turnaround
                         proficiency.                           time adequate.
Recovery                 System can be returned to an           Evaluate adequacy of
                         operational status after a failure.    backup data.
Operations               System can be executed in a            Determine the system can
                         normal operational status.             be run using existing
                                                                documentation.
Compliance               System is developed in accordance      Standards followed.
                         with standards and procedures.
Security                 System is protected in accordance      Access denied.
                         with importance to organization.


3.6 Functional System Testing Techniques

The following are the functional system testing techniques.

Technique              Description                             Example
Requirements           System performs as specified.           Prove system
                                                               requirements.
Regression             Verifies that anything unchanged        Unchanged system
                       still performs correctly.               segments function.
Error Handling         Errors can be prevented or              Error introduced into the
                       detected and then corrected.            test.
Manual Support         The people-computer interaction         Manual procedures
                       works.                                  developed.
Intersystems.          Data is correctly passed from           Intersystem parameters
                       system to system.                       changed.
Control                Controls reduce system risk to an       File reconciliation
                       acceptable level.                       procedures work.
Parallel                 Old systems and new system are       Old and new system can
                         run and the results compared to      reconcile.
                         detect unplanned differences.


4.0 Testing Phases

The following chart summarizes the testing phases, mapping the inputs of each
phase to the documents produced:

Inputs                                           Documents Produced
Requirement Study,                               Software Requirement
Requirement Checklist                            Specification
Software Requirement Specification,              Functional Specification
Functional Specification Checklist               Document
Functional Specification Document                Architecture Design
Architecture Design                              Detailed Design Document

                                   Coding

Functional Specification Document                Unit Test Case Documents
Design Document,                                 System Test Case
Unit Test Case Document                          Document
Functional Specification Document                Integration Test Case
                                                 Document
Unit/Integration/System                          Regression Test Case
Test Case Documents                              Document
Functional Specification Document,               Performance Test Cases
Performance Criteria                             and Scenarios
Software Requirement Specification,              User Acceptance Test Case
Regression Test Case Document,                   Documents/Scenarios
Performance Test Cases and Scenarios
4.2 Unit Testing

The goal of Unit testing is to uncover defects using formal techniques like
Boundary Value Analysis (BVA), Equivalence Partitioning, and Error Guessing.
Defects and deviations in date formats, special requirements in input conditions
(for example, a text box where only numerics or only alphabets should be entered),
and selections based on combo boxes, list boxes, option buttons and check boxes
would be identified during the Unit Testing phase.
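
A minimal sketch of such a unit test, assuming a hypothetical
validate_numeric_field unit that should accept only numeric input:

import unittest

def validate_numeric_field(text):
    # Hypothetical unit under test: accept only non-empty digit strings.
    return text.isdigit()

class NumericFieldUnitTest(unittest.TestCase):
    def test_equivalence_classes(self):
        self.assertTrue(validate_numeric_field("123"))   # valid class
        self.assertFalse(validate_numeric_field("abc"))  # invalid: alphabets
        self.assertFalse(validate_numeric_field("12a"))  # invalid: mixed

    def test_error_guessing(self):
        self.assertFalse(validate_numeric_field(""))     # empty input
        self.assertFalse(validate_numeric_field("  "))   # whitespace only
        self.assertFalse(validate_numeric_field("1.5"))  # decimal point

if __name__ == "__main__":
    unittest.main()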

4.3 Integration Testing

Integration testing is a systematic technique for constructing the program
structure while at the same time conducting tests to uncover errors associated
with interfacing. The objective is to take unit tested components and build a
program structure that has been dictated by design.
Usually, the following methods of Integration testing are followed:
       1. Top-down Integration approach.
       2. Bottom-up Integration approach.

4.3.1 Top-down Integration
Top-down integration testing is an incremental approach to construction of the
program structure. Modules are integrated by moving downward through the control
hierarchy, beginning with the main control module. Modules subordinate to the
main control module are incorporated into the structure in either a depth-first
or breadth-first manner.

    The Integration process is performed in a series of five steps:
    1. The main control module is used as a test driver and stubs are substituted
       for all components directly subordinate to the main control module.
    2. Depending on the integration approach selected, subordinate stubs are
       replaced one at a time with actual components.
    3. Tests are conducted as each component is integrated.
    4. On completion of each set of tests, another stub is replaced with the real
       component.
    5. Regression testing may be conducted to ensure that new errors have not
       been introduced.
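
A minimal sketch of steps 1 and 3, with a hypothetical main control module and a
stub standing in for a not-yet-integrated subordinate component:

def tax_service_stub(amount):
    # Stub for the not-yet-integrated tax component: returns a canned answer.
    return 0.0

def real_tax_service(amount):
    # The actual subordinate component, integrated later.
    return round(amount * 0.07, 2)

def main_control_module(amount, tax_service):
    # Main control module under test; the subordinate is passed in.
    return amount + tax_service(amount)

# First test the main module with the stub substituted in ...
assert main_control_module(100.0, tax_service_stub) == 100.0
# ... then replace the stub with the actual component and re-test.
assert main_control_module(100.0, real_tax_service) == 107.0
print("top-down integration steps passed")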

4.3.2 Bottom-up Integration
Bottom-up integration testing begins construction and testing with atomic modules
(i.e. components at the lowest levels in the program structure). Because
components are integrated from the bottom up, processing required for components
subordinate to a given level is always available and the need for stubs is
eliminated.

    A Bottom-up integration strategy may be implemented with the following steps:
    1. Low-level components are combined into clusters that perform a specific
       software subfunction.
    2. A driver is written to coordinate test case input and output.
    3. The cluster is tested.
    4. Drivers are removed and clusters are combined moving upward in the
       program structure.
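
A minimal sketch of a driver coordinating test case input and output for a
hypothetical low-level cluster of two atomic modules:

def parse_amount(text):
    # Atomic module 1: parse a textual amount.
    return float(text.strip())

def apply_discount(amount):
    # Atomic module 2: apply a flat 10% discount.
    return amount * 0.9

def cluster_driver(cases):
    # Driver: feeds inputs through the cluster and checks the outputs.
    for text, expected in cases:
        result = apply_discount(parse_amount(text))
        assert abs(result - expected) < 1e-9, (text, result, expected)

cluster_driver([(" 100.0 ", 90.0), ("10", 9.0)])
print("cluster tests passed")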




4.4 Smoke Testing

Smoke testing might be characterized as a “rolling integration strategy”.
Smoke testing is an integration testing approach that is commonly used when
“shrink-wrapped” software products are being developed. It is designed as a pacing
mechanism for time-critical projects, allowing the software team to assess its
project on a frequent basis.
The smoke test should exercise the entire system from end to end. Smoke testing
provides benefits such as:
1) Integration risk is minimized.
2) The quality of the end-product is improved.
3) Error diagnosis and correction are simplified.
4) Progress is easier to assess.

4.5 System Testing

System testing is a series of different tests whose primary purpose is to fully
exercise the computer-based system. Although each test has a different purpose,
all work to verify that system elements have been properly integrated and perform
allocated functions.
The following tests can be categorized under System testing:
    1. Recovery Testing.
    2. Security Testing.
    3. Stress Testing.
    4. Performance Testing.


4.5.1. Recovery Testing
Recovery testing is a system test that forces the software to fail in a variety
of ways and verifies that recovery is properly performed. If recovery is
automatic, reinitialization, checkpointing mechanisms, data recovery and restart
are evaluated for correctness. If recovery requires human intervention, the
mean-time-to-repair (MTTR) is evaluated to determine whether it is within
acceptable limits.

4.5.2. Security Testing
Security testing attempts to verify that protection mechanisms built into a
system will, in fact, protect it from improper penetration. During Security
testing, password cracking, unauthorized entry into the software, and network
security are all taken into consideration.

4.5.3. Stress Testing
Stress testing executes a system in a manner that demands resources in abnormal
quantity, frequency, or volume. The following types of tests may be conducted
during stress testing:
         • Special tests may be designed that generate ten interrupts per second,
             when one or two is the average rate.
         • Input data rates may be increased by an order of magnitude to
             determine how input functions will respond.
         • Test Cases that require maximum memory or other resources.
         • Test Cases that may cause excessive hunting for disk-resident data.
         • Test Cases that may cause thrashing in a virtual operating system.

4.5.4. Performance Testing
Performance tests are coupled with stress testing and usually require both
hardware and software instrumentation.

4.5.5. Regression Testing
Regression testing is the re-execution of some subset of tests that have already
been conducted, to ensure that changes have not propagated unintended side
effects. Regression testing may be conducted manually, by re-executing a subset
of all test cases, or using automated capture/playback tools.
The regression test suite contains three different classes of test cases:
       • A representative sample of tests that will exercise all software
          functions.
       • Additional tests that focus on software functions that are likely to be
          affected by the change.
       • Tests that focus on the software components that have been changed.

4.6 Alpha Testing
Alpha testing is conducted at the developer's site, in a controlled environment,
by the end-user of the software.


4.7 User Acceptance Testing
User Acceptance testing occurs just before the software is released to the
customer. The end-users, along with the developers, perform the User Acceptance
Testing with a certain set of test cases and typical scenarios.


4.8 Beta Testing
Beta testing is conducted at one or more customer sites by the end-user of the
software. The beta test is a live application of the software in an environment
that cannot be controlled by the developer.


5.0 Metrics
Metrics are one of the most important responsibilities of the Test Team. Metrics
allow for a deeper understanding of the performance of the application and its
behavior, and fine-tuning of the application can be guided only with metrics. In
a typical QA process, there are many metrics which provide information.
IEEE Std 982.2-1988 defines a Functional or Test Coverage Metric. It can be used
to measure test coverage prior to software delivery. It provides a measure of the
percentage of the software tested at any point during testing.
It is calculated as follows:

Function Test Coverage = FE / FT

where
FE is the number of test requirements that are covered by test cases that were
executed against the software, and
FT is the total number of test requirements.
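
A small sketch of the calculation with made-up numbers:

def function_test_coverage(fe, ft):
    # FE = executed test requirements, FT = total test requirements.
    return fe / ft

print(function_test_coverage(fe=45, ft=60))  # 0.75, i.e. 75% covered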

Software Release Metrics
The software is ready for release when:
1. It has been tested with a test suite that provides 100% functional coverage, 80%
branch coverage, and 100% procedure coverage.
2. There are no level 1 or 2 severity defects.
3. The defect finding rate is less than 40 new defects per 1000 hours of testing.
4. The software reaches 1000 hours of operation.
5. Stress testing, configuration testing, installation testing, naïve user
testing, usability testing, and sanity testing have been completed.


IEEE Software Maturity Metric
IEEE Std 982.2 - 1988 defines a Software Maturity Index that can be used to
determine the readiness for release of a software system. This index is especially
useful for assessing release readiness when changes, additions, or deletions are
made to existing software systems. It also provides an historical index of the impact
of changes. It is calculated as follows:
SMI = (Mt - (Fa + Fc + Fd)) / Mt
where
SMI is the Software Maturity Index value,
Mt is the number of software functions/modules in the current release,
Fc is the number of functions/modules that contain changes from the previous
release,
Fa is the number of functions/modules that contain additions to the previous
release, and
Fd is the number of functions/modules that are deleted from the previous release.
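
A small sketch with made-up counts (the SMI approaches 1.0 as the product
stabilizes):

def software_maturity_index(mt, fa, fc, fd):
    # Mt = modules in current release; Fa/Fc/Fd = added/changed/deleted.
    return (mt - (fa + fc + fd)) / mt

# 120 modules in the release: 5 added, 10 changed, 2 deleted.
print(software_maturity_index(mt=120, fa=5, fc=10, fd=2))  # ~0.86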

Reliability Metrics
Perry offers the following equation for calculating reliability:

Reliability = 1 - (Number of errors (actual or predicted) / Total number of lines
of executable code)

This reliability value is calculated for the number of errors during a specified
time interval.
Three other metrics can be calculated during extended testing or after the system
is in production. They are:

MTTFF (Mean Time To First Failure)
MTTFF = the number of time intervals the system is operable until its first
failure

MTBF (Mean Time Between Failures)
MTBF = (sum of the time intervals the system is operable) /
       (number of failures for the time period)

MTTR (Mean Time To Repair)
MTTR = (sum of the time intervals required to repair the system) /
       (number of repairs during the time period)
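
A small sketch computing the three metrics from made-up operating intervals and
repair times (all in hours):

uptime_intervals = [120, 80, 200, 100]  # hours operable between failures
repair_times = [4, 2, 6, 4]             # hours to repair each failure

mttff = uptime_intervals[0]                           # 120 hours
mtbf = sum(uptime_intervals) / len(uptime_intervals)  # 125.0 hours
mttr = sum(repair_times) / len(repair_times)          # 4.0 hours
print(mttff, mtbf, mttr)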




6.0 Test Models
There are various models of Software Testing. Here in this framework I will
explain the three most commonly used models:

                            1. The ‘V’ Model.
                            2. The ‘W’ Model.
                            3. The Butterfly Model

6.1 The ‘V’ Model
The following diagram depicts the ‘V’ Model

   Requirements         <-->  Acceptance Tests
      Specification     <-->  System Tests
         Architecture   <-->  Integration Tests
            Detailed Design <--> Unit Tests

                       Coding
The diagram is self-explanatory. For an easy understanding, look at the following
table:
SDLC Phase                                  Test Phase
1. Requirements                             1. Build Test Strategy.
                                            2. Plan for Testing.
                                            3. Acceptance Test Scenarios
                                            Identification.
2. Specification                            1. System Test Case Generation.
3. Architecture                             1. Integration Test Case Generation.
4. Detailed Design                          1. Unit Test Case Generation




6.2 The ‘W’ Model
The following diagram depicts the ‘W’ model:




The first 'V' descends through Requirements, Specification, Architecture,
Detailed Design and Code, and each phase is paired with its review on the second
'V': Requirements Review, Specification Review, Architecture Review, Design
Review and Code Walkthrough. The second 'V' then climbs back up through Unit
Testing, Integration Testing, System Testing and Performance Testing, with
Regression Rounds 1, 2 and 3 in between.
The ‘W’ model depicts that Testing starts from day one of the initiation of the
project and continues till the end. The following table illustrates the phases of
activities that happen in the ‘W’ model:

SDLC Phase             The first ‘V’                The second ‘V’
1. Requirements        1. Requirements Review       1. Build Test Strategy.
                                                    2. Plan for Testing.
                                                    3. Acceptance (Beta) Test
                                                    Scenario Identification.
2. Specification       2. Specification Review      1. System Test Case Generation.
3. Architecture        3. Architecture Review       1. Integration Test Case
                                                    Generation.
4. Detailed Design     4. Detailed Design Review    1. Unit Test Case Generation.
5. Code                5. Code Walkthrough          1. Execute Unit Tests.
                                                    2. Execute Integration Tests.
                                                    3. Regression Round 1.
                                                    4. Execute System Tests.
                                                    5. Regression Round 2.
                                                    6. Performance Tests.
                                                    7. Regression Round 3.
                                                    8. Performance/Beta Tests.




In the second ‘V’, I have mentioned Acceptance/Beta Test Scenario Identification.
This is because the customer might want to design the Acceptance Tests. In that
case, as the development team executes the Beta Tests at the client place, the
same team can identify the scenarios.

Regression Rounds are performed at regular intervals to re-test the defects that
have been raised and fixed.


 6.3 The Butterfly Model

For testing software products, the testing activities preferably follow the
Butterfly Model. The following picture depicts the test methodology.




                 Fig: Butterfly Model (left wing: Test Analysis;
                 right wing: Test Design; body: Test Execution)


In the Butterfly model of Test Development, the left wing of the butterfly
depicts the Test Analysis. The right wing depicts the Test Design, and finally
the body of the butterfly depicts the Test Execution. How this exactly happens is
described below.

 Test Analysis

Analysis is the key factor which drives any planning. During the analysis, the
analyst understands the following:
•   Verify that each requirement is tagged in a manner that allows correlation of
    the tests for that requirement to the requirement itself. (Establish Test
    Traceability.)
•   Verify traceability of the software requirements to system requirements.
•   Inspect for contradictory requirements.
•   Inspect for ambiguous requirements.
•   Inspect for missing requirements.
•   Check to make sure that each requirement, as well as the specification as a
    whole, is understandable.
•   Identify one or more measurement, demonstration, or analysis method that may
    be used to verify the requirement's implementation (during formal testing).
•   Create a test “sketch” that includes the tentative approach and indicates the
    test's objectives.
During Test Analysis, the required documents will be carefully studied by the
Test Personnel, and the final Analysis Report is documented.
The following documents would usually be referred to:

     1. Software Requirements Specification.
     2. Functional Specification.
     3. Architecture Document.

4. Use Case Documents.

The Analysis Report would consist of the understanding of the application, the
functional flow of the application, the number of modules involved and the
effective Test Time.

Test Design
The right wing of the butterfly represents the act of designing and implementing
the test cases needed to verify the design artifact as replicated in the
implementation. Like test analysis, it is a relatively large piece of work.
Unlike test analysis, however, the focus of test design is not to assimilate
information created by others, but rather to implement procedures, techniques,
and data sets that achieve the test's objective(s).
The outputs of the test analysis phase are the foundation for test design. Each
requirement or design construct has had at least one technique (a measurement,
demonstration, or analysis) identified during test analysis that will validate or
verify that requirement. The tester must now implement the intended technique.
Software test design, as a discipline, is an exercise in the prevention,
detection, and elimination of bugs in software. Preventing bugs is the primary
goal of software testing. Diligent and competent test design prevents bugs from
ever reaching the implementation stage. Test design, with its attendant test
analysis foundation, is therefore the premiere weapon in the arsenal of
developers and testers for limiting the cost associated with finding and fixing
bugs.
During Test Design, based on the Analysis Report, the test personnel would
develop the following:

    1.   Test Plan.
    2.   Test Approach.
    3.   Test Case documents.
    4.   Performance Test Parameters.
    5.   Performance Test Plan.

Test Execution

Any test case should adhere to the following principles:
   1. Accurate – tests what the description says it will test.
   2. Economical – has only the steps needed for its purpose.
   3. Repeatable – tests should be consistent, no matter who/when it is executed.
   4. Appropriate – should be apt for the situation.
   5. Traceable – the functionality of the test case should be easily found.
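
As a hedged illustration of these principles, the sketch below shows one way an
automated test case might embody them; the function under test and the
requirement ID are invented for the example:

    import unittest

    def transfer(balance, amount):
        """Hypothetical function under test: debit amount from balance."""
        if amount <= 0 or amount > balance:
            raise ValueError("invalid transfer amount")
        return balance - amount

    class TestTransfer(unittest.TestCase):
        def test_transfer_debits_balance(self):
            """Verifies REQ-042 (hypothetical ID): a valid transfer debits
            the source balance.
            Accurate: tests exactly what this description says.
            Economical: one setup, one action, one assertion.
            Repeatable: fixed inputs, independent of who runs it or when.
            Traceable: the requirement ID links the test to its requirement."""
            self.assertEqual(transfer(balance=100, amount=40), 60)

    if __name__ == "__main__":
        unittest.main()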

During the Test Execution phase, the designed test cases are executed in keeping
with the Project and Test schedules. The following documents will be handled during
the test execution phase:
1. Test Execution Reports.
2. Daily/Weekly/Monthly Defect Reports.
3. Person-wise defect reports.

After the Test Execution phase, the following documents would be signed off.

1. Project Closure Document.
2. Reliability Analysis Report.
3. Stability Analysis Report.
4. Performance Analysis Report.
5. Project Metrics.



7.0 Defect Tracking Process

The Defect Tracking process should answer the following questions:
   1. When is the defect found?
   2. Who raised the defect?
   3. Is the defect reported properly?
   4. Is the defect assigned to the appropriate developer?
   5. When was the defect fixed?
   6. Is the defect re-tested?
   7. Is the defect closed?

The defect tracking process has to be handled carefully and managed efficiently.

The following flow illustrates the defect tracking process:

   1. The Tester/Developer finds the Bug.
   2. The defect is reported in the Defect Tracking Tool with status “Open”.
   3. The concerned Developer is informed.
   4. The Developer fixes the Defect.
   5. The Developer changes the status to “Resolved”.
   6. The Tester re-tests the fix and changes the status to “Closed”. If the defect
      re-occurs, the status changes to “Re-Open” and the defect returns to the
      Developer.
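
The flow above is effectively a small state machine, and a defect tracking tool
typically enforces it. The sketch below is a minimal, assumed model of the
transitions named in the flow (the status names match the flow; the transition
rules are inferred from it):

    # Allowed defect status transitions, inferred from the flow above.
    TRANSITIONS = {
        "Open": {"Resolved"},               # the Developer fixes the defect
        "Resolved": {"Closed", "Re-Open"},  # the Tester re-tests: pass or fail
        "Re-Open": {"Resolved"},            # the Developer fixes it again
        "Closed": set(),                    # terminal state
    }

    class Defect:
        def __init__(self, summary):
            self.summary = summary
            self.status = "Open"  # every new defect starts as Open

        def move_to(self, new_status):
            if new_status not in TRANSITIONS[self.status]:
                raise ValueError(f"illegal transition {self.status} -> {new_status}")
            self.status = new_status

    d = Defect("Login button unresponsive")
    d.move_to("Resolved")  # the Developer fixes the defect
    d.move_to("Re-Open")   # the defect re-occurs on re-test
    d.move_to("Resolved")  # fixed again
    d.move_to("Closed")    # re-test passes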




Defect Classification

This section defines a defect Severity Scale framework for determining defect
criticality, and the associated defect Priority Levels to be assigned to errors found
in the software.




The defects can be classified as follows:

Classification       Description
Critical             There is a functionality block. The application is not able to
                     proceed any further.
Major                The application is not working as desired. There are variations
                     in the functionality.
Minor                There is no failure reported due to the defect, but it certainly
                     needs to be rectified.
Cosmetic             Defects in the User Interface or Navigation.
Suggestion           A feature which can be added for betterment.


Priority Level of the Defect

The priority level describes the time frame for resolution of the defect. The priority
levels are classified as follows:

Classification       Description
Immediate            Resolve the defect with immediate effect.
At the Earliest      Resolve the defect at the earliest opportunity; second-level
                     priority.
Normal               Resolve the defect in the normal course of work.
Later                Could be resolved at a later stage.
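
In a defect tracking tool, severity and priority usually become two independent
fields on the defect record. The sketch below is one assumed encoding of the two
tables above; the field names and the sample defect are invented:

    from enum import Enum

    class Severity(Enum):  # how badly the defect breaks the application
        CRITICAL = "Functionality block; the application cannot proceed"
        MAJOR = "Not working as desired; variations in functionality"
        MINOR = "No failure reported, but needs to be rectified"
        COSMETIC = "Defect in the User Interface or Navigation"
        SUGGESTION = "Feature that can be added for betterment"

    class Priority(Enum):  # how soon the defect must be resolved
        IMMEDIATE = 1
        AT_THE_EARLIEST = 2
        NORMAL = 3
        LATER = 4

    # Severity and priority are assigned independently: a Cosmetic defect on
    # a page shipping tomorrow may still warrant Priority.IMMEDIATE.
    defect = {"summary": "Typo on login page",
              "severity": Severity.COSMETIC,
              "priority": Priority.IMMEDIATE}
    print(defect["severity"].name, defect["priority"].name)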


8.0 Test Process for a Project

In this section, I explain how to plan your testing activities effectively and
efficiently. The process is explained in a tabular format giving the phase of testing,
the activity, and the person responsible.

For this, I assume that the project has been identified and that the testing team
consists of five personnel: a Test Manager, a Test Lead, a Senior Test Engineer, and
two Test Engineers.

SDLC Phase           Testing Phase/Activity                      Personnel
1. Requirements      1. Study the requirements for               Test Manager / Test Lead.
                     Testability.
                     2. Design the Test Strategy.
                     3. Prepare the Test Plan.
                     4. Identify scenarios for
                     Acceptance/Beta Tests.
2. Specification     1. Identify System Test Cases /             Test Lead, Senior Test
                     Scenarios.                                  Engineer, and Test Engineers.
                     2. Identify Performance Tests.
3. Architecture      1. Identify Integration Test Cases /        Test Lead, Senior Test
                     Scenarios.                                  Engineer, and Test Engineers.
                     2. Identify Performance Tests.
4. Detailed Design   1. Generate Unit Test Cases.                Test Engineers.




9.0 Deliverables
The Deliverables from the Test team would include the following:

        1.   Test Strategy.
        2.   Test Plan.
        3.   Test Case Documents.
        4.   Defect Reports.
        5.   Status Reports (Daily/Weekly/Monthly).
        6.   Test Scripts (if any).
        7.   Metric Reports.
        8.   Product Sign off Document.



