SlideShare uma empresa Scribd logo
1 de 80
Baixar para ler offline
Introduction

       Let me start with a short Intro, I am a Self employed Musician and web
developer, and have spent lot of time understanding the software testing field and the
stages of evolution in it. And I bring in front of you a very effective small dose of testing
theory. (Thanks to Mr Suresh Reddy, My senior mentor at NRSTT Hyderabad for al
the guidance and coaching. ) I have divided the Concise Testing Theory ( C.T.T )
into the following sections.




             Validation & Verification of any task can be called as Testing (1)

Sections

What is Software Testing ?
Who can do it? & Terminology.
Software Development Life Cycle ( SDLC.)
Testing methodology & Levels of testing.
Types of Testing.
Software Testing life cycle
Bug life cycle (BLC.)Automation Testing.
Performance Testing.
ISTQB & Software Certification

Each section is explained in Detail and will be enhanced as and when needed. Good luck
and enjoy the syllabi. I will shortly create concise knols on Sql server & QTP.
I do Believe Strongly that A good understanding of CTT can get you a job in IT ( at least
in India, US, UK & AU) very soon. All you need minimum is to be A graduate from any
field ( 15 years of formal education).
Software Testing?

Testing in general, is a process carried out by individuals or groups across all domains,
where
requirements exist.
Whereas, In more software subjective language, The comparison of EV (expected value )
and
AV (Actual value) is known as testing.
To get a better understanding of the sentence above , lets see a small example,
example 1.
I have a bulb , and my requirement is that when i switch it on, it should glow. So My
next step is to identify three things
Action to be performed : turn the Switch on,
Expected value              : Bulb should glow,
Actual value                : Present Status of the bulb after performing the action ,( on or
off ).
Now its the time for our test to generate a result based on a simple logic,
IF EV = AV then the result is PASS , else Any deviation will make the result as FAIL.
Now , ideally based on the difference or margin of difference between the EV and AV ,
The Severity of the defect is decided and is subjected for further rectifications.

 Conclusive Definition of Testing :
     Testing can be defined as the process in which the defects are Identified, Isolated
and then subjected for rectification and Re ensuring that the end result is defect
free (highest quality) in order to achieve maximum customer satisfaction. ( An
Interview definition. ).


Who can do testing & things to understand ( Pre requirement to
understand Software testing effectively)

       As we have understood so far, that testing is a very general task and can be
performed by anybody. But we have to understand the importance of software testing
first,
In the earlier days of software development, the support for developing the programs
was very little. Commercially very few rich companies used advanced software solutions.
As three decades of research and development has passed , IT sector has become
sufficient enough to develop advanced software programming for even space
exploration.

     Programming languages in the beginning were very complex and were not feasible
to achieve larger calculations, Hardware was costly and was very limited in its capacity
to perform continuous tasks, and Networking bandwidths was a gen next topic.
But Now Object oriented Programming languages, Terra bytes of memory and
bandwidths of hundreds of M bps, are the foundations for future's interactive web
generation. Sites like Google, MSN and Yahoo make life much easier to live with Chat ,
Conferencing and Data management etc.
So coming back to our goal (testing) , We need to realize that Software is here to
stay and become a part of mankind. Its the reality of tomorrow and a necessity to be
learnt for healthy survival in the coming technological era. More and more companies
are spending billions of cash for achieving the limits and in all these process one thing
is attaining its place more prominently which is none other than QUALITY.

 I used so much of background to stress quality because Its the final and direct goal of
Quality assurance people or Test engineers.


      Quality can be defined as Justification of all of the user's requirements in a
       product with the presence of value & safety.

So as you can see that quality is a much wider requirement of the masses when it comes
to consumption of any service, So the Job of a Tester is to act like a user and verify the
product Fit ( defect free = quality ).


      Defect can be defined as deviation from a users requirement in a particular
       product or service. So the same product have different level of defects or No
       defects based on tastes and requirements of different users.

As more and more business solutions are moving from man to machine , both
development and testing are headed for an unprecedented boom. Developers are under
constant pressure of learning and updating themselves with the latest programming
languages to keep up the pace with user's requirements.

  As a result , Testing is also becoming an Integral part of Software companies to
produce quality results for beating the competition. Previously developers used to test
the applications besides coding, but this had disadvantages , Like time consumption ,
emotional block to find defects in one's own creation and Post delivery maintenance
which is a common requirement now.
  So Finally Testers have been appointed to simultaneously test the applications along
with the development , In order to identify defects as soon as possible to reduce time
loss and improve efficiency of the developers.

   Who can do it? Well, Anyone can do it. You need to be a graduate to enter software
testing and comfortable relationship with the PC. All you have to get in your mind is that
your job is to act like a third person or end user and taste (test) the food before they
eat, and report it to the developer.
We are not correcting the mistake here, we are only identifying the defects and reporting
it and waiting for the next release to check whether the defects are rectified and also to
see whether new defects have raised.


Some Important terminologies
   Project : If something is developed based on a particular user/users
    requirements, and is used exclusively by them, then it is known as project. The
    finance is arranged by the client to complete the project successfully.
   Product : If something is developed based on company's specification (after a
    general survey of the market requirements) and can be used by multiple set of
    masses, then it is known as a product. The company has to fund the entire
    development and usually expect to break even after a successful market launch.
   Defect v/s Defective : If the product is justifying partial requirements of a
    particular customer but is usable functionally, then we say that The product has a
    defect, But If the product is functionally non usable , then even if some
    requirements are satisfied, the product still will be tagged as Defective.
   Quality Assurance : Quality assurance is a process of monitoring and guiding
    each and every role in the organization in order to make them perform their tasks
    according to the company's process guidelines.
   Quality Control or Validation : To check whether the end result is the right
    product, In other words whether the developed service meets all the
    requirements or not.
   Quality Assurance Verification : It is the method to check whether the
    developed product or project has followed the right process or guidelines through
    all the phases of development.
    NCR : Whenever the role is not following the process in performing the task
    assigned, then the penalty given to him is known as NCR ( Non Confirmacy
    raised).
    Inspection : Its a process of checking conducted by a group of members on a
    role or a department suddenly without any prior intimation.
    Audit : Audit is a process of checking conducted on the roles or a department
    with a prior notice, well in advance.
    SCM : Software configuration management : This is a process carried out
    by a team to attain the following things Version control and Change control . In
    other terms SCM team is responsible for updating all the common documents
    used across various domains to maintain uniformity, and also name the project
    and update its version numbers by gauzing the amount of change in the
    application or service after development.
   Common Repository : Its a server accessible by authorized users , to store and
    retrieve the information safely and securely.
   Base lining vs publishing : Its a process of finalizing the documents vs
    making it available to all the relevant resources.
   Release : Its a process of sending the application from the development
    department to the testing department or from the company to the market.
   SRN ( Software Release Note ) : Its a note prepared by the development
    department and sent to the testing department during the release and it contains
    information about Path of the build, Installation information, test data, list of
    known issues, version no, date and credentials etc.,
   SDN ( Software delivery Note ) : Its a note prepared by a team under the
    guidance of a Project manager, and will be submitted to the customer during
    delivery. It contains a carefully crated User manual and list of known issues and
    workarounds.
   Slippage : The extra time taken to accomplish a task is known as slippage
      Metrics v/s Matrix : Clear measurement of any task is defined as metrics,
       whereas a tabular format with linking information which is used for tracing any
       information back through references is called as Matrix.
      Template v/s document : Template is a pre-defined set of questionnaire or a
       professional fill in the blanks set up , which is used to prepare an finalize any
       document. The advantages of template is to maintain uniformity and easier
       comprehension, throughout all the documentation in a Project, group,
       department , company or even larger masses.
      Change Request v/s Impact Analysis : Change request is the proposal of the
       customer to bring some changes into the project by filling a CRT ( change request
       template). Whereas Impact analysis is a study carried out by the business
       analysts to gauze , how much impact will fall on the already developed part of the
       application and how feasible it is to go ahead with the change or demands of the
       customer.



  Software Development Life Cycle (SDLC)

     This is a nothing but a model which is used to achieve efficient results in software
companies and consists of 6 main phases. We will discuss each stage in a descriptive
fashion, dissected into four parts individually, such as Tasks, Roles , Process and Proof
of each phase completion. Remember SDLC is similar to waterfall model where the
output of one phase acts as the input for the next phase of the cycle.
SDLC Image See references below

  Initial phase
  Analysis phase
  Design phase
  Coding phase
  Testing phase
  Delivery and Maintenance


    Initial-Phase/ Requirements phase :

  Task : Interacting with the customer and gathering the requirements
  Roles : Business Analyst and Engagement Manager (EM).

   Process : First the BA will take an appointment with the customer, Collects the
requirement template, meets the customer , gathers requirements and comes back to
the company with the requirements document.
Then the EM will go through the requirements document . Try to find additional
requirements , Get the prototype ( dummy, similar to end product) developed in order
to get exact details in the case of unclear requirements or confused customers, and also
deal with any excess cost of the project.

Proof : The Requirements document is the proof of completion of the first phase of
SDLC .

Alternate names of the Requirements Document :
(Various companies and environments use different terminologies, But the logic is same
)
FRS : Functional Requirements Specification.
CRS : Customer/Client Requirement Specification,
URS : User Requirement Specifications,
BRS : Business Requirement Specifications,
BDD : Business Design Document,
BD : Business Document.


                           Analysis Phase :

  Tasks : Feasibility Study,
Analysis Image references see below
            Tentative planning,
            Technology selection
            Requirement analysis

  Roles : System Analyst (SA)
          Project Manager (PM)
          Technical manager (TM)

Process : A detailed study on the requirements and judging the possibilities and
scope of the requirements is known as feasibility study. It is done by the Manager level
teams usually. After that in the next step we move to a temporary scheduling of staff to
initiate the project, Select a suitable technology to develop the project effectively (
customer's choice is given first preference , If it is feasible ).
         And Finally the Hardware, software and Human resources required are listed
out in a document to baseline the project.

Proof   : The proof document of the analysis phase is SRS ( System
Requirement Specification.)


                            Design Phase :

  Tasks : High level Designing (H.L.D)
         : Low level Designing (L.L.D)

  Roles : Chief Architect ( handle HLD )
         : Technical lead ( involved in LLD)

Process : The Chief Architect will divide the whole project into modules by drawing
some graphical layouts using Unif[ied Modeling Language (UML). The Technical lead
will further divide those modules into sub modules using UML . And both will be
responsible for visioning the GUI ( The screen where user interacts with the
application OR Graphical User Interface .) and developing the Pseudo code (A dummy
code Or usually, Its a set of English Instructions to help the developers in coding the
application.)


                             Coding Phase :

  Task :    Developing the programs

  Roles:     Developers/programmers

Process : The developers will take the support of technical design document and will be
following the coding standards, while actual source code is developed. Some of the
Industry standard coding methods include Indentation, Color coding, Commenting etc.

Proof : The proof document of the coding phase is Source Code Document (SCD).


                         Testing Phase :

  Task : Testing
  Roles : Test engineers, Quality Assurance team.

Process : Since It is the core of our article , we will look at a descriptive fashion of
understanding the testing process in an I T environment.


      First, the Requirements document will be received by the testing department
      The test engineers will review the requirements in order to understand them.
      While revising , If at all any doubts arise, then the testers will list out all the
       unclear requirements in a document named Requirements Clarification Note
       (RCN).
      Then they send the RCN to the author of the requirement document ( i.e,
       Business Analyst ) in order to get the clarifications.
      Once the clarifications are done and sent to the testing team, they will take the
       test case template and write the test cases. ( test cases like example1 above).
      Once the first build is released, they will execute the test cases.
      While executing the test cases , If at all they find any defects, they will report it in
       the Defect Profile Document or DPD.
      Since the DPD is in the common repository, the developers will be notified of the
       status of the defect.
      Once the defects are sorted out, the development team releases the next build for
       testing. And also update the status of defects in the DPD.
      Testers will here check for the previous defects, related defects, new defects and
       update the DPD.
Proof : The last two steps are carried out till the product is defect free , so quality
assured product is the proof of the testing phase ( and that is why it is a very important
stage of SDLC in modern times).


                               Delivery & Maintenance phase

  Tasks       : Delivery
               : Post delivery maintenance

  Roles       : Deployment engineers

Process : Delivery : The deployment engineers will go to the customer environment
and install the application in the customer's environment & submit the application
along with the appropriate release notes to the customer .
          Maintenance : Once the application is delivered the customer will start using
the application, While using if any problem occurs , then it becomes a new task, Based
on the severity of the issue corresponding roles and process will be formulated. Some
customers may be expecting continuous maintenance, In that case a team of software
engineers will be taking care of the application regularly.



Software Testing Methods & Levels of Testing.

 There are three methods of testing , Black Box, White Box & Grey box testing, Let's see.

  Black Box testing : If one performs testing only on the functional part of an application (where end
users can perform actions) without having the structural knowledge, then that method of testing is known
as Black box testing. Usually test engineers are in this category.

   White Box testing : If one performs testing on the structural part of the application then that
method of testing is known as white box testing. Usually developers or white box testers are the ones to do
it successfully.

   Grey Box testing : If one performs testing on both the functional part as well as the structural part
of an application, then that method of testing is known as Grey box testing. It's an old way of testing and is
not as effective as the previous two methods and is losing popularity recently.




Levels of Testing

There are 5 levels of testing in a software environment. They are as follows,
Image: Systematic & Simultaneous Levels of testing.
            (The ticked options are the ones where Black box testers are required and the
                       other performed by the developers usually).


Unit level testing: A unit is defined as smallest part of an application. In this level of testing, each and
every program will be tested in order to confirm whether the conditions, functions and loops etc are
working fine or not. Usually the white box testers or developers are the performers at this level.

Module level testing: A module is defined as a group of related features to perform a major task. At this
level the modules are sent to the testing department and the test engineers will be validating the
functional part of the modules.

Integration level testing: At this level the developers will be developing the interfaces in order to
integrate the tested modules. While Integrating the modules they will test whether the interfaces that
connect the modules are functionally working or not.
Usually, the developers opt to integrate the modules in the following methods,


             Top down Approach: In this approach the parent modules are developed first and then
            the corresponding child modules will be developed. In case while integrating the modules if
            any mandatory module is missing then that is replaced with a temporary program called stub,
            to facilitate testing.
             Bottom up approach: In this approach the child modules will be developed first and will
            be integrated back to the parent modules. While integrating the modules in this way, if at all
            any mandatory module is missing, then that module is replaced with a temporary program
            known as driver to facilitate testing.
             Hybrid or Sandwich approach: In this approach both the TD & BU approaches will be
            mixed due to several reasons.
             Big Bang approach: In this approach one wait till all the modules are developed and
            integrate them finally at a time.

     System Level testing :

    Arguably, the Core of the testing comes in this level. Its a major phase in this testing level, because
depending on the requirement and afford-ability of the company, Automation testing, load, performance
& stress testing etc will be carried out in this phase which demands additional skills from a tester.

System: Once the stable complete application is installed into an environment, then as a whole it can be
called as a system (envt. + Application)
 At SLS level, the test engineers will perform many different types of testing, but the most Important one
is,
System Integration Test: In this type of testing one will perform some actions on the modules that
were integrated by the developers in the previous phase, and simultaneously check whether the reflections
are proper in the related connected modules.

Example 2: Let's take An ATM machine application, with the modules as follows,
Welcome screen, Balance inquiry screen, Withdrawal screen & Deposit screen.
Assuming that these 4 modules were integrated by the developers, So black box testers can perform
System Integration testing like this :-

    Test case 1:
Check Balance: Let's say amount is X
Deposit amount: Let's say amount is Y
Check balance: Expected Value
If the Actual Value is X + Y, then it is equal to the Expected Value. And because EV = AV, the result is
PASS.

User Acceptance Level Testing:

At this level the user will be invited and in his presence, testing will be carried on. Its the final testing
before user signs off and accepts the application. Whatever user may desire, the corresponding features
need to be tested functionally by the black box testers or Senior testers in the project.




   Types of Testing:

  There are two categories broadly to classify all the available types of testing software.
They are Static testing & Dynamic testing. Static means testing where No actions are
performed on the application, and features like GUI and appearance related testing
comes under this category. And Dynamic is where user needs to perform some actions
on the application to test like functionality checking, link checking etc.

  Initially there were very few popular types of testing, which were lengthy and manual.
But as the applications are becoming more and more complex, Its inevitable that, not
only features , functionality and appearance but performance and stability also are
major areas to be concentrated.

  The following types are the most is use and we will discuss about each of them one by
one.

Build Acceptance Test/Verification/Sanity testing (BAT) : Its a type of testing
in which one will perform an overall testing on the application to validate whether it is
proper for further detailed testing. Usually testing team conducts this in high risk
applications before accepting the application from the development department.
   Other two terms Smoke testing and Sanity testing are in debate from time
immemorial to find a fixed definition. Majority feel that If developers test the overall
application before releasing to the testing team for further action is known as Smoke
testing and future actions performed by black box test engineers is known as Sanity
testing. But this is cited as vice versa.
Regression Testing : Its a type of testing in one will perform testing on the already
tested functionality again and again. As the name suggests regression is revisiting the
same functionality of a feature, but it happens in the following two cases.

    Case1 : When the test engineers find some defects and report it to the developers, The
development team fixes the issue and releases the next build. Once the next build is
released, as per the requirement the testers will check wether the defect is fixed, and
also whether the related features are still working fine which might have been affected
while fixing the defect.
   Example 3: Lets say that you wanted a bike with 17" tyres instead of 15" and sent the
bike to the service center for changing it. Before sending it back for changing, you tested
all the other features and you are satisfied with them. Once the bike is returned to you
with new tyres, you will also check the look and feel overall, and check whether the
brakes still work as expected, whether the mileage maintains its expected value and all
related features that can be affected by this change. Testing the tyre in this case is New
testing and all the other features will fall under Regression testing.
    Case 2 : Whenever some new feature is added to the application in the middle of
development, and released for testing. The test engineers may need to check
additionally all the related features of the newly added feature. In this case also all the
features except the new functionality comes under Regression testing.

Retesting : Its a type of testing in which one will perform testing on the same
functionality again and again with multiple sets of data in order to come to a conclusion
whether its working fine or not.
   Example 4: Lets say the customer wants a bank application and the login screen must
have the password field that only accepts alphanumeric (eg. abc123, 56df etc) data and
no special characters ( *,$,# etc). In this case one needs to test the field with all possible
combinations and different sets of data to conclude that the password field is working
fine. Such kind of repeated testing on any feature is called Retesting.

 Alpha Testing & Beta Testing : These are both performed in the User acceptance
testing phase, If the customer visits the company and the comapny's own test engineers
perform testing in their own premises, then it is referred as Alpha testing.
If the testing is carried out at the client's environment and is carried out by the end
users or third party test experts, then it is known as Beta testing . ( Remember that both
these types are before the actual implementation of the software in the clients
environment, hence the term user acceptance. )

Installation Testing : Its a type of testing in which one will install the application into
the environment by following the guidelines given in the Deployment document /
Installation guide. If at all the installation is succesful then one will come to a conclusion
that the installation guide and the instructions in it are correct and it is appropriate to
install the application, otherwise report the problems in the deployment document.
 One main point is to ensure that, In this type of testing we are checking the user manual
and not the product. ( i.e, the installation/setup guide and not the application's
capability to install.)
Compatibility testing : Its a type of testing in which one will install the application
into multiple environments prepared with different combinations or environmental
components in order to check whether the application is suitable with those
environments or not. Usually this type of testing is carried out on products rather than
projects.

 Monkey Testing : Its a type of testing in which Abnormal actions are performed
intentionally on the application in order to check the stability of the application.
Remember it is different from stress testing or load testing. We are concentrating on the
stability of the features and functionality in the case of extreme possibilities of actions
performed on the application by different kind of users.

 Useability Testing : In this kind of testing we will check the user freindliness of the
application. Depending on the complexity of the application, one needs to test whether
information about all the features are understandable easily. Navigation of the
application must be easy to follow.

 Exploratory Testing : Its a type of testing in which domain experts ( knowledge of
business & functions) will perform testing on the application without having the
knowledge of requirements just by parallely exploring the functionality. If you go by the
simple definition of exploring, It means having minimum idea about something, & then
doing something related to it in order to know something more about it.

 End to End Testing : Its a type of testing in which we perform testing on various end
to end scenarios of an application. Which means performing various operations on the
applications like different users in real time scenario.
  Example 5 : Lets take a bank application and consider the different end to end
scenarios that exist within the application. Scenario 1 : Login, Balance enquiry , &
Logout.
             Scenario 2 : Login, Deposit, Withdrawal & logout.
             Scenario 3 : Login, Balance enquiry, Deposit, Withdrawal, Logout. etc.

 Security Testing : Its a type of testing in which one will test whether the application is
a secured application or not. In order to do the same, A black box test engineer will
concentrate on the following types of testing.

           Authentication Testing : It is a type of testing in which one will try to enter
       into the application by entering different combinations of usernames and
       passwords in order to check whether the application is allowing only the
       authorized users or not.
           Direct URL testing : It is a type of testing in which one will enter the direct
       URL's ( Uniform resource locator) of the secured pages and check whether the
       application is allowing the access to those areas or not.
           Firewall Leakage testing : Firewall term is a very widely misunderstood word
       and it generally means a barrier between two levels. ' The definition comes from
       an Old African story where people used to put fire around them while sleeping,
       in order to prevent red-ants coming near them. ' So In this type of testing, One
will enter into the application as one level of user ( ex. member, ) and try to
       access other higher level of users pages (ex. admin ,) , In order to check whether
       the firewalls are working fine or not.

 Port testing: Its a type of testing in which one will install the application into the
clients environment and check whether it is compatible with that environment. (
According to the requirements, one needs to find what kind of environment is needed to
be tested, whether it is product or project etc .)

 Soak Testing/Reliability testing : Its a type of testing in which one will perform
testing on the application continuosly for long period of time in order to check the
stability of the application.

 Mutation testing : Its a type of testing in which one will perform testing on the
application or its related factors by doing some changes to the logics or layers of the
environment. (Refer envt. knol for details) Any thing from the functionality of a feature
to the applications overall route can be tested with different set of combinations of
environment.

 Adhoc testing : Its a type of testing in which the test engineers will perform testing in
their own style after understanding the requirements clearly. ( Note that in Exploratory
testing the domain experts do not have the knowledge of requirements whereas in this
case the test engineers have the expected values set. )
Ch-1
Introduction
   ISTQB is the name of the certification board that certifies individuals and credits them with a
foundation or advanced level honors as Software Testers.
In November 02, The ISTQB was established in UK. Its entity is anyways in Belgium.




                                   Img1.1 Turkish Testing Board

                                     Member of ISTQB.org logo

The purpose of this certification is to create uniform standards in the field of software testing
across the world. ISTQB is part of the main Issuing comittee of The British Computer Society
established ISEB ( reg 1967 ref. wiki ) in order to certify the standards for systems & analysis,
networking and designs.

   The Whole process of certification and accrediation is now regulated by the American
Software Testing Qualifications Board. The candidates who succesfuly complete the exam are
awarded with the ISTQB Certified Tester Certificate and is valued highly by present quality
oriented companies, Both IT and Non IT organizations have really understood the meaning of
quality and have learnt that Testers can be skimmed through certification boards and hence the
popularity.

   Anyways, Since its a Major topic and requires more samples and questionaires, we have
divided the knol into seperate chapters to keep it short , up and running all the times. If you are
completely new to testing or want to update the basics of software testing before preparing for
ISTQB , then Please visit the Software Testing Theory - Part 1 for simplified summary.




Chapter -1 Fundamentals of Testing

1.1 Why is testing necessary?
bug, defect, error, failure, mistake, quality, risk, software, testing and exhaustive testing.
1.2 What is testing?
code, debugging, requirement, test basis, test case, test objective
1.3 Testing principles
1.4 Fundamental test process
conformation testing, exit criteria, incident, regression testing, test condition, test coverage, test
data, test execution, test log, test plan, test strategy, test summary report and testware.
1.5 The psychology of testing
independence.




I) General testing principles



Principles

A number of testing principles have been suggested over the past 40 years and offer general
guidelines common for all testing.



Principle 1 – Testing shows presence of defects

Testing can show that defects are present, but cannot prove that there are no defects. Testing
reduces the probability of undiscovered defects remaining in the software but, even if no defects
are found, it is not a proof of correctness.



Principle 2 – Exhaustive testing is impossible

 Testing everything (all combinations of inputs and preconditions) is not feasible except for
trivial cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus
testing efforts.




Principle 3 – Early testing

Testing activities should start as early as possible in the software or system development life
cycle, and should be focused on defined objectives.



Principle 4 – Defect clustering
A small number of modules contain most of the defects discovered during pre-release testing, or
are responsible for the most operational failures.



Principle 5 – Pesticide paradox

If the same tests are repeated over and over again, eventually the same set of test cases will no
longer find any new defects. To overcome this “pesticide paradox”, the test cases need to be
regularly reviewed and revised, and new and different tests need to be written to exercise
different parts of the software or system to potentially find more defects.



Principle 6 – Testing is context dependent

Testing is done differently in different contexts. For example, safety-critical software is tested
differently from an e-commerce site.



Principle 7 – Absence-of-errors fallacy

Finding and fixing defects does not help if the system built is unusable and does not fulfill the
users‟ needs and expectations.




II) Fundamental test process



1) Test planning and control

Test planning is the activity of verifying the mission of testing, defining the objectives of testing
and the specification of test activities in order to meet the objectives and mission.

 It involves taking actions necessary to meet the mission and objectives of the project. In order
to control testing, it should be monitored throughout the project. Test planning takes into
account the feedback from monitoring and control activities.



2) Test analysis and design

Test analysis and design is the activity where general testing objectives are transformed into
tangible test conditions and test cases.
Test analysis and design has the following major tasks:

      Reviewing the test basis (such as requirements, architecture, design, interfaces).
      Evaluating testability of the test basis and test objects.
      Identifying and prioritizing test conditions based on analysis of test items, the specification,
       behaviour and structure.
      Designing and prioritizing test cases.
      Identifying necessary test data to support the test conditions and test cases.
      Designing the test environment set-up and identifying any required infrastructure and tools.




3) Test implementation and execution

      Developing, implementing and prioritizing test cases.
      Developing and prioritizing test procedures, creating test data and, optionally, preparing
       test harnesses and writing automated test scripts.
      Creating test suites from the test procedures for efficient test execution.
      Verifying that the test environment has been set up correctly.
      Executing test procedures either manually or by using test execution tools, according to
       the planned sequence.
      Logging the outcome of test execution and recording the identities and versions of the
       software under test, test tools and testware.
      Comparing actual results with expected results.
      Reporting discrepancies as incidents and analyzing them in order to establish their cause
       (e.g. a defect in the code, in specified test data, in the test document, or a mistake in the
       way the test was executed).
      Repeating test activities as a result of action taken for each discrepancy. For example,
       reexecution of a test that previously failed in order to confirm a fix (confirmation
       testing), execution of a corrected test and/or execution of tests in order to ensure that
       defects have not been introduced in unchanged areas of the software or that defect fixing
       did not uncover other defects (regression testing).




4) Evaluating exit criteria and reporting

      Checking test logs against the exit criteria specified in test planning.
      Assessing if more tests are needed or if the exit criteria specified should be changed.
      Writing a test summary report for stakeholders.




5) Test closure activities

      Checking which planned deliverables have been delivered, the closure of incident reports
       or raising of change records for any that remain open, and the documentation of the
       acceptance of the system.
   Finalizing and archiving testware, the test environment and the test infrastructure for later
      reuse.
      Handover of testware to the maintenance organization.
     Analyzing lessons learned for future releases and projects, and the improvement of test
      maturity.




III) The psychology of testing

     Tests designed by the person(s) who wrote the software under test (low level of
      independence).
     Tests designed by another person(s) (e.g. from the development team).
     Tests designed by a person(s) from a different organizational group (e.g. an independent test
      team) or test specialists (e.g. usability or performance test specialists).
     Tests designed by a person(s) from a different organization or company (i.e. outsourcing or
      certification by an external body).
Ch-2. Testing throughout the software life cycle

2.1 Software development models

COTS, interactive-incremental development model, validation, verification, V-model.




2.2 Test levels

 Alpha testing, beta testing, component testing (also known as unit/module/program
testing), driver, stub, field testing, functional requirement, non-functional requirement,
integration, integration testing, robustness testing, system testing, test level, test-driven
development, test environment, user acceptance testing.


2.3 Test types

 Black box testing, code coverage, functional testing, interoperability testing, load
testing, maintainability testing, performance testing, portability testing, reliability
testing, security testing, specification based testing, stress testing, structural testing,
usability testing, white box testing


2.4 Maintenance testing

Impact analysis, maintenance testing.


i) Software development models

a) V-model (sequential development model)

 Although variants of the V-model exist, a common type of V-model uses four test levels,
corresponding to the four development levels.
The four levels used in this syllabus are:
component (unit) testing;
integration testing;
system testing;
acceptance testing.


b) Iterative-incremental development models

Iterative-incremental development is the process of establishing requirements,
designing, building and testing a system, done as a series of shorter development cycles.
Examples are: prototyping, rapid application development (RAD), Rational Unified
Process (RUP) and agile development models.


c) Testing within a life cycle model

 In any life cycle model, there are several characteristics of good testing:
? For every development activity there is a corresponding testing activity.
? Each test level has test objectives specific to that level.
? The analysis and design of tests for a given test level should begin during the
corresponding development activity.
? Testers should be involved in reviewing documents as soon as drafts are available in
the development life cycle.
ii) Test levels

a) Component testing

Component testing searches for defects in, and verifies the functioning of, software (e.g.
modules, programs, objects, classes, etc.) that are separately testable.

Component testing may include testing of functionality and specific non-functional
characteristics, such as resource-behaviour (e.g. memory leaks) or robustness testing, as
well as structural testing (e.g. branch coverage).

One approach to component testing is to prepare and automate test cases before coding.
This is called a test-first approach or test-driven development.

b) Integration testing

Integration testing tests interfaces between components, interactions with different
parts of a system, such as the operating system, file system, hardware, or interfaces
between systems.

Component integration testing tests the interactions between software components and
is done after component testing;

System integration testing tests the interactions between different systems and may be
done after system testing.

Testing of specific non-functional characteristics (e.g. performance) may be included in
integration testing.

c) System testing

 System testing is concerned with the behaviour of a whole system/product as defined by
the scope of a development project or programme.

In system testing, the test environment should correspond to the final target or
production environment as much as possible in order to minimize the risk of
environment-specific failures not being found in testing.

System testing may include tests based on risks and/or on requirements specifications,
business processes, use cases, or other high level descriptions of system behaviour,
interactions with the operating system, and system resources.

System testing should investigate both functional and non-functional requirements of
the system.
d) Acceptance testing

Acceptance testing is often the responsibility of the customers or users of a system;
other stakeholders may be involved as well.

The goal in acceptance testing is to establish confidence in the system, parts of the
system or specific non-functional characteristics of the system

Contract and regulation acceptance testing
Contract acceptance testing is performed against a contract‟s acceptance criteria for
producing custom-developed software. Acceptance criteria should be defined when the
contract is agreed. Regulation acceptance testing is performed against any regulations
that must be adhered to, such as governmental, legal or safety regulations.

Alpha and beta (or field) testing
Alpha testing is performed at the developing organization‟s site. Beta testing, or field
testing, is performed by people at their own locations. Both are performed by potential
customers, not the developers of the product.

iii) Test types

a) Testing of function (functional testing)

 The functions that a system, subsystem or component are to perform may be described
in work products such as a requirements specification, use cases, or a functional
specification, or they may be undocumented. The functions are “what” the system does.

A type of functional testing, security testing, investigates the functions (e.g. a firewall)
relating to detection of threats, such as viruses, from malicious outsiders. Another type
of functional testing, interoperability testing, evaluates the capability of the software
product to interact with one or more specified components or systems.

b) Testing of non-functional software characteristics (non-functional
testing)

 Non-functional testing includes, but is not limited to, performance testing, load testing,
stress testing, usability testing, maintainability testing, reliability testing and portability
testing. It is the testing of “how” the system works.

Non-functional testing may be performed at all test levels.

c) Testing of software structure/architecture (structural testing)

 Structural (white-box) testing may be performed at all test levels. Structural techniques
are best used after specification-based techniques, in order to help measure the
thoroughness of testing through assessment of coverage of a type of structure.
Structural testing approaches can also be applied at system, system integration or
acceptance testing levels (e.g. to business models or menu structures).

d) Testing related to changes (confirmation testing (retesting) and
regression testing)

After a defect is detected and fixed, the software should be retested to confirm that the
original defect has been successfully removed. This is called confirmation. Debugging
(defect fixing) is a development activity, not a testing activity.

Regression testing is the repeated testing of an already tested program, after
modification, to discover any defects introduced or uncovered as a result of the
change(s). It is
performed when the software, or its environment, is changed.

Regression testing may be performed at all test levels, and applies to functional, non-
functional and structural testing.

iv) Maintenance testing

Once deployed, a software system is often in service for years or decades. During this
time the system and its environment are often corrected, changed or extended.

Modifications include planned enhancement changes (e.g. release-based), corrective
and
emergency changes, and changes of environment,

Maintenance testing for migration (e.g. from one platform to another) should include
operational tests of the new environment, as well as of the changed software.

Maintenance testing for the retirement of a system may include the testing of data
migration or archiving if long data-retention periods are required.

Maintenance testing may be done at any or all test levels and for any or all test types.
Ch-3.1 Static techniques and the test process

dynamic testing, static testing, static technique



                  Img 1.1 Permanent Demand trend Jobmarket Internet.UK
  The chart provides the 3-month moving total beginning in 2004 of permanent IT jobs citing
 ISTQB within the UK as a proportion of the total demand within the Qualifications category.
                                           2009


3.2 Review process

entry criteria, formal review, informal review, inspection, metric, moderator/inspection
leader, peer review, reviewer, scribe, technical review, walkthrough.


3.3 Static analysis by tools

Compiler, complexity, control flow, data flow, static analysis


I) Phases of a formal review

1) Planning
Selecting the personal, allocating roles, defining entry and exit criteria for more formal
reviews etc.
2) Kick-off
Distributing documents, explaining the objectives, checking entry criteria etc.
3) Individual preparation
Work done by each of the participants on their own work before the review meeting,
questions and comments
4) Review meeting
Discussion or logging, make recommendations for handling the defects, or make
decisions about the defects
5) Rework
Fixing defects found, typically done by the author Fixing defects found, typically done by
the author
6) Follow-up
Checking the defects have been addressed, gathering metrics and checking on exit
criteria


II) Roles and responsibilities
Manager Decides on execution of reviews, allocates time in projects schedules, and
determines if the review objectives have been met
Moderator Leads the review, including planning, running the meeting, follow-up after
the meeting.
Author The writer or person with chief responsibility of the document(s) to be reviewed.
Reviewers Individuals with a specific technical or business background. Identify defects
and describe findings.
Scribe (recorder) Documents all the issues, problems


III) Types of review

Informal review No formal process, pair programming or a technical lead reviewing
designs and code.
Main purpose: inexpensive way to get some benefit.
Walkthrough Meeting led by the author, „scenarios, dry runs, peer group‟, open-ended
sessions.
Main purpose: learning, gaining understanding, defect finding
Technical review Documented, defined defect detection process, ideally led by trained
moderator, may be performed as a peer review, pre meeting preparation, involved by
peers and technical experts
Main purpose: discuss, make decisions, find defects, solve technical problems and check
conformance to specifications and standards
Inspection Led by trained moderator (not the author), usually peer examination,
defined roles, includes metrics, formal process, pre-meeting preparation, formal follow-
up process
Main purpose: find defects.
Note: walkthroughs, technical reviews and inspections can be performed within a peer
group-colleague at the same organization level. This type of review is called a “peer
review”.



IV) Success factors for reviews

Each review has a clear predefined objective.
The right people for the review objectives are involved.
Defects found are welcomed, and expressed objectively.
People issues and psychological aspects are dealt with (e.g. making it a positive
experience for the author).
Review techniques are applied that are suitable to the type and level of software work
products and reviewers.
Checklists or roles are used if appropriate to increase effectiveness of defect
identification.
Training is given in review techniques, especially the more formal techniques, such as
inspection.
Management supports a good review process (e.g. by incorporating adequate time for
review activities in project schedules). There is an emphasis on learning and process
improvement.


V) Cyclomatic Complexity

The number of independent paths through a program
Cyclomatic Complexity is defined as: L – N + 2P
L = the number of edges/links in a graph
N = the number of nodes in a graphs
P = the number of disconnected parts of the graph (connected components)
Alternatively one may calculate Cyclomatic Complexity using decision point rule
Decision points +1
Cyclomatic Complexity and Risk Evaluation
1 to 10a simple program, without very much risk
11 to 20 a complex program, moderate risk
21 to 50, a more complex program, high risk
> 50an un-testable program (very high risk)
Ch-4 Test Design Techniques - Modules

4.1 The test development process

Test case specification, test design, test execution schedule, test procedure specification, test
script, traceability.
4.2 Categories of test design techniques

Black-box test design technique, specification-based test design technique, white-box test design
technique, structure-based test design technique, experience-based test design technique.


4.3 Specification-based or black box techniques

Boundary value analysis, decision table testing, equivalence partitioning, state transition testing,
use case testing.


4.4 Structure-based or white box techniques

Code coverage, decision coverage, statement coverage, structure-based testing.


4.5 Experience-based techniques

Exploratory testing, fault attack.


4.6 Choosing test techniques

No specific terms.


Test Design Techniques

       Specification-based/Black-box techniques
       Structure-based/White-box techniques
       Experience-based techniques



I) Specification-based/Black-box techniques

Equivalence partitioning
Boundary value analysis
Decision table testing
State transition testing
Use case testing


Equivalence partitioning
o Inputs to the software or system are divided in to groups that are expected to exhibit similar
behavior
o Equivalence partitions or classes can be found for both valid data and invalid data
o Partitions can also be identified for outputs, internal values, time related values and for
interface values.
o Equivalence partitioning is applicable all levels of testing


Boundary value analysis

o Behavior at the edge of each equivalence partition is more likely to be incorrect. The maximum
and minimum values of a partition are its boundary values.
o A boundary value for a valid partition is a valid boundary value; the boundary of an invalid
partition is an invalid boundary value.
o Boundary value analysis can be applied at all test levels
o It is relatively easy to apply and its defect-finding capability is high
o This technique is often considered as an extension of equivalence partitioning.


Decision table testing

o In Decision table testing test cases are designed to execute the combination of inputs
o Decision tables are good way to capture system requirements that contain logical conditions.
o The decision table contains triggering conditions, often combinations of true and false for all
input conditions o It maybe applied to all situations when the action of the software depends on
several logical decisions


State transition testing

o In state transition testing test cases are designed to execute valid and invalid state transitions
o A system may exhibit a deferent response on current conditions or previous history. In this
case, that aspect of the system can be shown as a state transition diagram.
o State transition testing is much used in embedded software and technical automation.


Use case testing

o In use case testing test cases are designed to execute user scenarios
o A use case describes interactions between actors, including users and the system
o Each use case has preconditions, which need to be met for a use case to work successfully.
o A use case usually has a mainstream scenario and some times alternative branches.
o Use cases, often referred to as scenarios, are very useful for designing acceptance tests with
customer/user participation
II) Structure-based/White-box techniques

o Statement testing and coverage
o Decision testing and coverage
o Other structure-based techniques

       condition coverage
       multi condition coverage

Statement testing and coverage:

Statement
An entity in a programming language, which is typically the smallest indivisible unit of
execution
Statement coverage
The percentage of executable statements that have been exercised by a test suite
Statement testing
A white box test design technique in which test cases are designed to execute statements


Decision testing and coverage

Decision
A program point at which the control flow has two or more alternative routes
A node with two or more links to separate branches
Decision Coverage
The percentage of decision outcomes that have been exercised by a test suite
100% decision coverage implies both 100% branches coverage and 100% statement coverage
Decision testing
A white box test design technique in which test cases are designed to execute decision outcomes.
Other structure-based techniques

Condition
A logical expression that can be evaluated as true or false
Condition coverage
The percentage of condition outcomes that have been exercised by a test suite
Condition testing
A white box test design technique in which test cases are designed to execute condition
outcomes
Multiple condition testing
A white box test design technique in which test cases are designed to execute combinations of
single condition outcomes


III) Experience-based techniques
o Error guessing
o Exploratory testing
Error guessing
o Error guessing is a commonly used experience-based technique
o Generally testers anticipate defects based on experience, these defects list can be built based on
experience, available defect data, and from common knowledge about why software fails.
Exploratory testing
o Exploratory testing is concurrent test design, test execution, test logging and learning , based
on test charter containing test objectives and carried out within time boxes
o It is approach that is most useful where there are few or inadequate specifications and serve
time pressure.
Ch-5 Test organization and independence

The effectiveness of finding defects by testing and reviews can be improved by using
independent testers. Options for testing teams available are:


      No independent testers. Developers test their own code.
      Independent testers within the development teams.
      Independent test team or group within the organization, reporting to project management
       or executive management
      Independent testers from the business organization or user community.
      Independent test specialists for specific test targets such as usability testers, security
       testers or certification testers (who certify a software product against standards and
       regulations).
      Independent testers outsourced or external to the organization.


The benefits of independence include:
Independent testers see other and different defects, and are unbiased.
An independent tester can verify assumptions people made during specification and
implementation of the system.
Drawbacks include:
Isolation from the development team (if treated as totally independent).
Independent testers may be the bottleneck as the last checkpoint.
Developers may lose a sense of responsibility for quality.


b) Tasks of the test leader and tester

Test leader tasks may include:

      Coordinate the test strategy and plan with project managers and others.
      Write or review a test strategy for the project, and test policy for the organization.
      Contribute the testing perspective to other project activities, such as integration planning.
      Plan the tests – considering the context and understanding the test objectives and risks –
       including selecting test approaches, estimating the time, effort and cost of testing,
       acquiring resources, defining test levels, cycles, and planning incident management.
      Initiate the specification, preparation, implementation and execution of tests, monitor the
       test results and check the exit criteria.
      Adapt planning based on test results and progress (sometimes documented in status
       reports) and take any action necessary to compensate for problems.
      Set up adequate configuration management of testware for traceability.
      Introduce suitable metrics for measuring test progress and evaluating the quality of the
       testing and the product.
      Decide what should be automated, to what degree, and how.
      Select tools to support testing and organize any training in tool use for testers.
   Decide about the implementation of the test environment.
      Write test summary reports based on the information gathered during testing.



Tester tasks may include:

      Review and contribute to test plans.
      Analyze, review and assess user requirements, specifications and models for testability.
      Create test specifications.
      Set up the test environment (often coordinating with system administration and network
       management).
      Prepare and acquire test data.
      Implement tests on all test levels, execute and log the tests, evaluate the results and
       document the deviations from expected results.
      Use test administration or management tools and test monitoring tools as required.
      Automate tests (may be supported by a developer or a test automation expert).
      Measure performance of components and systems (if applicable).
      Review tests developed by others.

Note: People who work on test analysis, test design, specific test types or test automation may be
specialists in these roles. Depending on the test level and the risks related to the product and the
project, different people may take over the role of tester, keeping some degree of independence.
Typically testers at the component and integration level would be developers; testers at the
acceptance test level would be business experts and users, and testers for operational acceptance
testing would be operators.


c) Defining skills test staff need

Now days a testing professional must have „application‟ or „business domain‟ knowledge and
„Technology‟ expertise apart from „Testing‟ Skills


2) Test planning and estimation

a) Test planning activities

      Determining the scope and risks, and identifying the objectives of testing.
      Defining the overall approach of testing (the test strategy), including the definition of the
       test levels and entry and exit criteria.
      Integrating and coordinating the testing activities into the software life cycle activities:
       acquisition, supply, development, operation and maintenance.
      Making decisions about what to test, what roles will perform the test activities, how the
       test activities should be done, and how the test results will be evaluated.
      Scheduling test analysis and design activities.
   Scheduling test implementation, execution and evaluation.
      Assigning resources for the different activities defined.
      Defining the amount, level of detail, structure and templates for the test documentation.
      Selecting metrics for monitoring and controlling test preparation and execution, defect
       resolution and risk issues.
      Setting the level of detail for test procedures in order to provide enough information to
       support reproducible test preparation and execution.



b) Exit criteria

The purpose of exit criteria is to define when to stop testing, such as at the end of a test level or
when a set of tests has a specific goal.
Typically exit criteria may consist of:

      Thoroughness measures, such as coverage of code, functionality or risk.
      Estimates of defect density or reliability measures.
      Cost.
      Residual risks, such as defects not fixed or lack of test coverage in certain areas.
      Schedules such as those based on time to market.



c) Test estimation

Two approaches for the estimation of test effort are covered in this syllabus:

      The metrics-based approach: estimating the testing effort based on metrics of former or
       similar projects or based on typical values.
      The expert-based approach: estimating the tasks by the owner of these tasks or by
       experts.

Once the test effort is estimated, resources can be identified and a schedule can be drawn up.
The testing effort may depend on a number of factors, including:

      Characteristics of the product: the quality of the specification and other information used
       for test models (i.e. the test basis), the size of the product, the complexity of the problem
       domain, the requirements for reliability and security, and the requirements for
       documentation.
      Characteristics of the development process: the stability of the organization, tools used,
       test process, skills of the people involved, and time pressure.
      The outcome of testing: the number of defects and the amount of rework required.
d) Test approaches (test strategies)

One way to classify test approaches or strategies is based on the point in time at which the bulk
of the test design work is begun:

      Preventative approaches, where tests are designed as early as possible.
      Reactive approaches, where test design comes after the software or system has been
       produced.

Typical approaches or strategies include:

      Analytical approaches, such as risk-based testing where testing is directed to areas of
       greatest risk
      Model-based approaches, such as stochastic testing using statistical information about
       failure rates (such as reliability growth models) or usage (such as operational profiles).
      Methodical approaches, such as failure-based (including error guessing and fault-attacks),
       experienced-based, check-list based, and quality characteristic based.
      Process- or standard-compliant approaches, such as those specified by industry-specific
       standards or the various agile methodologies.
      Dynamic and heuristic approaches, such as exploratory testing where testing is more
       reactive to events than pre-planned, and where execution and evaluation are concurrent
       tasks.
      Consultative approaches, such as those where test coverage is driven primarily by the
       advice and guidance of technology and/or business domain experts outside the test team.
      Regression-averse approaches, such as those that include reuse of existing test material,
       extensive automation of functional regression tests, and standard test suites.

Different approaches may be combined, for example, a risk-based dynamic approach.


The selection of a test approach should consider the context, including

      Risk of failure of the project, hazards to the product and risks of product failure to
       humans, the environment and the company.
      Skills and experience of the people in the proposed techniques, tools and methods.
      The objective of the testing endeavour and the mission of the testing team.
      Regulatory aspects, such as external and internal regulations for the development process.
      The nature of the product and the business.



3) Test progress monitoring and control

a) Test progress monitoring
   Percentage of work done in test case preparation (or percentage of planned test cases
       prepared).
      Percentage of work done in test environment preparation.
      Test case execution (e.g. number of test cases run/not run, and test cases passed/failed).
      Defect information (e.g. defect density, defects found and fixed, failure rate, and retest
       results).
      Test coverage of requirements, risks or code.
      Subjective confidence of testers in the product.
      Dates of test milestones.
      Testing costs, including the cost compared to the benefit of finding the next defect or to
       run the next test.

b) Test Reporting

      What happened during a period of testing, such as dates when exit criteria were met.
      Analyzed information and metrics to support recommendations and decisions about
       future actions, such as an assessment of defects remaining, the economic benefit of
       continued testing, outstanding risks, and the level of confidence in tested software.

Metrics should be collected during and at the end of a test level in order to assess:

      The adequacy of the test objectives for that test level.
      The adequacy of the test approaches taken.
      The effectiveness of the testing with respect to its objectives.



c) Test control

Test control describes any guiding or corrective actions taken as a result of information and
metrics gathered and reported. Actions may cover any test activity and may affect any other
software life cycle activity or task.

Examples of test control actions are:

      Making decisions based on information from test monitoring.
      Re-prioritize tests when an identified risk occurs (e.g. software delivered late).
      Change the test schedule due to availability of a test environment.
      Set an entry criterion requiring fixes to have been retested (confirmation tested) by a
       developer before accepting them into a build.



4) Configuration management
The purpose of configuration management is to establish and maintain the integrity of the
products (components, data and documentation) of the software or system through the project
and product life cycle.
For testing, configuration management may involve ensuring that:

       All items of testware are identified, version controlled, tracked for changes, related to
        each other and related to development items (test objects) so that traceability can be
        maintained throughout the test process.
       All identified documents and software items are referenced unambiguously in test
        documentation

For the tester, configuration management helps to uniquely identify (and to reproduce) the tested
item, test documents, the tests and the test harness.
During test planning, the configuration management procedures and infrastructure (tools) should
be chosen, documented and implemented.


5) Risk and testing

a) Project risks

Project risks are the risks that surround the project‟s capability to deliver its objectives, such as:
Organizational factors:

       skill and staff shortages;
       personal and training issues;
       political issues, such as

o problems with testers communicating their needs and test results;
o failure to follow up on information found in testing and reviews (e.g. not
improving development and testing practices).

       improper attitude toward or expectations of testing (e.g. not appreciating the value of
        finding defects during testing).

Technical issues:

       problems in defining the right requirements;
       the extent that requirements can be met given existing constraints;
       the quality of the design, code and tests.

Supplier issues:

       failure of a third party;

contractual issues.
b) Product risks

Potential failure areas (adverse future events or hazards) in the software or system are known as
product risks, as they are a risk to the quality of the product, such as:

      Failure-prone software delivered.
      The potential that the software/hardware could cause harm to an individual or company.
      Poor software characteristics (e.g. functionality, reliability, usability and performance).
      Software that does not perform its intended functions.

Risks are used to decide where to start testing and where to test more; testing is used to reduce
the risk of an adverse effect occurring, or to reduce the impact of an adverse effect.
Product risks are a special type of risk to the success of a project. Testing as a risk-control
activity provides feedback about the residual risk by measuring the effectiveness of critical
defect removal and of contingency plans.
A risk-based approach to testing provides proactive opportunities to reduce the levels of product
risk, starting in the initial stages of a project. It involves the identification of product risks and
their use in guiding test planning and control, specification, preparation and execution of tests. In
a risk-based approach the risks identified may be used to:

      Determine the test techniques to be employed.
      Determine the extent of testing to be carried out.
      Prioritize testing in an attempt to find the critical defects as early as possible.
      Determine whether any non-testing activities could be employed to reduce risk (e.g.
       providing training to inexperienced designers).

Risk-based testing draws on the collective knowledge and insight of the project stakeholders to
determine the risks and the levels of testing required to address those risks.
To ensure that the chance of a product failure is minimized, risk management activities provide a
disciplined approach to:

      Assess (and reassess on a regular basis) what can go wrong (risks).
      Determine what risks are important to deal with.
      Implement actions to deal with those risks.

In addition, testing may support the identification of new risks, may help to determine what risks
should be reduced, and may lower uncertainty about risks.


6) Incident management

Since one of the objectives of testing is to find defects, the discrepancies between actual and
expected outcomes need to be logged as incidents. Incidents should be tracked from discovery
and classification to correction and confirmation of the solution. In order to manage all incidents
to completion, an organization should establish a process and rules for classification.
Incidents may be raised during development, review, testing or use of a software product. They
may be raised for issues in code or the working system, or in any type of documentation
including requirements, development documents, test documents, and user information such as
“Help” or installation guides.


Incident reports have the following objectives:

      Provide developers and other parties with feedback about the problem to enable
       identification, isolation and correction as necessary.
      Provide test leaders a means of tracking the quality of the system under test and the
       progress of the testing.
      Provide ideas for test process improvement.



Details of the incident report may include:

      Date of issue, issuing organization, and author.
      Expected and actual results.
      Identification of the test item (configuration item) and environment.
      Software or system life cycle process in which the incident was observed.
      Description of the incident to enable reproduction and resolution, including logs,
       database dumps or screenshots.
      Scope or degree of impact on stakeholder(s) interests.
      Severity of the impact on the system.
      Urgency/priority to fix.
      Status of the incident (e.g. open, deferred, duplicate, waiting to be fixed, fixed awaiting
       retest, closed).
      Conclusions, recommendations and approvals.
      Global issues, such as other areas that may be affected by a change resulting from the
       incident.
      Change history, such as the sequence of actions taken by project team members with
       respect to the incident to isolate, repair, and confirm it as fixed.

References, including the identity of the test case specification that revealed the problem.
Ch-6---------------------------------------------------------------------------------------------------------
-----------------------------------------------------




Types of test tools

Management of testing and tests

        Requirement management tools
        Incident management tools
        Configuration management tools

Static testing

        Review tools
        Static analysis tools (D)
        Modeling tools (D)

Test specification

        Test design tools
        Test data preparation tools

Test execution and logging

        Test execution tools
        Test harness/unit test framework tools (D)
        Test comparators
        Coverage measurement tools (D)
   Security tools

Performance and monitoring

      Dynamic analysis tools
      Performance/Load/Stress Testing tools
      Monitoring tools

Specific application areas

      Special tools for web-based applications
      Special tools for specific development flat forms
      Special tools for embedded systems
      Tool support using other tools



Test Tools and their purposes

Requirement management tools

 Store requirements, check for consistency, allow requirements to be prioritized, trace changes,
coverage of requirements etc.


Incident management tools

 Store and manage incident reports, facilitating prioritization, assessment of actions to people and
attribution of status etc.


Configuration management tools

 Store information about versions and builds of software and testware; enable traceability
between testware and software work products etc.


Review tools

Store information, store and communicate review comments etc.


Static analysis tools (D)

 The enforcement of coding standards, the analysis of structures and dependencies, aiding in
understanding the code etc.
Modeling tools (D)

Validate models of the software, find defects in data model, state model or an object model etc.


Test design tools

Generate test inputs or executable tests, generate expected out comes etc.


Test data preparation tools

Preparing test data, Manipulate databases, files or data transmissions to set up test data etc.


Test execution tools

 Record tests, Automated test execution, use inputs and expected outcomes, compare results with
expected outcomes, repeat tests, dynamic comparison, manipulate the tests using scripting
language etc.


Test harness/unit test framework tools (D)

 Test components or part of a system by simulating the environment, provide an execution
framework in middleware etc.


Test comparators

 Determine differences between files, databases or test results post-execution comparison, may
use test oracle if it is automated etc.


Coverage measurement tools (D)

 Measure the percentage of specific types of code structure (ex: statements, branches or
decisions, and module or function calls)


Security tools

Check for computer viruses and denial of service attacks, search for specific vulnerabilities of
the system etc
.

Dynamic analysis tools (D)

    Detect memory leaks, identify time dependencies and identify pointer arithmetic errors.


Performance/Load/Stress Testing tools

 Measure load or stress, Monitor and report on how a system behaves a variety of simulated
usage conditions, simulate a load on an application/a database/or a system environment,
repetitive execution of tests etc.


Monitoring tools

 Continuously analyze, verify and report on specific system resources; store information about
the version and build of the software and testware, and enable traceability.
Tool support using other tools
Some tools use other tools (Ex: QTP uses excel sheet and SQL tools)


Potential benefits and risks of tool support for testing

Benefits:

o Repetitive work is reduced
o Greater consistency and repeatability
o Objective assessment
o Ease of access to information about tests or testing


Risks:

o Unrealistic expectations for the tool
o Underestimating the time and effort needed to achieve significant and continues benefits from
the tool
o Underestimating the effort required to maintain the test assets generated by the tool
o Over-reliance on the tool


Special considerations for some types of tools

Following tools have special considerations

          Test execution tools
   Performance testing tools
      Static testing tools
      Test management tools



Introducing a tool into an organization

The following factors are important in selecting a tool:

o Assessment of the organization maturity
o Identification of the areas within the organization where tool support will help to improve
testing process
o Evaluation of tools against clear requirements and objective criteria
o Proof-of-concept to see whether the product works as desired and meets the requirements and
objectives defined for it
o Evaluation of the vendor (training, support and other commercial aspects) or open-source
network of support
o Identifying and planning internal implementation (including coaching and mentoring for those
new to the use of the tool)


The objectives for a pilot project for a new tool

o To learn more about the tool
o To see how the tool would fit with existing processes or documentation
o To decide on standard ways of using the tool that will work for all potential users
o To evaluate the pilot project agonist its objectives


Successes factors for the deployment of the new tool within an organization

o Rolling out the tool to the rest of the organization incrementally
o Adapting and improving process to fit with the use of the tool
o Providing training and coaching/mentoring for new users.
o Defining usage guidelines
o Implementing a way to learn lessons from tool use.
o Monitoring tool use and benefits.
Questions-with-Answers
                                         Ch- 1

1 When what is visible to end-users is a deviation from the specific or
expected behavior, this is called:
a) an error
b) a fault
c) a failure
d) a defect
e) a mistake

2 Regression testing should be performed:
v) every week
w) after the software has changed
x) as often as possible
y) when the environment has changed
z) when the project manager says

a) v & w are true, x – z are false
b) w, x & y are true, v & z are false
c) w & y are true, v, x & z are false
d) w is true, v, x y and z are false
e) all of the above are true

3 IEEE 829 test plan documentation standard contains all of the following
except:
a) test items
b) test deliverables
c) test tasks
d) test environment
e) test specification

4 Testing should be stopped when:
a) all the planned tests have been run
b) time has run out
c) all faults have been fixed correctly
d) both a) and c)
e) it depends on the risks for the system being tested

5 Order numbers on a stock control system can range between 10000 and
99999 inclusive. Which of the following inputs might be a result of
designing tests for only valid equivalence classes and valid boundaries:
a) 1000, 5000, 99999
b) 9999, 50000, 100000
c) 10000, 50000, 99999
d) 10000, 99999
e) 9999, 10000, 50000, 99999, 10000

6 Consider the following statements about early test design:
i. early test design can prevent fault multiplication
ii. faults found during early test design are more expensive to fix
iii. early test design can find faults
iv. early test design can cause changes to the requirements
v. early test design takes more effort

a) i, iii & iv are true. Ii & v are false
b) iii is true, I, ii, iv & v are false
c) iii & iv are true. i, ii & v are false
d) i, iii, iv & v are true, ii us false
e) i & iii are true, ii, iv & v are false

7 Non-functional system testing includes:
a) testing to see where the system does not function properly
b) testing quality attributes of the system including performance and usability
c) testing a system feature using only the software required for that action
d) testing a system feature using only the software required for that function
e) testing for functions that should not exist

8 Which of the following is NOT part of configuration management:
a) status accounting of configuration items
b) auditing conformance to ISO9001
c) identification of test versions
d) record of changes to documentation over time
e) controlled library access

9 Which of the following is the main purpose of the integration strategy for
integration testing in the small?
a) to ensure that all of the small modules are tested adequately
b) to ensure that the system interfaces to other systems and networks
c) to specify which modules to combine when and how many at once
d) to ensure that the integration testing can be performed by a small team
e) to specify how the software should be divided into modules

10 What is the purpose of test completion criteria in a test plan:
a) to know when a specific test has finished its execution
b) to ensure that the test case specification is complete
c) to set the criteria used in generating test inputs
d) to know when test planning is complete
e) to plan when to stop testing

11 Consider the following statements
i. an incident may be closed without being fixed
ii. incidents may not be raised against documentation
iii. the final stage of incident tracking is fixing
iv. the incident record does not include information on test environments
v. incidents should be raised when someone other than the author of the software
performs the test

a) ii and v are true, I, iii and iv are false
b) i and v are true, ii, iii and iv are false
c) i, iv and v are true, ii and iii are false
d) i and ii are true, iii, iv and v are false
e) i is true, ii, iii, iv and v are false




12 Given the following code, which is true about the minimum number of
test cases required for full statement and branch coverage:
Read P
Read Q
IF P+Q > 100 THEN
Print “Large”
ENDIF
If P > 50 THEN
Print “P Large”
ENDIF

a) 1 test for statement coverage, 3 for branch coverage
b) 1 test for statement coverage, 2 for branch coverage
c) 1 test for statement coverage, 1 for branch coverage
d) 2 tests for statement coverage, 3 for branch coverage
e) 2 tests for statement coverage, 2 for branch coverage

13 Given the following:
Switch PC on
Start “outlook”
IF outlook appears THEN
Send an email
Close outlook

a) 1 test for statement coverage, 1 for branch coverage
b) 1 test for statement coverage, 2 for branch coverage
c) 1 test for statement coverage. 3 for branch coverage
d) 2 tests for statement coverage, 2 for branch coverage
e) 2 tests for statement coverage, 3 for branch coverage

14 Given the following code, which is true:
IF A > B THEN
C=A–B
ELSE
C=A+B
ENDIF
Read D
IF C = D Then
Print “Error”
ENDIF

a) 1 test for statement coverage, 3 for branch coverage
b) 2 tests for statement coverage, 2 for branch coverage
c) 2 tests for statement coverage. 3 for branch coverage
d) 3 tests for statement coverage, 3 for branch coverage
e) 3 tests for statement coverage, 2 for branch coverage

15 Consider the following:
Pick up and read the newspaper
Look at what is on television
If there is a program that you are interested in watching then switch the the television
on and watch the program
Otherwise
Continue reading the newspaper
If there is a crossword in the newspaper then try and complete the crossword

a) SC = 1 and DC = 1
b) SC = 1 and DC = 2
c) SC = 1 and DC = 3
d) SC = 2 and DC = 2
e) SC = 2 and DC = 3

16 The place to start if you want a (new) test tool is:
a) Attend a tool exhibition
b) Invite a vendor to give a demo
c) Analyze your needs and requirements
d) Find out what your budget would be for the tool
e) Search the internet

17 When a new testing tool is purchased, it should be used first by:
a) A small team to establish the best way to use the tool
b) Everyone who may eventually have some use for the tool
c) The independent testing team
d) The managers to see what projects it should be used in
e) The vendor contractor to write the initial scripts

18 What can static analysis NOT find?
a) The use of a variable before it has been defined
b) Unreachable (“dead”) code
c) Whether the value stored in a variable is correct
d) The re-definition of a variable before it has been used
e) Array bound violations

19 Which of the following is NOT a black box technique:
a) Equivalence partitioning
b) State transition testing
c) LCSAJ
d) Syntax testing
e) Boundary value analysis

20 Beta testing is:
a) Performed by customers at their own site
b) Performed by customers at their software developer‟s site
c) Performed by an independent test team
d) Useful to test bespoke software
e) Performed as early as possible in the lifecycle

21 Given the following types of tool, which tools would typically be used by
developers and which by an independent test team:
i. static analysis
ii. performance testing
iii. test management
iv. dynamic analysis
v. test running
vi. test data preparation

a) developers would typically use i, iv and vi; test team ii, iii and v
b) developers would typically use i and iv; test team ii, iii, v and vi
c) developers would typically use i, ii, iii and iv; test team v and vi
d) developers would typically use ii, iv and vi; test team I, ii and v
e) developers would typically use i, iii, iv and v; test team ii and vi

22 The main focus of acceptance testing is:
a) finding faults in the system
b) ensuring that the system is acceptable to all users
c) testing the system with other systems
d) testing for a business perspective
e) testing by an independent test team

23 Which of the following statements about the component testing standard
is false:
a) black box design techniques all have an associated measurement technique
b) white box design techniques all have an associated measurement technique
c) cyclomatic complexity is not a test measurement technique
d) black box measurement techniques all have an associated test design technique
e) white box measurement techniques all have an associated test design technique
24 Which of the following statements is NOT true:
a) inspection is the most formal review process
b) inspections should be led by a trained leader
c) managers can perform inspections on management documents
d) inspection is appropriate even when there are no written documents
e) inspection compares documents with predecessor (source) documents

25 A typical commercial test execution tool would be able to perform all of
the following EXCEPT:
a) generating expected outputs
b) replaying inputs according to a programmed script
c) comparison of expected outcomes with actual outcomes
d) recording test inputs
e) reading test values from a data file

26 The difference between re-testing and regression testing is
a) re-testing is running a test again; regression testing looks for unexpected side effects
b) re-testing looks for unexpected side effects; regression testing is repeating those tests
c) re-testing is done after faults are fixed; regression testing is done earlier
d) re-testing uses different environments, regression testing uses the same environment
e) re-testing is done by developers, regression testing is done by independent testers

27 Expected results are:
a) only important in system testing
b) only used in component testing
c) never specified in advance
d) most useful when specified in advance
e) derived from the code

28 Test managers should not:
a) report on deviations from the project plan
b) sign the system off for release
c) re-allocate resource to meet original plans
d) raise incidents on faults that they have found
e) provide information for risk analysis and quality improvement

29 Unreachable code would best be found using:
a) code reviews
b) code inspections
c) a coverage tool
d) a test management tool
e) a static analysis tool

30 A tool that supports traceability, recording of incidents or scheduling of
tests is called:
a) a dynamic analysis tool
b) a test execution tool
c) a debugging tool
d) a test management tool
e) a configuration management tool

31 What information need not be included in a test incident report:
a) how to fix the fault
b) how to reproduce the fault
c) test environment details
d) severity, priority
e) the actual and expected outcomes

32 Which expression best matches the following characteristics or review
processes:
1. led by author
2. Undocumented
3. No management participation
4. Led by A trained moderator or leader
5. uses entry exit criteria

s) inspection
t) peer review
u) informal review
v) walkthrough

a) s = 4, t = 3, u = 2 and 5, v = 1
b) s = 4 and 5, t = 3, u = 2, v = 1
c) s = 1 and 5, t = 3, u = 2, v = 4
d) s = 5, t = 4, u = 3, v = 1 and 2
e) s = 4 and 5, t = 1, u = 2, v = 3

33 Which of the following is NOT part of system testing:
a) business process-based testing
b) performance, load and stress testing
c) requirements-based testing
d) usability testing
e) top-down integration testing

34 What statement about expected outcomes is FALSE:
a) expected outcomes are defined by the software‟s behavior
b) expected outcomes are derived from a specification, not from the code
c) expected outcomes include outputs to a screen and changes to files and databases
d) expected outcomes should be predicted before a test is run
e) expected outcomes may include timing constraints such as response times

35 The standard that gives definitions of testing terms is:
a) ISO/IEC 12207
b) BS7925-1
c) BS7925-2

d) ANSI/IEEE 829
e) ANSI/IEEE 729

36 The cost of fixing a fault:
a) Is not important
b) Increases as we move the product towards live use
c) Decreases as we move the product towards live use
d) Is more expensive if found in requirements than functional design
e) Can never be determined

37 Which of the following is NOT included in the Test Plan document of the
Test Documentation Standard:
a) Test items (i.e. software versions)
b) What is not to be tested
c) Test environments
d) Quality plans
e) Schedules and deadlines

38 Could reviews or inspections be considered part of testing:
a) No, because they apply to development documentation
b) No, because they are normally applied before testing
c) No, because they do not apply to the test documentation
d) Yes, because both help detect faults and improve quality
e) Yes, because testing includes all non-constructive activities

39 Which of the following is not part of performance testing:
a) Measuring response time
b) Measuring transaction rates
c) Recovery testing
d) Simulating many users
e) Generating many transactions

40 Error guessing is best used
a) As the first approach to deriving test cases
b) After more formal techniques have been applied
c) By inexperienced testers
d) After the system has gone live
e) Only by end users
Ch- 2



1. Which of the following is true?

a. Testing is the same as quality assurance
b. Testing is a part of quality assurance
c. Testing is not a part of quality assurance
d. Testing is same as debugging

2. Why is testing necessary?
a. Because testing is good method to make there are not defects in the software
b. Because verification and validation are not enough to get to know the quality of the
software
c. Because testing measures the quality of the software system and helps to increase the
quality
d. Because testing finds more defects than reviews and inspections.

3. Integration testing has following characteristics
I. It can be done in incremental manner
II. It is always done after system testing
III. It includes functional tests
IV. It includes non-functional tests
a. I, II and III are correct
b. I is correct
c. I, III and IV are correct
d. I, II and IV are correct

4. A number of critical bugs are fixed in software. All the bugs are in one
module, related to reports. The test manager decides to do regression
testing only on the reports module.
a. The test manager should do only automated regression testing.
b. The test manager is justified in her decision because no bug has been fixed in other
modules
c. The test manager should only do confirmation testing. There is no need to do
regression testing
d. Regression testing should be done on other modules as well because fixing one
module may affect other modules

5. Which of the following is correct about static analysis tools?
a. Static analysis tools are used only by developers
b. Compilers may offer some support for static analysis
c. Static analysis tools help find failures rather than defects
d. Static analysis tools require execution of the code to analyze the coverage
6. In a flight reservation system, the number of available seats in each plane
model is an input. A plane may have any positive number of available seats,
up to the given capacity of the plane. Using Boundary Value analysis, a list
of available – seat values were generated. Which of the following lists is
correct?
a. 1, 2, capacity -1, capacity, capacity plus 1
b. 0, 1, capacity, capacity plus 1
c. 0, 1, 2, capacity plus 1, a very large number
d. 0, 1, 10, 100, capacity, capacity plus one

7. Which of the following is correct about static analysis tools
a. They help you find defects rather than failures
b. They are used by developers only
c. They require compilation of code
d. They are useful only for regulated industries

8. In foundation level syllabus you will find the main basic principles of
testing. Which of the following sentences describes one of these basic
principles?
a. Complete testing of software is attainable if you have enough resources and test tools
b. With automated testing you can make statements with more confidence about the
Quality of a product than with manual testing
c. For a software system, it is not possible, under normal conditions, to test all input and
preconditions.
d. A goal of testing is to show that the software is defect free.

9. Which of the following statements contains a valid goal for a functional
test set?
a. A goal is that no more failures will result from the remaining defects
    b. A goal is to find as many failures as possible so that the cause of the failures can be
    identified and fixed
c. A goal is to eliminate as much as possible the causes of defects
d. A goal is to fulfill all requirements for testing that are defined in the project plan.

10. In system testing...
a. .. Both functional and non-functional requirements are to be tested
b. ... Only functional requirements are tested; non-functional requirements are validated
in a review
c. ... Only non-functional requirements are tested; functional requirements are validated
in a review
d. ... Only requirements which are listed in the specification document are to be tested

11. Which of the following activities differentiate a walkthrough from a
formal review?
a. A walkthrough does not follow a defined process
b. For a walkthrough individual preparation by the reviewers is optional
c. A walkthrough requires meeting
Istqb intro with question answer for exam preparation
Istqb intro with question answer for exam preparation
Istqb intro with question answer for exam preparation
Istqb intro with question answer for exam preparation
Istqb intro with question answer for exam preparation
Istqb intro with question answer for exam preparation
Istqb intro with question answer for exam preparation
Istqb intro with question answer for exam preparation
Istqb intro with question answer for exam preparation
Istqb intro with question answer for exam preparation
Istqb intro with question answer for exam preparation
Istqb intro with question answer for exam preparation
Istqb intro with question answer for exam preparation
Istqb intro with question answer for exam preparation
Istqb intro with question answer for exam preparation
Istqb intro with question answer for exam preparation
Istqb intro with question answer for exam preparation
Istqb intro with question answer for exam preparation
Istqb intro with question answer for exam preparation
Istqb intro with question answer for exam preparation
Istqb intro with question answer for exam preparation
Istqb intro with question answer for exam preparation
Istqb intro with question answer for exam preparation
Istqb intro with question answer for exam preparation
Istqb intro with question answer for exam preparation

Mais conteúdo relacionado

Mais procurados

QA Interview Questions With Answers
QA Interview Questions With AnswersQA Interview Questions With Answers
QA Interview Questions With AnswersH2Kinfosys
 
Manual testing interview question by INFOTECH
Manual testing interview question by INFOTECHManual testing interview question by INFOTECH
Manual testing interview question by INFOTECHPravinsinh
 
Manual Testing Material by Durgasoft
Manual Testing Material by DurgasoftManual Testing Material by Durgasoft
Manual Testing Material by DurgasoftDurga Prasad
 
Testing a GPS application | Testbytes
Testing a GPS application | TestbytesTesting a GPS application | Testbytes
Testing a GPS application | TestbytesTestbytes
 
Software Quality Assurance (QA) Testing Interview Questions & Answers
Software Quality Assurance (QA) Testing Interview Questions & AnswersSoftware Quality Assurance (QA) Testing Interview Questions & Answers
Software Quality Assurance (QA) Testing Interview Questions & AnswersJanBask Training
 
38475471 qa-and-software-testing-interview-questions-and-answers
38475471 qa-and-software-testing-interview-questions-and-answers38475471 qa-and-software-testing-interview-questions-and-answers
38475471 qa-and-software-testing-interview-questions-and-answersMaria FutureThoughts
 
Software testing-in-gurgaon
Software testing-in-gurgaonSoftware testing-in-gurgaon
Software testing-in-gurgaonAP EDUSOFT
 
Test Life Cycle - Presentation - Important concepts covered
Test Life Cycle - Presentation - Important concepts coveredTest Life Cycle - Presentation - Important concepts covered
Test Life Cycle - Presentation - Important concepts coveredSunil Kumar Gunasekaran
 
Unit 1 basic concepts of testing & quality
Unit 1   basic concepts of testing & qualityUnit 1   basic concepts of testing & quality
Unit 1 basic concepts of testing & qualityravikhimani1984
 
QA interview questions and answers
QA interview questions and answersQA interview questions and answers
QA interview questions and answersMehul Chauhan
 
Software testing - basics
Software testing - basicsSoftware testing - basics
Software testing - basicsPrasad Gali
 
Software testing course - Manual
Software testing course - ManualSoftware testing course - Manual
Software testing course - ManualPankaj Dubey
 
Software testing live project training
Software testing live project trainingSoftware testing live project training
Software testing live project trainingTOPS Technologies
 
Phases of software development
Phases of software developmentPhases of software development
Phases of software developmentCeline George
 
Automation testing interview pdf org
Automation testing interview pdf orgAutomation testing interview pdf org
Automation testing interview pdf orgTestbytes
 
Interview questions
Interview questionsInterview questions
Interview questionssivareddyeda
 
Qa interview questions and answers
Qa interview questions and answersQa interview questions and answers
Qa interview questions and answerssjayasankar2k8
 

Mais procurados (20)

QA Interview Questions With Answers
QA Interview Questions With AnswersQA Interview Questions With Answers
QA Interview Questions With Answers
 
Manual testing interview question by INFOTECH
Manual testing interview question by INFOTECHManual testing interview question by INFOTECH
Manual testing interview question by INFOTECH
 
Manual Testing Material by Durgasoft
Manual Testing Material by DurgasoftManual Testing Material by Durgasoft
Manual Testing Material by Durgasoft
 
Beginners QA Testing
Beginners QA TestingBeginners QA Testing
Beginners QA Testing
 
Testing a GPS application | Testbytes
Testing a GPS application | TestbytesTesting a GPS application | Testbytes
Testing a GPS application | Testbytes
 
Software Quality Assurance (QA) Testing Interview Questions & Answers
Software Quality Assurance (QA) Testing Interview Questions & AnswersSoftware Quality Assurance (QA) Testing Interview Questions & Answers
Software Quality Assurance (QA) Testing Interview Questions & Answers
 
38475471 qa-and-software-testing-interview-questions-and-answers
38475471 qa-and-software-testing-interview-questions-and-answers38475471 qa-and-software-testing-interview-questions-and-answers
38475471 qa-and-software-testing-interview-questions-and-answers
 
Software testing-in-gurgaon
Software testing-in-gurgaonSoftware testing-in-gurgaon
Software testing-in-gurgaon
 
Test Life Cycle - Presentation - Important concepts covered
Test Life Cycle - Presentation - Important concepts coveredTest Life Cycle - Presentation - Important concepts covered
Test Life Cycle - Presentation - Important concepts covered
 
Unit 1 basic concepts of testing & quality
Unit 1   basic concepts of testing & qualityUnit 1   basic concepts of testing & quality
Unit 1 basic concepts of testing & quality
 
QA interview questions and answers
QA interview questions and answersQA interview questions and answers
QA interview questions and answers
 
Test plan
Test planTest plan
Test plan
 
Software testing - basics
Software testing - basicsSoftware testing - basics
Software testing - basics
 
Software testing course - Manual
Software testing course - ManualSoftware testing course - Manual
Software testing course - Manual
 
Software testing live project training
Software testing live project trainingSoftware testing live project training
Software testing live project training
 
Phases of software development
Phases of software developmentPhases of software development
Phases of software development
 
Automation testing interview pdf org
Automation testing interview pdf orgAutomation testing interview pdf org
Automation testing interview pdf org
 
Interview questions
Interview questionsInterview questions
Interview questions
 
Qa interview questions and answers
Qa interview questions and answersQa interview questions and answers
Qa interview questions and answers
 
Formal method
Formal methodFormal method
Formal method
 

Destaque

Beginners guide to software testing
Beginners guide to software testingBeginners guide to software testing
Beginners guide to software testingKevalkumar Shah
 
Software testing objective_types
Software testing objective_typesSoftware testing objective_types
Software testing objective_typessangeeswaran
 
Istqb question-paper-dump-6
Istqb question-paper-dump-6Istqb question-paper-dump-6
Istqb question-paper-dump-6TestingGeeks
 
Acls prestudy packet
Acls prestudy packetAcls prestudy packet
Acls prestudy packetcaplansyndrom
 
Testing técnico - Automatización en web y mobile para pruebas funcionales y p...
Testing técnico - Automatización en web y mobile para pruebas funcionales y p...Testing técnico - Automatización en web y mobile para pruebas funcionales y p...
Testing técnico - Automatización en web y mobile para pruebas funcionales y p...Abstracta
 
Software testing quiz questions and answers
Software testing quiz questions and answersSoftware testing quiz questions and answers
Software testing quiz questions and answersRajendraG
 

Destaque (10)

Html
HtmlHtml
Html
 
Software Testing notes
Software Testing notesSoftware Testing notes
Software Testing notes
 
Beginners guide to software testing
Beginners guide to software testingBeginners guide to software testing
Beginners guide to software testing
 
Software testing objective_types
Software testing objective_typesSoftware testing objective_types
Software testing objective_types
 
Istqb question-paper-dump-6
Istqb question-paper-dump-6Istqb question-paper-dump-6
Istqb question-paper-dump-6
 
Acls prestudy packet
Acls prestudy packetAcls prestudy packet
Acls prestudy packet
 
Testing técnico - Automatización en web y mobile para pruebas funcionales y p...
Testing técnico - Automatización en web y mobile para pruebas funcionales y p...Testing técnico - Automatización en web y mobile para pruebas funcionales y p...
Testing técnico - Automatización en web y mobile para pruebas funcionales y p...
 
Manual Testing.
Manual Testing.Manual Testing.
Manual Testing.
 
Software testing quiz questions and answers
Software testing quiz questions and answersSoftware testing quiz questions and answers
Software testing quiz questions and answers
 
Software engineering-quiz
Software engineering-quizSoftware engineering-quiz
Software engineering-quiz
 

Semelhante a Istqb intro with question answer for exam preparation

Semelhante a Istqb intro with question answer for exam preparation (20)

Software Testing Interview Questions For Experienced
Software Testing Interview Questions For ExperiencedSoftware Testing Interview Questions For Experienced
Software Testing Interview Questions For Experienced
 
DevOps for beginners
DevOps for beginnersDevOps for beginners
DevOps for beginners
 
Software Quality Assurance training by QuontraSolutions
Software Quality Assurance training by QuontraSolutionsSoftware Quality Assurance training by QuontraSolutions
Software Quality Assurance training by QuontraSolutions
 
Software_testing Unit 1 bca V.pdf
Software_testing Unit 1 bca V.pdfSoftware_testing Unit 1 bca V.pdf
Software_testing Unit 1 bca V.pdf
 
Manual Testing
Manual TestingManual Testing
Manual Testing
 
STLC & SDLC-ppt-1.pptx
STLC & SDLC-ppt-1.pptxSTLC & SDLC-ppt-1.pptx
STLC & SDLC-ppt-1.pptx
 
Software testing pdf
Software testing pdfSoftware testing pdf
Software testing pdf
 
Software testing q as collection by ravi
Software testing q as   collection by raviSoftware testing q as   collection by ravi
Software testing q as collection by ravi
 
Software testing pdf
Software testing pdfSoftware testing pdf
Software testing pdf
 
Software testing
Software testingSoftware testing
Software testing
 
Software testing
Software testingSoftware testing
Software testing
 
Software testing(1)
Software testing(1)Software testing(1)
Software testing(1)
 
Software testing
Software testingSoftware testing
Software testing
 
software_testing pdf.pdf
software_testing pdf.pdfsoftware_testing pdf.pdf
software_testing pdf.pdf
 
Software testing pdf
Software testing pdfSoftware testing pdf
Software testing pdf
 
Computer1
Computer1Computer1
Computer1
 
Qa analyst training
Qa analyst training Qa analyst training
Qa analyst training
 
Testing Presentation
Testing PresentationTesting Presentation
Testing Presentation
 
Quality Assurance Process
Quality Assurance ProcessQuality Assurance Process
Quality Assurance Process
 
Acceptance test driven development
Acceptance test driven developmentAcceptance test driven development
Acceptance test driven development
 

Último

My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024The Digital Insurer
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenHervé Boutemy
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Commit University
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brandgvaughan
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Mattias Andersson
 
Artificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxArtificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxhariprasad279825
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piececharlottematthew16
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024Stephanie Beckett
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyAlfredo García Lavilla
 
"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii SoldatenkoFwdays
 
Vector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector DatabasesVector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector DatabasesZilliz
 
Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Manik S Magar
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Mark Simos
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):comworks
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Patryk Bandurski
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsMemoori
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsSergiu Bodiu
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubKalema Edgar
 

Último (20)

My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache Maven
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brand
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?
 
Artificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxArtificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptx
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piece
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easy
 
"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko
 
Vector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector DatabasesVector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector Databases
 
Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial Buildings
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platforms
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 

Istqb intro with question answer for exam preparation

  • 1. Introduction Let me start with a short Intro, I am a Self employed Musician and web developer, and have spent lot of time understanding the software testing field and the stages of evolution in it. And I bring in front of you a very effective small dose of testing theory. (Thanks to Mr Suresh Reddy, My senior mentor at NRSTT Hyderabad for al the guidance and coaching. ) I have divided the Concise Testing Theory ( C.T.T ) into the following sections. Validation & Verification of any task can be called as Testing (1) Sections What is Software Testing ? Who can do it? & Terminology. Software Development Life Cycle ( SDLC.) Testing methodology & Levels of testing. Types of Testing. Software Testing life cycle Bug life cycle (BLC.)Automation Testing. Performance Testing. ISTQB & Software Certification Each section is explained in Detail and will be enhanced as and when needed. Good luck and enjoy the syllabi. I will shortly create concise knols on Sql server & QTP. I do Believe Strongly that A good understanding of CTT can get you a job in IT ( at least in India, US, UK & AU) very soon. All you need minimum is to be A graduate from any field ( 15 years of formal education).
  • 2. Software Testing? Testing in general, is a process carried out by individuals or groups across all domains, where requirements exist. Whereas, In more software subjective language, The comparison of EV (expected value ) and AV (Actual value) is known as testing. To get a better understanding of the sentence above , lets see a small example, example 1. I have a bulb , and my requirement is that when i switch it on, it should glow. So My next step is to identify three things Action to be performed : turn the Switch on, Expected value : Bulb should glow, Actual value : Present Status of the bulb after performing the action ,( on or off ). Now its the time for our test to generate a result based on a simple logic, IF EV = AV then the result is PASS , else Any deviation will make the result as FAIL. Now , ideally based on the difference or margin of difference between the EV and AV , The Severity of the defect is decided and is subjected for further rectifications. Conclusive Definition of Testing : Testing can be defined as the process in which the defects are Identified, Isolated and then subjected for rectification and Re ensuring that the end result is defect free (highest quality) in order to achieve maximum customer satisfaction. ( An Interview definition. ). Who can do testing & things to understand ( Pre requirement to understand Software testing effectively) As we have understood so far, that testing is a very general task and can be performed by anybody. But we have to understand the importance of software testing first, In the earlier days of software development, the support for developing the programs was very little. Commercially very few rich companies used advanced software solutions. As three decades of research and development has passed , IT sector has become sufficient enough to develop advanced software programming for even space exploration. Programming languages in the beginning were very complex and were not feasible to achieve larger calculations, Hardware was costly and was very limited in its capacity to perform continuous tasks, and Networking bandwidths was a gen next topic. But Now Object oriented Programming languages, Terra bytes of memory and bandwidths of hundreds of M bps, are the foundations for future's interactive web generation. Sites like Google, MSN and Yahoo make life much easier to live with Chat , Conferencing and Data management etc.
  • 3. So coming back to our goal (testing) , We need to realize that Software is here to stay and become a part of mankind. Its the reality of tomorrow and a necessity to be learnt for healthy survival in the coming technological era. More and more companies are spending billions of cash for achieving the limits and in all these process one thing is attaining its place more prominently which is none other than QUALITY. I used so much of background to stress quality because Its the final and direct goal of Quality assurance people or Test engineers.  Quality can be defined as Justification of all of the user's requirements in a product with the presence of value & safety. So as you can see that quality is a much wider requirement of the masses when it comes to consumption of any service, So the Job of a Tester is to act like a user and verify the product Fit ( defect free = quality ).  Defect can be defined as deviation from a users requirement in a particular product or service. So the same product have different level of defects or No defects based on tastes and requirements of different users. As more and more business solutions are moving from man to machine , both development and testing are headed for an unprecedented boom. Developers are under constant pressure of learning and updating themselves with the latest programming languages to keep up the pace with user's requirements. As a result , Testing is also becoming an Integral part of Software companies to produce quality results for beating the competition. Previously developers used to test the applications besides coding, but this had disadvantages , Like time consumption , emotional block to find defects in one's own creation and Post delivery maintenance which is a common requirement now. So Finally Testers have been appointed to simultaneously test the applications along with the development , In order to identify defects as soon as possible to reduce time loss and improve efficiency of the developers. Who can do it? Well, Anyone can do it. You need to be a graduate to enter software testing and comfortable relationship with the PC. All you have to get in your mind is that your job is to act like a third person or end user and taste (test) the food before they eat, and report it to the developer. We are not correcting the mistake here, we are only identifying the defects and reporting it and waiting for the next release to check whether the defects are rectified and also to see whether new defects have raised. Some Important terminologies
  • 4. Project : If something is developed based on a particular user/users requirements, and is used exclusively by them, then it is known as project. The finance is arranged by the client to complete the project successfully.  Product : If something is developed based on company's specification (after a general survey of the market requirements) and can be used by multiple set of masses, then it is known as a product. The company has to fund the entire development and usually expect to break even after a successful market launch.  Defect v/s Defective : If the product is justifying partial requirements of a particular customer but is usable functionally, then we say that The product has a defect, But If the product is functionally non usable , then even if some requirements are satisfied, the product still will be tagged as Defective.  Quality Assurance : Quality assurance is a process of monitoring and guiding each and every role in the organization in order to make them perform their tasks according to the company's process guidelines.  Quality Control or Validation : To check whether the end result is the right product, In other words whether the developed service meets all the requirements or not.  Quality Assurance Verification : It is the method to check whether the developed product or project has followed the right process or guidelines through all the phases of development.  NCR : Whenever the role is not following the process in performing the task assigned, then the penalty given to him is known as NCR ( Non Confirmacy raised).  Inspection : Its a process of checking conducted by a group of members on a role or a department suddenly without any prior intimation.  Audit : Audit is a process of checking conducted on the roles or a department with a prior notice, well in advance.  SCM : Software configuration management : This is a process carried out by a team to attain the following things Version control and Change control . In other terms SCM team is responsible for updating all the common documents used across various domains to maintain uniformity, and also name the project and update its version numbers by gauzing the amount of change in the application or service after development.  Common Repository : Its a server accessible by authorized users , to store and retrieve the information safely and securely.  Base lining vs publishing : Its a process of finalizing the documents vs making it available to all the relevant resources.  Release : Its a process of sending the application from the development department to the testing department or from the company to the market.  SRN ( Software Release Note ) : Its a note prepared by the development department and sent to the testing department during the release and it contains information about Path of the build, Installation information, test data, list of known issues, version no, date and credentials etc.,  SDN ( Software delivery Note ) : Its a note prepared by a team under the guidance of a Project manager, and will be submitted to the customer during delivery. It contains a carefully crated User manual and list of known issues and workarounds.
  • 5. Slippage : The extra time taken to accomplish a task is known as slippage  Metrics v/s Matrix : Clear measurement of any task is defined as metrics, whereas a tabular format with linking information which is used for tracing any information back through references is called as Matrix.  Template v/s document : Template is a pre-defined set of questionnaire or a professional fill in the blanks set up , which is used to prepare an finalize any document. The advantages of template is to maintain uniformity and easier comprehension, throughout all the documentation in a Project, group, department , company or even larger masses.  Change Request v/s Impact Analysis : Change request is the proposal of the customer to bring some changes into the project by filling a CRT ( change request template). Whereas Impact analysis is a study carried out by the business analysts to gauze , how much impact will fall on the already developed part of the application and how feasible it is to go ahead with the change or demands of the customer. Software Development Life Cycle (SDLC) This is a nothing but a model which is used to achieve efficient results in software companies and consists of 6 main phases. We will discuss each stage in a descriptive fashion, dissected into four parts individually, such as Tasks, Roles , Process and Proof of each phase completion. Remember SDLC is similar to waterfall model where the output of one phase acts as the input for the next phase of the cycle.
  • 6. SDLC Image See references below Initial phase Analysis phase Design phase Coding phase Testing phase Delivery and Maintenance Initial-Phase/ Requirements phase : Task : Interacting with the customer and gathering the requirements Roles : Business Analyst and Engagement Manager (EM). Process : First the BA will take an appointment with the customer, Collects the requirement template, meets the customer , gathers requirements and comes back to the company with the requirements document. Then the EM will go through the requirements document . Try to find additional requirements , Get the prototype ( dummy, similar to end product) developed in order to get exact details in the case of unclear requirements or confused customers, and also deal with any excess cost of the project. Proof : The Requirements document is the proof of completion of the first phase of SDLC . Alternate names of the Requirements Document : (Various companies and environments use different terminologies, But the logic is same ) FRS : Functional Requirements Specification. CRS : Customer/Client Requirement Specification, URS : User Requirement Specifications, BRS : Business Requirement Specifications, BDD : Business Design Document, BD : Business Document. Analysis Phase : Tasks : Feasibility Study,
  • 7. Analysis Image references see below Tentative planning, Technology selection Requirement analysis Roles : System Analyst (SA) Project Manager (PM) Technical manager (TM) Process : A detailed study on the requirements and judging the possibilities and scope of the requirements is known as feasibility study. It is done by the Manager level teams usually. After that in the next step we move to a temporary scheduling of staff to initiate the project, Select a suitable technology to develop the project effectively ( customer's choice is given first preference , If it is feasible ). And Finally the Hardware, software and Human resources required are listed out in a document to baseline the project. Proof : The proof document of the analysis phase is SRS ( System Requirement Specification.) Design Phase : Tasks : High level Designing (H.L.D) : Low level Designing (L.L.D) Roles : Chief Architect ( handle HLD ) : Technical lead ( involved in LLD) Process : The Chief Architect will divide the whole project into modules by drawing some graphical layouts using Unif[ied Modeling Language (UML). The Technical lead will further divide those modules into sub modules using UML . And both will be responsible for visioning the GUI ( The screen where user interacts with the
  • 8. application OR Graphical User Interface .) and developing the Pseudo code (A dummy code Or usually, Its a set of English Instructions to help the developers in coding the application.) Coding Phase : Task : Developing the programs Roles: Developers/programmers Process : The developers will take the support of technical design document and will be following the coding standards, while actual source code is developed. Some of the Industry standard coding methods include Indentation, Color coding, Commenting etc. Proof : The proof document of the coding phase is Source Code Document (SCD). Testing Phase : Task : Testing Roles : Test engineers, Quality Assurance team. Process : Since It is the core of our article , we will look at a descriptive fashion of understanding the testing process in an I T environment.  First, the Requirements document will be received by the testing department  The test engineers will review the requirements in order to understand them.  While revising , If at all any doubts arise, then the testers will list out all the unclear requirements in a document named Requirements Clarification Note (RCN).  Then they send the RCN to the author of the requirement document ( i.e, Business Analyst ) in order to get the clarifications.  Once the clarifications are done and sent to the testing team, they will take the test case template and write the test cases. ( test cases like example1 above).  Once the first build is released, they will execute the test cases.  While executing the test cases , If at all they find any defects, they will report it in the Defect Profile Document or DPD.  Since the DPD is in the common repository, the developers will be notified of the status of the defect.  Once the defects are sorted out, the development team releases the next build for testing. And also update the status of defects in the DPD.  Testers will here check for the previous defects, related defects, new defects and update the DPD.
  • 9. Proof : The last two steps are carried out till the product is defect free , so quality assured product is the proof of the testing phase ( and that is why it is a very important stage of SDLC in modern times). Delivery & Maintenance phase Tasks : Delivery : Post delivery maintenance Roles : Deployment engineers Process : Delivery : The deployment engineers will go to the customer environment and install the application in the customer's environment & submit the application along with the appropriate release notes to the customer . Maintenance : Once the application is delivered the customer will start using the application, While using if any problem occurs , then it becomes a new task, Based on the severity of the issue corresponding roles and process will be formulated. Some customers may be expecting continuous maintenance, In that case a team of software engineers will be taking care of the application regularly. Software Testing Methods & Levels of Testing. There are three methods of testing , Black Box, White Box & Grey box testing, Let's see. Black Box testing : If one performs testing only on the functional part of an application (where end users can perform actions) without having the structural knowledge, then that method of testing is known as Black box testing. Usually test engineers are in this category. White Box testing : If one performs testing on the structural part of the application then that method of testing is known as white box testing. Usually developers or white box testers are the ones to do it successfully. Grey Box testing : If one performs testing on both the functional part as well as the structural part of an application, then that method of testing is known as Grey box testing. It's an old way of testing and is not as effective as the previous two methods and is losing popularity recently. Levels of Testing There are 5 levels of testing in a software environment. They are as follows,
  • 10. Image: Systematic & Simultaneous Levels of testing. (The ticked options are the ones where Black box testers are required and the other performed by the developers usually). Unit level testing: A unit is defined as smallest part of an application. In this level of testing, each and every program will be tested in order to confirm whether the conditions, functions and loops etc are working fine or not. Usually the white box testers or developers are the performers at this level. Module level testing: A module is defined as a group of related features to perform a major task. At this level the modules are sent to the testing department and the test engineers will be validating the functional part of the modules. Integration level testing: At this level the developers will be developing the interfaces in order to integrate the tested modules. While Integrating the modules they will test whether the interfaces that connect the modules are functionally working or not. Usually, the developers opt to integrate the modules in the following methods,  Top down Approach: In this approach the parent modules are developed first and then the corresponding child modules will be developed. In case while integrating the modules if any mandatory module is missing then that is replaced with a temporary program called stub, to facilitate testing.  Bottom up approach: In this approach the child modules will be developed first and will be integrated back to the parent modules. While integrating the modules in this way, if at all any mandatory module is missing, then that module is replaced with a temporary program known as driver to facilitate testing.  Hybrid or Sandwich approach: In this approach both the TD & BU approaches will be mixed due to several reasons.  Big Bang approach: In this approach one wait till all the modules are developed and integrate them finally at a time. System Level testing : Arguably, the Core of the testing comes in this level. Its a major phase in this testing level, because depending on the requirement and afford-ability of the company, Automation testing, load, performance & stress testing etc will be carried out in this phase which demands additional skills from a tester. System: Once the stable complete application is installed into an environment, then as a whole it can be called as a system (envt. + Application) At SLS level, the test engineers will perform many different types of testing, but the most Important one is,
  • 11. System Integration Test: In this type of testing one will perform some actions on the modules that were integrated by the developers in the previous phase, and simultaneously check whether the reflections are proper in the related connected modules. Example 2: Let's take An ATM machine application, with the modules as follows, Welcome screen, Balance inquiry screen, Withdrawal screen & Deposit screen. Assuming that these 4 modules were integrated by the developers, So black box testers can perform System Integration testing like this :- Test case 1: Check Balance: Let's say amount is X Deposit amount: Let's say amount is Y Check balance: Expected Value If the Actual Value is X + Y, then it is equal to the Expected Value. And because EV = AV, the result is PASS. User Acceptance Level Testing: At this level the user will be invited and in his presence, testing will be carried on. Its the final testing before user signs off and accepts the application. Whatever user may desire, the corresponding features need to be tested functionally by the black box testers or Senior testers in the project. Types of Testing: There are two categories broadly to classify all the available types of testing software. They are Static testing & Dynamic testing. Static means testing where No actions are performed on the application, and features like GUI and appearance related testing comes under this category. And Dynamic is where user needs to perform some actions on the application to test like functionality checking, link checking etc. Initially there were very few popular types of testing, which were lengthy and manual. But as the applications are becoming more and more complex, Its inevitable that, not only features , functionality and appearance but performance and stability also are major areas to be concentrated. The following types are the most is use and we will discuss about each of them one by one. Build Acceptance Test/Verification/Sanity testing (BAT) : Its a type of testing in which one will perform an overall testing on the application to validate whether it is proper for further detailed testing. Usually testing team conducts this in high risk applications before accepting the application from the development department. Other two terms Smoke testing and Sanity testing are in debate from time immemorial to find a fixed definition. Majority feel that If developers test the overall application before releasing to the testing team for further action is known as Smoke testing and future actions performed by black box test engineers is known as Sanity testing. But this is cited as vice versa.
  • 12. Regression Testing : Its a type of testing in one will perform testing on the already tested functionality again and again. As the name suggests regression is revisiting the same functionality of a feature, but it happens in the following two cases. Case1 : When the test engineers find some defects and report it to the developers, The development team fixes the issue and releases the next build. Once the next build is released, as per the requirement the testers will check wether the defect is fixed, and also whether the related features are still working fine which might have been affected while fixing the defect. Example 3: Lets say that you wanted a bike with 17" tyres instead of 15" and sent the bike to the service center for changing it. Before sending it back for changing, you tested all the other features and you are satisfied with them. Once the bike is returned to you with new tyres, you will also check the look and feel overall, and check whether the brakes still work as expected, whether the mileage maintains its expected value and all related features that can be affected by this change. Testing the tyre in this case is New testing and all the other features will fall under Regression testing. Case 2 : Whenever some new feature is added to the application in the middle of development, and released for testing. The test engineers may need to check additionally all the related features of the newly added feature. In this case also all the features except the new functionality comes under Regression testing. Retesting : Its a type of testing in which one will perform testing on the same functionality again and again with multiple sets of data in order to come to a conclusion whether its working fine or not. Example 4: Lets say the customer wants a bank application and the login screen must have the password field that only accepts alphanumeric (eg. abc123, 56df etc) data and no special characters ( *,$,# etc). In this case one needs to test the field with all possible combinations and different sets of data to conclude that the password field is working fine. Such kind of repeated testing on any feature is called Retesting. Alpha Testing & Beta Testing : These are both performed in the User acceptance testing phase, If the customer visits the company and the comapny's own test engineers perform testing in their own premises, then it is referred as Alpha testing. If the testing is carried out at the client's environment and is carried out by the end users or third party test experts, then it is known as Beta testing . ( Remember that both these types are before the actual implementation of the software in the clients environment, hence the term user acceptance. ) Installation Testing : Its a type of testing in which one will install the application into the environment by following the guidelines given in the Deployment document / Installation guide. If at all the installation is succesful then one will come to a conclusion that the installation guide and the instructions in it are correct and it is appropriate to install the application, otherwise report the problems in the deployment document. One main point is to ensure that, In this type of testing we are checking the user manual and not the product. ( i.e, the installation/setup guide and not the application's capability to install.)
  • 13. Compatibility testing : Its a type of testing in which one will install the application into multiple environments prepared with different combinations or environmental components in order to check whether the application is suitable with those environments or not. Usually this type of testing is carried out on products rather than projects. Monkey Testing : Its a type of testing in which Abnormal actions are performed intentionally on the application in order to check the stability of the application. Remember it is different from stress testing or load testing. We are concentrating on the stability of the features and functionality in the case of extreme possibilities of actions performed on the application by different kind of users. Useability Testing : In this kind of testing we will check the user freindliness of the application. Depending on the complexity of the application, one needs to test whether information about all the features are understandable easily. Navigation of the application must be easy to follow. Exploratory Testing : Its a type of testing in which domain experts ( knowledge of business & functions) will perform testing on the application without having the knowledge of requirements just by parallely exploring the functionality. If you go by the simple definition of exploring, It means having minimum idea about something, & then doing something related to it in order to know something more about it. End to End Testing : Its a type of testing in which we perform testing on various end to end scenarios of an application. Which means performing various operations on the applications like different users in real time scenario. Example 5 : Lets take a bank application and consider the different end to end scenarios that exist within the application. Scenario 1 : Login, Balance enquiry , & Logout. Scenario 2 : Login, Deposit, Withdrawal & logout. Scenario 3 : Login, Balance enquiry, Deposit, Withdrawal, Logout. etc. Security Testing : Its a type of testing in which one will test whether the application is a secured application or not. In order to do the same, A black box test engineer will concentrate on the following types of testing.  Authentication Testing : It is a type of testing in which one will try to enter into the application by entering different combinations of usernames and passwords in order to check whether the application is allowing only the authorized users or not.  Direct URL testing : It is a type of testing in which one will enter the direct URL's ( Uniform resource locator) of the secured pages and check whether the application is allowing the access to those areas or not.  Firewall Leakage testing : Firewall term is a very widely misunderstood word and it generally means a barrier between two levels. ' The definition comes from an Old African story where people used to put fire around them while sleeping, in order to prevent red-ants coming near them. ' So In this type of testing, One
  • 14. will enter into the application as one level of user ( ex. member, ) and try to access other higher level of users pages (ex. admin ,) , In order to check whether the firewalls are working fine or not. Port testing: Its a type of testing in which one will install the application into the clients environment and check whether it is compatible with that environment. ( According to the requirements, one needs to find what kind of environment is needed to be tested, whether it is product or project etc .) Soak Testing/Reliability testing : Its a type of testing in which one will perform testing on the application continuosly for long period of time in order to check the stability of the application. Mutation testing : Its a type of testing in which one will perform testing on the application or its related factors by doing some changes to the logics or layers of the environment. (Refer envt. knol for details) Any thing from the functionality of a feature to the applications overall route can be tested with different set of combinations of environment. Adhoc testing : Its a type of testing in which the test engineers will perform testing in their own style after understanding the requirements clearly. ( Note that in Exploratory testing the domain experts do not have the knowledge of requirements whereas in this case the test engineers have the expected values set. )
  • 15. Ch-1 Introduction ISTQB is the name of the certification board that certifies individuals and credits them with a foundation or advanced level honors as Software Testers. In November 02, The ISTQB was established in UK. Its entity is anyways in Belgium. Img1.1 Turkish Testing Board Member of ISTQB.org logo The purpose of this certification is to create uniform standards in the field of software testing across the world. ISTQB is part of the main Issuing comittee of The British Computer Society established ISEB ( reg 1967 ref. wiki ) in order to certify the standards for systems & analysis, networking and designs. The Whole process of certification and accrediation is now regulated by the American Software Testing Qualifications Board. The candidates who succesfuly complete the exam are awarded with the ISTQB Certified Tester Certificate and is valued highly by present quality oriented companies, Both IT and Non IT organizations have really understood the meaning of quality and have learnt that Testers can be skimmed through certification boards and hence the popularity. Anyways, Since its a Major topic and requires more samples and questionaires, we have divided the knol into seperate chapters to keep it short , up and running all the times. If you are completely new to testing or want to update the basics of software testing before preparing for ISTQB , then Please visit the Software Testing Theory - Part 1 for simplified summary. Chapter -1 Fundamentals of Testing 1.1 Why is testing necessary? bug, defect, error, failure, mistake, quality, risk, software, testing and exhaustive testing.
  • 16. 1.2 What is testing? code, debugging, requirement, test basis, test case, test objective 1.3 Testing principles 1.4 Fundamental test process conformation testing, exit criteria, incident, regression testing, test condition, test coverage, test data, test execution, test log, test plan, test strategy, test summary report and testware. 1.5 The psychology of testing independence. I) General testing principles Principles A number of testing principles have been suggested over the past 40 years and offer general guidelines common for all testing. Principle 1 – Testing shows presence of defects Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness. Principle 2 – Exhaustive testing is impossible Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts. Principle 3 – Early testing Testing activities should start as early as possible in the software or system development life cycle, and should be focused on defined objectives. Principle 4 – Defect clustering
  • 17. A small number of modules contain most of the defects discovered during pre-release testing, or are responsible for the most operational failures. Principle 5 – Pesticide paradox If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new defects. To overcome this “pesticide paradox”, the test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to potentially find more defects. Principle 6 – Testing is context dependent Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site. Principle 7 – Absence-of-errors fallacy Finding and fixing defects does not help if the system built is unusable and does not fulfill the users‟ needs and expectations. II) Fundamental test process 1) Test planning and control Test planning is the activity of verifying the mission of testing, defining the objectives of testing and the specification of test activities in order to meet the objectives and mission. It involves taking actions necessary to meet the mission and objectives of the project. In order to control testing, it should be monitored throughout the project. Test planning takes into account the feedback from monitoring and control activities. 2) Test analysis and design Test analysis and design is the activity where general testing objectives are transformed into tangible test conditions and test cases.
  • 18. Test analysis and design has the following major tasks:  Reviewing the test basis (such as requirements, architecture, design, interfaces).  Evaluating testability of the test basis and test objects.  Identifying and prioritizing test conditions based on analysis of test items, the specification, behaviour and structure.  Designing and prioritizing test cases.  Identifying necessary test data to support the test conditions and test cases.  Designing the test environment set-up and identifying any required infrastructure and tools. 3) Test implementation and execution  Developing, implementing and prioritizing test cases.  Developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts.  Creating test suites from the test procedures for efficient test execution.  Verifying that the test environment has been set up correctly.  Executing test procedures either manually or by using test execution tools, according to the planned sequence.  Logging the outcome of test execution and recording the identities and versions of the software under test, test tools and testware.  Comparing actual results with expected results.  Reporting discrepancies as incidents and analyzing them in order to establish their cause (e.g. a defect in the code, in specified test data, in the test document, or a mistake in the way the test was executed).  Repeating test activities as a result of action taken for each discrepancy. For example, reexecution of a test that previously failed in order to confirm a fix (confirmation testing), execution of a corrected test and/or execution of tests in order to ensure that defects have not been introduced in unchanged areas of the software or that defect fixing did not uncover other defects (regression testing). 4) Evaluating exit criteria and reporting  Checking test logs against the exit criteria specified in test planning.  Assessing if more tests are needed or if the exit criteria specified should be changed.  Writing a test summary report for stakeholders. 5) Test closure activities  Checking which planned deliverables have been delivered, the closure of incident reports or raising of change records for any that remain open, and the documentation of the acceptance of the system.
  • 19. Finalizing and archiving testware, the test environment and the test infrastructure for later reuse.  Handover of testware to the maintenance organization.  Analyzing lessons learned for future releases and projects, and the improvement of test maturity. III) The psychology of testing  Tests designed by the person(s) who wrote the software under test (low level of independence).  Tests designed by another person(s) (e.g. from the development team).  Tests designed by a person(s) from a different organizational group (e.g. an independent test team) or test specialists (e.g. usability or performance test specialists).  Tests designed by a person(s) from a different organization or company (i.e. outsourcing or certification by an external body).
  • 20. Ch-2. Testing throughout the software life cycle 2.1 Software development models COTS, interactive-incremental development model, validation, verification, V-model. 2.2 Test levels Alpha testing, beta testing, component testing (also known as unit/module/program testing), driver, stub, field testing, functional requirement, non-functional requirement,
  • 21. integration, integration testing, robustness testing, system testing, test level, test-driven development, test environment, user acceptance testing. 2.3 Test types Black box testing, code coverage, functional testing, interoperability testing, load testing, maintainability testing, performance testing, portability testing, reliability testing, security testing, specification based testing, stress testing, structural testing, usability testing, white box testing 2.4 Maintenance testing Impact analysis, maintenance testing. i) Software development models a) V-model (sequential development model) Although variants of the V-model exist, a common type of V-model uses four test levels, corresponding to the four development levels. The four levels used in this syllabus are: component (unit) testing; integration testing; system testing; acceptance testing. b) Iterative-incremental development models Iterative-incremental development is the process of establishing requirements, designing, building and testing a system, done as a series of shorter development cycles. Examples are: prototyping, rapid application development (RAD), Rational Unified Process (RUP) and agile development models. c) Testing within a life cycle model In any life cycle model, there are several characteristics of good testing: ? For every development activity there is a corresponding testing activity. ? Each test level has test objectives specific to that level. ? The analysis and design of tests for a given test level should begin during the corresponding development activity. ? Testers should be involved in reviewing documents as soon as drafts are available in the development life cycle.
  • 22. ii) Test levels a) Component testing Component testing searches for defects in, and verifies the functioning of, software (e.g. modules, programs, objects, classes, etc.) that are separately testable. Component testing may include testing of functionality and specific non-functional characteristics, such as resource-behaviour (e.g. memory leaks) or robustness testing, as well as structural testing (e.g. branch coverage). One approach to component testing is to prepare and automate test cases before coding. This is called a test-first approach or test-driven development. b) Integration testing Integration testing tests interfaces between components, interactions with different parts of a system, such as the operating system, file system, hardware, or interfaces between systems. Component integration testing tests the interactions between software components and is done after component testing; System integration testing tests the interactions between different systems and may be done after system testing. Testing of specific non-functional characteristics (e.g. performance) may be included in integration testing. c) System testing System testing is concerned with the behaviour of a whole system/product as defined by the scope of a development project or programme. In system testing, the test environment should correspond to the final target or production environment as much as possible in order to minimize the risk of environment-specific failures not being found in testing. System testing may include tests based on risks and/or on requirements specifications, business processes, use cases, or other high level descriptions of system behaviour, interactions with the operating system, and system resources. System testing should investigate both functional and non-functional requirements of the system.
  • 23. d) Acceptance testing Acceptance testing is often the responsibility of the customers or users of a system; other stakeholders may be involved as well. The goal in acceptance testing is to establish confidence in the system, parts of the system or specific non-functional characteristics of the system Contract and regulation acceptance testing Contract acceptance testing is performed against a contract‟s acceptance criteria for producing custom-developed software. Acceptance criteria should be defined when the contract is agreed. Regulation acceptance testing is performed against any regulations that must be adhered to, such as governmental, legal or safety regulations. Alpha and beta (or field) testing Alpha testing is performed at the developing organization‟s site. Beta testing, or field testing, is performed by people at their own locations. Both are performed by potential customers, not the developers of the product. iii) Test types a) Testing of function (functional testing) The functions that a system, subsystem or component are to perform may be described in work products such as a requirements specification, use cases, or a functional specification, or they may be undocumented. The functions are “what” the system does. A type of functional testing, security testing, investigates the functions (e.g. a firewall) relating to detection of threats, such as viruses, from malicious outsiders. Another type of functional testing, interoperability testing, evaluates the capability of the software product to interact with one or more specified components or systems. b) Testing of non-functional software characteristics (non-functional testing) Non-functional testing includes, but is not limited to, performance testing, load testing, stress testing, usability testing, maintainability testing, reliability testing and portability testing. It is the testing of “how” the system works. Non-functional testing may be performed at all test levels. c) Testing of software structure/architecture (structural testing) Structural (white-box) testing may be performed at all test levels. Structural techniques are best used after specification-based techniques, in order to help measure the thoroughness of testing through assessment of coverage of a type of structure.
  • 24. Structural testing approaches can also be applied at system, system integration or acceptance testing levels (e.g. to business models or menu structures). d) Testing related to changes (confirmation testing (retesting) and regression testing) After a defect is detected and fixed, the software should be retested to confirm that the original defect has been successfully removed. This is called confirmation. Debugging (defect fixing) is a development activity, not a testing activity. Regression testing is the repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the change(s). It is performed when the software, or its environment, is changed. Regression testing may be performed at all test levels, and applies to functional, non- functional and structural testing. iv) Maintenance testing Once deployed, a software system is often in service for years or decades. During this time the system and its environment are often corrected, changed or extended. Modifications include planned enhancement changes (e.g. release-based), corrective and emergency changes, and changes of environment, Maintenance testing for migration (e.g. from one platform to another) should include operational tests of the new environment, as well as of the changed software. Maintenance testing for the retirement of a system may include the testing of data migration or archiving if long data-retention periods are required. Maintenance testing may be done at any or all test levels and for any or all test types.
  • 25. Ch-3.1 Static techniques and the test process dynamic testing, static testing, static technique Img 1.1 Permanent Demand trend Jobmarket Internet.UK The chart provides the 3-month moving total beginning in 2004 of permanent IT jobs citing ISTQB within the UK as a proportion of the total demand within the Qualifications category. 2009 3.2 Review process entry criteria, formal review, informal review, inspection, metric, moderator/inspection leader, peer review, reviewer, scribe, technical review, walkthrough. 3.3 Static analysis by tools Compiler, complexity, control flow, data flow, static analysis I) Phases of a formal review 1) Planning Selecting the personal, allocating roles, defining entry and exit criteria for more formal reviews etc. 2) Kick-off Distributing documents, explaining the objectives, checking entry criteria etc. 3) Individual preparation Work done by each of the participants on their own work before the review meeting, questions and comments 4) Review meeting Discussion or logging, make recommendations for handling the defects, or make decisions about the defects 5) Rework Fixing defects found, typically done by the author Fixing defects found, typically done by the author 6) Follow-up Checking the defects have been addressed, gathering metrics and checking on exit criteria II) Roles and responsibilities
  • 26. Manager Decides on execution of reviews, allocates time in projects schedules, and determines if the review objectives have been met Moderator Leads the review, including planning, running the meeting, follow-up after the meeting. Author The writer or person with chief responsibility of the document(s) to be reviewed. Reviewers Individuals with a specific technical or business background. Identify defects and describe findings. Scribe (recorder) Documents all the issues, problems III) Types of review Informal review No formal process, pair programming or a technical lead reviewing designs and code. Main purpose: inexpensive way to get some benefit. Walkthrough Meeting led by the author, „scenarios, dry runs, peer group‟, open-ended sessions. Main purpose: learning, gaining understanding, defect finding Technical review Documented, defined defect detection process, ideally led by trained moderator, may be performed as a peer review, pre meeting preparation, involved by peers and technical experts Main purpose: discuss, make decisions, find defects, solve technical problems and check conformance to specifications and standards Inspection Led by trained moderator (not the author), usually peer examination, defined roles, includes metrics, formal process, pre-meeting preparation, formal follow- up process Main purpose: find defects. Note: walkthroughs, technical reviews and inspections can be performed within a peer group-colleague at the same organization level. This type of review is called a “peer review”. IV) Success factors for reviews Each review has a clear predefined objective. The right people for the review objectives are involved. Defects found are welcomed, and expressed objectively. People issues and psychological aspects are dealt with (e.g. making it a positive experience for the author). Review techniques are applied that are suitable to the type and level of software work products and reviewers. Checklists or roles are used if appropriate to increase effectiveness of defect identification. Training is given in review techniques, especially the more formal techniques, such as inspection. Management supports a good review process (e.g. by incorporating adequate time for
  • 27. review activities in project schedules). There is an emphasis on learning and process improvement. V) Cyclomatic Complexity The number of independent paths through a program Cyclomatic Complexity is defined as: L – N + 2P L = the number of edges/links in a graph N = the number of nodes in a graphs P = the number of disconnected parts of the graph (connected components) Alternatively one may calculate Cyclomatic Complexity using decision point rule Decision points +1 Cyclomatic Complexity and Risk Evaluation 1 to 10a simple program, without very much risk 11 to 20 a complex program, moderate risk 21 to 50, a more complex program, high risk > 50an un-testable program (very high risk)
  • 28. Ch-4 Test Design Techniques - Modules 4.1 The test development process Test case specification, test design, test execution schedule, test procedure specification, test script, traceability.
  • 29. 4.2 Categories of test design techniques Black-box test design technique, specification-based test design technique, white-box test design technique, structure-based test design technique, experience-based test design technique. 4.3 Specification-based or black box techniques Boundary value analysis, decision table testing, equivalence partitioning, state transition testing, use case testing. 4.4 Structure-based or white box techniques Code coverage, decision coverage, statement coverage, structure-based testing. 4.5 Experience-based techniques Exploratory testing, fault attack. 4.6 Choosing test techniques No specific terms. Test Design Techniques  Specification-based/Black-box techniques  Structure-based/White-box techniques  Experience-based techniques I) Specification-based/Black-box techniques Equivalence partitioning Boundary value analysis Decision table testing State transition testing Use case testing Equivalence partitioning
  • 30. o Inputs to the software or system are divided in to groups that are expected to exhibit similar behavior o Equivalence partitions or classes can be found for both valid data and invalid data o Partitions can also be identified for outputs, internal values, time related values and for interface values. o Equivalence partitioning is applicable all levels of testing Boundary value analysis o Behavior at the edge of each equivalence partition is more likely to be incorrect. The maximum and minimum values of a partition are its boundary values. o A boundary value for a valid partition is a valid boundary value; the boundary of an invalid partition is an invalid boundary value. o Boundary value analysis can be applied at all test levels o It is relatively easy to apply and its defect-finding capability is high o This technique is often considered as an extension of equivalence partitioning. Decision table testing o In Decision table testing test cases are designed to execute the combination of inputs o Decision tables are good way to capture system requirements that contain logical conditions. o The decision table contains triggering conditions, often combinations of true and false for all input conditions o It maybe applied to all situations when the action of the software depends on several logical decisions State transition testing o In state transition testing test cases are designed to execute valid and invalid state transitions o A system may exhibit a deferent response on current conditions or previous history. In this case, that aspect of the system can be shown as a state transition diagram. o State transition testing is much used in embedded software and technical automation. Use case testing o In use case testing test cases are designed to execute user scenarios o A use case describes interactions between actors, including users and the system o Each use case has preconditions, which need to be met for a use case to work successfully. o A use case usually has a mainstream scenario and some times alternative branches. o Use cases, often referred to as scenarios, are very useful for designing acceptance tests with customer/user participation
  • 31. II) Structure-based/White-box techniques o Statement testing and coverage o Decision testing and coverage o Other structure-based techniques  condition coverage  multi condition coverage Statement testing and coverage: Statement An entity in a programming language, which is typically the smallest indivisible unit of execution Statement coverage The percentage of executable statements that have been exercised by a test suite Statement testing A white box test design technique in which test cases are designed to execute statements Decision testing and coverage Decision A program point at which the control flow has two or more alternative routes A node with two or more links to separate branches Decision Coverage The percentage of decision outcomes that have been exercised by a test suite 100% decision coverage implies both 100% branches coverage and 100% statement coverage Decision testing A white box test design technique in which test cases are designed to execute decision outcomes. Other structure-based techniques Condition A logical expression that can be evaluated as true or false Condition coverage The percentage of condition outcomes that have been exercised by a test suite Condition testing A white box test design technique in which test cases are designed to execute condition outcomes Multiple condition testing A white box test design technique in which test cases are designed to execute combinations of single condition outcomes III) Experience-based techniques
  • 32. o Error guessing o Exploratory testing Error guessing o Error guessing is a commonly used experience-based technique o Generally testers anticipate defects based on experience, these defects list can be built based on experience, available defect data, and from common knowledge about why software fails. Exploratory testing o Exploratory testing is concurrent test design, test execution, test logging and learning , based on test charter containing test objectives and carried out within time boxes o It is approach that is most useful where there are few or inadequate specifications and serve time pressure.
  • 33. Ch-5 Test organization and independence The effectiveness of finding defects by testing and reviews can be improved by using independent testers. Options for testing teams available are:  No independent testers. Developers test their own code.  Independent testers within the development teams.  Independent test team or group within the organization, reporting to project management or executive management  Independent testers from the business organization or user community.  Independent test specialists for specific test targets such as usability testers, security testers or certification testers (who certify a software product against standards and regulations).  Independent testers outsourced or external to the organization. The benefits of independence include: Independent testers see other and different defects, and are unbiased. An independent tester can verify assumptions people made during specification and implementation of the system. Drawbacks include: Isolation from the development team (if treated as totally independent). Independent testers may be the bottleneck as the last checkpoint. Developers may lose a sense of responsibility for quality. b) Tasks of the test leader and tester Test leader tasks may include:  Coordinate the test strategy and plan with project managers and others.  Write or review a test strategy for the project, and test policy for the organization.  Contribute the testing perspective to other project activities, such as integration planning.  Plan the tests – considering the context and understanding the test objectives and risks – including selecting test approaches, estimating the time, effort and cost of testing, acquiring resources, defining test levels, cycles, and planning incident management.  Initiate the specification, preparation, implementation and execution of tests, monitor the test results and check the exit criteria.  Adapt planning based on test results and progress (sometimes documented in status reports) and take any action necessary to compensate for problems.  Set up adequate configuration management of testware for traceability.  Introduce suitable metrics for measuring test progress and evaluating the quality of the testing and the product.  Decide what should be automated, to what degree, and how.  Select tools to support testing and organize any training in tool use for testers.
  • 34. Decide about the implementation of the test environment.  Write test summary reports based on the information gathered during testing. Tester tasks may include:  Review and contribute to test plans.  Analyze, review and assess user requirements, specifications and models for testability.  Create test specifications.  Set up the test environment (often coordinating with system administration and network management).  Prepare and acquire test data.  Implement tests on all test levels, execute and log the tests, evaluate the results and document the deviations from expected results.  Use test administration or management tools and test monitoring tools as required.  Automate tests (may be supported by a developer or a test automation expert).  Measure performance of components and systems (if applicable).  Review tests developed by others. Note: People who work on test analysis, test design, specific test types or test automation may be specialists in these roles. Depending on the test level and the risks related to the product and the project, different people may take over the role of tester, keeping some degree of independence. Typically testers at the component and integration level would be developers; testers at the acceptance test level would be business experts and users, and testers for operational acceptance testing would be operators. c) Defining skills test staff need Now days a testing professional must have „application‟ or „business domain‟ knowledge and „Technology‟ expertise apart from „Testing‟ Skills 2) Test planning and estimation a) Test planning activities  Determining the scope and risks, and identifying the objectives of testing.  Defining the overall approach of testing (the test strategy), including the definition of the test levels and entry and exit criteria.  Integrating and coordinating the testing activities into the software life cycle activities: acquisition, supply, development, operation and maintenance.  Making decisions about what to test, what roles will perform the test activities, how the test activities should be done, and how the test results will be evaluated.  Scheduling test analysis and design activities.
  • 35. Scheduling test implementation, execution and evaluation.  Assigning resources for the different activities defined.  Defining the amount, level of detail, structure and templates for the test documentation.  Selecting metrics for monitoring and controlling test preparation and execution, defect resolution and risk issues.  Setting the level of detail for test procedures in order to provide enough information to support reproducible test preparation and execution. b) Exit criteria The purpose of exit criteria is to define when to stop testing, such as at the end of a test level or when a set of tests has a specific goal. Typically exit criteria may consist of:  Thoroughness measures, such as coverage of code, functionality or risk.  Estimates of defect density or reliability measures.  Cost.  Residual risks, such as defects not fixed or lack of test coverage in certain areas.  Schedules such as those based on time to market. c) Test estimation Two approaches for the estimation of test effort are covered in this syllabus:  The metrics-based approach: estimating the testing effort based on metrics of former or similar projects or based on typical values.  The expert-based approach: estimating the tasks by the owner of these tasks or by experts. Once the test effort is estimated, resources can be identified and a schedule can be drawn up. The testing effort may depend on a number of factors, including:  Characteristics of the product: the quality of the specification and other information used for test models (i.e. the test basis), the size of the product, the complexity of the problem domain, the requirements for reliability and security, and the requirements for documentation.  Characteristics of the development process: the stability of the organization, tools used, test process, skills of the people involved, and time pressure.  The outcome of testing: the number of defects and the amount of rework required.
  • 36. d) Test approaches (test strategies) One way to classify test approaches or strategies is based on the point in time at which the bulk of the test design work is begun:  Preventative approaches, where tests are designed as early as possible.  Reactive approaches, where test design comes after the software or system has been produced. Typical approaches or strategies include:  Analytical approaches, such as risk-based testing where testing is directed to areas of greatest risk  Model-based approaches, such as stochastic testing using statistical information about failure rates (such as reliability growth models) or usage (such as operational profiles).  Methodical approaches, such as failure-based (including error guessing and fault-attacks), experienced-based, check-list based, and quality characteristic based.  Process- or standard-compliant approaches, such as those specified by industry-specific standards or the various agile methodologies.  Dynamic and heuristic approaches, such as exploratory testing where testing is more reactive to events than pre-planned, and where execution and evaluation are concurrent tasks.  Consultative approaches, such as those where test coverage is driven primarily by the advice and guidance of technology and/or business domain experts outside the test team.  Regression-averse approaches, such as those that include reuse of existing test material, extensive automation of functional regression tests, and standard test suites. Different approaches may be combined, for example, a risk-based dynamic approach. The selection of a test approach should consider the context, including  Risk of failure of the project, hazards to the product and risks of product failure to humans, the environment and the company.  Skills and experience of the people in the proposed techniques, tools and methods.  The objective of the testing endeavour and the mission of the testing team.  Regulatory aspects, such as external and internal regulations for the development process.  The nature of the product and the business. 3) Test progress monitoring and control a) Test progress monitoring
  • 37. Percentage of work done in test case preparation (or percentage of planned test cases prepared).  Percentage of work done in test environment preparation.  Test case execution (e.g. number of test cases run/not run, and test cases passed/failed).  Defect information (e.g. defect density, defects found and fixed, failure rate, and retest results).  Test coverage of requirements, risks or code.  Subjective confidence of testers in the product.  Dates of test milestones.  Testing costs, including the cost compared to the benefit of finding the next defect or to run the next test. b) Test Reporting  What happened during a period of testing, such as dates when exit criteria were met.  Analyzed information and metrics to support recommendations and decisions about future actions, such as an assessment of defects remaining, the economic benefit of continued testing, outstanding risks, and the level of confidence in tested software. Metrics should be collected during and at the end of a test level in order to assess:  The adequacy of the test objectives for that test level.  The adequacy of the test approaches taken.  The effectiveness of the testing with respect to its objectives. c) Test control Test control describes any guiding or corrective actions taken as a result of information and metrics gathered and reported. Actions may cover any test activity and may affect any other software life cycle activity or task. Examples of test control actions are:  Making decisions based on information from test monitoring.  Re-prioritize tests when an identified risk occurs (e.g. software delivered late).  Change the test schedule due to availability of a test environment.  Set an entry criterion requiring fixes to have been retested (confirmation tested) by a developer before accepting them into a build. 4) Configuration management
  • 38. The purpose of configuration management is to establish and maintain the integrity of the products (components, data and documentation) of the software or system through the project and product life cycle. For testing, configuration management may involve ensuring that:  All items of testware are identified, version controlled, tracked for changes, related to each other and related to development items (test objects) so that traceability can be maintained throughout the test process.  All identified documents and software items are referenced unambiguously in test documentation For the tester, configuration management helps to uniquely identify (and to reproduce) the tested item, test documents, the tests and the test harness. During test planning, the configuration management procedures and infrastructure (tools) should be chosen, documented and implemented. 5) Risk and testing a) Project risks Project risks are the risks that surround the project‟s capability to deliver its objectives, such as: Organizational factors:  skill and staff shortages;  personal and training issues;  political issues, such as o problems with testers communicating their needs and test results; o failure to follow up on information found in testing and reviews (e.g. not improving development and testing practices).  improper attitude toward or expectations of testing (e.g. not appreciating the value of finding defects during testing). Technical issues:  problems in defining the right requirements;  the extent that requirements can be met given existing constraints;  the quality of the design, code and tests. Supplier issues:  failure of a third party; contractual issues.
  • 39. b) Product risks Potential failure areas (adverse future events or hazards) in the software or system are known as product risks, as they are a risk to the quality of the product, such as:  Failure-prone software delivered.  The potential that the software/hardware could cause harm to an individual or company.  Poor software characteristics (e.g. functionality, reliability, usability and performance).  Software that does not perform its intended functions. Risks are used to decide where to start testing and where to test more; testing is used to reduce the risk of an adverse effect occurring, or to reduce the impact of an adverse effect. Product risks are a special type of risk to the success of a project. Testing as a risk-control activity provides feedback about the residual risk by measuring the effectiveness of critical defect removal and of contingency plans. A risk-based approach to testing provides proactive opportunities to reduce the levels of product risk, starting in the initial stages of a project. It involves the identification of product risks and their use in guiding test planning and control, specification, preparation and execution of tests. In a risk-based approach the risks identified may be used to:  Determine the test techniques to be employed.  Determine the extent of testing to be carried out.  Prioritize testing in an attempt to find the critical defects as early as possible.  Determine whether any non-testing activities could be employed to reduce risk (e.g. providing training to inexperienced designers). Risk-based testing draws on the collective knowledge and insight of the project stakeholders to determine the risks and the levels of testing required to address those risks. To ensure that the chance of a product failure is minimized, risk management activities provide a disciplined approach to:  Assess (and reassess on a regular basis) what can go wrong (risks).  Determine what risks are important to deal with.  Implement actions to deal with those risks. In addition, testing may support the identification of new risks, may help to determine what risks should be reduced, and may lower uncertainty about risks. 6) Incident management Since one of the objectives of testing is to find defects, the discrepancies between actual and expected outcomes need to be logged as incidents. Incidents should be tracked from discovery and classification to correction and confirmation of the solution. In order to manage all incidents to completion, an organization should establish a process and rules for classification.
  • 40. Incidents may be raised during development, review, testing or use of a software product. They may be raised for issues in code or the working system, or in any type of documentation including requirements, development documents, test documents, and user information such as “Help” or installation guides. Incident reports have the following objectives:  Provide developers and other parties with feedback about the problem to enable identification, isolation and correction as necessary.  Provide test leaders a means of tracking the quality of the system under test and the progress of the testing.  Provide ideas for test process improvement. Details of the incident report may include:  Date of issue, issuing organization, and author.  Expected and actual results.  Identification of the test item (configuration item) and environment.  Software or system life cycle process in which the incident was observed.  Description of the incident to enable reproduction and resolution, including logs, database dumps or screenshots.  Scope or degree of impact on stakeholder(s) interests.  Severity of the impact on the system.  Urgency/priority to fix.  Status of the incident (e.g. open, deferred, duplicate, waiting to be fixed, fixed awaiting retest, closed).  Conclusions, recommendations and approvals.  Global issues, such as other areas that may be affected by a change resulting from the incident.  Change history, such as the sequence of actions taken by project team members with respect to the incident to isolate, repair, and confirm it as fixed. References, including the identity of the test case specification that revealed the problem.
  • 41. Ch-6--------------------------------------------------------------------------------------------------------- ----------------------------------------------------- Types of test tools Management of testing and tests  Requirement management tools  Incident management tools  Configuration management tools Static testing  Review tools  Static analysis tools (D)  Modeling tools (D) Test specification  Test design tools  Test data preparation tools Test execution and logging  Test execution tools  Test harness/unit test framework tools (D)  Test comparators  Coverage measurement tools (D)
  • 42. Security tools Performance and monitoring  Dynamic analysis tools  Performance/Load/Stress Testing tools  Monitoring tools Specific application areas  Special tools for web-based applications  Special tools for specific development flat forms  Special tools for embedded systems  Tool support using other tools Test Tools and their purposes Requirement management tools Store requirements, check for consistency, allow requirements to be prioritized, trace changes, coverage of requirements etc. Incident management tools Store and manage incident reports, facilitating prioritization, assessment of actions to people and attribution of status etc. Configuration management tools Store information about versions and builds of software and testware; enable traceability between testware and software work products etc. Review tools Store information, store and communicate review comments etc. Static analysis tools (D) The enforcement of coding standards, the analysis of structures and dependencies, aiding in understanding the code etc.
  • 43. Modeling tools (D) Validate models of the software, find defects in data model, state model or an object model etc. Test design tools Generate test inputs or executable tests, generate expected out comes etc. Test data preparation tools Preparing test data, Manipulate databases, files or data transmissions to set up test data etc. Test execution tools Record tests, Automated test execution, use inputs and expected outcomes, compare results with expected outcomes, repeat tests, dynamic comparison, manipulate the tests using scripting language etc. Test harness/unit test framework tools (D) Test components or part of a system by simulating the environment, provide an execution framework in middleware etc. Test comparators Determine differences between files, databases or test results post-execution comparison, may use test oracle if it is automated etc. Coverage measurement tools (D) Measure the percentage of specific types of code structure (ex: statements, branches or decisions, and module or function calls) Security tools Check for computer viruses and denial of service attacks, search for specific vulnerabilities of the system etc
  • 44. . Dynamic analysis tools (D) Detect memory leaks, identify time dependencies and identify pointer arithmetic errors. Performance/Load/Stress Testing tools Measure load or stress, Monitor and report on how a system behaves a variety of simulated usage conditions, simulate a load on an application/a database/or a system environment, repetitive execution of tests etc. Monitoring tools Continuously analyze, verify and report on specific system resources; store information about the version and build of the software and testware, and enable traceability. Tool support using other tools Some tools use other tools (Ex: QTP uses excel sheet and SQL tools) Potential benefits and risks of tool support for testing Benefits: o Repetitive work is reduced o Greater consistency and repeatability o Objective assessment o Ease of access to information about tests or testing Risks: o Unrealistic expectations for the tool o Underestimating the time and effort needed to achieve significant and continues benefits from the tool o Underestimating the effort required to maintain the test assets generated by the tool o Over-reliance on the tool Special considerations for some types of tools Following tools have special considerations  Test execution tools
  • 45. Performance testing tools  Static testing tools  Test management tools Introducing a tool into an organization The following factors are important in selecting a tool: o Assessment of the organization maturity o Identification of the areas within the organization where tool support will help to improve testing process o Evaluation of tools against clear requirements and objective criteria o Proof-of-concept to see whether the product works as desired and meets the requirements and objectives defined for it o Evaluation of the vendor (training, support and other commercial aspects) or open-source network of support o Identifying and planning internal implementation (including coaching and mentoring for those new to the use of the tool) The objectives for a pilot project for a new tool o To learn more about the tool o To see how the tool would fit with existing processes or documentation o To decide on standard ways of using the tool that will work for all potential users o To evaluate the pilot project agonist its objectives Successes factors for the deployment of the new tool within an organization o Rolling out the tool to the rest of the organization incrementally o Adapting and improving process to fit with the use of the tool o Providing training and coaching/mentoring for new users. o Defining usage guidelines o Implementing a way to learn lessons from tool use. o Monitoring tool use and benefits.
  • 46. Questions-with-Answers Ch- 1 1 When what is visible to end-users is a deviation from the specific or expected behavior, this is called: a) an error b) a fault c) a failure d) a defect e) a mistake 2 Regression testing should be performed: v) every week w) after the software has changed x) as often as possible y) when the environment has changed z) when the project manager says a) v & w are true, x – z are false b) w, x & y are true, v & z are false c) w & y are true, v, x & z are false d) w is true, v, x y and z are false e) all of the above are true 3 IEEE 829 test plan documentation standard contains all of the following except: a) test items b) test deliverables c) test tasks d) test environment e) test specification 4 Testing should be stopped when: a) all the planned tests have been run b) time has run out c) all faults have been fixed correctly d) both a) and c) e) it depends on the risks for the system being tested 5 Order numbers on a stock control system can range between 10000 and 99999 inclusive. Which of the following inputs might be a result of designing tests for only valid equivalence classes and valid boundaries: a) 1000, 5000, 99999 b) 9999, 50000, 100000 c) 10000, 50000, 99999
  • 47. d) 10000, 99999 e) 9999, 10000, 50000, 99999, 10000 6 Consider the following statements about early test design: i. early test design can prevent fault multiplication ii. faults found during early test design are more expensive to fix iii. early test design can find faults iv. early test design can cause changes to the requirements v. early test design takes more effort a) i, iii & iv are true. Ii & v are false b) iii is true, I, ii, iv & v are false c) iii & iv are true. i, ii & v are false d) i, iii, iv & v are true, ii us false e) i & iii are true, ii, iv & v are false 7 Non-functional system testing includes: a) testing to see where the system does not function properly b) testing quality attributes of the system including performance and usability c) testing a system feature using only the software required for that action d) testing a system feature using only the software required for that function e) testing for functions that should not exist 8 Which of the following is NOT part of configuration management: a) status accounting of configuration items b) auditing conformance to ISO9001 c) identification of test versions d) record of changes to documentation over time e) controlled library access 9 Which of the following is the main purpose of the integration strategy for integration testing in the small? a) to ensure that all of the small modules are tested adequately b) to ensure that the system interfaces to other systems and networks c) to specify which modules to combine when and how many at once d) to ensure that the integration testing can be performed by a small team e) to specify how the software should be divided into modules 10 What is the purpose of test completion criteria in a test plan: a) to know when a specific test has finished its execution b) to ensure that the test case specification is complete c) to set the criteria used in generating test inputs d) to know when test planning is complete e) to plan when to stop testing 11 Consider the following statements i. an incident may be closed without being fixed
  • 48. ii. incidents may not be raised against documentation iii. the final stage of incident tracking is fixing iv. the incident record does not include information on test environments v. incidents should be raised when someone other than the author of the software performs the test a) ii and v are true, I, iii and iv are false b) i and v are true, ii, iii and iv are false c) i, iv and v are true, ii and iii are false d) i and ii are true, iii, iv and v are false e) i is true, ii, iii, iv and v are false 12 Given the following code, which is true about the minimum number of test cases required for full statement and branch coverage: Read P Read Q IF P+Q > 100 THEN Print “Large” ENDIF If P > 50 THEN Print “P Large” ENDIF a) 1 test for statement coverage, 3 for branch coverage b) 1 test for statement coverage, 2 for branch coverage c) 1 test for statement coverage, 1 for branch coverage d) 2 tests for statement coverage, 3 for branch coverage e) 2 tests for statement coverage, 2 for branch coverage 13 Given the following: Switch PC on Start “outlook” IF outlook appears THEN Send an email Close outlook a) 1 test for statement coverage, 1 for branch coverage b) 1 test for statement coverage, 2 for branch coverage c) 1 test for statement coverage. 3 for branch coverage d) 2 tests for statement coverage, 2 for branch coverage e) 2 tests for statement coverage, 3 for branch coverage 14 Given the following code, which is true: IF A > B THEN
  • 49. C=A–B ELSE C=A+B ENDIF Read D IF C = D Then Print “Error” ENDIF a) 1 test for statement coverage, 3 for branch coverage b) 2 tests for statement coverage, 2 for branch coverage c) 2 tests for statement coverage. 3 for branch coverage d) 3 tests for statement coverage, 3 for branch coverage e) 3 tests for statement coverage, 2 for branch coverage 15 Consider the following: Pick up and read the newspaper Look at what is on television If there is a program that you are interested in watching then switch the the television on and watch the program Otherwise Continue reading the newspaper If there is a crossword in the newspaper then try and complete the crossword a) SC = 1 and DC = 1 b) SC = 1 and DC = 2 c) SC = 1 and DC = 3 d) SC = 2 and DC = 2 e) SC = 2 and DC = 3 16 The place to start if you want a (new) test tool is: a) Attend a tool exhibition b) Invite a vendor to give a demo c) Analyze your needs and requirements d) Find out what your budget would be for the tool e) Search the internet 17 When a new testing tool is purchased, it should be used first by: a) A small team to establish the best way to use the tool b) Everyone who may eventually have some use for the tool c) The independent testing team d) The managers to see what projects it should be used in e) The vendor contractor to write the initial scripts 18 What can static analysis NOT find? a) The use of a variable before it has been defined b) Unreachable (“dead”) code
  • 50. c) Whether the value stored in a variable is correct d) The re-definition of a variable before it has been used e) Array bound violations 19 Which of the following is NOT a black box technique: a) Equivalence partitioning b) State transition testing c) LCSAJ d) Syntax testing e) Boundary value analysis 20 Beta testing is: a) Performed by customers at their own site b) Performed by customers at their software developer‟s site c) Performed by an independent test team d) Useful to test bespoke software e) Performed as early as possible in the lifecycle 21 Given the following types of tool, which tools would typically be used by developers and which by an independent test team: i. static analysis ii. performance testing iii. test management iv. dynamic analysis v. test running vi. test data preparation a) developers would typically use i, iv and vi; test team ii, iii and v b) developers would typically use i and iv; test team ii, iii, v and vi c) developers would typically use i, ii, iii and iv; test team v and vi d) developers would typically use ii, iv and vi; test team I, ii and v e) developers would typically use i, iii, iv and v; test team ii and vi 22 The main focus of acceptance testing is: a) finding faults in the system b) ensuring that the system is acceptable to all users c) testing the system with other systems d) testing for a business perspective e) testing by an independent test team 23 Which of the following statements about the component testing standard is false: a) black box design techniques all have an associated measurement technique b) white box design techniques all have an associated measurement technique c) cyclomatic complexity is not a test measurement technique d) black box measurement techniques all have an associated test design technique e) white box measurement techniques all have an associated test design technique
  • 51. 24 Which of the following statements is NOT true: a) inspection is the most formal review process b) inspections should be led by a trained leader c) managers can perform inspections on management documents d) inspection is appropriate even when there are no written documents e) inspection compares documents with predecessor (source) documents 25 A typical commercial test execution tool would be able to perform all of the following EXCEPT: a) generating expected outputs b) replaying inputs according to a programmed script c) comparison of expected outcomes with actual outcomes d) recording test inputs e) reading test values from a data file 26 The difference between re-testing and regression testing is a) re-testing is running a test again; regression testing looks for unexpected side effects b) re-testing looks for unexpected side effects; regression testing is repeating those tests c) re-testing is done after faults are fixed; regression testing is done earlier d) re-testing uses different environments, regression testing uses the same environment e) re-testing is done by developers, regression testing is done by independent testers 27 Expected results are: a) only important in system testing b) only used in component testing c) never specified in advance d) most useful when specified in advance e) derived from the code 28 Test managers should not: a) report on deviations from the project plan b) sign the system off for release c) re-allocate resource to meet original plans d) raise incidents on faults that they have found e) provide information for risk analysis and quality improvement 29 Unreachable code would best be found using: a) code reviews b) code inspections c) a coverage tool d) a test management tool e) a static analysis tool 30 A tool that supports traceability, recording of incidents or scheduling of tests is called: a) a dynamic analysis tool
  • 52. b) a test execution tool c) a debugging tool d) a test management tool e) a configuration management tool 31 What information need not be included in a test incident report: a) how to fix the fault b) how to reproduce the fault c) test environment details d) severity, priority e) the actual and expected outcomes 32 Which expression best matches the following characteristics or review processes: 1. led by author 2. Undocumented 3. No management participation 4. Led by A trained moderator or leader 5. uses entry exit criteria s) inspection t) peer review u) informal review v) walkthrough a) s = 4, t = 3, u = 2 and 5, v = 1 b) s = 4 and 5, t = 3, u = 2, v = 1 c) s = 1 and 5, t = 3, u = 2, v = 4 d) s = 5, t = 4, u = 3, v = 1 and 2 e) s = 4 and 5, t = 1, u = 2, v = 3 33 Which of the following is NOT part of system testing: a) business process-based testing b) performance, load and stress testing c) requirements-based testing d) usability testing e) top-down integration testing 34 What statement about expected outcomes is FALSE: a) expected outcomes are defined by the software‟s behavior b) expected outcomes are derived from a specification, not from the code c) expected outcomes include outputs to a screen and changes to files and databases d) expected outcomes should be predicted before a test is run e) expected outcomes may include timing constraints such as response times 35 The standard that gives definitions of testing terms is: a) ISO/IEC 12207
  • 53. b) BS7925-1 c) BS7925-2 d) ANSI/IEEE 829 e) ANSI/IEEE 729 36 The cost of fixing a fault: a) Is not important b) Increases as we move the product towards live use c) Decreases as we move the product towards live use d) Is more expensive if found in requirements than functional design e) Can never be determined 37 Which of the following is NOT included in the Test Plan document of the Test Documentation Standard: a) Test items (i.e. software versions) b) What is not to be tested c) Test environments d) Quality plans e) Schedules and deadlines 38 Could reviews or inspections be considered part of testing: a) No, because they apply to development documentation b) No, because they are normally applied before testing c) No, because they do not apply to the test documentation d) Yes, because both help detect faults and improve quality e) Yes, because testing includes all non-constructive activities 39 Which of the following is not part of performance testing: a) Measuring response time b) Measuring transaction rates c) Recovery testing d) Simulating many users e) Generating many transactions 40 Error guessing is best used a) As the first approach to deriving test cases b) After more formal techniques have been applied c) By inexperienced testers d) After the system has gone live e) Only by end users
  • 54. Ch- 2 1. Which of the following is true? a. Testing is the same as quality assurance b. Testing is a part of quality assurance c. Testing is not a part of quality assurance d. Testing is same as debugging 2. Why is testing necessary? a. Because testing is good method to make there are not defects in the software b. Because verification and validation are not enough to get to know the quality of the software c. Because testing measures the quality of the software system and helps to increase the quality d. Because testing finds more defects than reviews and inspections. 3. Integration testing has following characteristics I. It can be done in incremental manner II. It is always done after system testing III. It includes functional tests IV. It includes non-functional tests a. I, II and III are correct b. I is correct c. I, III and IV are correct d. I, II and IV are correct 4. A number of critical bugs are fixed in software. All the bugs are in one module, related to reports. The test manager decides to do regression testing only on the reports module. a. The test manager should do only automated regression testing. b. The test manager is justified in her decision because no bug has been fixed in other modules c. The test manager should only do confirmation testing. There is no need to do regression testing d. Regression testing should be done on other modules as well because fixing one module may affect other modules 5. Which of the following is correct about static analysis tools? a. Static analysis tools are used only by developers b. Compilers may offer some support for static analysis c. Static analysis tools help find failures rather than defects d. Static analysis tools require execution of the code to analyze the coverage
  • 55. 6. In a flight reservation system, the number of available seats in each plane model is an input. A plane may have any positive number of available seats, up to the given capacity of the plane. Using Boundary Value analysis, a list of available – seat values were generated. Which of the following lists is correct? a. 1, 2, capacity -1, capacity, capacity plus 1 b. 0, 1, capacity, capacity plus 1 c. 0, 1, 2, capacity plus 1, a very large number d. 0, 1, 10, 100, capacity, capacity plus one 7. Which of the following is correct about static analysis tools a. They help you find defects rather than failures b. They are used by developers only c. They require compilation of code d. They are useful only for regulated industries 8. In foundation level syllabus you will find the main basic principles of testing. Which of the following sentences describes one of these basic principles? a. Complete testing of software is attainable if you have enough resources and test tools b. With automated testing you can make statements with more confidence about the Quality of a product than with manual testing c. For a software system, it is not possible, under normal conditions, to test all input and preconditions. d. A goal of testing is to show that the software is defect free. 9. Which of the following statements contains a valid goal for a functional test set? a. A goal is that no more failures will result from the remaining defects b. A goal is to find as many failures as possible so that the cause of the failures can be identified and fixed c. A goal is to eliminate as much as possible the causes of defects d. A goal is to fulfill all requirements for testing that are defined in the project plan. 10. In system testing... a. .. Both functional and non-functional requirements are to be tested b. ... Only functional requirements are tested; non-functional requirements are validated in a review c. ... Only non-functional requirements are tested; functional requirements are validated in a review d. ... Only requirements which are listed in the specification document are to be tested 11. Which of the following activities differentiate a walkthrough from a formal review? a. A walkthrough does not follow a defined process b. For a walkthrough individual preparation by the reviewers is optional c. A walkthrough requires meeting