Chapter 1: Fundamentals of Testing
1.1 Why is testing necessary?
bug, defect, error, failure, mistake, quality, risk, software, testing and exhaustive testing.
1.2 What is testing?
code, debugging, requirement, test basis, test case, test objective
1.3 Testing principles
1.4 Fundamental test process
confirmation testing, exit criteria, incident, regression testing, test condition, test coverage,
test data, test execution, test log, test plan, test strategy, test summary report and testware.
1.5 The psychology of testing
independence.
I) General testing principles
Principles
A number of testing principles have been suggested over the past 40 years and offer general
guidelines common for all testing.
Principle 1 – Testing shows presence of defects
Testing can show that defects are present, but cannot prove that there are no defects. Testing
reduces the probability of undiscovered defects remaining in the software but, even if no
defects are found, it is not a proof of correctness.
Principle 2 – Exhaustive testing is impossible
 Testing everything (all combinations of inputs and preconditions) is not feasible except for
trivial cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus
testing efforts.
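As a rough, invented illustration of the scale involved (the field sizes and execution rate below are assumptions, not figures from the syllabus):

```python
# Hypothetical back-of-the-envelope calculation of why exhaustive testing is
# infeasible: even a few independent 32-bit input fields explode combinatorially.
values_per_field = 2 ** 32          # possible values of one 32-bit input field
fields = 3                          # assume just three independent such fields
combinations = values_per_field ** fields

tests_per_second = 1_000_000        # a very optimistic automated execution rate
seconds_per_year = 60 * 60 * 24 * 365
years = combinations / tests_per_second / seconds_per_year
print(f"{combinations:e} combinations, roughly {years:e} years to execute them all")
```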
Principle 3 – Early testing
Testing activities should start as early as possible in the software or system development life
cycle, and should be focused on defined objectives.
Principle 4 – Defect clustering
A small number of modules contain most of the defects discovered during pre-release testing,
or are responsible for the most operational failures.
Principle 5 – Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no
longer find any new defects. To overcome this “pesticide paradox”, the test cases need to be
regularly reviewed and revised, and new and different tests need to be written to exercise
different parts of the software or system to potentially find more defects.
Principle 6 – Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is tested
differently from an e-commerce site.
Principle 7 – Absence-of-errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill the users’ needs and expectations.
II) Fundamental test process
1) Test planning and control
Test planning is the activity of verifying the mission of testing, defining the objectives of testing and specifying the test activities required to meet those objectives and that mission.
Test control involves taking the actions necessary to meet the mission and objectives of the project. In order to control testing, it should be monitored throughout the project. Test planning takes into account the feedback from monitoring and control activities.
2) Test analysis and design
Test analysis and design is the activity where general testing objectives are transformed into
tangible test conditions and test cases.
Test analysis and design has the following major tasks:
- Reviewing the test basis (such as requirements, architecture, design, interfaces).
- Evaluating testability of the test basis and test objects.
- Identifying and prioritizing test conditions based on analysis of test items, the specification, behaviour and structure.
- Designing and prioritizing test cases.
- Identifying necessary test data to support the test conditions and test cases.
- Designing the test environment set-up and identifying any required infrastructure and tools.
3) Test implementation and execution
- Developing, implementing and prioritizing test cases.
- Developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts.
- Creating test suites from the test procedures for efficient test execution.
- Verifying that the test environment has been set up correctly.
- Executing test procedures either manually or by using test execution tools, according to the planned sequence.
- Logging the outcome of test execution and recording the identities and versions of the software under test, test tools and testware.
- Comparing actual results with expected results (a minimal sketch of these execution, comparison and logging steps follows this list).
- Reporting discrepancies as incidents and analyzing them in order to establish their cause (e.g. a defect in the code, in specified test data, in the test document, or a mistake in the way the test was executed).
- Repeating test activities as a result of action taken for each discrepancy. For example, re-execution of a test that previously failed in order to confirm a fix (confirmation testing), execution of a corrected test and/or execution of tests in order to ensure that defects have not been introduced in unchanged areas of the software or that defect fixing did not uncover other defects (regression testing).
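A minimal, hypothetical sketch (in Python) of the execution, comparison and logging steps listed above; the function under test, the version string and the test data are invented for illustration:

```python
# Hypothetical sketch: execute a test, compare actual with expected results,
# and log the outcome together with the version of the software under test.
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

SOFTWARE_UNDER_TEST_VERSION = "1.4.2"   # illustrative version identifier


def add(a, b):
    """Stand-in for the software under test."""
    return a + b


def run_test(test_id, func, args, expected):
    actual = func(*args)                                  # test execution
    verdict = "PASS" if actual == expected else "FAIL"    # comparison
    logging.info("test=%s version=%s expected=%r actual=%r verdict=%s",  # test log
                 test_id, SOFTWARE_UNDER_TEST_VERSION, expected, actual, verdict)
    return verdict


if __name__ == "__main__":
    run_test("TC-001", add, (2, 3), 5)    # expected result specified in advance
    run_test("TC-002", add, (-1, 1), 0)
```
In practice a test execution tool or framework provides this execution, comparison and logging; the sketch only makes the individual steps visible.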

4) Evaluating exit criteria and reporting
- Checking test logs against the exit criteria specified in test planning.
- Assessing if more tests are needed or if the exit criteria specified should be changed.
- Writing a test summary report for stakeholders.
5) Test closure activities
- Checking which planned deliverables have been delivered, the closure of incident reports or raising of change records for any that remain open, and the documentation of the acceptance of the system.
- Finalizing and archiving testware, the test environment and the test infrastructure for later reuse.
- Handover of testware to the maintenance organization.
- Analyzing lessons learned for future releases and projects, and the improvement of test maturity.
III) The psychology of testing
Several levels of test independence can be defined, from low to high:
- Tests designed by the person(s) who wrote the software under test (low level of independence).
- Tests designed by another person(s) (e.g. from the development team).
- Tests designed by a person(s) from a different organizational group (e.g. an independent test team) or by test specialists (e.g. usability or performance test specialists).
- Tests designed by a person(s) from a different organization or company (i.e. outsourcing or certification by an external body).
Questions:
Example 1: 50 Questions (knol.google.com)
(These are sample questions related to Chapter 1. For answers, see the other sections of ISTQB below.)


1) When what is visible to end-users is a deviation from the specific or expected behavior, this
is called:
a) an error
b) a fault
c) a failure
d) a defect

2) Regression testing should be performed:
v) every week
w) after the software has changed
x) as often as possible
y) when the environment has changed
z) when the project manager says
a) v & w are true, x – z are false
b) w, x & y are true, v & z are false
c) w & y are true, v, x & z are false
d) w is true, v, x y and z are false

3) Testing should be stopped when:
a) all the planned tests have been run
b) time has run out
c) all faults have been fixed correctly
d) it depends on the risks for the system being tested

4) Consider the following statements about early test design:
i. early test design can prevent fault multiplication
ii. faults found during early test design are more expensive to fix
iii. early test design can find faults
iv. early test design can cause changes to the requirements
v. early test design takes more effort
a) i, iii & iv are true; ii & v are false
b) iii is true; i, ii, iv & v are false
c) iii & iv are true; i, ii & v are false
d) i, iii, iv & v are true; ii is false

5) The main focus of acceptance testing is:
a) finding faults in the system
b) ensuring that the system is acceptable to all users
c) testing the system with other systems
d) testing from a business perspective

6) The difference between re-testing and regression testing is
a) re-testing is running a test again; regression testing looks for unexpected side effects
b) re-testing looks for unexpected side effects; regression testing is repeating those tests
c) re-testing is done after faults are fixed; regression testing is done earlier
d) re-testing is done by developers, regression testing is done by independent testers

7) Expected results are:
a) only important in system testing
b) only used in component testing
c) never specified in advance
d) most useful when specified in advance

8) The cost of fixing a fault:
a) Is not important
b) Increases as we move the product towards live use
c) Decreases as we move the product towards live use
d) Is more expensive if found in requirements than functional design

9) Fault Masking is
a. Error condition hiding another error condition
b. creating a test case which does not reveal a fault
c. masking a fault by developer
d. masking a fault by a tester

10) One Key reason why developers have difficulty testing their own work is:
a. Lack of technical documentation
b. Lack of test tools on the market for developers
c. Lack of training
d. Lack of Objectivity

11) Enough testing has been performed when:
a) Time runs out.
b) The required level of confidence has been achieved.
c) No more faults are found.
d) The users won’t find any serious faults.

12) Which of the following is false?
a) In a system two different failures may have different severities.
b) A system is necessarily more reliable after debugging for the removal of a fault.
c) A fault need not affect the reliability of a system.
d) Undetected errors may lead to faults and eventually to incorrect behavior.

13) Which of the following characterises the cost of faults?
a) They are cheapest to find in the early development phases and the least expensive to fix.
b) They are easiest to find during system testing but the most expensive to fix then.
c) Faults are cheapest to find in the early development phases but the most expensive to fix
then.
d) Although faults are most expensive to find during early development phases, they are
cheapest to fix then.

14) According to the ISTQB Glossary a risk relates to which of the following?
a) Negative feedback to the tester.
b) Negative consequences that will occur.
c) Negative consequences that could occur.
d) Negative consequences for the test object.


15) Ensuring that test design starts during the requirements definition phase is important to
enable which of the following test objectives?
a) Preventing defects in the system.
b) Finding defects through dynamic testing.
c) Gaining confidence in the system.
d) Finishing the project on time.

16) A failure is:
a) Found in the software; the result of an error.
b) Departure from specified behavior.
c) An incorrect step, process or data definition in a computer program.
d) A human action that produces an incorrect result.

17) Faults found by users are due to:
a. Poor quality software
b. Poor software and poor testing
c. bad luck
d. insufficient time for testing

18) Which of the following statements are true?
a. Faults in program specifications are the most expensive to fix.
b. Faults in code are the most expensive to fix.
c. Faults in requirements are the most expensive to fix
d. Faults in designs are the most expensive to fix.

19) COTS is known as:
a. Commercial off the shelf software
b. Compliance of the software
c. Change control of the software
d. Capable off the shelf software

20) Which is not the testing objective?
a. Finding defects
b. Gaining confidence about the level of quality and providing information
c. Preventing defects.
d. Debugging defects

21) Exhaustive Testing is
a) Impractical but possible
b) Practically possible
c) Impractical and impossible
d) Always possible

22) Which of the following is most important to promote and maintain good relationships
between developers and testers?
a) Understanding what Managers value about testing.
b) Explaining test results in a neutral fashion.
c) Identifying potential customer work-around for bugs.
d) Promoting better quality software whenever possible.

23) According to ISTQB Glossary, the word ‘Error’ is synonymous with which of the
following?
a) Failure
b) Defect
c) Mistake
d) Bug

24) In prioritising what to test, the most important objective is to:
a) Find as many faults as possible.
b) Test high risk areas.
c) Obtain good test coverage.
d) Test whatever is easiest to test.

25) Incidents would not be raised against:
a) Requirements
b) Documentation
c) Test cases
d) Improvements suggested by users

26) Designing the test environment set-up and identifying any required infrastructure and
tools are a part of which phase
a) Test Implementation and execution
b) Test Analysis and Design
c) Evaluating the Exit Criteria and reporting
d) Test Closure Activities

27) Which of the following is not a part of the Test Implementation and Execution Phase?
a) Creating test suites from the test cases
b) Executing test cases either manually or by using test execution tools
c) Comparing actual results
d) Designing the Tests

28) Test cases grouped into manageable (and scheduled) units are called:
a. Test Harness
b. Test Suite
c. Test Cycle
d. Test Driver

29) Which of the following could be a reason for a failure
1) Testing fault
2) Software fault
3) Design fault
4) Environment Fault
5) Documentation Fault
a. 2 is a valid reason; 1,3,4 & 5 are not
b. 1,2,3,4 are valid reasons; 5 is not
c. 1,2,3 are valid reasons; 4 & 5 are not
d. All of them are valid reasons for failure

30) Handover of Testware is a part of which Phase
a) Test Analysis and Design
b) Test Planning and control
c) Test Closure Activities
d) Evaluating exit criteria and reporting


31) An exhaustive test suite would include:
a) All combinations of input values and preconditions.
b) All combinations of input values and output values.
c) All pairs of input values and preconditions.
d) All states and state transitions.

32) Which of the following encourages objective testing?
a) Unit Testing.
b) System Testing.
c) Independent Testing.
d) Destructive Testing.

33) Consider the following list of test process activities:
I Analysis and Design
II Test Closure activities
III Evaluating exit criteria and reporting
IV Planning and Control
V Implementation and execution
Which of the following places these in their logical sequence?
a) I, II, III, IV and V
b) IV, I, V, III and II
c) IV, I, V, II and III
d) I, IV, V, III and II

34) According to ISTQB Glossary, debugging:
a) Is part of the fundamental test process.
b) Includes the repair of the cause of a failure
c) Involves intentionally adding known defects
d) Follows the steps of a test procedure

35) Which of the following could be a root cause of a defect in financial software in which an
incorrect interest rate is calculated?
a) Insufficient funds were available to pay the interest rate calculated.
b) Insufficient calculations of compound interest were included.
c) Insufficient training was given to the developers concerning compound interest calculation
rules.
d) Incorrect calculators were used to calculate the expected results.

36) When should you stop testing?
a) When the time for testing has run out
b) When all planned tests have been run
c) When the test completion criteria have been met
d) When no faults have been found by the tests run

37) An incident logging system:
a) Only records defects
b) is of limited value
c) is a valuable source of project information during testing if it contains all incidents
d) Should be used only by the test team

38) The term confirmation testing is synonymous to
a) Exploratory testing
b) Regression testing
c) Exhaustive testing
d) Re- testing

39) Consider the following statements:
i. an incident may be closed without being fixed.
ii. Incidents may not be raised against documentation.
iii. The final stage of incident tracking is fixing.
iv. The incident record does not include information on test environments.
a) ii is true, i, iii and iv are false
b) i is true, ii, iii and iv are false
c) i and iv are true, ii and iii are false
d) i and ii are true, iii and iv are false

40) Which of the following is not a characteristic of software?
a) Software is developed or engineered; it is not manufactured in the classic sense
b) Software doesn’t “wear out” with the time
c) The traditional industry is moving toward component based assembly, whereas most
software continues to be custom built and the concept of component based assembly is still
taking shape
d) Software does not require maintenance

41) If the expected result is not specified then:
a) We cannot run the test
b) It may be difficult to repeat the test
c) It may be difficult to determine if the test has passed or failed
d) We cannot automate the user inputs

42) A reliable system will be one that:
a) is unlikely to be completed on schedule
b) is unlikely to cause a failure
c) is likely to be fault free
d) is likely to be liked by the users

43) What is the purpose of Exit Criteria?
a) To determine when writing a test case is complete
b) To determine when to stop the testing
c) To ensure the test specification is complete
d) To determine when to stop writing the test plan

44) What is the focus of Re-Testing?
a) Re-Testing ensures the original fault has been removed
b) Re-Testing prevents future faults
c) Re-Testing looks for unexpected side effects
d) Re-Testing ensures the original fault is still present

45) How is the amount of Re-Testing required normally defined?
a) Discussions with the end users
b) Discussions with the developers
c) Metrics from Previous projects
d) none of the above

46) A manifestation of an ‘error’ in software is
a) An Error
b) A Fault
c) A Failure
d) An Action

47) If testing time is limited, we should …
a) Only test high risk areas
b) Only test simple areas
c) Only test low risk areas
d) Only test complicated areas

48) The quality of the product is said to increase when:
a) All faults have been reviewed
b) All faults have been found
c) All faults have been raised
d) All faults have been rectified

49) Pick the best definition of quality
a) Quality is job one
b) Zero defects
c) Conformance to requirements
d) Work as designed

50) What is the Main reason for testing software before releasing it?
a) To show the system will work after release
b) To decide when software is of sufficient quality to release
c) To find as many bugs as possible before release
d) To give information for a risk based decision about release


51) Select a reason that does not agree with the fact that complete testing is impossible:
a) The domain of possible inputs is too large to test.
b) Limited financial resources.
c) There are too many possible paths through the program to test.
d) The user interface issues (and thus the design issues) are too complex to completely test.




Chapter 2: Testing throughout the software life cycle
International Software Testing Qualifications Board terms: development models.
Software development models, test levels, test types and maintenance testing; tests and answers; sample questionnaire. A detailed explanation of the SDLC and V-models can be found in Software Testing Theory Part 1 (Index and Sub-Index).


2.1 Software development models
COTS, iterative-incremental development model, validation, verification, V-model.




2.2 Test levels
 Alpha testing, beta testing, component testing (also known as unit/module/program testing),
driver, stub, field testing, functional requirement, non-functional requirement, integration,
integration testing, robustness testing, system testing, test level, test-driven development, test
environment, user acceptance testing.

2.3 Test types
 Black box testing, code coverage, functional testing, interoperability testing, load testing,
maintainability testing, performance testing, portability testing, reliability testing, security
testing, specification based testing, stress testing, structural testing, usability testing, white
box testing

2.4 Maintenance testing
Impact analysis, maintenance testing.

i) Software development models
a) V-model (sequential development model)
 Although variants of the V-model exist, a common type of V-model uses four test levels,
corresponding to the four development levels.
The four levels used in this syllabus are:
component (unit) testing;
integration testing;
system testing;
acceptance testing.

b) Iterative-incremental development models
Iterative-incremental development is the process of establishing requirements, designing,
building and testing a system, done as a series of shorter development cycles. Examples are:
prototyping, rapid application development (RAD), Rational Unified Process (RUP) and agile
development models.


c) Testing within a life cycle model
In any life cycle model, there are several characteristics of good testing:
- For every development activity there is a corresponding testing activity.
- Each test level has test objectives specific to that level.
- The analysis and design of tests for a given test level should begin during the corresponding development activity.
- Testers should be involved in reviewing documents as soon as drafts are available in the development life cycle.


ii) Test levels
a) Component testing
Component testing searches for defects in, and verifies the functioning of, software (e.g.
modules, programs, objects, classes, etc.) that are separately testable.
Component testing may include testing of functionality and specific non-functional characteristics, such as resource-behaviour (e.g. memory leaks) or robustness testing, as well
as structural testing (e.g. branch coverage).
One approach to component testing is to prepare and automate test cases before coding. This
is called a test-first approach or test-driven development.
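As a small, invented illustration of the test-first idea (the leap_year example is not from the syllabus): the tests are written first and initially fail, and only then is just enough code written to make them pass.

```python
# Hypothetical test-first example: these tests are written (and run, failing)
# before leap_year exists; the implementation is then added to make them pass.
import unittest


def leap_year(year):
    """Implementation written after the tests, just enough to make them pass."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)


class LeapYearTest(unittest.TestCase):
    def test_century_is_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_every_fourth_year_is_leap(self):
        self.assertTrue(leap_year(2024))


if __name__ == "__main__":
    unittest.main()
```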
b) Integration testing
 Integration testing tests interfaces between components, interactions with different parts of a
system, such as the operating system, file system, hardware, or interfaces between systems.
Component integration testing tests the interactions between software components and is
done after component testing;
System integration testing tests the interactions between different systems and may be done
after system testing.
Testing of specific non-functional characteristics (e.g. performance) may be included in
integration testing.
c) System testing
 System testing is concerned with the behaviour of a whole system/product as defined by the
scope of a development project or programme.
In system testing, the test environment should correspond to the final target or production
environment as much as possible in order to minimize the risk of environment-specific
failures not being found in testing.
System testing may include tests based on risks and/or on requirements specifications,
business processes, use cases, or other high level descriptions of system behaviour,
interactions with the operating system, and system resources.
System testing should investigate both functional and non-functional requirements of the
system.
d) Acceptance testing
 Acceptance testing is often the responsibility of the customers or users of a system; other
stakeholders may be involved as well.
The goal in acceptance testing is to establish confidence in the system, parts of the system or
specific non-functional characteristics of the system
Contract and regulation acceptance testing
Contract acceptance testing is performed against a contract’s acceptance criteria for producing
custom-developed software. Acceptance criteria should be defined when the contract is
agreed. Regulation acceptance testing is performed against any regulations that must be
adhered to, such as governmental, legal or safety regulations.
Alpha and beta (or field) testing
Alpha testing is performed at the developing organization’s site. Beta testing, or field testing,
is performed by people at their own locations. Both are performed by potential customers, not
the developers of the product.
iii) Test types
a) Testing of function (functional testing)
The functions that a system, subsystem or component are to perform may be described in
work products such as a requirements specification, use cases, or a functional specification, or
they may be undocumented. The functions are “what” the system does.
A type of functional testing, security testing, investigates the functions (e.g. a firewall) relating
to detection of threats, such as viruses, from malicious outsiders. Another type of functional
testing, interoperability testing, evaluates the capability of the software product to interact
with one or more specified components or systems.
b) Testing of non-functional software characteristics (non-functional testing)
 Non-functional testing includes, but is not limited to, performance testing, load testing, stress
testing, usability testing, maintainability testing, reliability testing and portability testing. It is
the testing of “how” the system works.
Non-functional testing may be performed at all test levels.
c) Testing of software structure/architecture (structural testing)
 Structural (white-box) testing may be performed at all test levels. Structural techniques are
best used after specification-based techniques, in order to help measure the thoroughness of
testing through assessment of coverage of a type of structure.
Structural testing approaches can also be applied at system, system integration or acceptance
testing levels (e.g. to business models or menu structures).
d) Testing related to changes (confirmation testing (retesting) and regression
testing)
After a defect is detected and fixed, the software should be retested to confirm that the original
defect has been successfully removed. This is called confirmation testing (re-testing). Debugging (defect fixing) is
a development activity, not a testing activity.
Regression testing is the repeated testing of an already tested program, after modification, to
discover any defects introduced or uncovered as a result of the change(s). It is
performed when the software, or its environment, is changed.
Regression testing may be performed at all test levels, and applies to functional, non-
functional and structural testing.
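A small, hypothetical sketch of the difference: after fixing a rounding defect in an invented apply_discount function, the previously failing test is re-run (confirmation testing) and unchanged behaviour is re-checked (regression testing).

```python
# Hypothetical example distinguishing confirmation testing from regression testing.
import unittest


def apply_discount(price, percent):
    """Fixed version: the (invented) defect was a result not rounded to 2 decimals."""
    return round(price * (1 - percent / 100), 2)


class DiscountTests(unittest.TestCase):
    def test_rounding_defect_fixed(self):
        # Confirmation test: this exact case failed before the fix.
        self.assertEqual(apply_discount(19.99, 15), 16.99)

    def test_zero_discount_unchanged(self):
        # Regression test: behaviour that should not have been affected by the fix.
        self.assertEqual(apply_discount(100.00, 0), 100.00)


if __name__ == "__main__":
    unittest.main()
```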
iv) Maintenance testing
Once deployed, a software system is often in service for years or decades. During this time the
system and its environment are often corrected, changed or extended.
Modifications include planned enhancement changes (e.g. release-based), corrective and emergency changes, and changes of environment.
Maintenance testing for migration (e.g. from one platform to another) should include
operational tests of the new environment, as well as of the changed software.
Maintenance testing for the retirement of a system may include the testing of data migration
or archiving if long data-retention periods are required.
Maintenance testing may be done at any or all test levels and for any or all test types.
Questions:
1) What are the good practices for testing within the software development life cycle?
a) Early test analysis and design
b) Different test levels are defined with specific objectives
c) Testers will start to get involved as soon as coding is done.
d) A and B above


2) Which option best describes objectives for test levels with a life cycle model?
a) Objectives should be generic for any test level
b) Objectives are the same for each test level.
c) The objectives of a test level don’t need to be defined in advance
d) Each level has objectives specific to that level.

3) Which of the following is a test type?
a) Component testing
b) Functional testing
c) System testing
d) Acceptance testing

4) Non-functional system testing includes:
a) Testing to see where the system does not function properly
b) testing quality attributes of the system including performance and usability
c) testing a system feature using only the software required for that action
d) testing for functions that should not exist

5) Beta testing is:
a) Performed by customers at their own site
b) Performed by customers at their software developer’s site
c) Performed by an independent test team
d) Performed as early as possible in the lifecycle


6) Which of the following is not part of performance testing:
a) Measuring response time
b) Measuring transaction rates
c) Recovery testing
d) Simulating many users

7) Which one of the following statements about system testing is NOT true?
a) System tests are often performed by independent teams.
b) Functional testing is used more than structural testing.
c) Faults found during system tests can be very expensive to fix.
d) End-users should be involved in system tests.

8) Integration testing in the small:
a) Tests the individual components that have been developed.
b) Tests interactions between modules or subsystems.
c) Only uses components that form part of the live system.
d) Tests interfaces to other systems.


9) Alpha testing is:
a) Post-release testing by end user representatives at the developer’s site.
b) The first testing that is performed.
c) Pre-release testing by end user representatives at the developer’s site.
d) Pre-release testing by end user representatives at their sites.

10) Software testing activities should start
a) As soon as the code is written
b) during the design stage
c) when the requirements have been formally documented
d) as soon as possible in the development life cycle


11) Consider the following statements about regression tests:
I. They may usefully be automated if they are well designed.
II. They are the same as confirmation tests.
III. They are a way to reduce the risk of a change having an adverse effect elsewhere in the system.
IV. They are only effective if automated.
Which pairs of statements are true?
a) I and II
b) I and III
c) II and III
d) II and IV

12) Which of these statements about functional testing is true?
a) Structural testing is more important than functional testing as it addresses the code
b) Functional testing is useful throughout the life cycle and can be applied by business
analysts, developers, testers and users
c) Functional testing is more powerful than static testing as you actually run the system and
see what happens.
d) Inspection is a form of functional testing

13) Consider the following statements about maintenance testing:
I. It requires both re-test and regression test and may require additional new tests
II. It is testing to show how easy it will be to maintain the system
III. It is difficult to scope and therefore needs careful risk and impact analysis
IV. It need not be done for emergency bug fixes.
Which of the statements are true?
a) I and III
b) I and IV
c) II and III
d) II and IV
14) Which of the following is NOT part of system testing:
a) business process-based testing
b) performance, load and stress testing
c) requirements-based testing
d) top-down integration testing
15) To test a function, the programmer has to write a _________, which calls the function to
be tested and passes it test data.
a. Stub
b. Driver
c. Proxy
d. None of the above
16) Which of the following is a form of functional testing?
a) Boundary value analysis
b) Usability testing
c) Performance testing
d) Security testing
17) Which one of the following describes the major benefit of verification early in the life
cycle?
a) It allows the identification of changes in user requirements.
b) It facilitates timely set up of the test environment.
c) It reduces defect multiplication.
d) It allows testers to become involved early in the project.
18) The most important thing about early test design is that it:
a) makes test preparation easier.
b) means inspections are not required.
c) can prevent fault multiplication.
d) will find all faults.
19). Which of the following statements is not true
a. performance testing can be done during unit testing as well as during the testing of whole
system
b. The acceptance test does not necessarily include a regression test
c. Verification activities should not involve testers (reviews, inspections etc)
d. Test environments should be as similar to production environments as possible
20) Which of the following is NOT a type of non-functional test?
a. State-Transition
b. Usability
c. Performance
d. Reliability
21) Which of the following is not an integration strategy?
a. Design based
b. Big-bang
c. Bottom-up
d. Top-down
22) Big bang approach is related to
a) Regression testing
b) Inter system testing
c) Re-testing
d) Integration testing


23) “Which life cycle model is basically driven by schedule and budget risks” This statement is
best suited for
a) Waterfall model
b) Spiral model
c) Incremental model
d) V-Model
24) Use cases can be performed to test
A. Performance testing
B. Unit testing
C. Business scenarios
D. Static testing
25) Which testing is performed at an external site?
A. Unit testing
B. Regression testing
C. Beta testing
D. Integration testing
26) Functional system testing is:
a) Testing that the system functions with other systems
b) Testing that the components that comprise the system function together
c) Testing the end to end functionality of the system as a whole
d) Testing the system performs functions within specified response times


27) Maintenance testing is:
a) Updating tests when the software has changed
b) testing a released system that has been changed
c) testing by users to ensure that the system meets a business need
d) testing to maintain business advantage
28) Which of the following uses Impact Analysis most?
a) component testing
b) non-functional system testing
c) user acceptance testing
d) maintenance testing
29) Which of the following is NOT part of system testing?
a) business process-based testing
b) performance, load and stress testing
c) usability testing
d) top-down integration testing
30) Which of the following list contains only non-functional tests?
a. compatibility testing, usability testing, performance testing
b. System testing, performance testing
c. Load testing, stress testing, component testing, portability testing
d. Testing various configurations, beta testing, load testing
31) V-Model is:
a. A software development model that illustrates how testing activities integrate with software
development phases
b. A software life-cycle model that is not relevant for testing
c. The official software development and testing life-cycle model of ISTQB
d. A testing life cycle model including unit, integration, system and acceptance phases
32) Maintenance testing is:
a. Testing management
b. Synonym of testing the quality of service
c. Triggered by modifications, migration or retirement of existing software
d. Testing the level of maintenance by the vendor
33) Link Testing is also called as:
a) Component Integration testing
b) Component System Testing
c) Component Sub System Testing
d) Maintenance testing
34) Component Testing is also called as:
i. Unit Testing
ii. Program Testing
iii. Module Testing
iv. System Component Testing
Which of the following is correct?
a) i, ii, iii are true and iv is false
b) i, ii, iii, iv are false
c) i, ii, iv are true and iii is false
d) All of above is true
35) Match every stage of the software Development Life cycle with the Testing Life cycle:
i. Global design
ii. System Requirements
iii. Detailed design
iv. User Requirements
a) Unit tests
b) Acceptance tests
c) System tests
d) Integration tests
a) i -d , ii-a , iii-b , iv-c
b) i -c , ii-d , iii-a , iv-b
c) i -d , ii-c , iii-a , iv-b
d) i -c , ii-d , iii-b , iv-a
36) Which of these is a functional test?
a) Measuring response time on an online booking system
b) Checking the effect of high volumes of traffic in a call-center system.
c) Checking the on-line booking screens information and the database contacts against the
information on the letter to the customers
d) Checking how easy the system is to use


Chapter 3: Static Techniques
International Software Testing Qualifications Board, objective tests, sample questions.
This chapter explains the static and dynamic techniques of testing; the review process (formal and informal reviews, inspections and technical reviews); static analysis (compiler, complexity, control flow, data flow); and walkthroughs. For other manual testing concepts and SDLC, STLC, BLC topics, please refer to the main chapter of Software Testing Theory.


3.1 Static techniques and the test process
dynamic testing, static testing, static technique




Img 1.1: Permanent demand trend, UK internet job market. The chart provides the 3-month moving total, beginning in 2004, of permanent IT jobs citing ISTQB within the UK as a proportion of the total demand within the Qualifications category (2009).
3.2 Review process
entry criteria, formal review, informal review, inspection, metric, moderator/inspection
leader, peer review, reviewer, scribe, technical review, walkthrough.


3.3 Static analysis by tools
Compiler, complexity, control flow, data flow, static analysis


I) Phases of a formal review
1) Planning
Selecting the personnel, allocating roles, and defining entry and exit criteria for more formal reviews, etc.

2) Kick-off
Distributing documents, explaining the objectives, checking entry criteria etc.
3) Individual preparation
Work done by each of the participants on their own before the review meeting, noting questions and comments
4) Review meeting
Discussion or logging, make recommendations for handling the defects, or make decisions
about the defects
5) Rework
Fixing defects found, typically done by the author
6) Follow-up
Checking the defects have been addressed, gathering metrics and checking on exit criteria


II) Roles and responsibilities
Manager: decides on the execution of reviews, allocates time in project schedules, and determines if the review objectives have been met.
Moderator: leads the review, including planning the review, running the meeting and follow-up after the meeting.
Author: the writer or person with chief responsibility for the document(s) to be reviewed.
Reviewers: individuals with a specific technical or business background; they identify defects and describe findings.
Scribe (recorder): documents all the issues, problems and open points identified during the meeting.


III) Types of review
Informal review: no formal process; pair programming or a technical lead reviewing designs and code.
Main purpose: an inexpensive way to get some benefit.
Walkthrough: a meeting led by the author; 'scenarios, dry runs, peer group'; open-ended sessions.
Main purpose: learning, gaining understanding, defect finding.
Technical review: documented, defined defect-detection process; ideally led by a trained moderator; may be performed as a peer review; pre-meeting preparation; involves peers and technical experts.
Main purpose: discuss, make decisions, find defects, solve technical problems and check conformance to specifications and standards.
Inspection: led by a trained moderator (not the author); usually peer examination; defined roles; includes metrics; formal process; pre-meeting preparation; formal follow-up process.
Main purpose: find defects.
Note: walkthroughs, technical reviews and inspections can be performed within a peer group, i.e. colleagues at the same organizational level. This type of review is called a "peer review".



IV) Success factors for reviews
Each review has a clear predefined objective.
The right people for the review objectives are involved.
Defects found are welcomed, and expressed objectively.
People issues and psychological aspects are dealt with (e.g. making it a positive experience for
the author).
Review techniques are applied that are suitable to the type and level of software work
products and reviewers.
Checklists or roles are used if appropriate to increase effectiveness of defect identification.
Training is given in review techniques, especially the more formal techniques, such as
inspection.
Management supports a good review process (e.g. by incorporating adequate time for review
activities in project schedules). There is an emphasis on learning and process improvement.

V) Cyclomatic Complexity
The number of independent paths through a program
Cyclomatic Complexity is defined as: L – N + 2P
L = the number of edges/links in a graph
N = the number of nodes in a graph
P = the number of disconnected parts of the graph (connected components)
Alternatively, one may calculate cyclomatic complexity using the decision-point rule: number of decision points + 1.
Cyclomatic Complexity and risk evaluation:
1 to 10: a simple program, without very much risk
11 to 20: a complex program, moderate risk
21 to 50: a more complex program, high risk
> 50: an un-testable program (very high risk)
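A small worked example, using an invented function, of the two calculation rules above:

```python
# Invented example for both cyclomatic complexity rules described above.
def classify(score, bonus):
    if score > 50:        # decision point 1
        score += bonus
    if score > 90:        # decision point 2
        return "high"
    return "normal"

# Decision-point rule: 2 decision points + 1 = 3.
# Graph rule (one way of drawing the control flow graph of classify):
#   N = 6 nodes, L = 7 edges, P = 1 connected part
#   Cyclomatic Complexity = L - N + 2P = 7 - 6 + 2 = 3
```
With a value of 3 the function falls into the "1 to 10, simple program" band of the risk table above.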


Questions
1) Which of the following statements is NOT true:
a) inspection is the most formal review process
b) inspections should be led by a trained leader
c) managers can perform inspections on management documents
d) inspection is appropriate even when there are no written documents


2) Which expression best matches the following characteristics or review processes:
1. led by author
2. undocumented
3. no management participation
4. led by a trained moderator or leader
5. uses entry exit criteria
s) inspection
t) peer review
u) informal review
v) walkthrough
a) s = 4, t = 3, u = 2 and 5, v = 1
b) s = 4 and 5, t = 3, u = 2, v = 1
c) s = 1 and 5, t = 3, u = 2, v = 4
d) s = 5, t = 4, u = 3, v = 1 and 2


3) Could reviews or inspections be considered part of testing:
a) No, because they apply to development documentation
b) No, because they are normally applied before testing
c) No, because they do not apply to the test documentation
d) Yes, because both help detect faults and improve quality


4) In a review meeting a moderator is a person who
a. Takes minutes of the meeting
b. Mediates among people
c. Takes telephone calls
d. writes the documents to be reviewed


5) Which of the following statements about reviews is true?
a) Reviews cannot be performed on user requirements specifications.
b) Reviews are the least effective way of testing code.
c) Reviews are unlikely to find faults in test plans.
d) Reviews should be performed on specifications, code, and test plans.


6) What is the main difference between a walkthrough and an inspection?
a) An inspection is led by the author, whilst a walkthrough is led by a trained moderator.
b) An inspection has a trained leader, whilst a walkthrough has no leader.
c) Authors are not present during inspections, whilst they are during walkthroughs.
d) A walkthrough is led by the author, whilst an inspection is led by a trained moderator.


7) Which of the following is a static test?
a. code inspection
b. coverage analysis
c. usability assessment
d. installation test




8) Who is responsible for documenting all the issues, problems and open points that were identified during the review meeting?
A. Moderator
B. Scribe
C. Reviewers
D. Author


9) What is the main purpose of Informal review?
A. Inexpensive way to get some benefit
B. Find defects
C. Learning, gaining understanding, defect finding
D. Discuss, make decisions and solve technical problems


10) Which of the following is not a static testing technique?
a. Error guessing
b. Walkthrough
c. Data flow analysis
d. Inspections


11) Inspections can find all the following except
a. Variables not defined in the code
b. Spelling and grammar faults in the documents
c. Requirements that have been omitted from the design documents
d. How much of the code has been covered


12) Which of the following artifacts can be examined by using review techniques?
a) Software code
b) Requirements specification
c) Test designs
d) All of the above


13) Which is not a type of review?
a) Walkthrough
b) Inspection
c) Management approval
d) Informal review


14) Which of the following statements about early test design are true and which are false?
1. Defects found during early test design are more expensive to fix
2. Early test design can find defects
3. Early test design can cause changes to the requirements
4. Early test design can take more effort
a) 1 and 3 are true. 2 and 4 are false.
b) 2 is true. 1, 3 and 4 are false.
c) 2 and 3 are true. 1 and 4 are false.
d) 2, 3, and 4 are true. 1 is false


15) Static code analysis typically identifies all but one of the following problems. Which is it?
a) Unreachable code
b) Faults in requirements
c) Undeclared variables
d) Too few comments


16) What is the best description of static analysis?
a) The analysis of batch programs
b) The reviewing of test plans
c) The analysis of program code or other software artifacts
d) The use of black-box testing


17) What is the most important factor for the successful performance of a review?
a) A separate scribe during the logging meeting
b) Trained participants and review leaders
c) The availability of tools to support the review process
d) A reviewed test plan


18) Code Walkthrough is
a. type of dynamic testing
b. type of static testing
c. neither dynamic nor static
d. performed by the testing team


19) Static Analysis
a. same as static testing
b. done by the developers
c. both a and b
d. none of the above


20) Which review is inexpensive?
a. Informal Review
b. Walkthrough
c. Technical review
d. Inspection


21) Who should have a technical or business background?
a. Moderator
b. Author
c. Reviewer
d. Recorder


22) The person who leads the review of the document(s), planning the review, running the
meeting and follow-up after the meeting
a. Reviewer
b. Author
c. Moderator
d. Auditor


23) Peer Reviews are also called as:
a) Inspection
b) Walkthrough
c) Technical Review
d) Formal Review


24) The Kick Off phase of a formal review includes the following
a) Explaining the objective
b) Fixing defects found typically done by author
c) Follow up
d) Individual Meeting preparations

25) Success Factors for a review include:
i. Each Review does not have a predefined objective
ii. Defects found are welcomed and expressed objectively
iii. Management supports a good review process.
iv. There is an emphasis on learning and process improvement.
a) ii, iii, iv are correct and i is incorrect
b) iii , i , iv is correct and ii is incorrect
c) i , iii , iv are correct and ii is in correct
d) i, ii are correct and iii, iv are incorrect


26) Why is static testing described as complementary to dynamic testing?
a) Because they share the aim of identifying defects and finds the same types of defect.
b) Because they have different aims and differ in the types of defect they find.
c) Because they have different aims but find the same types of defect.
d) Because they share the aim of identifying defects but differ in the types of defect they find.


27) Which of the following statements regarding static testing is false?
a) Static testing requires the running of tests through the code
b) Static testing includes desk checking
c) Static testing includes techniques such as reviews and inspections
d) Static testing can give measurements such as cyclomatic complexity


28) Which of the following is true about Formal Review or Inspection?
i. Led by Trained Moderator (not the author).
ii. No Pre Meeting Preparations
iii. Formal Follow up process.
iv. Main objective is to find defects
a) ii is true and i, iii, iv are false
b) i, iii, iv are true and ii is false
c) i, iii, iv are false and ii is true
d) iii is true and i, ii, iv are false


29) The phases of the formal review process are listed below; arrange them in the correct order.
i. Planning ii. Review Meeting
iii. Rework iv. Individual Preparations
v. Kick Off vi. Follow up
a) i,ii,iii,iv,v,vi
b) vi,i,ii,iii,iv,v
c) i,v,iv,ii,iii,vi
d) i,ii,iii,v,iv,vi


30) Which of the following is Key Characteristics of Walk Through?
a) Scenario, Dry Run, Peer Group
b) Pre Meeting Preparations
c) Formal Follow up Process
d) Includes Metrics




Chapter 4: Test Design Techniques – Modules
International Software Testing Qualifications Board, test design techniques, categories: specification-based and white-box/black-box techniques, BVA and ECP, decision table testing, state transition testing, and use case testing. Test case specification, test design, test execution schedule, test procedure specification, test script, traceability. For more details on testing and related concepts, please refer to the main chapter of Software Testing Theory Part 1.


4.1 The test development process
Test case specification, test design, test execution schedule, test procedure specification, test
script, traceability.

4.2 Categories of test design techniques
Black-box test design technique, specification-based test design technique, white-box test
design technique, structure-based test design technique, experience-based test design
technique.

4.3 Specification-based or black box techniques
Boundary value analysis, decision table testing, equivalence partitioning, state transition
testing, use case testing.

4.4 Structure-based or white box techniques
Code coverage, decision coverage, statement coverage, structure-based testing.

4.5 Experience-based techniques
Exploratory testing, fault attack.

4.6 Choosing test techniques
No specific terms.


Test Design Techniques
- Specification-based/Black-box techniques
- Structure-based/White-box techniques
- Experience-based techniques


I) Specification-based/Black-box techniques
Equivalence partitioning
Boundary value analysis
Decision table testing
State transition testing
Use case testing

Equivalence partitioning
o Inputs to the software or system are divided into groups that are expected to exhibit similar behaviour
o Equivalence partitions or classes can be found for both valid data and invalid data
o Partitions can also be identified for outputs, internal values, time-related values and for interface values.
o Equivalence partitioning is applicable at all levels of testing
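For example (an invented requirement, not from the syllabus): if an order quantity field must accept 1 to 100 inclusive, three partitions can be derived, and one representative value per partition is enough to cover them.

```python
# Hypothetical requirement: order quantity must be between 1 and 100 inclusive.
def quantity_is_valid(quantity):
    return 1 <= quantity <= 100

# One representative test value per equivalence partition.
partitions = [
    ("invalid: below range", 0, False),
    ("valid: within range", 50, True),
    ("invalid: above range", 101, False),
]

for name, value, expected in partitions:
    assert quantity_is_valid(value) == expected, name
print("one value per partition executed")
```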


Boundary value analysis
o Behavior at the edge of each equivalence partition is more likely to be incorrect. The
maximum and minimum values of a partition are its boundary values.
o A boundary value for a valid partition is a valid boundary value; the boundary of an invalid
partition is an invalid boundary value.
o Boundary value analysis can be applied at all test levels
o It is relatively easy to apply and its defect-finding capability is high
o This technique is often considered as an extension of equivalence partitioning.
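Continuing the same invented 1 to 100 example, boundary value analysis adds the values at each edge of the valid partition and the values just outside it:

```python
# Boundary values for the hypothetical valid partition 1..100:
# 1 and 100 are valid boundary values, 0 and 101 are invalid boundary values.
boundary_cases = [(0, False), (1, True), (100, True), (101, False)]

for value, expected in boundary_cases:
    assert (1 <= value <= 100) == expected, value
print("boundary values executed")
```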


Decision table testing
o In decision table testing, test cases are designed to execute combinations of inputs
o Decision tables are a good way to capture system requirements that contain logical conditions
o The decision table contains triggering conditions, often combinations of true and false, for all input conditions
o It may be applied to all situations where the action of the software depends on several logical decisions
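A small, invented decision table for a loan-approval rule, with the conditions employed and good_credit and the action approve; each column (rule) of the table becomes one test case:

```python
# Invented decision table: conditions (employed, good_credit) -> action (approve).
decision_table = {
    (True,  True):  True,    # rule 1: approve
    (True,  False): False,   # rule 2: reject
    (False, True):  False,   # rule 3: reject
    (False, False): False,   # rule 4: reject
}


def approve_loan(employed, good_credit):
    """Stand-in for the software under test."""
    return employed and good_credit


# One test case per rule of the decision table.
for (employed, good_credit), expected in decision_table.items():
    assert approve_loan(employed, good_credit) == expected
print("all decision table rules executed")
```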


State transition testing
o In state transition testing test cases are designed to execute valid and invalid state
transitions
o A system may exhibit a different response depending on current conditions or previous history. In this case, that aspect of the system can be shown as a state transition diagram.
o State transition testing is much used in embedded software and technical automation.
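A minimal, invented example: a document workflow with the states draft, review and published, tested for one valid and one invalid transition.

```python
# Invented state machine: valid (state, event) -> next state transitions.
VALID_TRANSITIONS = {
    ("draft", "submit"): "review",
    ("review", "approve"): "published",
    ("review", "reject"): "draft",
}


def next_state(state, event):
    if (state, event) not in VALID_TRANSITIONS:
        raise ValueError(f"invalid transition: event {event!r} in state {state!r}")
    return VALID_TRANSITIONS[(state, event)]


# Valid transition test case.
assert next_state("draft", "submit") == "review"

# Invalid transition test case: approving a draft should be rejected.
try:
    next_state("draft", "approve")
except ValueError:
    print("invalid transition correctly rejected")
```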


Use case testing
o In use case testing test cases are designed to execute user scenarios
o A use case describes interactions between actors, including users and the system
o Each use case has preconditions, which need to be met for a use case to work successfully.
o A use case usually has a mainstream scenario and sometimes alternative branches.
o Use cases, often referred to as scenarios, are very useful for designing acceptance tests with
customer/user participation


II) Structure-based/White-box techniques
o Statement testing and coverage
o Decision testing and coverage
o Other structure-based techniques:
  - condition coverage
  - multi condition coverage
Statement testing and coverage:
Statement
An entity in a programming language, which is typically the smallest indivisible unit of
execution
Statement coverage
The percentage of executable statements that have been exercised by a test suite
Statement testing
A white box test design technique in which test cases are designed to execute statements


Decision testing and coverage
Decision
A program point at which the control flow has two or more alternative routes
A node with two or more links to separate branches
Decision Coverage
The percentage of decision outcomes that have been exercised by a test suite
100% decision coverage implies both 100% branch coverage and 100% statement coverage
Decision testing
A white box test design technique in which test cases are designed to execute decision
outcomes.
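An invented illustration of the difference between the two coverage measures: a single test of grant_bonus(10) executes every statement but only the True outcome of the one decision, so statement coverage is 100% while decision coverage is only 50% until a second test exercises the False outcome.

```python
# Invented example: statement coverage versus decision coverage.
def grant_bonus(years_of_service):
    bonus = 0
    if years_of_service > 5:   # the only decision in the function
        bonus = 100
    return bonus


# Test 1: executes every statement -> 100% statement coverage,
# but only the True outcome of the decision -> 50% decision coverage.
assert grant_bonus(10) == 100

# Test 2: exercises the False outcome -> decision coverage now 100%.
assert grant_bonus(2) == 0
```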
Other structure-based techniques

Condition
A logical expression that can be evaluated as true or false
Condition coverage
The percentage of condition outcomes that have been exercised by a test suite
Condition testing
A white box test design technique in which test cases are designed to execute condition
outcomes
Multiple condition testing
A white box test design technique in which test cases are designed to execute combinations of
single condition outcomes


III) Experience-based techniques
o Error guessing
o Exploratory testing
Error guessing
o Error guessing is a commonly used experience-based technique
o Generally testers anticipate defects based on experience, these defects list can be built based
on experience, available defect data, and from common knowledge about why software fails.
Exploratory testing
o Exploratory testing is concurrent test design, test execution, test logging and learning, based on a test charter containing test objectives, and carried out within time boxes
o It is an approach that is most useful where there are few or inadequate specifications and severe time pressure


Questions
1) Order numbers on a stock control system can range between 10000 and 99999 inclusive.
Which of the following inputs might be a result of designing tests for only valid equivalence
classes and valid boundaries:
a) 1000, 5000, 99999
b) 9999, 50000, 100000
c) 10000, 50000, 99999
d) 10000, 99999
e) 9999, 10000, 50000, 99999, 100000


2) Which of the following is NOT a black box technique:
a) Equivalence partitioning
b) State transition testing
c) Syntax testing
d) Boundary value analysis


3) Error guessing is best used
a) As the first approach to deriving test cases
b) After more formal techniques have been applied
c) By inexperienced testers
d) After the system has gone live
e) Only by end users


4) Which of the following is NOT true? The black box tester
a. should be able to understand a functional specification or requirements document
b. should be able to understand the source code.
c. is highly motivated to find faults
d. is creative to find the system’s weaknesses.



5) A test design technique is
a. a process for selecting test cases
b. a process for determining expected outputs
c. a way to measure the quality of software
d. a way to measure in a test plan what has to be done


6) Which of the following is true?
a. Component testing should be black box, system testing should be white box.
b. if you find a lot of bugs in testing, you should not be very confident about the quality of the
software
c. the fewer bugs you find, the better your testing was
d. the more tests you run, the more bugs you will find.


7) What is the important criterion in deciding what testing technique to use?
a. how well you know a particular technique
b. the objective of the test
c. how appropriate the technique is for testing the application
d. whether there is a tool to support the technique


8) Which of the following is a black box design technique?
a. statement testing
b. equivalence partitioning
c. error- guessing
d. usability testing


9) A program validates a numeric field as follows:
values less than 10 are rejected, values between 10 and 21 are accepted, values greater than or
equal to 22 are rejected
Which of the following input values cover all of the equivalence partitions?
a. 10, 11, 21
b. 3, 20, 21
c. 3, 10, 22
d. 10, 21, 22


10) Using the same specifications as question 9, which of the following covers the MOST
boundary values?
a. 9,10,11,22
b. 9,10,21,22
c. 10,11,21,22
d. 10,11,20,21


11) Error guessing:
a) supplements formal test design techniques.

b) can only be used in component, integration and system testing.
c) is only performed in user acceptance testing.
d) is not repeatable and should not be used.

12) Which of the following is NOT a white box technique?
a) Statement testing
b) Path testing
c) Data flow testing
d) State transition testing


13) Data flow analysis studies:
a) possible communications bottlenecks in a program.
b) the rate of change of data values as a program executes.
c) the use of data on paths through the code.
d) the intrinsic complexity of the code.


14) In a system designed to work out the tax to be paid:
An employee has £4000 of salary tax free. The next £1500 is taxed at 10%
The next £28000 is taxed at 22%
Any further amount is taxed at 40%
Which of these groups of numbers would fall into the same equivalence class?
a) £4800; £14000; £28000
b) £5200; £5500; £28000
c) £28001; £32000; £35000
d) £5800; £28000; £32000


15) Test cases are designed during:
a) test recording.
b) test planning.
c) test configuration.
d) test specification.


16) An input field takes the year of birth between 1900 and 2004
The boundary values for testing this field are
a. 0,1900,2004,2005
b. 1900, 2004
c. 1899,1900,2004,2005
d. 1899, 1900, 1901,2003,2004,2005


17) Boundary value testing
a. Is the same as equivalence partitioning tests
b. Tests boundary conditions on, below and above the edges of input and output equivalence
classes
c. Tests combinations of input circumstances
d. Is used in white box testing strategy
18) When testing a grade calculation system, a tester determines that all scores from 90 to 100
will yield a grade of A, but scores below 90 will not. This analysis is known as:
a) Equivalence partitioning
b) Boundary value analysis
c) Decision table
d) Hybrid analysis


19) Which technique can be used to achieve input and output coverage? It can be applied to
human input, input via interfaces to a system, or interface parameters in integration testing.
a) Error Guessing
b) Boundary Value Analysis
c) Decision Table testing
d) Equivalence partitioning


20) Features to be tested, approach, item pass/fail criteria and test deliverables should be
specified in which document?
a) Test case specification
b) Test procedure specification
c) Test plan
d) Test design specification


21) Which specification-based testing techniques are most closely related to each other?
a) Decision tables and state transition testing
b) Equivalence partitioning and state transition testing
c) Decision tables and boundary value analysis
d) Equivalence partitioning and boundary value analysis


22) Assume postal rates for ‘light letters’ are:
$0.25 up to 10 grams
$0.35 up to 50 grams
$0.45 up to 75 grams
$0.55 up to 100 grams
Which test inputs (in grams) would be selected using boundary value analysis?
a) 0, 9, 19, 49, 50, 74, 75, 99, 100
b) 10, 50, 75, 100, 250, 1000
c) 0, 1, 10, 11, 50, 51, 75, 76, 100, 101
d) 25, 26, 35, 36, 45, 46, 55, 56


23) If the temperature falls below 18 degrees, the heating system is switched on. When the
temperature reaches 21 degrees, the heating system is switched off. What is the minimum set
of test input values to cover all valid equivalence partitions?
a) 15, 19 and 25 degrees

b) 17, 18, 20 and 21 degrees
c) 18, 20 and 22 degrees
d) 16 and 26 degrees


24) What is a test condition?
a) An input, expected outcome, precondition and post condition
b) The steps to be taken to get the system to a given point
c) Something that can be tested
d) A specific state of the software, e.g. before a test can be run


25) What is a key characteristic of specification-based testing techniques?
a) Tests are derived from information about how the software is constructed
b) Tests are derived from models (formal or informal) that specify the problem to be solved by
the software or its components
c) Tests are derived based on the skills and experience of the tester
d) Tests are derived from the extent of the coverage of structural elements of the system or
components


26) Why are both specification-based and structure-based testing techniques useful?
a) They find different types of defect.
b) using more techniques is always better
c) both find the same types of defect.
d) Because specifications tend to be unstructured


27) Find the equivalence class for the following test case:
Enter a number to test the validity of accepting numbers between 1 and 99
a) All numbers < 1
b) All numbers > 99
c) Number = 0
d) All numbers between 1 and 99


28) What is the relationship between equivalence partitioning and boundary
value analysis techniques?
a) Structural testing
b) Opaque testing
c) Compatibility testing
d) All of the above


29) Suggest an alternative for requirement traceability matrix
a) Test Coverage matrix
b) Average defect aging
c) Test Effectiveness
d) Error discovery rate

30) The following defines the statement of what the tester is expected to accomplish or
validate during testing activity
a) Test scope
b) Test objective
c) Test environment
d) None of the above


31) One technique of Black Box testing is Equivalence Partitioning. In a program
statement that accepts only one choice from among 10 possible choices,
numbered 1 through 10, the middle partition would be from _____ to _____
a) 4 to 6
b) 0 to 10
c) 1 to 10
d) None of the above


32) Test design mainly emphasizes all the following except
a) Data planning
b) Test procedures planning
c) Mapping the requirements and test cases
d) Data synchronization

33) Deliverables of test design phase include all the following except
a) Test data
b) Test data plan
c) Test summary report
d) Test procedure plan


34) Test data planning essentially includes
a) Network
b) Operational Model
c) Boundary value analysis
d) Test Procedure Planning


35) Test coverage analysis is the process of
a) Creating additional test cases to increase coverage
b) Finding areas of program exercised by the test cases
c) Determining a quantitative measure of code coverage, which is a
direct measure of quality.
d) All of the above.


36) Branch Coverage
a) another name for decision coverage
b) another name for all-edges coverage

c) another name for basic path coverage
d) all the above


37) The following example is a
if (condition1 && (condition2 || function1()))
statement1;
else
statement2;
a) Decision coverage
b) Condition coverage
c) Statement coverage
d) Path Coverage


38) Test cases need to be written for
a) invalid and unexpected conditions
b) valid and expected conditions
c) both a and b
d) none of these


39) Path coverage includes
a) statement coverage
b) condition coverage
c) decision coverage
d) none of these


40) The benefits of glass box testing are
a) Focused Testing, Testing coverage, control flow
b) Data integrity, Internal boundaries, algorithm specific testing
c) Both a and b
d) Either a or b


41) Find the invalid equivalence class for the following test case
Draw a line up to the length of 4 inches
a) Line with 1 dot-width
b) Curve
c) line with 4 inches
d) line with 1 inch.


42) Error seeding
a) Evaluates the thoroughness with which a computer program is tested by purposely
inserting errors into a supposedly correct program.
b) Errors inserted by the developers intentionally to make the system
malfunctioning.
c) for identifying existing errors

d) Both a and b


43) Which of the following best describes the difference between clear
box and opaque box?
1. Clear box is structural testing, opaque box is Ad-hoc testing
2. Clear box is done by tester, and opaque box is done by developer
3. Opaque box is functional testing, clear box is exploratory testing
a) 1
b) 1 and 3
c) 2
d) 3


44) What is the concept of introducing a small change to the program and having the effects of
that change show up in some test?
a) Desk checking
b) Debugging a program
c) A mutation error
d) Introducing mutation


45) How many test cases are necessary to cover all the possible sequences of statements
(paths) for the following program fragment? Assume that the two conditions are independent
of each other : - …………
if (Condition 1)
then statement 1
else statement 2
fi
if (Condition 2)
then statement 3
fi
…………
a. 1 test case
b. 3 Test Cases
c. 4 Test Cases
d. Not achievable


46) Given the following code, which is true about the minimum number of test cases required
for full statement and branch coverage:
Read P
Read Q
IF P+Q > 100 THEN
Print “Large”
ENDIF
If P > 50 THEN
Print “P Large”
ENDIF

a) 1 test for statement coverage, 3 for branch coverage
b) 1 test for statement coverage, 2 for branch coverage
c) 1 test for statement coverage, 1 for branch coverage
d) 2 tests for statement coverage, 3 for branch coverage
e) 2 tests for statement coverage, 2 for branch coverage

47) Given the following, which is true about the number of tests required for statement and
branch coverage?
Switch PC on
Start “outlook”
IF outlook appears THEN
Send an email
Close outlook
a) 1 test for statement coverage, 1 for branch coverage
b) 1 test for statement coverage, 2 for branch coverage
c) 1 test for statement coverage. 3 for branch coverage
d) 2 tests for statement coverage, 2 for branch coverage
e) 2 tests for statement coverage, 3 for branch coverage


48) A candidate is given an exam of 40 questions, must score 25 marks to pass (61%) and must
score 80% for distinction. Which of the following sets of values falls into one equivalence class?
A. 23, 24, 25
B. 0, 12, 25
C. 30, 36, 39
D. 32, 37, 40


49) Consider the following statements:
i. 100% statement coverage guarantees 100% branch coverage.
ii. 100% branch coverage guarantees 100% statement coverage.
iii. 100% branch coverage guarantees 100% decision coverage.
iv. 100% decision coverage guarantees 100% branch coverage.
v. 100% statement coverage guarantees 100% decision coverage.
a) ii is True; i, iii, iv & v are False
b) i & v are True; ii, iii & iv are False
c) ii & iii are True; i, iv & v are False
d) ii, iii & iv are True; i & v are False


50) Which statement about expected outcomes is FALSE?
a) Expected outcomes are defined by the software's behavior
b) Expected outcomes are derived from a specification, not from the code
c) Expected outcomes should be predicted before a test is run
d) Expected outcomes may include timing constraints such as response times


51) Which of the following is not a white box testing?
a) Random testing

b) Data Flow testing
c) Statement testing
d) Syntax testing


52) If the pseudo code below were a programming language, how many tests are required to
achieve 100% statement coverage?
1. If x=3 then
2. Display_messageX;
3. If y=2 then
4. Display_messageY;
5. Else
6. Display_messageZ;
a. 1
b. 2
c. 3
d. 4


53) Using the same code example as question 52, how many tests are required to achieve 100%
branch/decision coverage?
a. 1
b. 2
c. 3
d. 4


54) Which of the following technique is NOT a black box technique?
a) Equivalence partitioning
b) State transition testing
c) LCSAJ
d) Syntax testing


55) Given the following code, which is true?
IF A>B THEN
C=A–B
ELSE
C=A+B
ENDIF
Read D
IF C = D THEN
Print “Error”
ENDIF
a) 1 test for statement coverage, 1 for branch coverage
b) 2 tests for statement coverage, 2 for branch coverage
c) 2 tests for statement coverage, 3 for branch coverage

d) 3 tests for statement coverage, 3 for branch coverage
e) 3 tests for statement coverage, 2 for branch coverage


56) Consider the following (SC = number of tests for statement coverage, DC = number of tests
for decision coverage):
Pick up and read the newspaper
Look at what is on television
If there is a program that you are interested in watching then switch the television on and
watch the program
Otherwise
Continue reading the newspaper
If there is a crossword in the newspaper then try and complete the crossword
a) SC = 1 and DC = 3
b) SC = 1 and DC = 2
c) SC = 2 and DC = 2
d) SC = 2 and DC = 3


57) The specification: an integer field shall contain values from and including 1 to and
including 12 (number of the month)
Which equivalence class partitioning is correct?
a) Less than 1, 1 through 12, larger than 12
b) Less than 1, 1 through 11, larger than 12
c) Less than 0, 1 through 12, larger than 12
d) Less than 1, 1 through 11, and above


58) Analyze the following highly simplified procedure:
Ask: “What type of ticket do you require, single or return?”
IF the customer wants ‘return’
     Ask: “What rate, Standard or Cheap-day?”
     IF the customer replies ‘Cheap-day’
           Say: “That will be £11:20”
     ELSE
           Say: “That will be £19:50”
     ENDIF
 ELSE
     Say: “That will be £9:75”
 ENDIF

Now decide the minimum number of tests that are needed to ensure that all the questions
have been asked, all combinations have occurred and all replies given.
a) 3
b) 4
c) 5
d) 6


Chapter 5: Test organization and independence

International Software Testing Qualifications Board, test management, test reports,
testers, test leader, test manager, test approach, defect density, failure rate,
configuration management, version control, risk, product risk, project risk,
risk-based testing, incident logging, incident management. Please refer to the earlier
version of Software testing theory part 1.


The effectiveness of finding defects by testing and reviews can be improved by
using independent testers. The available options for the independence of testing teams are:


     No independent testers. Developers test their own code.
     Independent testers within the development teams.
     Independent test team or group within the organization, reporting to project
      management or executive management
     Independent testers from the business organization or user community.
     Independent test specialists for specific test targets such as usability testers, security
      testers or certification testers (who certify a software product against standards and
      regulations).
     Independent testers outsourced or external to the organization.


The benefits of independence include:
Independent testers see other and different defects, and are unbiased.
An independent tester can verify assumptions people made during specification and
implementation of the system.
Drawbacks include:
Isolation from the development team (if treated as totally independent).
Independent testers may be the bottleneck as the last checkpoint.
Developers may lose a sense of responsibility for quality.


b) Tasks of the test leader and tester
Test leader tasks may include:
     Coordinate the test strategy and plan with project managers and others.
     Write or review a test strategy for the project, and test policy for the organization.
     Contribute the testing perspective to other project activities, such as integration
      planning.
 Plan the tests – considering the context and understanding the test objectives and risks –
       including selecting test approaches, estimating the time, effort and cost of testing,
      acquiring resources, defining test levels, cycles, and planning incident management.
     Initiate the specification, preparation, implementation and execution of tests, monitor
      the test results and check the exit criteria.
     Adapt planning based on test results and progress (sometimes documented in status
      reports) and take any action necessary to compensate for problems.
     Set up adequate configuration management of testware for traceability.
     Introduce suitable metrics for measuring test progress and evaluating the quality of the
      testing and the product.
     Decide what should be automated, to what degree, and how.
     Select tools to support testing and organize any training in tool use for testers.
     Decide about the implementation of the test environment.
     Write test summary reports based on the information gathered during testing.


Tester tasks may include:
     Review and contribute to test plans.
     Analyze, review and assess user requirements, specifications and models for testability.
     Create test specifications.
     Set up the test environment (often coordinating with system administration and
      network management).
     Prepare and acquire test data.
     Implement tests on all test levels, execute and log the tests, evaluate the results and
      document the deviations from expected results.
     Use test administration or management tools and test monitoring tools as required.
     Automate tests (may be supported by a developer or a test automation expert).
     Measure performance of components and systems (if applicable).
     Review tests developed by others.
Note: People who work on test analysis, test design, specific test types or test automation
may be specialists in these roles. Depending on the test level and the risks related to the
product and the project, different people may take over the role of tester, keeping some degree
of independence. Typically testers at the component and integration level would be
developers; testers at the acceptance test level would be business experts and users, and
testers for operational acceptance testing would be operators.


c) Defining skills test staff need
Nowadays a testing professional must have ‘application’ or ‘business domain’ knowledge and
‘technology’ expertise in addition to testing skills.



2) Test planning and estimation
a) Test planning activities
     Determining the scope and risks, and identifying the objectives of testing.
     Defining the overall approach of testing (the test strategy), including the definition of
      the test levels and entry and exit criteria.
     Integrating and coordinating the testing activities into the software life cycle activities:
      acquisition, supply, development, operation and maintenance.
     Making decisions about what to test, what roles will perform the test activities, how the
      test activities should be done, and how the test results will be evaluated.
     Scheduling test analysis and design activities.
     Scheduling test implementation, execution and evaluation.
     Assigning resources for the different activities defined.
     Defining the amount, level of detail, structure and templates for the test
      documentation.
     Selecting metrics for monitoring and controlling test preparation and execution, defect
      resolution and risk issues.
     Setting the level of detail for test procedures in order to provide enough information to
      support reproducible test preparation and execution.


b) Exit criteria
The purpose of exit criteria is to define when to stop testing, such as at the end of a test level
or when a set of tests has achieved a specific goal.
Typically exit criteria may consist of:
     Thoroughness measures, such as coverage of code, functionality or risk.
     Estimates of defect density or reliability measures.
     Cost.
     Residual risks, such as defects not fixed or lack of test coverage in certain areas.
     Schedules such as those based on time to market.


c) Test estimation
Two approaches for the estimation of test effort are covered in this syllabus:
 The metrics-based approach: estimating the testing effort based on metrics of former or
       similar projects, or based on typical values (a sketch follows this list).
     The expert-based approach: estimating the tasks by the owner of these tasks or by
      experts.
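A minimal sketch of the metrics-based approach, assuming figures recorded on a hypothetical former project (all numbers below are invented for illustration):

# Metrics from a former, similar project (hypothetical figures).
previous_test_cases = 400
previous_effort_hours = 600                                          # effort actually spent

hours_per_test_case = previous_effort_hours / previous_test_cases    # 1.5 hours per test case

# Estimate for the new project, scaled by its expected number of test cases.
new_test_cases = 250
estimated_effort_hours = new_test_cases * hours_per_test_case
print(f"Estimated test effort: {estimated_effort_hours:.0f} hours")  # about 375 hours

In practice such a figure is only a starting point; it is then adjusted for the product, process and people factors listed below.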
Once the test effort is estimated, resources can be identified and a schedule can be drawn up.
The testing effort may depend on a number of factors, including:

 Characteristics of the product: the quality of the specification and other information
      used for test models (i.e. the test basis), the size of the product, the complexity of the
      problem domain, the requirements for reliability and security, and the requirements for
      documentation.
     Characteristics of the development process: the stability of the organization, tools used,
      test process, skills of the people involved, and time pressure.
     The outcome of testing: the number of defects and the amount of rework required.


d) Test approaches (test strategies)
One way to classify test approaches or strategies is based on the point in time at which the
bulk of the test design work is begun:
     Preventative approaches, where tests are designed as early as possible.
     Reactive approaches, where test design comes after the software or system has been
      produced.
Typical approaches or strategies include:
     Analytical approaches, such as risk-based testing where testing is directed to areas of
      greatest risk
     Model-based approaches, such as stochastic testing using statistical information about
      failure rates (such as reliability growth models) or usage (such as operational profiles).
 Methodical approaches, such as failure-based (including error guessing and fault-
       attacks), experience-based, check-list based, and quality characteristic based.
     Process- or standard-compliant approaches, such as those specified by industry-
      specific standards or the various agile methodologies.
     Dynamic and heuristic approaches, such as exploratory testing where testing is more
      reactive to events than pre-planned, and where execution and evaluation are
      concurrent tasks.
     Consultative approaches, such as those where test coverage is driven primarily by the
      advice and guidance of technology and/or business domain experts outside the test
      team.
     Regression-averse approaches, such as those that include reuse of existing test
      material, extensive automation of functional regression tests, and standard test suites.
Different approaches may be combined, for example, a risk-based dynamic approach.


The selection of a test approach should consider the context, including:
     Risk of failure of the project, hazards to the product and risks of product failure to
      humans, the environment and the company.
     Skills and experience of the people in the proposed techniques, tools and methods.
     The objective of the testing endeavour and the mission of the testing team.
 Regulatory aspects, such as external and internal regulations for the development
       process.
     The nature of the product and the business.


3) Test progress monitoring and control
a) Test progress monitoring
Common metrics used for monitoring test preparation and execution include:


     Percentage of work done in test case preparation (or percentage of planned test cases
      prepared).
     Percentage of work done in test environment preparation.
     Test case execution (e.g. number of test cases run/not run, and test cases
      passed/failed).
     Defect information (e.g. defect density, defects found and fixed, failure rate, and retest
      results).
     Test coverage of requirements, risks or code.
     Subjective confidence of testers in the product.
     Dates of test milestones.
     Testing costs, including the cost compared to the benefit of finding the next defect or to
      run the next test.
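As a small illustration (all counts below are invented), several of these metrics can be derived directly from raw test-cycle counts:

# Hypothetical raw counts gathered during a test cycle.
planned_test_cases = 200
prepared_test_cases = 160
executed_test_cases = 120
passed_test_cases = 96
defects_found = 30
defects_fixed = 22

print(f"Preparation done: {prepared_test_cases / planned_test_cases:.0%}")  # 80%
print(f"Execution done:   {executed_test_cases / planned_test_cases:.0%}")  # 60%
print(f"Pass rate:        {passed_test_cases / executed_test_cases:.0%}")   # 80%
print(f"Open defects:     {defects_found - defects_fixed}")                 # 8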
b) Test reporting
Test reporting is concerned with summarizing information about the testing endeavour,
including:
     What happened during a period of testing, such as dates when exit criteria were met.
     Analyzed information and metrics to support recommendations and decisions about
      future actions, such as an assessment of defects remaining, the economic benefit of
      continued testing, outstanding risks, and the level of confidence in tested software.
Metrics should be collected during and at the end of a test level in order to assess:
     The adequacy of the test objectives for that test level.
     The adequacy of the test approaches taken.
     The effectiveness of the testing with respect to its objectives.


c) Test control
Test control describes any guiding or corrective actions taken as a result of information and
metrics gathered and reported. Actions may cover any test activity and may affect any other
software life cycle activity or task.

Examples of test control actions are:
     Making decisions based on information from test monitoring.
     Re-prioritize tests when an identified risk occurs (e.g. software delivered late).
     Change the test schedule due to availability of a test environment.

 Set an entry criterion requiring fixes to have been retested (confirmation tested) by a
      developer before accepting them into a build.


4) Configuration management
The purpose of configuration management is to establish and maintain the integrity of the
products (components, data and documentation) of the software or system through the
project and product life cycle.
For testing, configuration management may involve ensuring that:
     All items of testware are identified, version controlled, tracked for changes, related to
      each other and related to development items (test objects) so that traceability can be
      maintained throughout the test process.
     All identified documents and software items are referenced unambiguously in test
documentation.
For the tester, configuration management helps to uniquely identify (and to reproduce) the
tested item, test documents, the tests and the test harness.
During test planning, the configuration management procedures and infrastructure (tools)
should be chosen, documented and implemented.


5) Risk and testing
a) Project risks
Project risks are the risks that surround the project’s capability to deliver its objectives, such
as:
Organizational factors:
     skill and staff shortages;
     personal and training issues;
     political issues, such as
o problems with testers communicating their needs and test results;
o failure to follow up on information found in testing and reviews (e.g. not
improving development and testing practices).
     improper attitude toward or expectations of testing (e.g. not appreciating the value of
      finding defects during testing).
Technical issues:
     problems in defining the right requirements;
     the extent that requirements can be met given existing constraints;
     the quality of the design, code and tests.
Supplier issues:
     failure of a third party;

 contractual issues.
b) Product risks
Potential failure areas (adverse future events or hazards) in the software or system are known
as product risks, as they are a risk to the quality of the product, such as:
     Failure-prone software delivered.
     The potential that the software/hardware could cause harm to an individual or
      company.
     Poor software characteristics (e.g. functionality, reliability, usability and performance).
     Software that does not perform its intended functions.
Risks are used to decide where to start testing and where to test more; testing is used to
reduce the risk of an adverse effect occurring, or to reduce the impact of an adverse effect.
Product risks are a special type of risk to the success of a project. Testing as a risk-control
activity provides feedback about the residual risk by measuring the effectiveness of critical
defect removal and of contingency plans.
A risk-based approach to testing provides proactive opportunities to reduce the levels of
product risk, starting in the initial stages of a project. It involves the identification of product
risks and their use in guiding test planning and control, specification, preparation and
execution of tests. In a risk-based approach the risks identified may be used to:
     Determine the test techniques to be employed.
     Determine the extent of testing to be carried out.
     Prioritize testing in an attempt to find the critical defects as early as possible.
     Determine whether any non-testing activities could be employed to reduce risk (e.g.
      providing training to inexperienced designers).
Risk-based testing draws on the collective knowledge and insight of the project stakeholders
to determine the risks and the levels of testing required to address those risks.
To ensure that the chance of a product failure is minimized, risk management activities
provide a disciplined approach to:
     Assess (and reassess on a regular basis) what can go wrong (risks).
     Determine what risks are important to deal with.
     Implement actions to deal with those risks.
In addition, testing may support the identification of new risks, may help to determine what
risks should be reduced, and may lower uncertainty about risks.


6) Incident management
Since one of the objectives of testing is to find defects, the discrepancies between actual and
expected outcomes need to be logged as incidents. Incidents should be tracked from discovery
and classification to correction and confirmation of the solution. In order to manage all
incidents to completion, an organization should establish a process and rules for
classification.

Incidents may be raised during development, review, testing or use of a software product.
They may be raised for issues in code or the working system, or in any type of documentation
including requirements, development documents, test documents, and user information such
as “Help” or installation guides.


Incident reports have the following objectives:
     Provide developers and other parties with feedback about the problem to enable
      identification, isolation and correction as necessary.
     Provide test leaders a means of tracking the quality of the system under test and the
      progress of the testing.
     Provide ideas for test process improvement.


Details of the incident report may include:
     Date of issue, issuing organization, and author.
     Expected and actual results.
     Identification of the test item (configuration item) and environment.
     Software or system life cycle process in which the incident was observed.
     Description of the incident to enable reproduction and resolution, including logs,
      database dumps or screenshots.
     Scope or degree of impact on stakeholder(s) interests.
     Severity of the impact on the system.
     Urgency/priority to fix.
     Status of the incident (e.g. open, deferred, duplicate, waiting to be fixed, fixed awaiting
      retest, closed).
     Conclusions, recommendations and approvals.
     Global issues, such as other areas that may be affected by a change resulting from the
      incident.
     Change history, such as the sequence of actions taken by project team members with
      respect to the incident to isolate, repair, and confirm it as fixed.
 References, including the identity of the test case specification that revealed the problem.


Questions
1) The following list contains risks that have been identified for a software product to be
developed. Which of these risks is an example of a product risk?
a) Not enough qualified testers to complete the planned tests
b) Software delivery is behind schedule
c) Threat to a patient’s life
d) 3rd party supplier does not supply as stipulated

2) Which set of metrics can be used for monitoring of the test execution?
a) Number of detected defects, testing cost;
b) Number of residual defects in the test object.
c) Percentage of completed tasks in the preparation of test environment; test cases
prepared
d) Number of test cases run / not run; test cases passed / failed


3) A defect management system shall keep track of the status of every defect registered and
enforce the rules about changing these states. If your task is to test the status tracking, which
method would be best?
a) Logic-based testing
b) Use-case-based testing
c) State transition testing
d) Systematic testing according to the V-model

4) Why can a tester be dependent on configuration management?
a) Because configuration management assures that we know the exact version of the testware
and the test object
b) Because test execution is not allowed to proceed without the consent of the change control
board
c) Because changes in the test object are always subject to configuration management
d) Because configuration management assures the right configuration of the test tools


5) What test items should be put under configuration management?
a) The test object, the test material and the test environment
b) The problem reports and the test material
c) Only the test objects. The test cases need to be adapted during agile testing
d) The test object and the test material


6) Which of the following can be a root cause of a bug in a software product?
(I) The project had incomplete procedures for configuration management.
(II) The time schedule to develop a certain component was cut.
(III) The specification was unclear
(IV) Use of the code standard was not followed up
(V) The testers were not certified
a) (I) and (II) are correct
b) (I) through (IV) are correct
c) (III) through (V) are correct
d) (I), (II) and (IV) are correct


7) Which of the following is most often considered a component interface bug?

a) For two components exchanging data, one component used metric units, the other one
used British units
b) The system is difficult to use due to a too complicated terminal input structure
c) The messages for user input errors are misleading and not helpful for understanding the
input error cause
d) Under high load, the system does not provide enough open ports to connect to


8) Which of the following project inputs influence testing?
(I) contractual requirements
(II) Legal requirements
(III) Industry standards
(IV) Application risk
(V) Project size
a) (I) through (III) are correct
b) All alternatives are correct
c) (II) and (V) are correct
d) (I), (III) and (V) are correct


9) What is the purpose of test exit criteria in the test plan?
a) To specify when to stop the testing activity
b) To set the criteria used in generating test inputs
c) To ensure that the test case specification is complete
d) To know when a specific test has finished its execution


10) Which of the following items need not be given in an incident report?
a) The version number of the test object
b) Test data and used environment
c) Identification of the test case that failed
d) The location and instructions on how to correct the fault


11) Why is it necessary to define a Test Strategy?
a) As there are many different ways to test software, thought must be given to decide what will
be the most effective way to test the project on hand.
b) Starting testing without prior planning leads to chaotic and inefficient test project
c) A strategy is needed to inform the project management how the test team will schedule the
test-cycles
d) Software failure may cause loss of money, time, business reputation, and in extreme cases
injury and death. It is therefore critical to have a proper test strategy in place


12) IEEE 829 test plan documentation standard contains all of the following except:
a) test items
b) test deliverables
c) test tasks
d) test environment

e) test specification


13) Which of the following is NOT part of configuration management:
a) status accounting of configuration items
b) auditing conformance to ISO9001
c) identification of test versions
d) record of changes to documentation over time


14) What is the purpose of test completion criteria in a test plan:
a) to know when a specific test has finished its execution
b) to ensure that the test case specification is complete
c) to know when test planning is complete
d) to plan when to stop testing


15) Test managers should not:
a) report on deviations from the project plan
b) sign the system off for release
c) re-allocate resource to meet original plans
d) raise incidents on faults that they have found
e) provide information for risk analysis and quality improvement


16) What information need not be included in a test incident report:
a) how to fix the fault
b) how to reproduce the fault
c) test environment details
d) the actual and expected outcomes

17) Which of the following is NOT included in the Test Plan document of the Test
Documentation Standard:
a) Test items (i.e. software versions)
b) What is not to be tested
c) Test environments
d) Quality plans
e) Schedules and deadlines


18) Which of the following is NOT true of incidents?
a) Incident resolution is the responsibility of the author of the software under test.
b) Incidents may be raised against user requirements.
c) Incidents require investigation and/or correction.
d) Incidents are raised when expected and actual results differ.


19) Which of the following would NOT normally form part of a test plan?
a) Features to be tested

b) Incident reports
c) Risks
d) Schedule


20) A configuration management system would NOT normally provide:
a) linkage of customer requirements to version numbers.
b) Facilities to compare test results with expected results.
c) The precise differences in versions of software component source code.
d) Restricted access to the source code library.


21) Testware (test cases, test dataset)
a) needs configuration management just like requirements, design and code
b) should be newly constructed for each new version of the software
c) is needed only until the software is released into production or use
d) does not need to be documented and commented, as it does not form part of the released
software system


22) ‘Defect density’ is calculated in terms of
a) The number of defects identified in a component or system divided by the size of the
component or the system
b) The number of defects found by a test phase divided by the number found by that test
phase and any other means afterwards
c) The number of defects identified in the component or system divided by the number of
defects found by a test phase
d) The number of defects found by a test phase divided by the number found by the size of the
system


23) An expert-based test estimation technique is also known as
a) Narrow band Delphi
b) Wide band Delphi
c) Bespoke Delphi
d) Robust Delphi


24) During the testing of a module, tester ‘X’ finds a bug and assigns it to the developer, but the
developer rejects it, saying that it is not a bug. What should ‘X’ do?
a) Report the issue to the test manager and try to settle with the developer.
b) Retest the module and confirm the bug
c) Assign the same bug to another developer
d) Send the detailed information of the bug encountered and check its reproducibility


25) The primary goal of comparing a user manual with the actual behavior of the running
program during system testing is to
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB
Ôn tập kiến thức ISTQB

Mais conteúdo relacionado

Mais procurados

Mais procurados (16)

TESTING LIFE CYCLE PPT
TESTING LIFE CYCLE PPTTESTING LIFE CYCLE PPT
TESTING LIFE CYCLE PPT
 
Software Testing Introduction
Software Testing IntroductionSoftware Testing Introduction
Software Testing Introduction
 
Fundamentals of Testing 2
Fundamentals of Testing 2Fundamentals of Testing 2
Fundamentals of Testing 2
 
Transactionflow
TransactionflowTransactionflow
Transactionflow
 
Tlc
TlcTlc
Tlc
 
Food science and technology
Food science and technologyFood science and technology
Food science and technology
 
Istqb exam sample_paper_3
Istqb exam sample_paper_3Istqb exam sample_paper_3
Istqb exam sample_paper_3
 
ISTQB / ISEB Foundation Exam Practice - 6
ISTQB / ISEB Foundation Exam Practice - 6ISTQB / ISEB Foundation Exam Practice - 6
ISTQB / ISEB Foundation Exam Practice - 6
 
Test Execution
Test ExecutionTest Execution
Test Execution
 
Software Testing and Debugging
Software Testing and DebuggingSoftware Testing and Debugging
Software Testing and Debugging
 
Software testing and quality assurance
Software testing and quality assuranceSoftware testing and quality assurance
Software testing and quality assurance
 
software testing for beginners
software testing for beginnerssoftware testing for beginners
software testing for beginners
 
Unit 1 part 2
Unit 1 part 2Unit 1 part 2
Unit 1 part 2
 
Software testing
Software testingSoftware testing
Software testing
 
Software testing
Software testingSoftware testing
Software testing
 
Fundamentals of testing 1
Fundamentals of testing 1Fundamentals of testing 1
Fundamentals of testing 1
 

Destaque

Module 3, topic 1 notes
Module 3, topic 1 notesModule 3, topic 1 notes
Module 3, topic 1 notesAnnie cox
 
MedyaTEQ Toplu Mesajlaşma Servisleri
MedyaTEQ Toplu Mesajlaşma ServisleriMedyaTEQ Toplu Mesajlaşma Servisleri
MedyaTEQ Toplu Mesajlaşma ServisleriMedyaTEQ
 
MedyaTEQ Mobil Pazarlama
MedyaTEQ  Mobil PazarlamaMedyaTEQ  Mobil Pazarlama
MedyaTEQ Mobil PazarlamaMedyaTEQ
 
Module 4 topic 1
Module 4 topic 1Module 4 topic 1
Module 4 topic 1Annie cox
 
How to go weekly
How to go weeklyHow to go weekly
How to go weeklytevinallen
 
Module 4 topic 1 part 2 notes
Module 4 topic 1 part 2 notesModule 4 topic 1 part 2 notes
Module 4 topic 1 part 2 notesAnnie cox
 
Module 4 topic 3 similar problems
Module 4 topic 3 similar problemsModule 4 topic 3 similar problems
Module 4 topic 3 similar problemsAnnie cox
 
Wykonanie budżetu Powiatu Mieleckiego w 2009 roku
Wykonanie budżetu Powiatu Mieleckiego w 2009 rokuWykonanie budżetu Powiatu Mieleckiego w 2009 roku
Wykonanie budżetu Powiatu Mieleckiego w 2009 rokuguestbdfb7e
 
How to go weekly
How to go weeklyHow to go weekly
How to go weeklytevinallen
 
Hot telecom 9.30.09 report
Hot telecom 9.30.09 reportHot telecom 9.30.09 report
Hot telecom 9.30.09 reportguestb7da9bc2
 
Cuoc song tinh yeu tieng cuoi tech24.vn
Cuoc song tinh yeu tieng cuoi tech24.vnCuoc song tinh yeu tieng cuoi tech24.vn
Cuoc song tinh yeu tieng cuoi tech24.vnJenny Nguyen
 
MedyaTEQ Hedefli SMS
MedyaTEQ Hedefli SMSMedyaTEQ Hedefli SMS
MedyaTEQ Hedefli SMSMedyaTEQ
 
Moduel 5 topic 3
Moduel 5 topic 3Moduel 5 topic 3
Moduel 5 topic 3Annie cox
 
Module 4 topic 2
Module 4 topic 2Module 4 topic 2
Module 4 topic 2Annie cox
 
Question ISTQB foundation 3
Question ISTQB foundation 3Question ISTQB foundation 3
Question ISTQB foundation 3Jenny Nguyen
 
Promo juni oriflame
Promo juni oriflamePromo juni oriflame
Promo juni oriflameefitea
 
Topic 1 factoring gcf
Topic 1   factoring gcfTopic 1   factoring gcf
Topic 1 factoring gcfAnnie cox
 

Destaque (20)

Module 3, topic 1 notes
Module 3, topic 1 notesModule 3, topic 1 notes
Module 3, topic 1 notes
 
MedyaTEQ Toplu Mesajlaşma Servisleri
MedyaTEQ Toplu Mesajlaşma ServisleriMedyaTEQ Toplu Mesajlaşma Servisleri
MedyaTEQ Toplu Mesajlaşma Servisleri
 
MedyaTEQ Mobil Pazarlama
MedyaTEQ  Mobil PazarlamaMedyaTEQ  Mobil Pazarlama
MedyaTEQ Mobil Pazarlama
 
Module 4 topic 1
Module 4 topic 1Module 4 topic 1
Module 4 topic 1
 
How to go weekly
How to go weeklyHow to go weekly
How to go weekly
 
Module 4 topic 1 part 2 notes
Module 4 topic 1 part 2 notesModule 4 topic 1 part 2 notes
Module 4 topic 1 part 2 notes
 
Quick testforexam
Quick testforexamQuick testforexam
Quick testforexam
 
Module 4 topic 3 similar problems
Module 4 topic 3 similar problemsModule 4 topic 3 similar problems
Module 4 topic 3 similar problems
 
Wykonanie budżetu Powiatu Mieleckiego w 2009 roku
Wykonanie budżetu Powiatu Mieleckiego w 2009 rokuWykonanie budżetu Powiatu Mieleckiego w 2009 roku
Wykonanie budżetu Powiatu Mieleckiego w 2009 roku
 
Qtp tutorial
Qtp tutorialQtp tutorial
Qtp tutorial
 
How to go weekly
How to go weeklyHow to go weekly
How to go weekly
 
77
7777
77
 
Hot telecom 9.30.09 report
Hot telecom 9.30.09 reportHot telecom 9.30.09 report
Hot telecom 9.30.09 report
 
Cuoc song tinh yeu tieng cuoi tech24.vn
Cuoc song tinh yeu tieng cuoi tech24.vnCuoc song tinh yeu tieng cuoi tech24.vn
Cuoc song tinh yeu tieng cuoi tech24.vn
 
MedyaTEQ Hedefli SMS
MedyaTEQ Hedefli SMSMedyaTEQ Hedefli SMS
MedyaTEQ Hedefli SMS
 
Moduel 5 topic 3
Moduel 5 topic 3Moduel 5 topic 3
Moduel 5 topic 3
 
Module 4 topic 2
Module 4 topic 2Module 4 topic 2
Module 4 topic 2
 
Question ISTQB foundation 3
Question ISTQB foundation 3Question ISTQB foundation 3
Question ISTQB foundation 3
 
Promo juni oriflame
Promo juni oriflamePromo juni oriflame
Promo juni oriflame
 
Topic 1 factoring gcf
Topic 1   factoring gcfTopic 1   factoring gcf
Topic 1 factoring gcf
 

Semelhante a Ôn tập kiến thức ISTQB

Semelhante a Ôn tập kiến thức ISTQB (20)

Bab i fundamental of testing
Bab i fundamental of testingBab i fundamental of testing
Bab i fundamental of testing
 
Chapter 1 Fundamental of Testing
Chapter 1 Fundamental of TestingChapter 1 Fundamental of Testing
Chapter 1 Fundamental of Testing
 
Bab i fundamental of testing (yoga)
Bab i fundamental of testing (yoga)Bab i fundamental of testing (yoga)
Bab i fundamental of testing (yoga)
 
T0 numtq0nje=
T0 numtq0nje=T0 numtq0nje=
T0 numtq0nje=
 
201008 Software Testing Notes (part 1/2)
201008 Software Testing Notes (part 1/2)201008 Software Testing Notes (part 1/2)
201008 Software Testing Notes (part 1/2)
 
Fundamentals of Testing
Fundamentals of TestingFundamentals of Testing
Fundamentals of Testing
 
IT8076 – Software Testing Intro
IT8076 – Software Testing IntroIT8076 – Software Testing Intro
IT8076 – Software Testing Intro
 
Software testing ppt
Software testing pptSoftware testing ppt
Software testing ppt
 
Lesson 7...Question Part 1
Lesson 7...Question Part 1Lesson 7...Question Part 1
Lesson 7...Question Part 1
 
Manual Testing Interview Questions & Answers.docx
Manual Testing Interview Questions & Answers.docxManual Testing Interview Questions & Answers.docx
Manual Testing Interview Questions & Answers.docx
 
Fundamentals of Testing Section 1/6
Fundamentals of Testing   Section 1/6Fundamentals of Testing   Section 1/6
Fundamentals of Testing Section 1/6
 
1651003086422.pptx
1651003086422.pptx1651003086422.pptx
1651003086422.pptx
 
Aim (A).pptx
Aim (A).pptxAim (A).pptx
Aim (A).pptx
 
Bab i fundamental of testing
Bab i fundamental of testingBab i fundamental of testing
Bab i fundamental of testing
 
ISTQB, ISEB Lecture Notes- 2
ISTQB, ISEB Lecture Notes- 2ISTQB, ISEB Lecture Notes- 2
ISTQB, ISEB Lecture Notes- 2
 
S440999102
S440999102S440999102
S440999102
 
Software Testing
Software TestingSoftware Testing
Software Testing
 
ISTQB, ISEB Lecture Notes
ISTQB, ISEB Lecture NotesISTQB, ISEB Lecture Notes
ISTQB, ISEB Lecture Notes
 
Fundamental of testing
Fundamental of testingFundamental of testing
Fundamental of testing
 
stlc
stlcstlc
stlc
 

Último

Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in DelhiRussian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhikauryashika82
 
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdfBASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdfSoniaTolstoy
 
Web & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfWeb & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfJayanti Pande
 
General AI for Medical Educators April 2024
General AI for Medical Educators April 2024General AI for Medical Educators April 2024
General AI for Medical Educators April 2024Janet Corral
 
Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..Disha Kariya
 
social pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajansocial pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajanpragatimahajan3
 
The basics of sentences session 2pptx copy.pptx
  • Designing the test environment set-up and identifying any required infrastructure and tools.
3) Test implementation and execution
  • Developing, implementing and prioritizing test cases.
  • Developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts.
  • Creating test suites from the test procedures for efficient test execution.
  • Verifying that the test environment has been set up correctly.
  • Executing test procedures either manually or by using test execution tools, according to the planned sequence.
  • Logging the outcome of test execution and recording the identities and versions of the software under test, test tools and testware.
  • Comparing actual results with expected results.
  • Reporting discrepancies as incidents and analyzing them in order to establish their cause (e.g. a defect in the code, in specified test data, in the test document, or a mistake in the way the test was executed).
  • Repeating test activities as a result of action taken for each discrepancy. For example, re-execution of a test that previously failed in order to confirm a fix (confirmation testing), execution of a corrected test and/or execution of tests to ensure that defects have not been introduced in unchanged areas of the software or that defect fixing did not uncover other defects (regression testing).
4) Evaluating exit criteria and reporting
  • Checking test logs against the exit criteria specified in test planning.
  • Assessing if more tests are needed or if the exit criteria specified should be changed.
  • Writing a test summary report for stakeholders.
5) Test closure activities
  • Checking which planned deliverables have been delivered, the closure of incident reports or the raising of change records for any that remain open, and the documentation of the acceptance of the system.
  • Finalizing and archiving testware, the test environment and the test infrastructure for later reuse.
  • Handover of testware to the maintenance organization.
  • Analyzing lessons learned for future releases and projects, and the improvement of test maturity.

III) The psychology of testing
Levels of independence, from lowest to highest:
  • Tests designed by the person(s) who wrote the software under test (low level of independence).
  • Tests designed by another person(s) (e.g. from the development team).
  • Tests designed by a person(s) from a different organizational group (e.g. an independent test team) or by test specialists (e.g. usability or performance test specialists).
  • Tests designed by a person(s) from a different organization or company (i.e. outsourcing or certification by an external body).

Questions
Example 1: 50 questions (knol.google.com). These are sample questions related to Chapter 1; for answers, see the other ISTQB sections below.
1) When what is visible to end-users is a deviation from the specified or expected behavior, this is called:
a) an error
b) a fault
c) a failure
d) a defect
2) Regression testing should be performed:
v) every week
w) after the software has changed
x) as often as possible
y) when the environment has changed
z) when the project manager says
a) v & w are true, x–z are false
b) w, x & y are true, v & z are false
c) w & y are true, v, x & z are false
d) w is true; v, x, y and z are false
3) Testing should be stopped when:
a) all the planned tests have been run
b) time has run out
c) all faults have been fixed correctly
d) it depends on the risks for the system being tested
4) Consider the following statements about early test design:
i. early test design can prevent fault multiplication
ii. faults found during early test design are more expensive to fix
iii. early test design can find faults
iv. early test design can cause changes to the requirements
v. early test design takes more effort
a) i, iii & iv are true; ii & v are false
b) iii is true; i, ii, iv & v are false
c) iii & iv are true; i, ii & v are false
d) i, iii, iv & v are true; ii is false
5) The main focus of acceptance testing is:
a) finding faults in the system
b) ensuring that the system is acceptable to all users
c) testing the system with other systems
d) testing from a business perspective
6) The difference between re-testing and regression testing is:
a) re-testing is running a test again; regression testing looks for unexpected side effects
b) re-testing looks for unexpected side effects; regression testing is repeating those tests
c) re-testing is done after faults are fixed; regression testing is done earlier
d) re-testing is done by developers; regression testing is done by independent testers
7) Expected results are:
a) only important in system testing
b) only used in component testing
c) never specified in advance
d) most useful when specified in advance
8) The cost of fixing a fault:
a) is not important
b) increases as we move the product towards live use
c) decreases as we move the product towards live use
d) is more expensive if found in requirements than in functional design
9) Fault masking is:
a) an error condition hiding another error condition
b) creating a test case which does not reveal a fault
c) masking a fault by a developer
d) masking a fault by a tester
10) One key reason why developers have difficulty testing their own work is:
a) lack of technical documentation
b) lack of test tools on the market for developers
c) lack of training
d) lack of objectivity
11) Enough testing has been performed when:
a) time runs out.
b) the required level of confidence has been achieved.
c) no more faults are found.
d) the users won't find any serious faults.
12) Which of the following is false?
a) In a system two different failures may have different severities.
b) A system is necessarily more reliable after debugging for the removal of a fault.
c) A fault need not affect the reliability of a system.
d) Undetected errors may lead to faults and eventually to incorrect behavior.
13) Which of the following characterises the cost of faults?
a) They are cheapest to find in the early development phases and the least expensive to fix.
b) They are easiest to find during system testing but the most expensive to fix then.
c) Faults are cheapest to find in the early development phases but the most expensive to fix then.
d) Although faults are most expensive to find during early development phases, they are cheapest to fix then.
14) According to the ISTQB Glossary, a risk relates to which of the following?
a) Negative feedback to the tester.
b) Negative consequences that will occur.
c) Negative consequences that could occur.
d) Negative consequences for the test object.
15) Ensuring that test design starts during the requirements definition phase is important to enable which of the following test objectives?
a) Preventing defects in the system.
b) Finding defects through dynamic testing.
c) Gaining confidence in the system.
d) Finishing the project on time.
16) A failure is:
a) found in the software; the result of an error.
b) a departure from specified behavior.
c) an incorrect step, process or data definition in a computer program.
d) a human action that produces an incorrect result.
17) Faults found by users are due to:
a) poor quality software
b) poor software and poor testing
c) bad luck
d) insufficient time for testing
18) Which of the following statements is true?
a) Faults in program specifications are the most expensive to fix.
b) Faults in code are the most expensive to fix.
c) Faults in requirements are the most expensive to fix.
d) Faults in designs are the most expensive to fix.
19) COTS is known as:
a) Commercial off-the-shelf software
b) Compliance of the software
c) Change control of the software
d) Capable off-the-shelf software
20) Which is not a testing objective?
a) Finding defects
b) Gaining confidence about the level of quality and providing information
c) Preventing defects
d) Debugging defects
21) Exhaustive testing is:
a) impractical but possible
b) practically possible
c) impractical and impossible
d) always possible
22) Which of the following is most important to promote and maintain good relationships between developers and testers?
a) Understanding what managers value about testing.
b) Explaining test results in a neutral fashion.
c) Identifying potential customer work-arounds for bugs.
d) Promoting better quality software whenever possible.
23) According to the ISTQB Glossary, the word 'Error' is synonymous with which of the following?
a) Failure
b) Defect
c) Mistake
d) Bug
24) In prioritising what to test, the most important objective is to:
a) find as many faults as possible.
b) test high risk areas.
c) obtain good test coverage.
d) test whatever is easiest to test.
25) Incidents would not be raised against:
a) requirements
b) documentation
c) test cases
d) improvements suggested by users
26) Designing the test environment set-up and identifying any required infrastructure and tools are a part of which phase?
a) Test implementation and execution
b) Test analysis and design
c) Evaluating exit criteria and reporting
d) Test closure activities
27) Which of the following is not a part of the test implementation and execution phase?
a) Creating test suites from the test cases
b) Executing test cases either manually or by using test execution tools
c) Comparing actual results
d) Designing the tests
28) Test cases grouped into manageable (and scheduled) units are called a:
a) Test harness
b) Test suite
c) Test cycle
d) Test driver
29) Which of the following could be a reason for a failure?
1) Testing fault
2) Software fault
3) Design fault
4) Environment fault
5) Documentation fault
a) 2 is a valid reason; 1, 3, 4 & 5 are not
b) 1, 2, 3, 4 are valid reasons; 5 is not
c) 1, 2, 3 are valid reasons; 4 & 5 are not
d) all of them are valid reasons for failure
30) Handover of testware is a part of which phase?
a) Test analysis and design
b) Test planning and control
c) Test closure activities
d) Evaluating exit criteria and reporting
31) An exhaustive test suite would include:
a) all combinations of input values and preconditions.
b) all combinations of input values and output values.
c) all pairs of input values and preconditions.
d) all states and state transitions.
32) Which of the following encourages objective testing?
a) Unit testing.
b) System testing.
c) Independent testing.
d) Destructive testing.
33) Consider the following list of test process activities:
I. Analysis and design
II. Test closure activities
III. Evaluating exit criteria and reporting
IV. Planning and control
V. Implementation and execution
Which of the following places these in their logical sequence?
a) I, II, III, IV and V
b) IV, I, V, III and II
c) IV, I, V, II and III
d) I, IV, V, III and II
34) According to the ISTQB Glossary, debugging:
a) is part of the fundamental test process.
b) includes the repair of the cause of a failure.
c) involves intentionally adding known defects.
d) follows the steps of a test procedure.
35) Which of the following could be a root cause of a defect in financial software in which an incorrect interest rate is calculated?
a) Insufficient funds were available to pay the interest rate calculated.
b) Insufficient calculations of compound interest were included.
c) Insufficient training was given to the developers concerning compound interest calculation rules.
d) Incorrect calculators were used to calculate the expected results.
36) When should you stop testing?
a) When the time for testing has run out
b) When all planned tests have been run
c) When the test completion criteria have been met
d) When no faults have been found by the tests run
37) An incident logging system:
a) only records defects
b) is of limited value
c) is a valuable source of project information during testing if it contains all incidents
d) should be used only by the test team
38) The term confirmation testing is synonymous with:
a) Exploratory testing
b) Regression testing
c) Exhaustive testing
d) Re-testing
39) Consider the following statements:
i. An incident may be closed without being fixed.
ii. Incidents may not be raised against documentation.
iii. The final stage of incident tracking is fixing.
iv. The incident record does not include information on test environments.
a) ii is true; i, iii and iv are false
b) i is true; ii, iii and iv are false
c) i and iv are true; ii and iii are false
d) i and ii are true; iii and iv are false
40) Which of the following is not a characteristic of software?
a) Software is developed or engineered; it is not manufactured in the classic sense
b) Software doesn't "wear out" with time
c) The traditional industry is moving toward component-based assembly, whereas most software continues to be custom built and the concept of component-based assembly is still taking shape
d) Software does not require maintenance
41) If the expected result is not specified then:
a) we cannot run the test
b) it may be difficult to repeat the test
c) it may be difficult to determine if the test has passed or failed
d) we cannot automate the user inputs
42) A reliable system will be one that:
a) is unlikely to be completed on schedule
b) is unlikely to cause a failure
c) is likely to be fault free
d) is likely to be liked by the users
43) What is the purpose of exit criteria?
a) To determine when writing a test case is complete
b) To determine when to stop the testing
c) To ensure the test specification is complete
d) To determine when to stop writing the test plan
44) What is the focus of re-testing?
a) Re-testing ensures the original fault has been removed
b) Re-testing prevents future faults
c) Re-testing looks for unexpected side effects
d) Re-testing ensures the original fault is still present
45) How is the amount of re-testing required normally defined?
a) Discussions with the end users
b) Discussions with the developers
c) Metrics from previous projects
d) None of the above
46) A manifestation of an 'error' in software is:
a) an error
b) a fault
c) a failure
d) an action
47) If testing time is limited, we should:
a) only test high risk areas
b) only test simple areas
c) only test low risk areas
d) only test complicated areas
48) The quality of the product is said to increase when:
a) all faults have been reviewed
b) all faults have been found
c) all faults have been raised
d) all faults have been rectified
49) Pick the best definition of quality:
a) Quality is job one
b) Zero defects
c) Conformance to requirements
d) Works as designed
50) What is the main reason for testing software before releasing it?
a) To show the system will work after release
b) To decide when the software is of sufficient quality to release
c) To find as many bugs as possible before release
d) To give information for a risk-based decision about release
51) Select a reason that does not agree with the fact that complete testing is impossible:
a) The domain of possible inputs is too large to test.
b) Limited financial resources.
c) There are too many possible paths through the program to test.
d) The user interface issues (and thus the design issues) are too complex to completely test.
Chapter 2: Testing throughout the software life cycle
International Software Testing Qualifications Board. Terms: development models, software development models, test levels, test types and maintenance testing; tests and answers, sample questionnaire. A detailed explanation of the SDLC and V-models can be found in Software Testing Theory Part 1 (Index and Sub-Index).

2.1 Software development models
COTS, iterative-incremental development model, validation, verification, V-model.
2.2 Test levels
Alpha testing, beta testing, component testing (also known as unit/module/program testing), driver, stub, field testing, functional requirement, non-functional requirement, integration, integration testing, robustness testing, system testing, test level, test-driven development, test environment, user acceptance testing.
2.3 Test types
Black box testing, code coverage, functional testing, interoperability testing, load testing, maintainability testing, performance testing, portability testing, reliability testing, security testing, specification-based testing, stress testing, structural testing, usability testing, white box testing.
2.4 Maintenance testing
Impact analysis, maintenance testing.

i) Software development models
a) V-model (sequential development model)
Although variants of the V-model exist, a common type of V-model uses four test levels, corresponding to the four development levels. The four levels used in this syllabus are:
  • component (unit) testing;
  • integration testing;
  • system testing;
  • acceptance testing.
b) Iterative-incremental development models
Iterative-incremental development is the process of establishing requirements, designing, building and testing a system, done as a series of shorter development cycles. Examples are: prototyping, rapid application development (RAD), Rational Unified Process (RUP) and agile development models.
c) Testing within a life cycle model
In any life cycle model, there are several characteristics of good testing:
  • For every development activity there is a corresponding testing activity.
  • Each test level has test objectives specific to that level.
  • The analysis and design of tests for a given test level should begin during the corresponding development activity.
  • Testers should be involved in reviewing documents as soon as drafts are available in the development life cycle.

ii) Test levels
a) Component testing
Component testing searches for defects in, and verifies the functioning of, software (e.g. modules, programs, objects, classes, etc.) that is separately testable. Component testing may include testing of functionality and specific non-functional characteristics, such as resource behaviour (e.g. memory leaks) or robustness testing, as well as structural testing (e.g. branch coverage). One approach to component testing is to prepare and automate test cases before coding. This is called a test-first approach or test-driven development.
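A minimal sketch of this test-first idea, assuming pytest and an invented function (the name `order_total_cents` and the 5% bulk-discount rule are illustrative assumptions, not part of the syllabus): the tests are written before the code exists, and the implementation is then added only to make them pass.

```python
# Test-first sketch: these component tests are written before the implementation.
# The function name and the 5% bulk-discount rule are invented for illustration.

def test_bulk_discount_applied_at_ten_items():
    assert order_total_cents(unit_price_cents=200, quantity=10) == 1900

def test_no_discount_below_ten_items():
    assert order_total_cents(unit_price_cents=200, quantity=9) == 1800

# Minimal implementation, added afterwards, just enough to make the tests pass.
def order_total_cents(unit_price_cents: int, quantity: int) -> int:
    total = unit_price_cents * quantity
    if quantity >= 10:
        total = total * 95 // 100  # apply the 5% bulk discount
    return total
```

Run with pytest, both tests fail until the implementation exists; driving the code from the tests in this way is what distinguishes the test-first approach from writing component tests after coding.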
b) Integration testing
Integration testing tests interfaces between components, interactions with different parts of a system (such as the operating system, file system and hardware), or interfaces between systems.
Component integration testing tests the interactions between software components and is done after component testing; system integration testing tests the interactions between different systems and may be done after system testing. Testing of specific non-functional characteristics (e.g. performance) may be included in integration testing.
c) System testing
System testing is concerned with the behaviour of a whole system/product as defined by the scope of a development project or programme.
In system testing, the test environment should correspond to the final target or production environment as much as possible in order to minimize the risk of environment-specific failures not being found in testing.
System testing may include tests based on risks and/or on requirements specifications, business processes, use cases, or other high-level descriptions of system behaviour, interactions with the operating system, and system resources. System testing should investigate both functional and non-functional requirements of the system.
d) Acceptance testing
Acceptance testing is often the responsibility of the customers or users of a system; other stakeholders may be involved as well. The goal of acceptance testing is to establish confidence in the system, parts of the system or specific non-functional characteristics of the system.
Contract and regulation acceptance testing
Contract acceptance testing is performed against a contract's acceptance criteria for producing custom-developed software. Acceptance criteria should be defined when the contract is agreed. Regulation acceptance testing is performed against any regulations that must be adhered to, such as governmental, legal or safety regulations.
Alpha and beta (or field) testing
Alpha testing is performed at the developing organization's site. Beta testing, or field testing, is performed by people at their own locations. Both are performed by potential customers, not the developers of the product.

iii) Test types
a) Testing of function (functional testing)
The functions that a system, subsystem or component are to perform may be described in work products such as a requirements specification, use cases, or a functional specification, or they may be undocumented. The functions are "what" the system does.
A type of functional testing, security testing, investigates the functions (e.g. a firewall) relating to the detection of threats, such as viruses, from malicious outsiders. Another type of functional testing, interoperability testing, evaluates the capability of the software product to interact with one or more specified components or systems.
b) Testing of non-functional software characteristics (non-functional testing)
Non-functional testing includes, but is not limited to, performance testing, load testing, stress testing, usability testing, maintainability testing, reliability testing and portability testing. It is the testing of "how" the system works. Non-functional testing may be performed at all test levels.
c) Testing of software structure/architecture (structural testing)
Structural (white-box) testing may be performed at all test levels. Structural techniques are best used after specification-based techniques, in order to help measure the thoroughness of testing through assessment of coverage of a type of structure.
Structural testing approaches can also be applied at system, system integration or acceptance testing levels (e.g. to business models or menu structures).
d) Testing related to changes (confirmation testing (re-testing) and regression testing)
After a defect is detected and fixed, the software should be retested to confirm that the original defect has been successfully removed. This is called confirmation testing. Debugging (defect fixing) is a development activity, not a testing activity.
Regression testing is the repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the change(s). It is performed when the software, or its environment, is changed. Regression testing may be performed at all test levels, and applies to functional, non-functional and structural testing.

iv) Maintenance testing
Once deployed, a software system is often in service for years or decades. During this time the system and its environment are often corrected, changed or extended. Modifications include planned enhancement changes (e.g. release-based), corrective and emergency changes, and changes of environment.
Maintenance testing for migration (e.g. from one platform to another) should include operational tests of the new environment, as well as of the changed software. Maintenance testing for the retirement of a system may include the testing of data migration or archiving if long data-retention periods are required.
Maintenance testing may be done at any or all test levels and for any or all test types.

Questions
1) What are the good practices for testing within the software development life cycle?
a) Early test analysis and design
b) Different test levels are defined with specific objectives
c) Testers will start to get involved as soon as coding is done
d) a and b above
2) Which option best describes objectives for test levels within a life cycle model?
a) Objectives should be generic for any test level
b) Objectives are the same for each test level
c) The objectives of a test level don't need to be defined in advance
d) Each level has objectives specific to that level
3) Which of the following is a test type?
a) Component testing
b) Functional testing
c) System testing
d) Acceptance testing
4) Non-functional system testing includes:
a) testing to see where the system does not function properly
b) testing quality attributes of the system, including performance and usability
c) testing a system feature using only the software required for that action
d) testing for functions that should not exist
5) Beta testing is:
a) performed by customers at their own site
b) performed by customers at the software developer's site
c) performed by an independent test team
d) performed as early as possible in the lifecycle
6) Which of the following is not part of performance testing?
a) Measuring response time
b) Measuring transaction rates
c) Recovery testing
d) Simulating many users
7) Which one of the following statements about system testing is NOT true?
a) System tests are often performed by independent teams.
b) Functional testing is used more than structural testing.
c) Faults found during system tests can be very expensive to fix.
d) End-users should be involved in system tests.
8) Integration testing in the small:
a) tests the individual components that have been developed.
b) tests interactions between modules or subsystems.
c) only uses components that form part of the live system.
d) tests interfaces to other systems.
9) Alpha testing is:
a) post-release testing by end user representatives at the developer's site.
b) the first testing that is performed.
c) pre-release testing by end user representatives at the developer's site.
d) pre-release testing by end user representatives at their sites.
10) Software testing activities should start:
a) as soon as the code is written
b) during the design stage
c) when the requirements have been formally documented
d) as soon as possible in the development life cycle
11) Consider the following statements about regression tests:
I. They may usefully be automated if they are well designed.
II. They are the same as confirmation tests.
III. They are a way to reduce the risk of a change having an adverse effect elsewhere in the system.
IV. They are only effective if automated.
Which pair of statements is true?
a) I and II
b) I and III
c) II and III
d) II and IV
12) Which of these statements about functional testing is true?
a) Structural testing is more important than functional testing as it addresses the code
b) Functional testing is useful throughout the life cycle and can be applied by business analysts, developers, testers and users
c) Functional testing is more powerful than static testing as you actually run the system and see what happens
d) Inspection is a form of functional testing
13) Consider the following statements about maintenance testing:
I. It requires both re-testing and regression testing and may require additional new tests.
II. It is testing to show how easy it will be to maintain the system.
III. It is difficult to scope and therefore needs careful risk and impact analysis.
IV. It need not be done for emergency bug fixes.
Which of the statements are true?
a) I and III
b) I and IV
c) II and III
d) II and IV
14) Which of the following is NOT part of system testing?
a) business process-based testing
b) performance, load and stress testing
c) requirements-based testing
d) top-down integration testing
15) To test a function, the programmer has to write a _________, which calls the function to be tested and passes it test data.
a) Stub
b) Driver
c) Proxy
d) None of the above
16) Which of the following is a form of functional testing?
a) Boundary value analysis
b) Usability testing
c) Performance testing
d) Security testing
17) Which one of the following describes the major benefit of verification early in the life cycle?
a) It allows the identification of changes in user requirements.
b) It facilitates timely set-up of the test environment.
c) It reduces defect multiplication.
d) It allows testers to become involved early in the project.
18) The most important thing about early test design is that it:
a) makes test preparation easier.
b) means inspections are not required.
c) can prevent fault multiplication.
d) will find all faults.
19) Which of the following statements is not true?
a) Performance testing can be done during unit testing as well as during the testing of the whole system
b) The acceptance test does not necessarily include a regression test
c) Verification activities (reviews, inspections, etc.) should not involve testers
d) Test environments should be as similar to production environments as possible
20) Which of the following is NOT a type of non-functional test?
a) State transition
b) Usability
c) Performance
d) Reliability
21) Which of the following is not an integration strategy?
a) Design based
b) Big-bang
c) Bottom-up
d) Top-down
22) The big bang approach is related to:
a) Regression testing
b) Inter-system testing
c) Re-testing
d) Integration testing
23) "Which life cycle model is basically driven by schedule and budget risks?" This statement is best suited to:
a) Waterfall model
b) Spiral model
c) Incremental model
d) V-model
24) Use cases can be used to test:
a) Performance
b) Units
c) Business scenarios
d) Static artifacts
25) Which testing is performed at an external site?
a) Unit testing
b) Regression testing
c) Beta testing
d) Integration testing
26) Functional system testing is:
a) testing that the system functions with other systems
b) testing that the components that comprise the system function together
c) testing the end-to-end functionality of the system as a whole
d) testing that the system performs functions within specified response times
27) Maintenance testing is:
a) updating tests when the software has changed
b) testing a released system that has been changed
c) testing by users to ensure that the system meets a business need
d) testing to maintain business advantage
28) Which of the following uses impact analysis most?
a) component testing
b) non-functional system testing
c) user acceptance testing
d) maintenance testing
29) Which of the following is NOT part of system testing?
a) business process-based testing
b) performance, load and stress testing
c) usability testing
d) top-down integration testing
30) Which of the following lists contains only non-functional tests?
a) compatibility testing, usability testing, performance testing
b) system testing, performance testing
c) load testing, stress testing, component testing, portability testing
d) testing various configurations, beta testing, load testing
31) The V-model is:
a) a software development model that illustrates how testing activities integrate with software development phases
b) a software life-cycle model that is not relevant for testing
c) the official software development and testing life-cycle model of ISTQB
d) a testing life cycle model including unit, integration, system and acceptance phases
32) Maintenance testing is:
a) testing management
b) a synonym for testing the quality of service
c) triggered by modifications, migration or retirement of existing software
d) testing the level of maintenance provided by the vendor
33) Link testing is also called:
a) component integration testing
b) component system testing
c) component subsystem testing
d) maintenance testing
34) Component testing is also called:
i. Unit testing
ii. Program testing
iii. Module testing
iv. System component testing
Which of the following is correct?
a) i, ii, iii are true and iv is false
b) i, ii, iii, iv are false
c) i, ii, iv are true and iii is false
d) All of the above are true
35) Match every stage of the software development life cycle with the testing life cycle:
i. Global design
ii. System requirements
iii. Detailed design
iv. User requirements
a) Unit tests
b) Acceptance tests
c) System tests
d) Integration tests
a) i-d, ii-a, iii-b, iv-c
b) i-c, ii-d, iii-a, iv-b
c) i-d, ii-c, iii-a, iv-b
d) i-c, ii-d, iii-b, iv-a
36) Which of these is a functional test?
a) Measuring response time on an online booking system
b) Checking the effect of high volumes of traffic in a call-center system
c) Checking the online booking screen information and the database contents against the information on the letter to the customers
d) Checking how easy the system is to use
Chapter 3: Static Techniques
International Software Testing Qualifications Board. Objective tests and sample questions. This chapter explains the static and dynamic techniques of testing; the review process (formal and informal, inspection and technical review); static analysis (compiler, complexity, control flow, data flow); and walkthroughs. For other manual testing concepts and SDLC, STLC, BLC topics, please refer to the main chapter of Software Testing Theory.

3.1 Static techniques and the test process
Dynamic testing, static testing, static technique.
[Img 1.1 – Permanent demand trend, UK internet job market: the 3-month moving total, from 2004 to 2009, of permanent IT jobs citing ISTQB within the UK as a proportion of total demand within the Qualifications category.]
3.2 Review process
Entry criteria, formal review, informal review, inspection, metric, moderator/inspection leader, peer review, reviewer, scribe, technical review, walkthrough.
3.3 Static analysis by tools
Compiler, complexity, control flow, data flow, static analysis.

I) Phases of a formal review
1) Planning
Selecting the personnel, allocating roles, defining entry and exit criteria for more formal reviews, etc.
2) Kick-off
Distributing documents, explaining the objectives, checking entry criteria, etc.
3) Individual preparation
Work done by each of the participants on their own before the review meeting: questions and comments.
4) Review meeting
Discussion or logging; making recommendations for handling the defects, or making decisions about the defects.
5) Rework
Fixing defects found, typically done by the author.
6) Follow-up
Checking that the defects have been addressed, gathering metrics and checking on exit criteria.

II) Roles and responsibilities
Manager: decides on the execution of reviews, allocates time in project schedules, and determines if the review objectives have been met.
Moderator: leads the review, including planning, running the meeting and follow-up after the meeting.
Author: the writer or person with chief responsibility for the document(s) to be reviewed.
Reviewers: individuals with a specific technical or business background; they identify defects and describe findings.
Scribe (recorder): documents all the issues, problems and open points identified during the meeting.

III) Types of review
Informal review: no formal process; pair programming or a technical lead reviewing designs and code. Main purpose: an inexpensive way to get some benefit.
Walkthrough: meeting led by the author; 'scenarios, dry runs, peer group'; open-ended sessions. Main purpose: learning, gaining understanding, defect finding.
Technical review: documented, defined defect-detection process, ideally led by a trained moderator; may be performed as a peer review; pre-meeting preparation; involvement of peers and technical experts. Main purpose: discuss, make decisions, find defects, solve technical problems and check conformance to specifications and standards.
Inspection: led by a trained moderator (not the author); usually peer examination; defined roles; includes metrics; formal process; pre-meeting preparation; formal follow-up process. Main purpose: find defects.
Note: walkthroughs, technical reviews and inspections can be performed within a peer group, i.e. colleagues at the same organizational level. This type of review is called a "peer review".

IV) Success factors for reviews
  • Each review has a clear predefined objective.
  • The right people for the review objectives are involved.
  • Defects found are welcomed, and expressed objectively.
  • People issues and psychological aspects are dealt with (e.g. making it a positive experience for the author).
  • Review techniques are applied that are suitable to the type and level of software work products and reviewers.
  • Checklists or roles are used if appropriate to increase the effectiveness of defect identification.
  • Training is given in review techniques, especially the more formal techniques, such as inspection.
  • Management supports a good review process (e.g. by incorporating adequate time for review activities in project schedules).
  • There is an emphasis on learning and process improvement.

V) Cyclomatic complexity
The number of independent paths through a program. Cyclomatic complexity is defined as:
  L - N + 2P
where L is the number of edges/links in the graph, N is the number of nodes in the graph, and P is the number of disconnected parts of the graph (connected components).
Alternatively, one may calculate cyclomatic complexity using the decision-point rule: decision points + 1.
Cyclomatic complexity and risk evaluation:
  1 to 10: a simple program, without very much risk
  11 to 20: a complex program, moderate risk
  21 to 50: a more complex program, high risk
  > 50: an untestable program (very high risk)
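As a quick worked illustration (an invented example, not from the syllabus), the small function below has two decision points, so the decision-point rule gives a cyclomatic complexity of 3; the comments show that the graph formula agrees for one reasonable way of drawing its control-flow graph.

```python
# Worked cyclomatic-complexity example (illustrative only).

def classify(score: int) -> str:
    if score < 0:        # decision point 1
        return "invalid"
    if score >= 90:      # decision point 2
        return "A"
    return "other"

# Decision-point rule: 2 decision points + 1 = 3 independent paths.
#
# Graph formula L - N + 2P, with one connected component (P = 1) and the graph
# drawn as two decision nodes, three return nodes and a single exit node
# (N = 6), joined by 7 edges (two out of each decision, one from each return
# to the exit), gives 7 - 6 + 2 = 3, the same result.
#
# The three independent paths correspond to the three possible return values:
assert classify(-1) == "invalid"
assert classify(95) == "A"
assert classify(50) == "other"
```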
Questions
1) Which of the following statements is NOT true?
a) inspection is the most formal review process
b) inspections should be led by a trained leader
c) managers can perform inspections on management documents
d) inspection is appropriate even when there are no written documents
2) Which expression best matches the following characteristics of review processes?
1. led by the author
2. undocumented
3. no management participation
4. led by a trained moderator or leader
5. uses entry and exit criteria
s) inspection
t) peer review
u) informal review
v) walkthrough
a) s = 4, t = 3, u = 2 and 5, v = 1
b) s = 4 and 5, t = 3, u = 2, v = 1
c) s = 1 and 5, t = 3, u = 2, v = 4
d) s = 5, t = 4, u = 3, v = 1 and 2
3) Could reviews or inspections be considered part of testing?
a) No, because they apply to development documentation
b) No, because they are normally applied before testing
c) No, because they do not apply to the test documentation
d) Yes, because both help detect faults and improve quality
4) In a review meeting, a moderator is a person who:
a) takes minutes of the meeting
b) mediates among people
c) takes telephone calls
d) writes the documents to be reviewed
5) Which of the following statements about reviews is true?
a) Reviews cannot be performed on user requirements specifications.
b) Reviews are the least effective way of testing code.
c) Reviews are unlikely to find faults in test plans.
d) Reviews should be performed on specifications, code, and test plans.
6) What is the main difference between a walkthrough and an inspection?
a) An inspection is led by the author, whilst a walkthrough is led by a trained moderator.
b) An inspection has a trained leader, whilst a walkthrough has no leader.
c) Authors are not present during inspections, whilst they are during walkthroughs.
d) A walkthrough is led by the author, whilst an inspection is led by a trained moderator.
7) Which of the following is a static test?
a) code inspection
b) coverage analysis
c) usability assessment
d) installation test
8) Who is responsible for documenting all the issues, problems and open points that were identified during the review meeting?
a) Moderator
b) Scribe
c) Reviewers
d) Author
9) What is the main purpose of an informal review?
a) An inexpensive way to get some benefit
b) Find defects
c) Learning, gaining understanding, defect finding
d) Discuss, make decisions and solve technical problems
10) Which of the following is not a static testing technique?
a) Error guessing
b) Walkthrough
c) Data flow analysis
d) Inspections
11) Inspections can find all of the following except:
a) variables not defined in the code
b) spelling and grammar faults in the documents
c) requirements that have been omitted from the design documents
d) how much of the code has been covered
12) Which of the following artifacts can be examined by using review techniques?
a) Software code
b) Requirements specification
c) Test designs
d) All of the above
13) Which is not a type of review?
a) Walkthrough
b) Inspection
c) Management approval
d) Informal review
14) Which of the following statements about early test design are true and which are false?
1. Defects found during early test design are more expensive to fix
2. Early test design can find defects
3. Early test design can cause changes to the requirements
4. Early test design can take more effort
a) 1 and 3 are true; 2 and 4 are false.
b) 2 is true; 1, 3 and 4 are false.
c) 2 and 3 are true; 1 and 4 are false.
d) 2, 3 and 4 are true; 1 is false.
15) Static code analysis typically identifies all but one of the following problems. Which is it?
a) Unreachable code
b) Faults in requirements
c) Undeclared variables
d) Too few comments
16) What is the best description of static analysis?
a) The analysis of batch programs
b) The reviewing of test plans
c) The analysis of program code or other software artifacts
d) The use of black-box testing
17) What is the most important factor for successful performance of reviews?
a) A separate scribe during the logging meeting
b) Trained participants and review leaders
c) The availability of tools to support the review process
d) A reviewed test plan
18) A code walkthrough is:
a) a type of dynamic testing
b) a type of static testing
c) neither dynamic nor static
d) performed by the testing team
19) Static analysis is:
a) the same as static testing
b) done by the developers
c) both a and b
d) none of the above
20) Which review is inexpensive?
a) Informal review
b) Walkthrough
c) Technical review
d) Inspection
21) Who should have a technical or business background?
a) Moderator
b) Author
c) Reviewer
d) Recorder
22) The person who leads the review of the document(s), planning the review, running the meeting and following up after the meeting, is the:
a) Reviewer
b) Author
c) Moderator
d) Auditor
23) Peer reviews are also called:
a) Inspection
b) Walkthrough
c) Technical review
d) Formal review
24) The kick-off phase of a formal review includes the following:
a) Explaining the objectives
b) Fixing defects found, typically done by the author
c) Follow-up
d) Individual meeting preparations
25) Success factors for a review include:
i. Each review does not have a predefined objective
ii. Defects found are welcomed and expressed objectively
iii. Management supports a good review process
iv. There is an emphasis on learning and process improvement
a) ii, iii, iv are correct and i is incorrect
b) iii, i, iv are correct and ii is incorrect
c) i, iii, iv are correct and ii is incorrect
d) i, ii are correct and iii, iv are incorrect
26) Why is static testing described as complementary to dynamic testing?
a) Because they share the aim of identifying defects and find the same types of defect.
b) Because they have different aims and differ in the types of defect they find.
c) Because they have different aims but find the same types of defect.
d) Because they share the aim of identifying defects but differ in the types of defect they find.
27) Which of the following statements regarding static testing is false?
a) Static testing requires the running of tests through the code
b) Static testing includes desk checking
c) Static testing includes techniques such as reviews and inspections
d) Static testing can give measurements such as cyclomatic complexity
28) Which of the following is true about a formal review or inspection?
i. Led by a trained moderator (not the author).
ii. No pre-meeting preparations.
iii. Formal follow-up process.
iv. Main objective is to find defects.
a) ii is true and i, iii, iv are false
b) i, iii, iv are true and ii is false
c) i, iii, iv are false and ii is true
d) iii is true and i, ii, iv are false
29) The phases of the formal review process are listed below; arrange them in the correct order.
i. Planning
ii. Review meeting
iii. Rework
iv. Individual preparation
v. Kick-off
vi. Follow-up
a) i, ii, iii, iv, v, vi
b) vi, i, ii, iii, iv, v
c) i, v, iv, ii, iii, vi
d) i, ii, iii, v, iv, vi
30) Which of the following is a key characteristic of a walkthrough?
a) Scenario, dry run, peer group
b) Pre-meeting preparations
c) Formal follow-up process
d) Includes metrics

References
[1] ISTQB3; Gilb, T. & Graham, D. (1993) Software Inspection, Addison-Wesley: London; Hatton, L. (1997) 'Reexamining the Fault Density-Component Size Connection', IEEE Software, vol. 14, issue 2, March 1997, pp. 89-97.
[2] ISTQB3; Van Veenendaal, E. (2004) The Testing Practitioner, 2nd edition, UTN Publishing; Van Veenendaal, E. and Van der Zwan, M. (2000) 'GQM-based Inspections', in Proceedings of the 11th European Software Control and Metrics Conference (ESCOM), Munich, May 2000.
Chapter 4: Test Design Techniques – Modules
International Software Testing Qualifications Board. Test design techniques; categories of techniques; specification-based and white-box/black-box techniques; BVA and ECP; decision table testing, state transition testing and use case testing. Test case specification, test design, test execution schedule, test procedure specification, test script, traceability. For more details on testing and related concepts, please refer to the main chapter of Software Testing Theory Part 1.

4.1 The test development process
Test case specification, test design, test execution schedule, test procedure specification, test script, traceability.
4.2 Categories of test design techniques
Black-box test design technique, specification-based test design technique, white-box test design technique, structure-based test design technique, experience-based test design technique.
4.3 Specification-based or black-box techniques
Boundary value analysis, decision table testing, equivalence partitioning, state transition testing, use case testing.
4.4 Structure-based or white-box techniques
Code coverage, decision coverage, statement coverage, structure-based testing.
4.5 Experience-based techniques
Exploratory testing, fault attack.
4.6 Choosing test techniques
No specific terms.

Test design techniques
  • Specification-based/black-box techniques
  • Structure-based/white-box techniques
  • Experience-based techniques

I) Specification-based/black-box techniques
  • Equivalence partitioning
  • Boundary value analysis
  • Decision table testing
  • State transition testing
  • Use case testing

Equivalence partitioning
  • Inputs to the software or system are divided into groups that are expected to exhibit similar behavior.
  • Equivalence partitions or classes can be found for both valid data and invalid data.
  • Partitions can also be identified for outputs, internal values, time-related values and interface values.
  • Equivalence partitioning is applicable at all levels of testing.
Boundary value analysis
  • Behavior at the edge of each equivalence partition is more likely to be incorrect. The maximum and minimum values of a partition are its boundary values.
  • A boundary value for a valid partition is a valid boundary value; the boundary of an invalid partition is an invalid boundary value.
  • Boundary value analysis can be applied at all test levels.
  • It is relatively easy to apply and its defect-finding capability is high.
  • This technique is often considered an extension of equivalence partitioning (a short illustrative sketch of both techniques follows at the end of this section).
Decision table testing
  • In decision table testing, test cases are designed to execute combinations of inputs.
  • Decision tables are a good way to capture system requirements that contain logical conditions.
  • The decision table contains triggering conditions, often combinations of true and false for all input conditions.
  • It may be applied to all situations where the action of the software depends on several logical decisions.
State transition testing
  • In state transition testing, test cases are designed to execute valid and invalid state transitions.
  • A system may exhibit a different response depending on current conditions or previous history. In this case, that aspect of the system can be shown as a state transition diagram.
  • State transition testing is much used in embedded software and technical automation.
Use case testing
  • In use case testing, test cases are designed to execute user scenarios.
  • A use case describes interactions between actors, including users and the system.
  • Each use case has preconditions, which need to be met for the use case to work successfully.
  • A use case usually has a mainstream scenario and sometimes alternative branches.
  • Use cases, often referred to as scenarios, are very useful for designing acceptance tests with customer/user participation.
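A minimal pytest-style sketch of equivalence partitioning and boundary value analysis, using a hypothetical validator that mirrors the stock-control order-number example in the questions later in this chapter (the function name and the 10000–99999 range are assumptions for illustration):

```python
# Hypothetical validator under test: order numbers from 10000 to 99999 are valid.
import pytest

def is_valid_order_number(value: int) -> bool:
    return 10000 <= value <= 99999

# Equivalence partitioning: one representative value per partition
# (below the valid range, inside it, above it).
@pytest.mark.parametrize("value,expected", [
    (5000, False),     # invalid partition: too small
    (50000, True),     # valid partition
    (150000, False),   # invalid partition: too large
])
def test_equivalence_partitions(value, expected):
    assert is_valid_order_number(value) == expected

# Boundary value analysis: values on and just outside each edge of the valid partition.
@pytest.mark.parametrize("value,expected", [
    (9999, False), (10000, True), (99999, True), (100000, False),
])
def test_boundary_values(value, expected):
    assert is_valid_order_number(value) == expected
```

The partition tests keep the number of cases small, while the boundary tests target the edges where off-by-one defects are most likely to hide.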
II) Structure-based/white-box techniques
  • Statement testing and coverage
  • Decision testing and coverage
  • Other structure-based techniques: condition coverage, multiple condition coverage

Statement testing and coverage
Statement: an entity in a programming language, which is typically the smallest indivisible unit of execution.
Statement coverage: the percentage of executable statements that have been exercised by a test suite.
Statement testing: a white-box test design technique in which test cases are designed to execute statements.
Decision testing and coverage
Decision: a program point at which the control flow has two or more alternative routes; a node with two or more links to separate branches.
Decision coverage: the percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage (see the sketch after this section).
Decision testing: a white-box test design technique in which test cases are designed to execute decision outcomes.
Other structure-based techniques
Condition: a logical expression that can be evaluated as true or false.
Condition coverage: the percentage of condition outcomes that have been exercised by a test suite.
Condition testing: a white-box test design technique in which test cases are designed to execute condition outcomes.
Multiple condition testing: a white-box test design technique in which test cases are designed to execute combinations of single condition outcomes.
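To make the statement/decision distinction concrete, here is a small invented example (not from the syllabus): a single test that takes the True branch executes every statement of the function, yet covers only half of the decision outcomes.

```python
# Invented example: statement coverage versus decision coverage.

def shipping_fee(is_member: bool) -> int:
    fee = 500
    if is_member:   # one decision with two outcomes: True and False
        fee = 0     # this statement is reached only on the True outcome
    return fee

# This single test executes every statement (100% statement coverage),
# but exercises only the True outcome of the decision (50% decision coverage).
def test_member_ships_free():
    assert shipping_fee(is_member=True) == 0

# Adding the False outcome is required to reach 100% decision coverage.
def test_non_member_pays_fee():
    assert shipping_fee(is_member=False) == 500
```

This is also why 100% decision coverage implies 100% statement coverage, but not the other way around.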
  • 32. III) Experience-based techniques o Error guessing o Exploratory testing Error guessing o Error guessing is a commonly used experience-based technique o Generally testers anticipate defects based on experience, these defects list can be built based on experience, available defect data, and from common knowledge about why software fails. Exploratory testing o Exploratory testing is concurrent test design, test execution, test logging and learning , based on test charter containing test objectives and carried out within time boxes o It is approach that is most useful where there are few or inadequate specifications and serve time pressure. Questions 1) Order numbers on a stock control system can range between 10000 and 99999 inclusive. Which of the following inputs might be a result of designing tests for only valid equivalence classes and valid boundaries: a) 1000, 5000, 99999 b) 9999, 50000, 100000 c) 10000, 50000, 99999 d) 10000, 99999 e) 9999, 10000, 50000, 99999, 10000 2) Which of the following is NOT a black box technique: a) Equivalence partitioning b) State transition testing c) Syntax testing d) Boundary value analysis 3) Error guessing is best used a) As the first approach to deriving test cases b) After more formal techniques have been applied c) By inexperienced testers d) After the system has gone live e) Only by end users 4) Which is not true-The black box tester a. should be able to understand a functional specification or requirements document b. should be able to understand the source code. c. is highly motivated to find faults d. is creative to find the system’s weaknesses. 32
5) A test design technique is
     a. a process for selecting test cases
     b. a process for determining expected outputs
     c. a way to measure the quality of software
     d. a way to measure in a test plan what has to be done
6) Which of the following is true?
     a. Component testing should be black box, system testing should be white box.
     b. If you find a lot of bugs in testing, you should not be very confident about the quality of the software.
     c. The fewer bugs you find, the better your testing was.
     d. The more tests you run, the more bugs you will find.
7) What is the most important criterion in deciding what testing technique to use?
     a. how well you know a particular technique
     b. the objective of the test
     c. how appropriate the technique is for testing the application
     d. whether there is a tool to support the technique
8) Which of the following is a black box design technique?
     a. statement testing
     b. equivalence partitioning
     c. error-guessing
     d. usability testing
9) A program validates a numeric field as follows: values less than 10 are rejected, values between 10 and 21 are accepted, values greater than or equal to 22 are rejected. Which of the following input values cover all of the equivalence partitions?
     a. 10, 11, 21
     b. 3, 20, 21
     c. 3, 10, 22
     d. 10, 21, 22
10) Using the same specifications as question 9, which of the following covers the MOST boundary values?
     a. 9, 10, 11, 22
     b. 9, 10, 21, 22
     c. 10, 11, 21, 22
     d. 10, 11, 20, 21
11) Error guessing:
     a) supplements formal test design techniques.
     b) can only be used in component, integration and system testing.
     c) is only performed in user acceptance testing.
     d) is not repeatable and should not be used.
12) Which of the following is NOT a white box technique?
     a) Statement testing
     b) Path testing
     c) Data flow testing
     d) State transition testing
13) Data flow analysis studies:
     a) possible communications bottlenecks in a program.
     b) the rate of change of data values as a program executes.
     c) the use of data on paths through the code.
     d) the intrinsic complexity of the code.
14) In a system designed to work out the tax to be paid: an employee has £4000 of salary tax free, the next £1500 is taxed at 10%, the next £28000 is taxed at 22%, and any further amount is taxed at 40%. Which of these groups of numbers would fall into the same equivalence class?
     a) £4800; £14000; £28000
     b) £5200; £5500; £28000
     c) £28001; £32000; £35000
     d) £5800; £28000; £32000
15) Test cases are designed during:
     a) test recording.
     b) test planning.
     c) test configuration.
     d) test specification.
16) An input field takes the year of birth between 1900 and 2004. The boundary values for testing this field are
     a. 0, 1900, 2004, 2005
     b. 1900, 2004
     c. 1899, 1900, 2004, 2005
     d. 1899, 1900, 1901, 2003, 2004, 2005
17) Boundary value testing
     a. Is the same as equivalence partitioning tests
     b. Tests boundary conditions on, below and above the edges of input and output equivalence classes
     c. Tests combinations of input circumstances
     d. Is used in white box testing strategy
18) When testing a grade calculation system, a tester determines that all scores from 90 to 100 will yield a grade of A, but scores below 90 will not. This analysis is known as:
     a) Equivalence partitioning
     b) Boundary value analysis
     c) Decision table
     d) Hybrid analysis
19) Which technique can be used to achieve input and output coverage? It can be applied to human input, input via interfaces to a system, or interface parameters in integration testing.
     a) Error Guessing
     b) Boundary Value Analysis
     c) Decision Table testing
     d) Equivalence partitioning
20) Features to be tested, approach, item pass/fail criteria and test deliverables should be specified in which document?
     a) Test case specification
     b) Test procedure specification
     c) Test plan
     d) Test design specification
21) Which specification-based testing techniques are most closely related to each other?
     a) Decision tables and state transition testing
     b) Equivalence partitioning and state transition testing
     c) Decision tables and boundary value analysis
     d) Equivalence partitioning and boundary value analysis
22) Assume postal rates for 'light letters' are: $0.25 up to 10 grams; $0.35 up to 50 grams; $0.45 up to 75 grams; $0.55 up to 100 grams. Which test inputs (in grams) would be selected using boundary value analysis?
     a) 0, 9, 19, 49, 50, 74, 75, 99, 100
     b) 10, 50, 75, 100, 250, 1000
     c) 0, 1, 10, 11, 50, 51, 75, 76, 100, 101
     d) 25, 26, 35, 36, 45, 46, 55, 56
23) If the temperature falls below 18 degrees, the heating system is switched on. When the temperature reaches 21 degrees, the heating system is switched off. What is the minimum set of test input values to cover all valid equivalence partitions?
     a) 15, 19 and 25 degrees
     b) 17, 18, 20 and 21 degrees
     c) 18, 20 and 22 degrees
     d) 16 and 26 degrees
24) What is a test condition?
     a) An input, expected outcome, precondition and post condition
     b) The steps to be taken to get the system to a given point
     c) Something that can be tested
     d) A specific state of the software, e.g. before a test can be run
25) What is a key characteristic of specification-based testing techniques?
     a) Tests are derived from information about how the software is constructed
     b) Tests are derived from models (formal or informal) that specify the problem to be solved by the software or its components
     c) Tests are derived based on the skills and experience of the tester
     d) Tests are derived from the extent of the coverage of structural elements of the system or components
26) Why are both specification-based and structure-based testing techniques useful?
     a) They find different types of defect.
     b) Using more techniques is always better.
     c) Both find the same types of defect.
     d) Because specifications tend to be unstructured.
27) Find the equivalence class for the following test case: enter a number to test the validity of accepting numbers between 1 and 99.
     a) All numbers < 1
     b) All numbers > 99
     c) Number = 0
     d) All numbers between 1 and 99
28) What is the relationship between equivalence partitioning and boundary value analysis techniques?
     a) Structural testing
     b) Opaque testing
     c) Compatibility testing
     d) All of the above
29) Suggest an alternative for requirement traceability matrix
     a) Test Coverage matrix
     b) Average defect aging
     c) Test Effectiveness
     d) Error discovery rate
30) The following defines the statement of what the tester is expected to accomplish or validate during a testing activity:
     a) Test scope
     b) Test objective
     c) Test environment
     d) None of the above
31) One technique of black box testing is equivalence partitioning. In a program statement that accepts only one choice from among 10 possible choices, numbered 1 through 10, the middle partition would be from _____ to _____
     a) 4 to 6
     b) 0 to 10
     c) 1 to 10
     d) None of the above
32) Test design mainly emphasizes all the following except
     a) Data planning
     b) Test procedures planning
     c) Mapping the requirements and test cases
     d) Data synchronization
33) Deliverables of the test design phase include all the following except
     a) Test data
     b) Test data plan
     c) Test summary report
     d) Test procedure plan
34) Test data planning essentially includes
     a) Network
     b) Operational Model
     c) Boundary value analysis
     d) Test Procedure Planning
35) Test coverage analysis is the process of
     a) Creating additional test cases to increase coverage
     b) Finding areas of the program exercised by the test cases
     c) Determining a quantitative measure of code coverage, which is a direct measure of quality
     d) All of the above
36) Branch coverage is
     a) another name for decision coverage
     b) another name for all-edges coverage
     c) another name for basic path coverage
     d) all the above
37) The following example is a (testing concepts):
      if (condition1 && (condition2 || function1()))
          statement1;
      else
          statement2;
     a) Decision coverage
     b) Condition coverage
     c) Statement coverage
     d) Path Coverage
38) Test cases need to be written for
     a) invalid and unexpected conditions
     b) valid and expected conditions
     c) both a and b
     d) none of these
39) Path coverage includes
     a) statement coverage
     b) condition coverage
     c) decision coverage
     d) none of these
40) The benefits of glass box testing are
     a) Focused testing, testing coverage, control flow
     b) Data integrity, internal boundaries, algorithm-specific testing
     c) Both a and b
     d) Either a or b
41) Find the invalid equivalence class for the following test case: draw a line up to the length of 4 inches.
     a) Line with 1 dot-width
     b) Curve
     c) Line with 4 inches
     d) Line with 1 inch
42) Error seeding
     a) Evaluates the thoroughness with which a computer program is tested by purposely inserting errors into a supposedly correct program
     b) Errors inserted by the developers intentionally to make the system malfunction
     c) Is for identifying existing errors
     d) Both a and b
43) Which of the following best describes the difference between clear box and opaque box?
      1. Clear box is structural testing, opaque box is ad-hoc testing
      2. Clear box is done by the tester, and opaque box is done by the developer
      3. Opaque box is functional testing, clear box is exploratory testing
     a) 1
     b) 1 and 3
     c) 2
     d) 3
44) What is the concept of introducing a small change to the program and having the effects of that change show up in some test?
     a) Desk checking
     b) Debugging a program
     c) A mutation error
     d) Introducing mutation
45) How many test cases are necessary to cover all the possible sequences of statements (paths) for the following program fragment? Assume that the two conditions are independent of each other.
      …………
      if (Condition 1)
      then statement 1
      else statement 2
      fi
      if (Condition 2)
      then statement 3
      fi
      …………
     a. 1 test case
     b. 3 Test Cases
     c. 4 Test Cases
     d. Not achievable
46) Given the following code, which is true about the minimum number of test cases required for full statement and branch coverage?
      Read P
      Read Q
      IF P+Q > 100 THEN
          Print "Large"
      ENDIF
      IF P > 50 THEN
          Print "P Large"
      ENDIF
     a) 1 test for statement coverage, 3 for branch coverage
     b) 1 test for statement coverage, 2 for branch coverage
     c) 1 test for statement coverage, 1 for branch coverage
     d) 2 tests for statement coverage, 3 for branch coverage
     e) 2 tests for statement coverage, 2 for branch coverage
47) Given the following:
      Switch PC on
      Start "outlook"
      IF outlook appears THEN
      Send an email
      Close outlook
     a) 1 test for statement coverage, 1 for branch coverage
     b) 1 test for statement coverage, 2 for branch coverage
     c) 1 test for statement coverage, 3 for branch coverage
     d) 2 tests for statement coverage, 2 for branch coverage
     e) 2 tests for statement coverage, 3 for branch coverage
48) If a candidate is given an exam of 40 questions, should get 25 marks to pass (61%) and should get 80% for distinction, what is the equivalence class?
     A. 23, 24, 25
     B. 0, 12, 25
     C. 30, 36, 39
     D. 32, 37, 40
49) Consider the following statements:
      i. 100% statement coverage guarantees 100% branch coverage.
      ii. 100% branch coverage guarantees 100% statement coverage.
      iii. 100% branch coverage guarantees 100% decision coverage.
      iv. 100% decision coverage guarantees 100% branch coverage.
      v. 100% statement coverage guarantees 100% decision coverage.
     a) ii is True; i, iii, iv & v are False
     b) i & v are True; ii, iii & iv are False
     c) ii & iii are True; i, iv & v are False
     d) ii, iii & iv are True; i & v are False
50) Which statement about expected outcomes is FALSE?
     a) Expected outcomes are defined by the software's behavior
     b) Expected outcomes are derived from a specification, not from the code
     c) Expected outcomes should be predicted before a test is run
     d) Expected outcomes may include timing constraints such as response times
51) Which of the following is not a white box testing technique?
     a) Random testing
     b) Data Flow testing
     c) Statement testing
     d) Syntax testing
52) If the pseudo code below were a programming language, how many tests are required to achieve 100% statement coverage?
      1. If x=3 then
      2. Display_messageX;
      3. If y=2 then
      4. Display_messageY;
      5. Else
      6. Display_messageZ;
     a. 1   b. 2   c. 3   d. 4
53) Using the same code example as question 52, how many tests are required to achieve 100% branch/decision coverage?
     a. 1   b. 2   c. 3   d. 4
54) Which of the following techniques is NOT a black box technique?
     a) Equivalence partitioning
     b) State transition testing
     c) LCSAJ
     d) Syntax testing
55) Given the following code, which is true?
      IF A > B THEN
          C = A - B
      ELSE
          C = A + B
      ENDIF
      Read D
      IF C = D THEN
          Print "Error"
      ENDIF
     a) 1 test for statement coverage, 1 for branch coverage
     b) 2 tests for statement coverage, 2 for branch coverage
     c) 2 tests for statement coverage, 3 for branch coverage
     d) 3 tests for statement coverage, 3 for branch coverage
     e) 3 tests for statement coverage, 2 for branch coverage
56) Consider the following:
      Pick up and read the newspaper
      Look at what is on television
      If there is a program that you are interested in watching then
          switch the television on and watch the program
      Otherwise
          continue reading the newspaper
      If there is a crossword in the newspaper then
          try and complete the crossword
     a) SC = 1 and DC = 3
     b) SC = 1 and DC = 2
     c) SC = 2 and DC = 2
     d) SC = 2 and DC = 3
57) The specification: an integer field shall contain values from and including 1 to and including 12 (number of the month). Which equivalence class partitioning is correct?
     a) Less than 1, 1 through 12, larger than 12
     b) Less than 1, 1 through 11, larger than 12
     c) Less than 0, 1 through 12, larger than 12
     d) Less than 1, 1 through 11, and above
58) Analyze the following highly simplified procedure:
      Ask: "What type of ticket do you require, single or return?"
      IF the customer wants 'return'
          Ask: "What rate, Standard or Cheap-day?"
          IF the customer replies 'Cheap-day'
              Say: "That will be £11:20"
          ELSE
              Say: "That will be £19:50"
          ENDIF
      ELSE
          Say: "That will be £9:75"
      ENDIF
      Now decide the minimum number of tests that are needed to ensure that all the questions have been asked, all combinations have occurred and all replies given.
     a) 3
     b) 4
     c) 5
     d) 6
Chapter 5: Test organization and independence
International Software Testing Qualifications Board (ISTQB), test management, test reports.
Testers, test leader, test manager, test approach, defect density, failure rate, configuration management, version control, risk, product risk, project risk, risk-based testing, incident logging, incident management.
Please refer to the earlier version of Software testing theory part 1.
The effectiveness of finding defects by testing and reviews can be improved by using independent testers. The options for testing teams available are:
 No independent testers; developers test their own code.
 Independent testers within the development teams.
 Independent test team or group within the organization, reporting to project management or executive management.
 Independent testers from the business organization or user community.
 Independent test specialists for specific test targets such as usability testers, security testers or certification testers (who certify a software product against standards and regulations).
 Independent testers outsourced or external to the organization.
The benefits of independence include:
 Independent testers see other and different defects, and are unbiased.
 An independent tester can verify assumptions people made during specification and implementation of the system.
Drawbacks include:
 Isolation from the development team (if treated as totally independent).
 Independent testers may be the bottleneck as the last checkpoint.
 Developers may lose a sense of responsibility for quality.
b) Tasks of the test leader and tester
Test leader tasks may include:
 Coordinate the test strategy and plan with project managers and others.
 Write or review a test strategy for the project, and a test policy for the organization.
 Contribute the testing perspective to other project activities, such as integration planning.
 Plan the tests – considering the context and understanding the test objectives and risks
   – including selecting test approaches; estimating the time, effort and cost of testing; acquiring resources; defining test levels and cycles; and planning incident management.
 Initiate the specification, preparation, implementation and execution of tests, monitor the test results and check the exit criteria.
 Adapt planning based on test results and progress (sometimes documented in status reports) and take any action necessary to compensate for problems.
 Set up adequate configuration management of testware for traceability.
 Introduce suitable metrics for measuring test progress and evaluating the quality of the testing and the product.
 Decide what should be automated, to what degree, and how.
 Select tools to support testing and organize any training in tool use for testers.
 Decide about the implementation of the test environment.
 Write test summary reports based on the information gathered during testing.
Tester tasks may include:
 Review and contribute to test plans.
 Analyze, review and assess user requirements, specifications and models for testability.
 Create test specifications.
 Set up the test environment (often coordinating with system administration and network management).
 Prepare and acquire test data.
 Implement tests on all test levels, execute and log the tests, evaluate the results and document the deviations from expected results.
 Use test administration or management tools and test monitoring tools as required.
 Automate tests (may be supported by a developer or a test automation expert).
 Measure performance of components and systems (if applicable).
 Review tests developed by others.
Note: People who work on test analysis, test design, specific test types or test automation may be specialists in these roles. Depending on the test level and the risks related to the product and the project, different people may take over the role of tester, keeping some degree of independence. Typically, testers at the component and integration level would be developers, testers at the acceptance test level would be business experts and users, and testers for operational acceptance testing would be operators.
c) Defining skills test staff need
Nowadays a testing professional must have 'application' or 'business domain' knowledge and 'technology' expertise apart from 'testing' skills.
2) Test planning and estimation
a) Test planning activities
 Determining the scope and risks, and identifying the objectives of testing.
 Defining the overall approach of testing (the test strategy), including the definition of the test levels and entry and exit criteria.
 Integrating and coordinating the testing activities into the software life cycle activities: acquisition, supply, development, operation and maintenance.
 Making decisions about what to test, what roles will perform the test activities, how the test activities should be done, and how the test results will be evaluated.
 Scheduling test analysis and design activities.
 Scheduling test implementation, execution and evaluation.
 Assigning resources for the different activities defined.
 Defining the amount, level of detail, structure and templates for the test documentation.
 Selecting metrics for monitoring and controlling test preparation and execution, defect resolution and risk issues.
 Setting the level of detail for test procedures in order to provide enough information to support reproducible test preparation and execution.
b) Exit criteria
The purpose of exit criteria is to define when to stop testing, such as at the end of a test level or when a set of tests has achieved a specific goal. Typically, exit criteria may consist of:
 Thoroughness measures, such as coverage of code, functionality or risk.
 Estimates of defect density or reliability measures.
 Cost.
 Residual risks, such as defects not fixed or lack of test coverage in certain areas.
 Schedules, such as those based on time to market.
c) Test estimation
Two approaches for the estimation of test effort are covered in this syllabus:
 The metrics-based approach: estimating the testing effort based on metrics of former or similar projects, or based on typical values.
 The expert-based approach: estimating the tasks by the owner of these tasks or by experts.
Once the test effort is estimated, resources can be identified and a schedule can be drawn up. The testing effort may depend on a number of factors, including:
 Characteristics of the product: the quality of the specification and other information used for test models (i.e. the test basis), the size of the product, the complexity of the problem domain, the requirements for reliability and security, and the requirements for documentation.
 Characteristics of the development process: the stability of the organization, the tools used, the test process, the skills of the people involved, and time pressure.
 The outcome of testing: the number of defects and the amount of rework required.
d) Test approaches (test strategies)
One way to classify test approaches or strategies is based on the point in time at which the bulk of the test design work is begun:
 Preventative approaches, where tests are designed as early as possible.
 Reactive approaches, where test design comes after the software or system has been produced.
Typical approaches or strategies include:
 Analytical approaches, such as risk-based testing where testing is directed to the areas of greatest risk.
 Model-based approaches, such as stochastic testing using statistical information about failure rates (such as reliability growth models) or usage (such as operational profiles).
 Methodical approaches, such as failure-based (including error guessing and fault attacks), experience-based, checklist-based, and quality-characteristic-based.
 Process- or standard-compliant approaches, such as those specified by industry-specific standards or the various agile methodologies.
 Dynamic and heuristic approaches, such as exploratory testing, where testing is more reactive to events than pre-planned, and where execution and evaluation are concurrent tasks.
 Consultative approaches, such as those where test coverage is driven primarily by the advice and guidance of technology and/or business domain experts outside the test team.
 Regression-averse approaches, such as those that include reuse of existing test material, extensive automation of functional regression tests, and standard test suites.
Different approaches may be combined, for example, a risk-based dynamic approach.
The selection of a test approach should consider the context, including:
 Risk of failure of the project, hazards to the product and risks of product failure to humans, the environment and the company.
 Skills and experience of the people in the proposed techniques, tools and methods.
 The objective of the testing endeavour and the mission of the testing team.
 Regulatory aspects, such as external and internal regulations for the development
   process.
 The nature of the product and the business.
3) Test progress monitoring and control
a) Test progress monitoring
Test progress monitoring metrics may include:
 Percentage of work done in test case preparation (or percentage of planned test cases prepared).
 Percentage of work done in test environment preparation.
 Test case execution (e.g. number of test cases run/not run, and test cases passed/failed).
 Defect information (e.g. defect density, defects found and fixed, failure rate, and retest results).
 Test coverage of requirements, risks or code.
 Subjective confidence of testers in the product.
 Dates of test milestones.
 Testing costs, including the cost compared to the benefit of finding the next defect or running the next test.
b) Test reporting
Test reporting summarizes information about the testing endeavour, including:
 What happened during a period of testing, such as dates when exit criteria were met.
 Analyzed information and metrics to support recommendations and decisions about future actions, such as an assessment of defects remaining, the economic benefit of continued testing, outstanding risks, and the level of confidence in the tested software.
Metrics should be collected during and at the end of a test level in order to assess:
 The adequacy of the test objectives for that test level.
 The adequacy of the test approaches taken.
 The effectiveness of the testing with respect to its objectives.
c) Test control
Test control describes any guiding or corrective actions taken as a result of information and metrics gathered and reported. Actions may cover any test activity and may affect any other software life cycle activity or task. Examples of test control actions are:
 Making decisions based on information from test monitoring.
 Re-prioritizing tests when an identified risk occurs (e.g. software delivered late).
 Changing the test schedule due to the availability of a test environment.
 Setting an entry criterion requiring fixes to have been retested (confirmation tested) by a developer before accepting them into a build.
4) Configuration management
The purpose of configuration management is to establish and maintain the integrity of the products (components, data and documentation) of the software or system through the project and product life cycle.
For testing, configuration management may involve ensuring that:
 All items of testware are identified, version controlled, tracked for changes, related to each other and related to development items (test objects), so that traceability can be maintained throughout the test process.
 All identified documents and software items are referenced unambiguously in test documentation.
For the tester, configuration management helps to uniquely identify (and to reproduce) the tested item, test documents, the tests and the test harness. During test planning, the configuration management procedures and infrastructure (tools) should be chosen, documented and implemented.
5) Risk and testing
a) Project risks
Project risks are the risks that surround the project's capability to deliver its objectives, such as:
Organizational factors:
 skill and staff shortages;
 personnel and training issues;
 political issues, such as
   o problems with testers communicating their needs and test results;
   o failure to follow up on information found in testing and reviews (e.g. not improving development and testing practices);
 improper attitude toward or expectations of testing (e.g. not appreciating the value of finding defects during testing).
Technical issues:
 problems in defining the right requirements;
 the extent to which requirements can be met given existing constraints;
 the quality of the design, code and tests.
Supplier issues:
 failure of a third party;
 contractual issues.
b) Product risks
Potential failure areas (adverse future events or hazards) in the software or system are known as product risks, as they are a risk to the quality of the product, such as:
 Failure-prone software delivered.
 The potential that the software/hardware could cause harm to an individual or company.
 Poor software characteristics (e.g. functionality, reliability, usability and performance).
 Software that does not perform its intended functions.
Risks are used to decide where to start testing and where to test more; testing is used to reduce the risk of an adverse effect occurring, or to reduce the impact of an adverse effect.
Product risks are a special type of risk to the success of a project. Testing as a risk-control activity provides feedback about the residual risk by measuring the effectiveness of critical defect removal and of contingency plans.
A risk-based approach to testing provides proactive opportunities to reduce the levels of product risk, starting in the initial stages of a project. It involves the identification of product risks and their use in guiding test planning and control, specification, preparation and execution of tests. In a risk-based approach the risks identified may be used to:
 Determine the test techniques to be employed.
 Determine the extent of testing to be carried out.
 Prioritize testing in an attempt to find the critical defects as early as possible.
 Determine whether any non-testing activities could be employed to reduce risk (e.g. providing training to inexperienced designers).
Risk-based testing draws on the collective knowledge and insight of the project stakeholders to determine the risks and the levels of testing required to address those risks.
To ensure that the chance of a product failure is minimized, risk management activities provide a disciplined approach to:
 Assess (and reassess on a regular basis) what can go wrong (risks).
 Determine what risks are important to deal with.
 Implement actions to deal with those risks.
In addition, testing may support the identification of new risks, may help to determine what risks should be reduced, and may lower uncertainty about risks.
6) Incident management
Since one of the objectives of testing is to find defects, the discrepancies between actual and expected outcomes need to be logged as incidents. Incidents should be tracked from discovery and classification to correction and confirmation of the solution. In order to manage all incidents to completion, an organization should establish a process and rules for classification.
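As a purely illustrative aside, incident tracking of the kind described here can be thought of as a record with a constrained status life cycle; the sketch below assumes hypothetical field names, statuses and allowed transitions loosely based on the report details listed in this section, and is not a prescribed scheme.

    # Minimal sketch of incident logging and status tracking, assuming a
    # simplified life cycle; all field names and transitions are illustrative.
    from dataclasses import dataclass, field
    from enum import Enum

    class Status(Enum):
        OPEN = "open"
        DEFERRED = "deferred"
        DUPLICATE = "duplicate"
        FIXED_AWAITING_RETEST = "fixed awaiting retest"
        CLOSED = "closed"

    # Allowed status transitions; anything not listed here is rejected.
    ALLOWED = {
        Status.OPEN: {Status.DEFERRED, Status.DUPLICATE, Status.FIXED_AWAITING_RETEST},
        Status.DEFERRED: {Status.OPEN},
        Status.FIXED_AWAITING_RETEST: {Status.CLOSED, Status.OPEN},  # reopen if retest fails
    }

    @dataclass
    class Incident:
        identifier: str
        summary: str
        severity: str
        priority: str
        test_item: str                   # configuration item and environment
        expected_result: str
        actual_result: str
        status: Status = Status.OPEN
        history: list = field(default_factory=list)  # change history of the record

        def move_to(self, new_status: Status) -> None:
            # Enforce the classification rules before recording the change.
            if new_status not in ALLOWED.get(self.status, set()):
                raise ValueError(f"illegal transition {self.status} -> {new_status}")
            self.history.append((self.status, new_status))
            self.status = new_status

    # Example: an incident is fixed by a developer and confirmed by retest.
    incident = Incident("INC-001", "Total not updated after discount", "major", "high",
                        "billing module v1.2 / test environment A",
                        expected_result="total = 90", actual_result="total = 100")
    incident.move_to(Status.FIXED_AWAITING_RETEST)
    incident.move_to(Status.CLOSED)
    print(incident.status.value, len(incident.history))

In practice such tracking is handled by a defect management tool; the point of the sketch is only that an incident carries the reporting details and that status changes follow defined rules.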
Incidents may be raised during development, review, testing or use of a software product. They may be raised for issues in code or the working system, or in any type of documentation including requirements, development documents, test documents, and user information such as "Help" or installation guides.
Incident reports have the following objectives:
 Provide developers and other parties with feedback about the problem to enable identification, isolation and correction as necessary.
 Provide test leaders a means of tracking the quality of the system under test and the progress of the testing.
 Provide ideas for test process improvement.
Details of the incident report may include:
 Date of issue, issuing organization, and author.
 Expected and actual results.
 Identification of the test item (configuration item) and environment.
 Software or system life cycle process in which the incident was observed.
 Description of the incident to enable reproduction and resolution, including logs, database dumps or screenshots.
 Scope or degree of impact on stakeholder(s) interests.
 Severity of the impact on the system.
 Urgency/priority to fix.
 Status of the incident (e.g. open, deferred, duplicate, waiting to be fixed, fixed awaiting retest, closed).
 Conclusions, recommendations and approvals.
 Global issues, such as other areas that may be affected by a change resulting from the incident.
 Change history, such as the sequence of actions taken by project team members with respect to the incident to isolate, repair, and confirm it as fixed.
 References, including the identity of the test case specification that revealed the problem.
Questions
1) The following list contains risks that have been identified for a software product to be developed. Which of these risks is an example of a product risk?
     a) Not enough qualified testers to complete the planned tests
     b) Software delivery is behind schedule
     c) Threat to a patient's life
     d) 3rd party supplier does not supply as stipulated
2) Which set of metrics can be used for monitoring of the test execution?
     a) Number of detected defects, testing cost
     b) Number of residual defects in the test object
     c) Percentage of completed tasks in the preparation of the test environment; test cases prepared
     d) Number of test cases run / not run; test cases passed / failed
3) A defect management system shall keep track of the status of every defect registered and enforce the rules about changing these states. If your task is to test the status tracking, which method would be best?
     a) Logic-based testing
     b) Use-case-based testing
     c) State transition testing
     d) Systematic testing according to the V-model
4) Why can a tester be dependent on configuration management?
     a) Because configuration management assures that we know the exact version of the testware and the test object
     b) Because test execution is not allowed to proceed without the consent of the change control board
     c) Because changes in the test object are always subject to configuration management
     d) Because configuration management assures the right configuration of the test tools
5) What test items should be put under configuration management?
     a) The test object, the test material and the test environment
     b) The problem reports and the test material
     c) Only the test objects; the test cases need to be adapted during agile testing
     d) The test object and the test material
6) Which of the following can be a root cause of a bug in a software product?
      (I) The project had incomplete procedures for configuration management.
      (II) The time schedule to develop a certain component was cut.
      (III) The specification was unclear.
      (IV) Use of the code standard was not followed up.
      (V) The testers were not certified.
     a) (I) and (II) are correct
     b) (I) through (IV) are correct
     c) (III) through (V) are correct
     d) (I), (II) and (IV) are correct
7) Which of the following is most often considered as a component interface bug?
     a) For two components exchanging data, one component used metric units, the other one used British units
     b) The system is difficult to use due to a too complicated terminal input structure
     c) The messages for user input errors are misleading and not helpful for understanding the input error cause
     d) Under high load, the system does not provide enough open ports to connect to
8) Which of the following project inputs influence testing?
      (I) Contractual requirements
      (II) Legal requirements
      (III) Industry standards
      (IV) Application risk
      (V) Project size
     a) (I) through (III) are correct
     b) All alternatives are correct
     c) (II) and (V) are correct
     d) (I), (III) and (V) are correct
9) What is the purpose of test exit criteria in the test plan?
     a) To specify when to stop the testing activity
     b) To set the criteria used in generating test inputs
     c) To ensure that the test case specification is complete
     d) To know when a specific test has finished its execution
10) Which of the following items need not be given in an incident report?
     a) The version number of the test object
     b) Test data and used environment
     c) Identification of the test case that failed
     d) The location and instructions on how to correct the fault
11) Why is it necessary to define a test strategy?
     a) As there are many different ways to test software, thought must be given to decide what will be the most effective way to test the project on hand
     b) Starting testing without prior planning leads to a chaotic and inefficient test project
     c) A strategy is needed to inform project management how the test team will schedule the test cycles
     d) Software failure may cause loss of money, time, business reputation, and in extreme cases injury and death; it is therefore critical to have a proper test strategy in place
12) The IEEE 829 test plan documentation standard contains all of the following except:
     a) test items
     b) test deliverables
     c) test tasks
     d) test environment
     e) test specification
13) Which of the following is NOT part of configuration management:
     a) status accounting of configuration items
     b) auditing conformance to ISO9001
     c) identification of test versions
     d) record of changes to documentation over time
14) What is the purpose of test completion criteria in a test plan:
     a) to know when a specific test has finished its execution
     b) to ensure that the test case specification is complete
     c) to know when test planning is complete
     d) to plan when to stop testing
15) Test managers should not:
     a) report on deviations from the project plan
     b) sign the system off for release
     c) re-allocate resource to meet original plans
     d) raise incidents on faults that they have found
     e) provide information for risk analysis and quality improvement
16) What information need not be included in a test incident report:
     a) how to fix the fault
     b) how to reproduce the fault
     c) test environment details
     d) the actual and expected outcomes
17) Which of the following is NOT included in the Test Plan document of the Test Documentation Standard:
     a) Test items (i.e. software versions)
     b) What is not to be tested
     c) Test environments
     d) Quality plans
     e) Schedules and deadlines
18) Which of the following is NOT true of incidents?
     a) Incident resolution is the responsibility of the author of the software under test.
     b) Incidents may be raised against user requirements.
     c) Incidents require investigation and/or correction.
     d) Incidents are raised when expected and actual results differ.
19) Which of the following would NOT normally form part of a test plan?
     a) Features to be tested
     b) Incident reports
     c) Risks
     d) Schedule
20) A configuration management system would NOT normally provide:
     a) linkage of customer requirements to version numbers.
     b) facilities to compare test results with expected results.
     c) the precise differences in versions of software component source code.
     d) restricted access to the source code library.
21) Testware (test cases, test datasets)
     a) needs configuration management just like requirements, design and code
     b) should be newly constructed for each new version of the software
     c) is needed only until the software is released into production or use
     d) does not need to be documented and commented, as it does not form part of the released software system
22) 'Defect density' is calculated in terms of
     a) the number of defects identified in a component or system divided by the size of the component or the system
     b) the number of defects found by a test phase divided by the number found by that test phase and any other means afterwards
     c) the number of defects identified in the component or system divided by the number of defects found by a test phase
     d) the number of defects found by a test phase divided by the size of the system
23) An expert-based test estimation is also known as
     a) Narrow band Delphi
     b) Wide band Delphi
     c) Bespoke Delphi
     d) Robust Delphi
24) During the testing of a module, tester 'X' finds a bug and assigns it to a developer. But the developer rejects it, saying that it is not a bug. What should 'X' do?
     a) Report the issue to the test manager and try to settle it with the developer
     b) Retest the module and confirm the bug
     c) Assign the same bug to another developer
     d) Send the detailed information of the bug encountered and check the reproducibility
25) The primary goal of comparing a user manual with the actual behavior of the running program during system testing is to