This document covers the development of a test strategy for software projects: test levels, roles and responsibilities, test types, test methodologies, the test estimation process, and risk analysis and management. Key points include defining the scope, risks, test priorities, and approach in the strategy. It also discusses test estimation techniques such as use case points, function points, and test case points for estimating the testing effort.
2. Test Strategy
A strategy for software testing integrates software test case design
methods into a well-planned series of steps that result in the successful
construction of software.
The strategy provides a road map that describes the steps to be
conducted as part of testing, when these steps are planned and then
undertaken, and how much effort, time, and resources will be required.
Testing is a set of activities that can be planned in advance and
conducted systematically. For this reason, a template for software testing
—a set of steps into which we can place specific test case design
techniques and testing methods—should be defined for the software
process.
3. Generic Characteristics of a Test
Strategy
Testing begins at the component level and works "outward" toward the
integration of the entire computer-based system.
Different testing techniques are appropriate at different points in time.
Testing is conducted by the developer of the software and an
independent test group.
Testing and debugging are different activities, but debugging must be
accommodated in any testing strategy.
4. Preparing a Test Strategy
Any test strategy should consist of the following:
Scope
Out of Scope
Test Levels
Roles and Responsibilities
Testing Tools
Risks and Mitigation
Regression Test Approach
Test Groups
5. Preparing a Test Strategy
Milestones
Test Priorities
Escalation Mechanism
Test Summary Reports
Status Reporting Mechanism
Test Records
6. Some Best Practices for Developing
Test Strategies
• Specify product requirements in a quantifiable manner long before testing commences.
• Understand the users of the software and develop a profile for each user category.
• State testing objectives explicitly.
• Develop a testing plan that emphasizes “rapid cycle testing.”
• Use effective formal technical reviews as a filter prior to testing.
• Develop a continuous improvement approach for the testing process.
• A good test strategy is specific, practical, and justified.
• The purpose of a test strategy is to clarify the major tasks and challenges of the test project.
7. Web Testing Check List
• User Interface Testing
A. Easy to use.
B. Instructions are simple and clear.
• Site map or navigation bar
A. Is the site map correct?
B. Does each link on the map actually exist?
C. Is the navigation bar present on every screen?
• Site Content
8. Contd..
• Application Specific Functionality
Correctness of the functionality of the website.
No internal or external broken links.
User-submitted information through forms needs to work properly.
• Cookies
If the system uses cookies, make sure the cookies work. If cookies
store login information, make sure the information is encrypted in the
cookie file. If cookies are used for statistics, make sure those cookies
are encrypted too; otherwise, people can edit their cookie information.
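To make the "no broken links" item concrete, here is a minimal link-checker sketch in Python. It is not part of the original checklist: the start URL is a placeholder, a real checker would also distinguish internal from external links, and some servers reject HEAD requests.

# A minimal link-checker sketch for the "no broken links" item.
# The start URL is a hypothetical placeholder; only the standard library is used.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

class LinkExtractor(HTMLParser):
    """Collects the href value of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def check_links(page_url):
    """Fetch a page, then report the HTTP status of every link on it."""
    html = urlopen(Request(page_url)).read().decode("utf-8", errors="replace")
    parser = LinkExtractor()
    parser.feed(html)
    for href in parser.links:
        absolute = urljoin(page_url, href)  # resolve relative links
        try:
            status = urlopen(Request(absolute, method="HEAD")).status
            print(f"OK   {status} {absolute}")
        except (HTTPError, URLError) as err:
            print(f"FAIL {err} {absolute}")

if __name__ == "__main__":
    check_links("https://example.com")  # hypothetical site under test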
10. Strategy for Test Automation
What can be automated and what cannot be automated.
Identifying the test cases to automate.
Test Automation Architecture
Defining a framework for automation.
Test Automation Process.
11. Strategy for Performance Testing
• Choosing the tool.
• Performance Test Objective.
• Performance test scenarios.
• Defining the SLA.
• Defining Performance Counters.
• Performance Test Architecture.
• Performance Test Process.
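As an illustration of checking a response-time SLA (an invented example, not from the slides: the endpoint, sample size, and latency budget below are all assumptions):

# A minimal SLA-check sketch: time N requests against a hypothetical
# latency budget. URL and thresholds are placeholder assumptions.
import time
from urllib.request import urlopen

TARGET_URL = "https://example.com/api/health"  # hypothetical endpoint
SLA_SECONDS = 0.5     # assumed SLA: 95% of requests under 500 ms
SAMPLES = 20

def measure_latency(url, samples):
    """Return a list of wall-clock response times for `samples` requests."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        urlopen(url).read()
        timings.append(time.perf_counter() - start)
    return timings

if __name__ == "__main__":
    timings = sorted(measure_latency(TARGET_URL, SAMPLES))
    p95 = timings[int(0.95 * (len(timings) - 1))]  # simple 95th percentile
    print(f"p95 latency: {p95:.3f}s (SLA: {SLA_SECONDS}s)")
    print("SLA met" if p95 <= SLA_SECONDS else "SLA violated")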
12. Test Strategy for a Maintenance Project
• Defining the features to test
• Analyzing the scope of the specific maintenance project (e.g., upgrade, minor implementation, support packs)
• Identifying missing test scenarios and documentation
• Performing the priority analysis from an IT perspective rather than from the business perspective as in the previous phase; the IT point of view should reflect an analysis of the impact of the changes introduced to the system
13. Items for a Maintenance Project
• A description of the test assignment and scope
• A high-level description of the features to test and not to test
• A list of item pass/fail criteria
• A list of the suspension and resumption requirements
• A strategy to load test data into the test environment
15. Test Approach
• The test approach describes the types of tests
performed and the sequence of tests as executed
in the test lab.
• In the test approach, the following three items
have to be clearly specified:
Test Types
Test Sequence
Test Cycle
17. BVT (Build Verification Testing)
• The following goals are adopted for build verification testing:
• Clarity: The steps should clearly describe the options that must be
selected to enable the service.
• Completeness: All configuration options required to configure the
service are described.
• Presentation: The process for configuring the service is linearly
presented and represents the process that must be followed when
initially deploying the solution.
• Consistency: The configuration options presented in the solution
should always use the tools that are best suited for configuring the
service.
18. Functional Tests
• Atomicity: The test verifies the individual units of functionality such as
the configuration of a particular functional option for a service.
• Integration: The functional test verifies the overall function of the
services considering all the options that were configured in the solution.
• Completeness: The functional test cases ensure that the technical
goals of the solution are achieved.
19. Technical Writing Tests
• Prerequisites: Documented hardware and software
prerequisites should be verified.
• References: References between documents or
chapters of the solution should be validated.
• Links: Links to Web pages within the solution or on other
approved sites should be verified as valid.
23. Test Life Cycle
• Requirements Analysis: Testing should begin in the requirements phase of the software development life cycle (SDLC).
• Design Analysis: During the design phase, testers work with developers to determine what aspects of a design are testable and under what parameters those tests work.
• Test Planning: Test strategy, test plan(s), test bed creation.
• Test Development: Test procedures, test scenarios, test cases, and test scripts to use in testing the software.
• Test Execution: Testers execute the software based on the plans and tests and report any errors found to the development team.
• Test Reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.
• Retesting the Defects
24. Test Methodologies
• When planning the methodology, the following things have
to be considered:
• Where will the testing take place?
• Who will perform the tests?
• How will you communicate with and involve participants?
• How will you schedule the testing?
• How will you manage application problems?
25. Contd..
• Do you have staff available for testing?
• Does your staff have the appropriate level of expertise?
• What are the internal costs compared to the outsourcing costs?
• What is your time frame? Can the testing get done faster if you
outsource it?
• What are your security requirements? Would you need to provide
confidential data to an external organization?
26. Task List for Application Testing
• Inventory the applications used for business tasks.
• Develop a system for prioritizing business tasks.
• Prioritize the applications by how critical they are to running your business.
• Write a test plan, including test methodology, lab and test resource requirements, and schedule.
• Develop a test tracking system for capturing and reporting test results.
• Promote the testing methodology.
• Schedule test events.
• Test applications and record results.
• Report on testing progress.
27. Automation Test Methodology
• Functional Decomposition:
The fundamental areas include:
Navigation
Availability
Specific Business Function
28. A Hierarchical Architecture Model
• Start-Up Script
• Main Script
  - Module
  - Use Case
• User-Defined Functions
  - Utility Functions
• Reusable Actions
  - Business Functions
• Reports
  - Log Reports
  - Status Reports
• Recovery Scenarios
• Test Batches
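A rough sketch of how this hierarchy might be wired together (an illustration, not code from the slides; every name below is invented):

# A minimal sketch of the hierarchical automation model: a start-up
# script drives per-module use cases, which call reusable business
# functions; results flow into a log report.
from dataclasses import dataclass, field
from typing import Callable, List

# --- Reusable actions / business functions (lowest layer) ---------------
def login(user: str) -> bool:
    """Hypothetical reusable business function."""
    return bool(user)  # placeholder logic

# --- Use cases grouped by module -----------------------------------------
@dataclass
class UseCase:
    name: str
    steps: Callable[[], bool]  # returns pass/fail

@dataclass
class Module:
    name: str
    use_cases: List[UseCase] = field(default_factory=list)

# --- Reports --------------------------------------------------------------
def log_report(results: dict) -> None:
    for case, passed in results.items():
        print(f"{'PASS' if passed else 'FAIL'}: {case}")

# --- Start-up script (top layer): runs a batch of modules -----------------
def startup(batch: List[Module]) -> None:
    results = {}
    for module in batch:
        for case in module.use_cases:
            try:
                results[f"{module.name}/{case.name}"] = case.steps()
            except Exception:  # crude recovery scenario: log the failure and move on
                results[f"{module.name}/{case.name}"] = False
    log_report(results)

if __name__ == "__main__":
    auth = Module("Auth", [UseCase("valid login", lambda: login("alice"))])
    startup([auth])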
30. Test Estimation Process
• Based on the risk and criticality associated with
the AUT, the project team should establish a
coverage goal during the test planning.
• Test scope is an important factor to be considered.
• The objective of test estimation is to ensure that the
estimation process covers every aspect of the application.
• Estimation of effort is the most critical part of the process.
31. Test Effort Estimation - Approaches
• Experiences in similar/previous projects
• Historical data
• Predefined budget
• Intuition of the experienced tester
• Extrapolation
• Testing best practice
34. Use Case Point
• Use Case Point Analysis is an estimation method
for new technology projects.
• Widely Used
• Can be used for estimation from the requirement
phase itself
• The complexity of use cases is considered
• A quantified weighting factor is used for
adjustments
• Independent of technology being used
35. Contd..
• Ensures that all requirements are met.
• The use case approach captures requirements
from the perspective of how the user will actually
use the system.
36. Steps For Use Case Point Analysis
• Define use cases
• Count the unadjusted use case weight factor
• Count the unadjusted actor weight factor
• Calculate the unadjusted use case points
• Calculate the technical complexity factor
• Calculate the environmental complexity factor
• Calculate use case points
• Convert use case points into person effort
37. Case Study.
A case study on use case point estimation. All the
details and formulae will be given, and a sample
estimation will be shown before the case study
(see the sketch below).
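Since the slides defer the formulae to the case study, here is a minimal sketch of the standard use case point calculation using Karner's published constants; the input counts and factor sums below are invented sample data, not figures from the case study:

# A minimal use case point (UCP) sketch using Karner's standard constants.
# The input counts below are invented sample data, not from the case study.

# Unadjusted use case weight: use cases classified as simple/average/complex.
UUCW = 3 * 5 + 4 * 10 + 2 * 15   # 3 simple(5), 4 average(10), 2 complex(15) = 85
# Unadjusted actor weight: actors classified as simple/average/complex.
UAW = 1 * 1 + 2 * 2 + 1 * 3      # 1 simple(1), 2 average(2), 1 complex(3) = 8
UUCP = UUCW + UAW                # unadjusted use case points = 93

TFACTOR = 30                     # sum of the 13 weighted technical factors (assumed)
EFACTOR = 17                     # sum of the 8 weighted environmental factors (assumed)
TCF = 0.6 + 0.01 * TFACTOR       # technical complexity factor = 0.90
ECF = 1.4 - 0.03 * EFACTOR       # environmental complexity factor = 0.89

UCP = UUCP * TCF * ECF           # adjusted use case points
HOURS_PER_UCP = 20               # commonly cited productivity figure

print(f"UCP = {UCP:.1f}, effort = {UCP * HOURS_PER_UCP:.0f} person-hours")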
38. Function Point Analysis
What is Function Point Analysis (FPA)?
• It is designed to estimate and measure the time, and
thereby the cost, of developing new software applications
and maintaining existing software applications.
• It is also useful in comparing and highlighting
opportunities for productivity improvements in software
development.
• It was developed by A.J. Albrecht of the IBM Corporation
in 1979.
• The other main approach used for measuring the size,
and therefore the time required, of a software project is
lines of code (LOC), which has a number of inherent
problems.
39. Function Point Analysis
How is Function Point Analysis done?
Working from the project design specifications, the
following system functions are measured
(counted):
• Inputs
• Outputs
• Files
• Inquiries
• Interfaces
40. Function Point Analysis
These function-point counts are then weighted
(multiplied) by their degree of complexity:

             Simple   Average   Complex
Inputs          2        4         6
Outputs         3        5         7
Files           5       10        15
Inquiries       2        4         6
Interfaces      4        7        10
41. Function Point Analysis
A simple example:

Inputs:      3 simple  × 2  = 6
             4 average × 4  = 16
             1 complex × 6  = 6
Outputs:     6 average × 5  = 30
             2 complex × 7  = 14
Files:       5 complex × 15 = 75
Inquiries:   8 average × 4  = 32
Interfaces:  3 average × 7  = 21
             4 complex × 10 = 40
Unadjusted function points  = 240
42. Function Point Analysis
In addition to these individually weighted function points, there are
factors that affect the project and/or system as a whole. There are 14
of these general factors that affect the size of the project effort,
and each is ranked from “0” (no influence) to “5” (essential).
The following are some examples of these factors:
• Is high performance critical?
• Is the internal processing complex?
• Is the system to be used in multiple sites and/or by multiple
organizations?
• Is the code designed to be reusable?
• Is the processing to be distributed?
• and so forth . . .
43. Function Point Analysis
Continuing our example . . .

Complex internal processing  = 3
Code to be reusable          = 2
High performance             = 4
Multiple sites               = 3
Distributed processing       = 5
Project adjustment factor    = 17

Adjustment calculation:
Adjusted FP = Unadjusted FP × [0.65 + (adjustment factor × 0.01)]
            = 240 × [0.65 + (17 × 0.01)]
            = 240 × [0.82]
            = 197 adjusted function points
44. Function Point Analysis
But how long will the project take and how much
will it cost?
• As previously measured, programmers in our
organization average 18 function points per
month. Thus . . .
197 FP divided by 18 = 11 man-months
• If the average programmer is paid $5,200 per
month (including benefits), then the [labor] cost of
the project will be . . .
11 man-months X $5,200 = $57,200
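The whole worked example can be captured in a few lines of code. The sketch below simply re-implements the arithmetic above; the weights, counts, adjustment factor, productivity, and cost figures are the ones from the example:

# Re-implementation of the worked function point example above.
COMPLEXITY_WEIGHTS = {          # simple, average, complex (per the table)
    "inputs":     (2, 4, 6),
    "outputs":    (3, 5, 7),
    "files":      (5, 10, 15),
    "inquiries":  (2, 4, 6),
    "interfaces": (4, 7, 10),
}

# Counts from the example: (simple, average, complex) per function type.
COUNTS = {
    "inputs":     (3, 4, 1),
    "outputs":    (0, 6, 2),
    "files":      (0, 0, 5),
    "inquiries":  (0, 8, 0),
    "interfaces": (0, 3, 4),
}

unadjusted = sum(
    n * w
    for kind, ns in COUNTS.items()
    for n, w in zip(ns, COMPLEXITY_WEIGHTS[kind])
)                                               # = 240

ADJUSTMENT_FACTOR = 17                          # sum of the ranked factors
adjusted = round(unadjusted * (0.65 + 0.01 * ADJUSTMENT_FACTOR))  # = 197

FP_PER_MONTH, COST_PER_MONTH = 18, 5200         # measured productivity, labor cost
months = round(adjusted / FP_PER_MONTH)         # = 11 person-months
print(f"{adjusted} FP -> {months} person-months -> ${months * COST_PER_MONTH:,}")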
45. Function Point Analysis
Because function point analysis is independent of
the language used, development platform, etc., it can
be used to identify the productivity benefits of . . .
• One programming language over another
• One development platform over another
• One development methodology over another
• One programming department over another
• Before-and-after gains in investing in
programmer training
• And so forth . . .
46. Function Point Analysis
But there are problems and criticisms:
• Function point counts are affected by project size
• Difficult to apply to massively distributed systems or to
systems with very complex internal processing
• Difficult to define logical files from physical files
• The validity of the weights that Albrecht established – and
the consistency of their application – has been
challenged
• Different companies will calculate function points slightly
differently, making intercompany comparisons questionable
47. Test Case Point
• Analyze RFP
• Count the number of features
• Estimate the number of test cases using a rule of
thumb based on the historical data
• Break the verification points into GUI, functional,
database, exception, and navigation categories
• Add a weighting factor to all of these
• Consider the affecting factors
• Arrive at the test case points
• Calculate the person effort hours (a minimal sketch follows)
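The arithmetic behind these steps might look like the following sketch; the category weights, counts, and conversion factors are invented placeholders, since the slides leave the actual figures to historical data:

# A test case point (TCP) sketch. All weights and counts below are invented
# placeholders; a real estimate would take them from historical project data.
CATEGORY_WEIGHTS = {   # assumed relative complexity per verification type
    "gui": 1, "functional": 3, "database": 4, "exception": 2, "navigation": 1,
}

# Verification points counted per category for a hypothetical feature set.
counts = {"gui": 40, "functional": 25, "database": 10, "exception": 8, "navigation": 15}

raw_points = sum(counts[c] * CATEGORY_WEIGHTS[c] for c in counts)

AFFECTING_FACTOR = 1.2       # assumed uplift for environment/team factors
test_case_points = raw_points * AFFECTING_FACTOR

HOURS_PER_POINT = 0.5        # assumed conversion from historical data
print(f"TCP = {test_case_points:.0f}, effort = {test_case_points * HOURS_PER_POINT:.0f} hours")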
54. Risk Analysis
Risks to the project, with an emphasis on testing:
• Lack of personnel resources when testing is to begin
• Lack of availability of required hardware, software, or tools
• Late delivery of software, hardware, or tools
• Delay in training
• Changes in requirements
• Complexities involved in testing the application
55. RISK ANALYSIS AND MANAGEMENT
• Risk concerns future happenings.
• By changing our actions today, we create an opportunity for
a different and hopefully better situation for ourselves
tomorrow.
• Risk involves change, such as changes of mind,
opinion, actions, or places.
• How will changes in customer requirements, development
technologies, target computers, and all other entities connected
to the project affect timeliness and overall success?
56. Risk Analysis
• Risk analysis and management are a series of
steps that help a software team to understand
and manage uncertainty.
• A risk is a potential problem—it might happen, it
might not.
• Regardless of the outcome, it’s a really good idea
to identify it, assess its probability of occurrence,
estimate its impact, and establish a contingency
plan should the problem actually occur.
57. Risk Analysis
• Who does it? Everyone involved in the software
process—managers, software engineers, and
customers—participates in risk analysis and
management.
• Understanding the risks and taking proactive
measures to avoid or manage them is a key
element of good software project management.
58. Risk Analysis - Steps
• Recognizing what can go wrong is the first step,
called “risk identification.”
• Next, each risk is analyzed to determine the likelihood
that it will occur and the damage that it will do if it
does occur.
• Once this information is established, risks are
ranked by probability and impact.
• Finally, a plan is developed to manage those
risks with high probability and high impact
(a small ranking sketch follows).
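As a small illustration of the ranking step: the risks and numbers below are invented, and "exposure = probability × impact" is a common convention rather than something the slides prescribe:

# Ranking risks by probability x impact (a common convention; the entries
# below are invented examples, not from the slides).
risks = [
    # (description, probability 0..1, impact 1..10)
    ("Late delivery of test tools",    0.3, 7),
    ("Change in requirements",         0.6, 8),
    ("Lack of personnel when testing", 0.4, 6),
]

# Sort by exposure (probability x impact), highest first.
for desc, p, impact in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"exposure={p * impact:4.1f}  p={p:.1f}  impact={impact}  {desc}")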
59. REACTIVE VS. PROACTIVE RISK
STRATEGIES
• A reactive strategy monitors the project for likely
risks. Resources are set aside to deal with them,
should they become actual problems. More
commonly, the software team does nothing about
risks until something goes wrong. Then, the team
flies into action in an attempt to correct the
problem rapidly.
60. REACTIVE VS. PROACTIVE RISK
STRATEGIES
• A considerably more intelligent strategy for risk
management is to be proactive. A proactive
strategy begins long before technical work is
initiated. Potential risks are identified, their
probability and impact are assessed, and they
are ranked by importance. Then, the software
team establishes a plan for managing risk. The
primary objective is to avoid risk, but because not
all risks can be avoided, the team works to
develop a contingency plan that will enable it to
respond in a controlled and effective manner.
62. RISK IDENTIFICATION
• Product size—risks associated with the overall size of the software to be built or modified.
• Business impact—risks associated with constraints imposed by management or the marketplace.
• Customer characteristics—risks associated with the sophistication of the customer and the developer's ability to communicate with the customer in a timely manner.
• Process definition—risks associated with the degree to which the software process has been defined and is followed by the development organization.
• Development environment—risks associated with the availability and quality of the tools to be used to build the product.
• Technology to be built—risks associated with the complexity of the system to be built and the "newness" of the technology that is packaged by the system.
• Staff size and experience—risks associated with the overall technical and project experience of the software engineers who will do the work.
63. Assessing Overall Project Risk
1. Have top software and customer managers
formally committed to support the project?
2. Are end-users enthusiastically committed to the
project and the system/product to be built?
3. Are requirements fully understood by the
software engineering team and their customers?
4. Have customers been involved fully in the
definition of requirements?
5. Do end-users have realistic expectations?
6. Is project scope stable?
64. Assessing Overall Project Risk
7. Does the software engineering team have the
right mix of skills?
8. Are project requirements stable?
9. Does the project team have experience with the
technology to be implemented?
10. Is the number of people on the project team
adequate to do the job?
11. Do all customer/user constituencies agree on
the importance of the project and on the
requirements for the system/product to be built?
65. Risk Drivers
• Performance risk—the degree of uncertainty that the
product will meet its requirements and be fit for its
intended use.
• Cost risk—the degree of uncertainty that the project
budget will be maintained.
• Support risk—the degree of uncertainty that the resultant
software will be easy to correct, adapt, and enhance.
• Schedule risk—the degree of uncertainty that the project
schedule will be maintained and that the product will be
delivered on time.