2. CRITICAL: RELEASING OCT. 2013!
ISO 29119
It groups some of the most relevant standards for this area, such
as: IEEE Std 829, Software Test Documentation; IEEE Std
1008, Software Unit Testing; IEEE Std 1012-1998, Software
Verification and Validation; IEEE Std 1028-1997, Software
Reviews; ISO/IEC 12207, Software Life Cycle Processes;
ISO/IEC 15289, System and Software Life Cycle Process
Information Products; and ISO/IEC TR 19759, Guide to the
Software Engineering Body of Knowledge.
3. TEST
An activity in which a system or one of its components is
executed under previously specified conditions; the results
are observed and recorded, and an assessment is made of
some aspect of the system.
Testing provides guidance on evaluating the
quality of the product.
4. The purposes of the Test discipline are as
follows:
•To find and document defects in software quality
•To generally advise on perceived software quality
•To prove the validity of the assumptions made in the design
and requirement specification through concrete
demonstration
•To validate that the software product functions as designed
•To validate that the requirements have been implemented
appropriately
5. Basic Definitions
• Error
An error is a mistake, misconception, or misunderstanding on
the part of a software developer.
• Defects (Faults)
A defect (fault) is introduced into the software as the result of
an error. It is an anomaly in the software that may cause it to
behave incorrectly, and not according to its specification.
•Failures
A failure is the inability of a software system or component to
perform its required functions within specified requirements
or specification.
6. When the analyst or the developer makes an error (or mistake),
he or she produces a fault. Faults are also called defects or
bugs.
A defect is a flaw in a component or system that can cause it to fail.
Many defects hide in the code and are never discovered. The
moment one is discovered we speak of a failure, indicating that
the system does not react as we expect.
Findings indicate an observed difference between expected and
implemented system behavior that can jeopardize the anticipated
goal. This definition includes both the experience of the tester and
the anticipated business goal. A finding can originate from a test
fault, a fault in the test base, or a bug in the code.
7. Realities
• Exhaustive testing of software is
impractical (you cannot test all possible
operating conditions, even for simple programs). This is
impossible from every point of view: human,
economic, and even mathematical.
• The goal of testing is to detect defects in
the software ... discovering an error is a
successful test
10. Testing levels
Source: Advanced Software Testing—Vol. 3: Guide to the ISTQB Advanced
Certification as an Advanced Technical Test Analyst. http://www.istqb.org/
11. Testing levels
14. Black Box approach (Functional)
• There is no knowledge of its inner structure (i.e.,
how it works).
• The tester only has knowledge of what it does.
• The size of the software-under-test using this
approach can vary from a simple module, member
function, or object cluster to a subsystem or a
complete software system.
15. Black Box approach (Functional)
• The description of behavior or functionality for the
software-under-test may come from a formal specification,
an Input/Process/Output (IPO) diagram, or a well-defined
set of pre- and postconditions.
• Another source of information is a requirements
specification document, which usually describes the
functionality of the software-under-test along with its inputs
and expected outputs.
16. Black Box approach (Functional)
The tester provides the specified inputs to the
software-under-test, runs the test, and then
determines whether the outputs produced are equivalent
to those in the specification.
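The black-box cycle above can be sketched in a few lines of Python. The function and its pricing rules are hypothetical, invented only to stand in for a specification; the point is that the tester checks outputs against the spec without reading the implementation.

```python
import math

def shipping_cost(weight_kg: float) -> float:
    """Implementation under test (opaque to the black-box tester).

    Hypothetical spec: 5.00 for orders up to 1 kg, plus 1.50 per
    additional whole kilogram (partial kilograms round up).
    """
    if weight_kg <= 1:
        return 5.00
    return 5.00 + 1.50 * math.ceil(weight_kg - 1)

# The tester supplies specified inputs and compares the outputs
# with the values the specification predicts.
assert shipping_cost(0.5) == 5.00   # within the base bracket
assert shipping_cost(1.0) == 5.00   # boundary value
assert shipping_cost(2.0) == 6.50   # one extra kg
assert shipping_cost(3.5) == 9.50   # partial kg rounds up
```

Note that every expected value comes from the specification text, not from running the code first — that is what keeps the test a check rather than a tautology.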
17. White Box approach
• Focuses on the inner structure of the software to be
tested.
• To design test cases using this strategy, the tester
must have knowledge of that structure. The code, or
a suitable pseudocode-like representation, must be
available.
• The tester selects test cases to exercise specific
internal structural elements and determine whether they
are working properly.
18. White Box approach
• Since designing, executing, and analyzing the results
of white box testing is very time consuming, this
strategy is usually applied to smaller-sized pieces of
software such as a module or member function.
• White box testing methods are especially useful for
revealing design and code-based control, logic and
sequence defects, initialization defects, and data flow
defects.
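A minimal sketch of white-box test design, using the classic triangle-classification example (the function itself is illustrative, not from the slides): the tester reads the code's branches and picks one input per decision outcome, so every branch is exercised at least once.

```python
def classify_triangle(a: int, b: int, c: int) -> str:
    """Code under test; the white-box tester reads this structure."""
    if a <= 0 or b <= 0 or c <= 0:
        return "invalid"                  # branch 1: non-positive side
    if a + b <= c or a + c <= b or b + c <= a:
        return "invalid"                  # branch 2: triangle inequality fails
    if a == b == c:
        return "equilateral"              # branch 3
    if a == b or b == c or a == c:
        return "isosceles"                # branch 4
    return "scalene"                      # branch 5

# White-box test design: one case per branch, chosen by inspecting
# the control structure rather than only the specification.
assert classify_triangle(0, 1, 1) == "invalid"      # branch 1
assert classify_triangle(1, 2, 5) == "invalid"      # branch 2
assert classify_triangle(3, 3, 3) == "equilateral"  # branch 3
assert classify_triangle(3, 3, 5) == "isosceles"    # branch 4
assert classify_triangle(3, 4, 5) == "scalene"      # branch 5
```

This is why white-box testing is usually applied to small units: enumerating branches by hand scales poorly beyond a module or member function.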
20. Types of Test
Quality Dimension / Quality Risk → Type of Test
• Functionality → function test, security test, volume test
• Usability → usability test
• Reliability → benchmark test, integrity test, structure test, stress test
• Performance → contention test, load test, performance profile
• Supportability → configuration test, installation test
21. Regression Test
Tests to be run on a modified program to ensure that the
changes are correct and do not affect other parts of the
software that have not changed.
Each time we receive a new version of the software, we should
run our regression tests.
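One common way to mechanize this is a baseline of expected outputs recorded from a known-good release, which every new version must reproduce. The function, rate, and values below are invented for illustration.

```python
def tax(amount: float) -> float:
    """Version under test (hypothetical 19% tax calculation)."""
    return round(amount * 0.19, 2)

# Baseline captured from the previous, accepted release.
BASELINE = {100.0: 19.0, 59.9: 11.38, 0.0: 0.0}

def run_regression() -> list:
    """Return the inputs whose output drifted from the baseline."""
    return [x for x, expected in BASELINE.items() if tax(x) != expected]

# An empty list means the unchanged behavior is preserved.
assert run_regression() == []
```

If a later change is *supposed* to alter an output, the baseline is updated deliberately — a failing regression test always means either a bug or an unrecorded intentional change.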
22. Test Ideas – Test Case
• Test Idea: an expression that identifies a potentially
useful test. Ideas are derived from models, specifications,
and brainstorming.
• Test Case: a set of inputs, execution conditions, and
expected results developed for a particular purpose.
A test idea is different from a test case: the test idea
does not contain the full test specification, only
the essence of the test.
Test ideas produce test cases.
23. Test Case
• Test cases must be written with sufficient detail to enable that a new
team member can begin working quickly to run tests and find
defects.
• Each test case should determine the result of the expected output,
which is compared with the result obtained.
• The programmer must try to avoid test their own programs as they
want (consciously or unconsciously) to prove that it works fine. In
addition, it's normal that the situations that he/she forgot to consider
when creating the program are again forgotten to create test cases
• Clearly identify the functionality that you want to test.
• Using test ideas as a basis for generating interesting scenarios to
test
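The bullets above can be made concrete as data-driven test cases: each record carries an identifier, the inputs, and the expected result, detailed enough for a new team member to run unaided. The `login()` stub and its credentials are hypothetical.

```python
def login(user: str, password: str) -> str:
    """System under test (stub standing in for a real login service)."""
    return "ok" if (user, password) == ("ana", "s3cret") else "denied"

# Each test case: id, inputs, and the expected output to compare against.
TEST_CASES = [
    {"id": "TC-01", "inputs": ("ana", "s3cret"), "expected": "ok"},
    {"id": "TC-02", "inputs": ("ana", "wrong"),  "expected": "denied"},
    {"id": "TC-03", "inputs": ("", ""),          "expected": "denied"},
]

for tc in TEST_CASES:
    actual = login(*tc["inputs"])
    # Compare the obtained result with the case's expected output.
    assert actual == tc["expected"], f'{tc["id"]} failed: got {actual!r}'
```

Keeping the expected output in the case itself (rather than in the runner's head) is what lets someone unfamiliar with the system judge pass/fail immediately.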
24. Test Plan
• Defines the goals and objectives of testing
within the scope of the iteration (or project),
the items being targeted, the approach to be
taken, the resources required and the
deliverables to be produced (RUP).
• A test plan describes the strategies, resources,
and schedule of the testing effort.
25. Defects Reports
Defects (faults) could be ignored or postponed, according to
the way they are written. This can be:
- Difficult to understand
- Too complicated to solve
Keys to develop good reports of defects
• First, describe the problem.
• Then, describe the steps necessary to reproduce the
problem: neither more nor less.
• Describe the wrong behavior and if necessary, enter what
should have happened.
26. Defect Reports
Keys to writing good defect reports (continued):
• Describe the environment variables and other
configuration details of the machine
where the problem occurs.
• If you encounter two problems, report two defects.
• Don't use vague expressions such as "failure" or "not
working"; be more explicit.
27. Defect Reports
A typical defect report includes (continued):
• Priority
• Report type (Error, suggestion, documentation
error, design problem)
• Impact on the customer
• Keywords
• Description of the problem (steps to reproduce).
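As a sketch, the fields listed above could be captured in a structured record like the following; the field names mirror the slide, while the example values and the `DefectReport` type itself are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class DefectReport:
    priority: str
    report_type: str           # error, suggestion, documentation, design
    customer_impact: str
    keywords: list = field(default_factory=list)
    description: str = ""      # the problem, plus steps to reproduce

# A hypothetical, well-formed report: explicit behavior, minimal
# reproduction steps, one defect per report.
bug = DefectReport(
    priority="high",
    report_type="error",
    customer_impact="blocks checkout for card payments",
    keywords=["payment", "timeout"],
    description="1. Add any item. 2. Pay by card. "
                "3. Observe 30 s timeout; expected a confirmation page.",
)
assert bug.priority == "high"
```

Structuring reports this way also makes them filterable and countable — priorities and keywords become data rather than prose.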
29. Testing principles
• Tests help prevent deficiencies
• Tests are based on risks
• Testing should be planned
• Testing is creative and hard work
• It's impossible to test all inputs
• You can't test all the paths
• It's impossible to verify the correction of an error using
logic alone.
• The purpose of finding problems is to fix them!
31. There is inevitably an adversarial relationship between
testers and developers. This is no bad thing: developers
are trying to get something working, and the testers are
trying to show that it doesn't. The keys to success are patience,
a sense of humor, clear definitions of responsibilities, a
plan, and good staff. So what else is new?
These roles depend on two distinctions being drawn:
1. Between the quality assurance and quality control roles of
the QA department, and the testers
2. Between the testing and development roles
33. Be careful
• New ISO Standard 29119, Software Testing:
September 1, 2013
• See:
http://www.iso.org/iso/home/store/catalogue_tc/catalogu
e_detail.htm?csnumber=45142
• See: http://in2test.lsi.uniovi.es/gt26/