Software Testing Methods
Executing software in a simulated or real
environment, using inputs selected somehow.




• Detect faults
• Establish confidence in software
• Evaluate properties of software
  • Reliability
  • Performance
  • Memory Usage
  • Security
  • Usability



Most of the software testing literature equates test case selection
with software testing, but that is just one difficult part. Other
difficult issues include:

• Determining whether or not outputs are correct.
• Comparing resulting internal states to expected states.
• Determining whether adequate testing has been done.
• Determining what you can say about the software when testing
  is completed.
• Measuring performance characteristics.
• Comparing testing strategies.



We frequently accept outputs because they are plausible
rather than correct.

It is difficult to determine whether outputs are correct because:

• We wrote the software to compute the answer.
• There is so much output that it is impossible to validate it all.
• There is no (visible) output.


Ways to categorize testing:

• Stages of Development
• Source of Information for Test Case Selection




Testing in the Small

• Unit Testing
• Feature Testing
• Integration Testing




Tests the smallest individually executable code units.
Usually done by programmers. Test cases might be
selected based on code, specification, intuition, etc.

Tools:
• Test driver/harness
• Code coverage analyzer
• Automatic test case generator




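As a minimal sketch of a unit test with a driver/harness, using
Python's standard unittest module (the leap-year function is a
hypothetical unit under test, not from the slides):

```python
import unittest

def is_leap_year(year: int) -> bool:
    """Unit under test: Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTests(unittest.TestCase):
    def test_typical_leap_year(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_not_leap(self):
        self.assertFalse(is_leap_year(1900))   # divisible by 100, not 400

    def test_quadricentennial_leap(self):
        self.assertTrue(is_leap_year(2000))    # divisible by 400

if __name__ == "__main__":
    unittest.main()   # the test runner acts as the driver/harness
```
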
Tests interactions between two or more units or components.
Usually done by programmers. Emphasizes interfaces.

Issues:
• In what order are units combined?
• How do you assure the compatibility and correctness of
   externally-supplied components?




How are units integrated? What are the implications of this order?

• Top-down => need stubs; top-level tested repeatedly.
• Bottom-up => need drivers; bottom-levels tested repeatedly.
• Critical units first => stubs & drivers needed; critical units tested
  repeatedly.




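A minimal sketch of top-down integration with a stub, all names
hypothetical: the real top-level unit is exercised against a canned
stand-in for a lower-level unit that has not yet been integrated.

```python
class DatabaseStub:
    """Stands in for the real data-access unit during integration."""
    def fetch_sales(self, region):
        return [100, 250, 75]          # canned, predictable response

class ReportGenerator:
    """Real top-level unit under test."""
    def __init__(self, db):
        self.db = db                   # dependency injected: real unit or stub

    def total_sales(self, region):
        return sum(self.db.fetch_sales(region))

# Exercise the top level against the stub.
report = ReportGenerator(DatabaseStub())
assert report.total_sales("EMEA") == 425
```

When the real data-access unit is ready, it replaces the stub with no
change to the top-level code, which has already been tested repeatedly.
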
Potential Problems:
• Inadequate unit testing.
• Inadequate planning & organization for integration testing.
• Inadequate documentation and testing of externally-supplied
  components.




Testing in the Large
• System Testing
• End-to-End Testing
• Operations Readiness Testing
• Beta Testing
• Load Testing
• Stress Testing
• Performance Testing
• Reliability Testing
• Regression Testing

Test the functionality of the entire system.
Usually done by professional testers.




• Not all problems will be found no matter how thorough or
  systematic the testing.
• Testing resources (staff, time, tools, labs) are limited.
• Specifications are frequently unclear/ambiguous and changing
  (and not necessarily complete and up-to-date).
• Systems are almost always too large to permit test cases to be
  selected based on code characteristics.




•   Exhaustive testing is not possible.
•   Testing is creative and difficult.
•   A major objective of testing is failure prevention.
•   Testing must be planned.
•   Testing should be done by people who are independent of the
    developers.




Every systematic test selection strategy can be viewed as a way of
dividing the input domain into subdomains and selecting one or more
test cases from each. The division can be based on such things as
code characteristics (white box), specification details (black box),
domain structure, risk analysis, etc.


Subdomains are not necessarily disjoint, even though the
testing literature frequently refers to them as partitions.
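
A rough sketch of subdomain-based selection under assumed,
illustrative subdomain predicates; note that two of the subdomains
overlap, so this is not a true partition:

```python
import random

domain = range(-1000, 1001)

# Subdomains need not be disjoint: "zero_or_small" overlaps "even".
subdomains = {
    "negative":      [x for x in domain if x < 0],
    "zero_or_small": [x for x in domain if 0 <= x <= 10],
    "even":          [x for x in domain if x % 2 == 0],
    "large":         [x for x in domain if x > 900],
}

random.seed(1)  # reproducible selection
test_cases = {name: random.choice(members)
              for name, members in subdomains.items()}
print(test_cases)   # one representative per subdomain
```
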
Drawbacks of code-based (white-box) selection:
• Can only be used at the unit testing level, and even then it can be
  prohibitively expensive.
• We don't know the relationship between a "thoroughly" tested
  component and faults. One can generally argue that coverage criteria
  are necessary conditions, but not sufficient ones.




Drawbacks of specification-based (black-box) selection:
• Unless there is a formal specification (and there rarely is), it is
  very difficult to assure that all parts of the specification have
  been used to select test cases.
• Specifications are rarely kept up-to-date as the system is modified.
• Even if every functional unit of a specification has been tested,
  that doesn't assure that there aren't faults.




An operational distribution is a probability distribution
that describes how the system is used in the field.




Usage data for an operational distribution can come from several
situations:

• The input stream for this system is also the input stream for a
  different, already-operational system.
• The input stream for this system is the output stream for a
  different already-operational system.
• Although this system is new, it is replacing an existing system
  which ran on a different platform.
• Although this system is new, it is replacing an existing system
  which used a different design paradigm or different
  programming language.
• There has never been a software system to do this task, but
  there has been a manual process in place.


Testing with an operational distribution:
• A form of domain-based test case selection.
• Uses historical usage data to select test cases.
• Assures that the testing reflects how the system will be used in the
  field, and therefore uncovers the faults that users are most likely
  to see.




Drawbacks:
• Can be difficult and expensive to collect the necessary data.
• Not suitable if the usage distribution is uniform (which it never is).
• Does not take the consequence of failure into consideration.




Benefits:
• Really does provide a user-centric view of the system.
• Allows you to say concretely what is known about the system's
  behavior based on testing.
• Provides a metric that is meaningfully related to the system's
  dependability.




Look at characteristics of the input domain or subdomains.
• Consider typical, boundary, & near-boundary cases (these
  can sometimes be automatically generated).
• This sort of boundary analysis may be meaningless for non-
  numeric inputs. What are the boundaries of {Rome, Paris,
  London, … }?
• Can also apply similar analysis to output values, producing
  output-based test cases.




Example: US income tax schedule

If income is        Tax is
$0 - $20K           15% of total income
$20K - $50K         $3K + 25% of amount over $20K
Above $50K          $10.5K + 40% of amount over $50K

Boundary cases for inputs: $0, $20K, $50K


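A worked sketch of this schedule with boundary and near-boundary
inputs; the expected values follow directly from the bracket formulas
above:

```python
def tax(income: float) -> float:
    """Tax schedule from the example (incomes in dollars)."""
    if income <= 20_000:
        return 0.15 * income
    if income <= 50_000:
        return 3_000 + 0.25 * (income - 20_000)
    return 10_500 + 0.40 * (income - 50_000)

# Boundary and near-boundary cases derived from the bracket edges.
cases = {
    0:      0.0,
    20_000: 3_000.0,     # boundary: 15% of 20K meets the 2nd bracket base
    20_001: 3_000.25,    # just over the first boundary
    50_000: 10_500.0,    # boundary of the top bracket
    50_001: 10_500.40,   # just over the second boundary
}
for income, expected in cases.items():
    assert abs(tax(income) - expected) < 1e-6, (income, tax(income))
```
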
Random testing involves selecting test cases based
on a probability distribution. It is NOT the same as
ad hoc testing. Typical distributions are:

   • uniform: test cases are chosen with equal probability
   from the entire input domain.
   • operational: test cases are drawn from a distribution
   defined by carefully collected historical usage data.




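A minimal sketch contrasting the two distributions; the operation
names and usage frequencies are assumed, illustrative values:

```python
import random

random.seed(42)
domain = range(0, 1_000_000)

# Uniform random testing: every input in the domain is equally likely.
uniform_tests = [random.choice(domain) for _ in range(5)]

# Operational random testing: inputs weighted by (hypothetical)
# historical usage frequencies for three user operations.
operations = ["withdraw", "balance", "transfer"]
usage_freq = [0.70, 0.25, 0.05]          # assumed field data
operational_tests = random.choices(operations, weights=usage_freq, k=5)

print(uniform_tests)
print(operational_tests)
```
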
• If the domain is well-structured, automatic generation can be
  used, allowing many more test cases to be run than if tests are
  manually generated.
• If an operational distribution is used, then it should
  approximate user behavior.




• An oracle (a mechanism for determining whether the output is
  correct) is required.
• Need a well-structured domain.
• Even a uniform distribution may be difficult or impossible to
  produce for complex or non-numeric domains.
• If a uniform distribution is used, only a negligible fraction of
  the domain can be tested in most cases.
• Without an operational distribution, random testing does not
  approximate user behavior, and therefore does not provide an
  accurate picture of the way the system will behave.
Risk is the expected loss attributable to the failures
caused by faults remaining in the software.

Risk is based on
• Failure likelihood or likelihood of occurrence.
• Failure consequence.
So risk-based testing involves selecting test cases to minimize risk,
by making sure that the most likely and highest-consequence inputs
are covered.


Example: ATM

Functions: Withdraw cash, transfer money, read balance, make
  payment, buy train ticket.

Attributes: Security, ease of use, availability




Features &          Occurrence     Failure        Priority
Attributes          Likelihood     Consequence    (L x C)

Withdraw cash       High = 3       High = 3       9
Transfer money      Medium = 2     Medium = 2     4
Read balance        Low = 1        Low = 1        1
Make payment        Low = 1        High = 3       3
Buy train ticket    High = 3       Low = 1        3
Security            Medium = 2     High = 3       6
The same analysis, sorted by priority:

Features &          Occurrence     Failure        Priority
Attributes          Likelihood     Consequence    (L x C)

Withdraw cash       High = 3       High = 3       9
Security            Medium = 2     High = 3       6
Transfer money      Medium = 2     Medium = 2     4
Make payment        Low = 1        High = 3       3
Buy train ticket    High = 3       Low = 1        3
Read balance        Low = 1       Low = 1        1
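A small sketch that reproduces this ranking by computing
priority = likelihood x consequence and sorting:

```python
# Risk-based prioritization with the 1-3 scales from the ATM example:
# (occurrence likelihood, failure consequence) per feature/attribute.
items = {
    "Withdraw cash":    (3, 3),
    "Transfer money":   (2, 2),
    "Read balance":     (1, 1),
    "Make payment":     (1, 3),
    "Buy train ticket": (3, 1),
    "Security":         (2, 3),
}

ranked = sorted(items.items(),
                key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for name, (likelihood, consequence) in ranked:
    print(f"{name:18} priority = {likelihood * consequence}")
```
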
The end user runs the system in their environment to evaluate
whether it meets their criteria. The outcome determines whether
the customer will accept the system; this is often part of a
contractual agreement.




Test modified versions of a previously validated
system. Usually done by testers. The goal is to
assure that changes to the system have not
introduced errors (caused the system to regress).

The primary issue is how to choose an effective
regression test suite from existing, previously-run
test cases.



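One common approach is to rerun only the tests whose recorded coverage
intersects the changed code. A minimal sketch, with hypothetical
coverage and change data:

```python
# Map each previously-run test to the functions it covers (hypothetical).
test_coverage = {
    "test_login":   {"auth.check", "db.read"},
    "test_report":  {"report.render", "db.read"},
    "test_billing": {"billing.total"},
}
changed = {"db.read"}   # functions modified in this release

# Select the tests whose coverage touches any changed function.
regression_suite = [t for t, funcs in test_coverage.items()
                    if funcs & changed]
print(regression_suite)   # ['test_login', 'test_report']
```
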
Once a test suite has been selected, it is often desirable to
prioritize test cases based on some criterion. Since the time
available for testing is limited and not all tests can be run,
at least the "most important" ones can be.




•   Most frequently executed inputs.
•   Most critical functions.
•   Most critical individual inputs.
•   (Additional) statement or branch coverage.
•   (Additional) Function coverage.
•   Fault-exposing potential.




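As a sketch of one of these criteria, here is greedy prioritization by
additional statement coverage (the coverage data is hypothetical):
repeatedly pick the test covering the most statements not yet covered.

```python
# Statements (line numbers) covered by each test, hypothetical data.
suite = {
    "t1": {1, 2, 3, 4},
    "t2": {3, 4, 5},
    "t3": {5, 6, 7, 8},
    "t4": {1, 8},
}

order, covered = [], set()
remaining = dict(suite)
while remaining:
    # Pick the test with the largest *additional* coverage.
    best = max(remaining, key=lambda t: len(remaining[t] - covered))
    order.append(best)
    covered |= remaining.pop(best)

print(order)   # ['t1', 't3', 't2', 't4']
```
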
Methods based on the internal structure of code:
• Statement coverage
• Branch coverage
• Path coverage
• Data-flow coverage




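To illustrate what statement coverage measures, here is a rough sketch
using Python's sys.settrace; real projects would use a dedicated tool
such as coverage.py instead.

```python
import sys

def measure_coverage(fn, *args):
    """Record which lines of fn execute for the given inputs."""
    executed = set()
    code = fn.__code__

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        fn(*args)
    finally:
        sys.settrace(None)
    return executed

def classify(x):
    if x > 0:
        return "positive"
    return "non-positive"

# Only the lines on the x > 0 path are reported as covered.
print(measure_coverage(classify, 5))
```
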
White-box methods can be used for
• Test case selection or generation.
• Test case adequacy assessment.

In practice, the most common use of white-box methods is as adequacy
criteria after tests have been generated by some other method.




Statement, branch, and path coverage are examples of control flow
criteria. They rely solely on syntactic characteristics of the
program, ignoring the semantics of the program computation.

The data flow criteria require the execution of path segments that
connect parts of the code that are intimately connected by the flow
of data.




• Is code coverage an effective means of detecting faults?
• How much coverage is enough?
• Is one coverage criterion better than another?
• Does increasing coverage necessarily lead to higher fault
  detection?
• Are coverage criteria more effective than random test case
  selection?




• Test execution: Run large numbers of test cases/suites without
  human intervention.
• Test generation: Produce test cases by processing the
  specification, code, or model.
• Test management: Log test cases & results; map tests to
  requirements & functionality; track test progress & completeness.




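A minimal sketch of unattended test execution: run a table of cases,
log pass/fail, and exit nonzero on any failure so a build system can
react (the square function is a stand-in for the system under test):

```python
import sys

def square(x):          # system under test (illustrative)
    return x * x

cases = [(0, 0), (3, 9), (-4, 16)]   # (input, expected output)
failures = 0
for inp, expected in cases:
    actual = square(inp)
    status = "PASS" if actual == expected else "FAIL"
    failures += status == "FAIL"
    print(f"{status}: square({inp}) = {actual}, expected {expected}")

sys.exit(1 if failures else 0)       # nonzero exit signals failure to CI
```
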
• More testing can be accomplished in less time.
• Testing is repetitive, tedious, and error-prone.
• Test cases are valuable - once they are created, they can
  and should be used again, particularly during regression
  testing.




• Does the payoff from test automation justify the expense
  and effort of automation?
• Learning to use an automation tool can be difficult.
• Tests have a finite lifetime.
• Completely automated execution implies putting the system
  into the proper state, supplying the inputs, running the test
  case, collecting the results, and verifying the results.




• Automated tests are more expensive to create and maintain
  (estimates range from 3 to 30 times).
• Automated tests can lose relevance, particularly when the system
  under test changes.
• Use of tools requires that testers learn how to use them, cope
  with their problems, and understand what they can and can't do.



Kinds of testing that particularly benefit from automation:
• Load/stress tests - It is very difficult to have large numbers of
  human testers simultaneously accessing a system.
• Regression test suites - Tests maintained from previous releases;
  run to check that changes haven't caused faults.
• Sanity tests - Run after every new system build to check for
  obvious problems.
• Stability tests - Run the system for 24 hours to see that it can
  stay up.



NIST estimates that billions of dollars could be saved each
  year if improvements were made to the testing process.




*NIST Report: The Economic Impact of Inadequate Infrastructure
  for Software Testing, 2002.




                          Cost of Inadequate    Potential Cost Reduction
                          Software Testing      from Feasible Improvements

Transportation
manufacturing             $1,800,000,000        $589,000,000

Financial Services        $3,340,000,000        $1,510,000,000

Total U.S. Economy        $59 billion           $22 billion

*NIST Report: The Economic Impact of Inadequate Infrastructure
for Software Testing, 2002.
