
A Software Testing Intro


  1. Intro to Testing (Testing Internship 2018, Gabi Kis)
  2. What is Testing?
     “The process of executing a program with the intent of finding errors.” – Glen Myers
     “Questioning a product in order to evaluate it.” – James Bach
     “An empirical, technical investigation conducted to provide stakeholders with information about the quality of the product under test.” – Cem Kaner
  3. What is Quality?
     “Fitness for use” – Joseph Juran
     “Conformance with requirements” – Philip Crosby
     “Value to some person” – Jerry Weinberg
     “The totality of features and characteristics of a product that bear on its ability to satisfy a given need” – American Society for Quality
     “The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations” – ISTQB (International Software Testing Qualifications Board)
  4. What is a bug?
     “A bug is a failure to meet the reasonable expectations of a user.” – Glen Myers
     “A bug is something that threatens the value of the product.” – James Bach and Michael Bolton
     “A bug is anything that causes an unnecessary or unreasonable reduction in the quality of a software product.” – Cem Kaner
  5. Why do we test?
     • To find bugs!
     • To assess the level of quality and provide related information to stakeholders
     • To reduce the impact of failures at the client’s site (live defects) and ensure they do not affect costs & profitability
     • To decrease the rate of failures (increase the product’s reliability)
     • To improve the quality of the product
     • To ensure requirements are implemented fully & correctly
     • To validate that the product is fit for its intended purpose
     • To verify that required standards and legal requirements are met
     • To maintain the company’s reputation
  6. Testing provides a measure of quality!
     Can we test everything? Is exhaustive testing possible?
     • No: time & resources make it impractical. Instead:
     • We must understand the risk to the client’s business of the software not functioning correctly
     • We must manage and reduce risk, carrying out a risk analysis of the application
     • Prioritize tests to focus time & resources on the main areas of risk
  7. Causes of errors
  8. Causes of software defects
  9. Cost of a bug in the software life cycle
  10. How much testing is enough?
     • Predefined coverage goals have been met
     • The defect discovery rate drops below a predefined threshold
     • The cost of finding the “next” defect exceeds the loss expected from that defect
     • The project team reaches consensus that it is appropriate to release the product
     • The manager decides to deliver the product
  11. Testing principles
     • Testing shows the presence of defects but cannot prove that there are no more; it can only reduce the probability of undiscovered defects
     • Complete, exhaustive testing is impossible; a good strategy and risk management must be used
     • Pareto rule (defect clustering): usually 20% of the modules contain 80% of the bugs
     • Early testing: testing activities should start as soon as possible (including planning, design and reviews)
     • Pesticide paradox: if the same set of tests is repeated over and over, no new bugs will be found; test cases should be reviewed, modified and complemented with new ones
     • Context dependence: test design and execution depend on the context (desktop, web applications, real-time, …)
     • Verification and validation: discovering defects cannot help a product that is not fit for the users’ needs
  12. Schools of Testing
     Analytical
     • Which techniques should we use?
     • Code coverage (provides an objective “measure” of testing)
     Factory
     • What metrics should we use?
     • Requirements traceability (make sure that every requirement has been tested)
     Quality Assurance
     • Are we following a good process?
     • The gatekeeper (the software isn’t ready until QA says it’s ready)
     Context-driven
     • What tests would be most valuable right now?
     • Exploratory testing (concurrent test design and test execution, rapid learning)
     Agile
     • Is the story done?
     • Unit tests (used for test-driven development)
  13. Approaches to testing
     Black-box testing
     • Testing and test design without knowledge of the code (or without using knowledge of the code).
     White-box testing
     • Testing or test design using knowledge of the internals of the program (code and data).
     Gray-box testing
     • Testing that uses variables, or relationships between variables, that are not visible to the end user.
  14. Testing levels
     • Unit tests focus on individual units of the product.
     • Integration tests study how two (or more) units work together.
     • System testing focuses on the value of the running system.
     • Acceptance testing confirms that the system works as specified.
     (A sketch of the first two levels follows below.)
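
A minimal sketch of the unit and integration levels using Python's built-in unittest; the functions and names here are invented for illustration and are not from the deck:

```python
import unittest

def parse_amount(text):
    """Unit under test: convert a string like '12.50' to cents."""
    return round(float(text) * 100)

def total_in_cents(items):
    """Integration point: combines parsing with summing."""
    return sum(parse_amount(t) for t in items)

class UnitLevel(unittest.TestCase):
    # A unit test exercises one function in isolation.
    def test_parse_amount(self):
        self.assertEqual(parse_amount("12.50"), 1250)

class IntegrationLevel(unittest.TestCase):
    # An integration test checks that two units cooperate correctly.
    def test_total(self):
        self.assertEqual(total_in_cents(["1.00", "2.50"]), 350)

if __name__ == "__main__":
    unittest.main()
```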
  15. Functional & Non-functional
     Functional testing:
     • Tests the functionalities (features) of a product
     • Focused on checking the system against the specifications
     Non-functional testing:
     • Tests the attributes of a component or system that do not relate to functionality
     • Performance testing (measure response time, throughput, resource utilization; see the sketch below)
     • Load testing (how much load can the system handle?)
     • Stress testing (evaluate system behavior at and beyond its limits)
     • Spike testing (short bursts of load beyond the specified limits)
     • Endurance testing (a load test performed over a long time interval, e.g. weeks)
     • Volume testing (the system is subjected to large volumes of data)
     • Usability testing (the product is understood, easy to learn, easy to operate and attractive to users)
     • Reliability testing
     • Portability testing
     • Maintainability testing
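
As a rough illustration of the performance-testing idea only (measuring response time against a budget), a standard-library sketch; the URL and threshold are placeholders, and real performance tests use dedicated tooling:

```python
import time
import urllib.request

URL = "https://example.com/"   # placeholder endpoint
THRESHOLD_S = 2.0              # assumed response-time budget

start = time.perf_counter()
with urllib.request.urlopen(URL, timeout=10) as resp:
    resp.read()                # include transfer time in the measurement
elapsed = time.perf_counter() - start

print(f"response time: {elapsed:.3f}s")
assert elapsed < THRESHOLD_S, "response-time budget exceeded"
```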
  16. “Selling” bugs – Cem Kaner
     “The best tester isn’t the one who finds the most bugs; the best tester is the one who gets the most bugs fixed.” (Cem Kaner)
     • Motivate the programmer
     • Demonstrate the bug’s effects
     • Overcome objections
     • Increase the coverage of the defect description (indicate detailed preconditions and behavior)
     • Analyze the failure
     • Produce a clear, short, unambiguous bug report
     • Advocate the cost of the error accurately
  17. Confirmation & Regression
     Confirmation testing
     • Re-testing of a module or product, to confirm that the previously detected defect was fixed
     Regression testing
     • Re-testing of a previously tested program following modification, to ensure that defects have not been introduced or uncovered as a result of the changes made. It is performed whenever the software or its environment is changed.
  18. Verification & Validation
     Verification
     • Are we building the product right?
     • Confirmation by examination, and through the provision of objective evidence, that specified requirements have been fulfilled.
     Validation
     • Are we building the right product?
     • Confirmation by examination, and through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled.
  19. Anatomy of a test (diagram): test inputs (precondition data, environmental inputs, precondition program state) are applied to the system under test; the expected results (postcondition data, environmental results, postcondition program state) are compared with the actual results by a test oracle.
  20. The “V” testing model
  21. Test Design
  22. Test Design Techniques
     • Black-box techniques (specification-based):
       • Equivalence Partitioning
       • All-pairs Testing
       • Boundary Value Testing
       • Decision Table
       • State Transition Testing
       • Use Case Testing
     • White-box techniques (structure-based):
       • Statement Coverage Testing
       • Decision Coverage Testing
       • Condition Coverage Testing
     • Experience-based techniques:
       • Error Guessing
       • Exploratory Testing
  23. Consistency Heuristics
     Consistent with the vendor’s image (reputation)
     Consistent with its purpose
     Consistent with users’ expectations
     Consistent with the product’s history
     Consistent within the product
     Consistent with comparable products
     Consistent with claims
     Consistent with statutes, regulations, or binding specifications
  24. Equivalence partitioning
     To minimize testing, partition input (or output) values into groups of equivalent values (equivalent from the test-outcome perspective).
     If an input is a continuous range of values, there is typically one class of valid values and two classes of invalid values, one below the valid class and one above it.
     Example: the rule for hiring a person according to age is:
     0 – 15 = do not hire
     16 – 17 = part time
     18 – 54 = full time
     55 – 99 = do not hire
     Which are the valid equivalence classes? And the invalid ones? Give examples of representative values! (A sketch follows below.)
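
One possible coding of the exercise's answer, as a sketch (hire_category and the chosen representatives are assumptions, not from the deck): the four valid classes are 0–15, 16–17, 18–54 and 55–99, and the invalid classes are ages below 0 and above 99.

```python
def hire_category(age):
    """Hypothetical implementation of the slide's hiring rule."""
    if 0 <= age <= 15 or 55 <= age <= 99:
        return "do not hire"
    if 16 <= age <= 17:
        return "part time"
    if 18 <= age <= 54:
        return "full time"
    raise ValueError("age out of range")  # the two invalid partitions

# One representative value per valid equivalence class.
for age, expected in [(7, "do not hire"), (16, "part time"),
                      (30, "full time"), (70, "do not hire")]:
    assert hire_category(age) == expected

# One representative value per invalid equivalence class.
for invalid_age in (-1, 150):
    try:
        hire_category(invalid_age)
        raise AssertionError(f"{invalid_age}: expected ValueError")
    except ValueError:
        pass
```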
  25. All-pairs testing
     In practice, there are situations when a great number of combinations must be tested.
     Example: a web site must operate correctly with different browsers, using different plug-ins, running on different client operating systems, receiving pages from different servers running on different server operating systems.
     Test environment combinations:
     • 8 browsers
     • 3 plug-ins
     • 6 client operating systems
     • 3 servers
     • 3 server OS
     8 × 3 × 6 × 3 × 3 = 1,296 combinations!
     All-pairs testing is the solution: it tests a much smaller subset that still covers every pair of variable values. (A naive sketch follows below.)
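
A naive greedy sketch of the idea, not an optimized pairwise tool (the factor values are placeholders): a test is kept only if it covers at least one not-yet-seen pair, which yields far fewer than 1,296 tests, though dedicated pairwise tools get closer to the minimum.

```python
from itertools import combinations, product

# Placeholder values standing in for the slide's environment matrix.
factors = {
    "browser":   ["B1", "B2", "B3", "B4", "B5", "B6", "B7", "B8"],
    "plugin":    ["P1", "P2", "P3"],
    "client_os": ["C1", "C2", "C3", "C4", "C5", "C6"],
    "server":    ["S1", "S2", "S3"],
    "server_os": ["O1", "O2", "O3"],
}
names = list(factors)

# Every pair of values from two different factors must appear in some test.
uncovered = {((a, va), (b, vb))
             for a, b in combinations(names, 2)
             for va in factors[a] for vb in factors[b]}

tests = []
for values in product(*factors.values()):        # all 1,296 combinations
    combo = dict(zip(names, values))
    pairs = {((a, combo[a]), (b, combo[b])) for a, b in combinations(names, 2)}
    if pairs & uncovered:                        # keep only tests adding new pairs
        tests.append(combo)
        uncovered -= pairs

print(f"{len(tests)} tests cover all pairs, instead of 1,296")
```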
  26. Boundary value (analysis) testing
     Boundaries = edges of the equivalence classes. Boundary values = values at the edge and nearest to the edge.
     The steps for using boundary values:
     • First, identify the equivalence classes.
     • Second, identify the boundaries of each equivalence class.
     • Third, create test cases for each boundary value by choosing one point on the boundary, one point just below it, and one point just above it. “Below” and “above” are relative terms and depend on the data value’s units.
     For the previous example:
     • boundary values are {-1,0,1}, {14,15,16}, {15,16,17}, {16,17,18}, {17,18,19}, {54,55,56}, {98,99,100}
     • omitting duplicate values: {-1,0,1,14,15,16,17,18,19,54,55,56,98,99,100}
     (A sketch follows below.)
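
A sketch of the boundary checks for the slide's deduplicated value list, reusing the hypothetical hire_category() rule from the equivalence-partitioning sketch (redefined here so the snippet stands alone):

```python
def hire_category(age):
    # Same hypothetical rule as in the equivalence-partitioning sketch.
    if 0 <= age <= 15 or 55 <= age <= 99:
        return "do not hire"
    if 16 <= age <= 17:
        return "part time"
    if 18 <= age <= 54:
        return "full time"
    raise ValueError("age out of range")

# The slide's boundary values: each edge plus its nearest neighbours.
for age, expected in [
    (-1, ValueError), (0, "do not hire"), (1, "do not hire"),
    (14, "do not hire"), (15, "do not hire"), (16, "part time"),
    (17, "part time"), (18, "full time"), (19, "full time"),
    (54, "full time"), (55, "do not hire"), (56, "do not hire"),
    (98, "do not hire"), (99, "do not hire"), (100, ValueError),
]:
    if expected is ValueError:
        try:
            hire_category(age)
            raise AssertionError(f"{age}: expected ValueError")
        except ValueError:
            pass
    else:
        assert hire_category(age) == expected, age
```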
  27. Decision table
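
The slide's table itself is not preserved in this transcript. As a generic, hypothetical illustration of the technique: a decision table maps every combination of conditions to an expected action, and each column (rule) becomes one test case.

```python
# Hypothetical login rule: (registered user?, correct password?) -> action.
decision_table = {
    (True,  True):  "log in",      # rule 1
    (True,  False): "show error",  # rule 2
    (False, True):  "show error",  # rule 3
    (False, False): "show error",  # rule 4
}

def login_action(registered, password_ok):
    return "log in" if registered and password_ok else "show error"

# One test case per column (rule) of the decision table.
for (registered, password_ok), expected in decision_table.items():
    assert login_action(registered, password_ok) == expected
```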
  28. State transition table
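
The slide's table is likewise not preserved. A hypothetical illustration: a state transition table maps (state, event) pairs to next states, and a test walks a path through the table, checking each hop.

```python
# Hypothetical two-strike login lockout as a state transition table.
transitions = {
    ("active", "bad_password"):  "warned",
    ("active", "good_password"): "logged_in",
    ("warned", "bad_password"):  "locked",
    ("warned", "good_password"): "logged_in",
    ("locked", "good_password"): "locked",   # lockout must hold
}

def step(state, event):
    # Undefined (state, event) pairs leave the state unchanged.
    return transitions.get((state, event), state)

# One path through the table: two bad passwords, then a good one.
state = "active"
for event, expected in [("bad_password", "warned"),
                        ("bad_password", "locked"),
                        ("good_password", "locked")]:
    state = step(state, event)
    assert state == expected
```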
  29. Statement coverage
     • How many test cases do you need to get 100% statement coverage?
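
The code sample the question refers to is not preserved; as a hypothetical stand-in, a function with one if/else shows the idea: two test cases, one per branch, already execute every statement.

```python
def classify(x):
    if x > 0:
        label = "positive"      # executed by classify(1)
    else:
        label = "non-positive"  # executed by classify(-1)
    return label

# Two test cases give 100% statement coverage of classify(),
# even though coverage alone says nothing about untested inputs.
assert classify(1) == "positive"
assert classify(-1) == "non-positive"
```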
  30. Configuration Management
     In testing, configuration management must:
     • Identify all test-ware items
     • Establish and maintain the integrity of the testing deliverables (test plans, test cases, documentation) through the project life cycle
     • Set and maintain the version of these items
     • Track the changes to these items
     • Relate test-ware items to other software development items in order to maintain traceability
     • Clearly reference all necessary documents in the test plans and test cases
  31. Tool Support
  32. Test tool classification
     • Management of testing:
       • Test management
       • Requirements management
       • Bug tracking
       • Configuration management
     • Static testing:
       • Review support
       • Static analysis
       • Modeling
     • Test specification:
       • Test design
       • Test data preparation
     • Test execution:
       • Record and play
       • Unit test frameworks
       • Result comparators
       • Coverage measurement
       • Security
     • Performance and monitoring:
       • Dynamic analysis
       • Load and stress testing
       • Monitoring
     • Other tools
  33. Tool support – benefits
     • Repetitive work is reduced (e.g. running regression tests, re-entering the same test data, checking against coding standards).
     • Greater consistency and repeatability (e.g. tests executed by a tool, tests derived from requirements).
     • Objective assessment (e.g. static measures, coverage and system behavior).
     • Ease of access to information about tests or testing (e.g. statistics and graphs about test progress, incident rates and performance).
  34. Tool support – risks
     • Unrealistic expectations for the tool (including functionality and ease of use).
     • Underestimating the time, cost and effort for the initial introduction of a tool (including training and external expertise).
     • Underestimating the time and effort needed to achieve significant and continuing benefits from the tool (including the need for changes in the testing process and continuous improvement of the way the tool is used).
     • Underestimating the effort required to maintain the test assets generated by the tool.
     • Over-reliance on the tool (as a replacement for test design, or where manual testing would be better).
     • Lack of a dedicated test automation specialist.
     • Lack of a good understanding of, and experience with, the issues of test automation.
     • Lack of stakeholder commitment for the introduction of such a tool.
  35. Reviews
  36. Reviews and the testing process
     When to review?
     • As soon as a software artifact is produced, before it is used as the basis for the next step in development
     Benefits include:
     • Early defect detection
     • Reduced testing costs and time
     • Can find omissions
     Risks:
     • If misused, reviews can lead to friction between project team members
     • The errors & omissions found should be regarded as a positive outcome; the author should not take them personally
     • No follow-up is made to ensure corrections have been applied
     • Witch-hunts when things are going wrong
  37. Phases of a formal review
     • Planning: define scope, select participants, allocate roles, define entry & exit criteria
     • Kick-off: distribute documents, explain objectives and process, check entry criteria
     • Individual preparation: each participant studies the documents, takes notes, raises questions and comments
     • Review meeting: participants discuss and log defects, make recommendations
     • Rework: fixing defects (by the author)
     • Follow-up: verify again, gather metrics, check exit criteria
  38. Roles in a formal review
     Formal reviews can use the following predefined roles:
     • Manager: schedules the review, monitors entry and exit criteria
     • Moderator: distributes the documents, leads the discussion, mediates conflicting opinions
     • Author: owner of the deliverable to be reviewed
     • Reviewers: technical domain experts who identify and note findings
     • Scribe: records and documents the discussions during the meeting
  39. Types of review
     Informal review
     • A peer or team lead reviews a software deliverable
     • No formal process is applied
     • Documentation of the review is optional
     • Quick way of finding omissions and defects
     • The breadth and depth of the review depend on the reviewer
     • Main purpose: an inexpensive way to get some benefit
     Walkthrough
     • The author of the deliverable leads the review activity; others participate
     • Preparation by the reviewers is optional
     • Scenario-based
     • The sessions are open-ended
     • Can be informal but also formal
     • Main purposes: learning, gaining understanding, finding defects
     Technical review
     • Formal defect detection process
     • The main meeting is prepared
     • The team includes peers and technical domain experts
     • May vary in practice from quite informal to very formal
     • Led by a moderator, who is not the author
     • Checklists may be used; reports can be prepared
     • Main purposes: discuss, make decisions, evaluate alternatives, find defects, solve technical problems and check conformance to specifications and standards
     Inspection
     • Formal process, based on checklists, entry and exit criteria
     • Dedicated, precise roles
     • Led by the moderator
     • Metrics may be used in the assessment
     • Reports and lists of findings are mandatory
     • Follow-up process
     • Main purpose: find defects
  40. Success factors for reviews
     • A clear objective is set
     • Appropriate experts are involved
     • Issues are identified, not fixed on the spot
     • Adequate psychological handling (the author is not punished for the defects found)
     • The level of formalism is adapted to the concrete situation
     • Minimal preparation and training
     • Management encourages learning and process improvement
     • Time-boxing is used to determine the time allocated to each part of the document under review
     • Use of effective, specialized checklists (requirements, test cases)
