Slide deck from the talk I gave at the conference
1. QuickView for test data -> quick feedback
2. Automating everything on top of open-source tools like Selenium, Appium, Jenkins, etc.
3. Pain points and workarounds, especially for Mobile App Test Automation
5. Why not only manual testing?
Varieties of desktop web browsers
mWeb
Android, iOS, Windows native/hybrid apps
All popular makes/models of mobile handsets
Last-minute changes…
6. Are unit tests & integration tests enough?
Integration tests ✓
What about:
UI changes?
Backend releases affecting mobile apps?
API parameters being changed?
HTML templates being passed?
8. Functional User Flow Tests
Web (Selenium)
Mobile app??
MonkeyTest
Calabash
Appium
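An Appium session starts from a set of desired capabilities. A minimal sketch for an Android app might look like this (the device name and app path are hypothetical placeholders):

```json
{
  "platformName": "Android",
  "deviceName": "Pixel_3",
  "app": "/path/to/app.apk",
  "automationName": "UiAutomator2"
}
```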
9. Choice of tool for mobile automation
Love its pain points too
Be part of its journey
Work on:
1. server/inspector crashes or getting stuck
2. the right locating strategy
4. testing the test is time-consuming (found a bug here ;-))
5. simulating gestures
6. the same step works manually but not in automation
- huge logs
- duplicate, misleading nodes
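For pain point 1 (server/inspector crash or getting stuck), a common workaround is to wrap flaky driver calls in a small retry helper. A minimal Java sketch — the `Retry` class and its names are illustrative, not an Appium API:

```java
import java.util.function.Supplier;

// Hypothetical helper: retry a flaky command a few times before giving up,
// logging each failed attempt so the CI logs show what happened.
public class Retry {
    public static <T> T withRetries(Supplier<T> action, int maxAttempts) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.get();
            } catch (RuntimeException e) {
                last = e;
                System.err.println("Attempt " + attempt + " failed: " + e.getMessage());
            }
        }
        throw last; // all attempts failed
    }

    public static void main(String[] args) {
        // Simulated flaky call: fails twice, then succeeds.
        int[] calls = {0};
        String result = withRetries(() -> {
            if (++calls[0] < 3) throw new RuntimeException("server stuck");
            return "ok";
        }, 5);
        System.out.println(result); // prints "ok"
    }
}
```

In a real suite the `Supplier` would wrap a driver call (find element, tap, etc.) instead of the simulated failure.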
11. Trigger & publish test results?
Sell your bugs with CI!
Accessibility
Transparency
Every possible feedback data at one point
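Triggering the suite and publishing its results in Jenkins can be sketched as a declarative pipeline — stage name, build command, and report path are assumptions, not from the talk:

```groovy
// Sketch of a Jenkinsfile: run the tests, always publish the JUnit results
// so the feedback is visible in one place even when the build fails.
pipeline {
    agent any
    stages {
        stage('UI tests') {
            steps { sh 'mvn test' }
        }
    }
    post {
        always { junit 'target/surefire-reports/*.xml' }
    }
}
```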
12. From local script to remote jenkins
Issue
Flaky tests on remote
To-dos to debug
logs
screenshots
understand processes
Solution
Xvfb at 1400x1200x16
Dimension dimension = new Dimension(1224, 800);
Monit
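The Xvfb part of the solution can be sketched as CI node setup — the display number `:99` is an assumption:

```shell
# Give the headless Jenkins node a virtual display at the resolution the
# tests expect, so browser layout matches the local runs.
Xvfb :99 -screen 0 1400x1200x16 &
export DISPLAY=:99

# In the test code, pin the browser window size as well, e.g.:
#   driver.manage().window().setSize(new Dimension(1224, 800));
```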
13. One screen Release approval
Release Status Analyser dashboard
Tests list
Tests status (Pass/Fail)
Developers responsible
Make the release Go/No-Go call quickly
RSA: https://p00j4.github.io (original: https://wiki.jenkins-ci.org/display/JENKINS/eXtreme+Feedback+Panel+Plugin)
Release version to revert back to, to avoid fixing bugs in production
15. Live demo of RSA (QuickView)
All tests executed in one go after release deployment
All feedback in one place
Implementing http://p00j4.github.io/
Understanding how it could be used for a specific need
Certainly we don't want such instant fixing,
and then running into more issues in production!
17. DDD (Debug Driven Development)
~ Oren Rubin
Test code should be of production quality
Appropriate design patterns
Units (Say no to Large Tests)
Proper logging
Say no to “static” unless intended
Take screenshots as much as possible
Automated everything? A way to start -> publish results in RSA
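The "say no to static" point can be illustrated with a tiny Java sketch: static state leaks across test instances, while per-instance state stays isolated. Class and field names here are illustrative only:

```java
// Sketch: why shared "static" state is risky in test code. A static
// counter (think: a shared driver or fixture) accumulates across tests,
// while a per-instance field starts fresh for each test object.
public class StaticPitfall {
    static int sharedRuns = 0; // leaks across "tests"
    int ownRuns = 0;           // fresh per instance

    void runTest() {
        sharedRuns++;
        ownRuns++;
    }

    public static void main(String[] args) {
        StaticPitfall t1 = new StaticPitfall();
        StaticPitfall t2 = new StaticPitfall();
        t1.runTest();
        t2.runTest();
        System.out.println(sharedRuns); // 2 — state leaked across instances
        System.out.println(t2.ownRuns); // 1 — isolated per test object
    }
}
```

The same leak happens with a `static WebDriver` shared across test classes, which is why the slide says to avoid `static` unless it is intentional.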