In a world with a million different options for every creative element, CTA, form field, and more... where do you even start? How do you know that element is where you'll see an impact big enough to make a difference for your bottom line? This is the #1 question I get as a CRO strategist, and my answer every time is: it depends! This session will walk you through how to understand your testing opportunities, generate test ideas, and measure your results with scientific accuracy.
Key Takeaways:
+ How to spot opportunities for testing
+ How to evaluate competitors and other sites for inspiration
+ Ways to organize and prioritize ideas based on your goals
+ How to use tools to evaluate test results (that don't require a stats degree)
15. @nochillfilter
We can maximize our budgets and efficiency by improving landing page conversion rates.
Photo by Isaac Smith on Unsplash
16. @nochillfilter
AND don't forget that landing page experience is part of the Quality Score: a good landing page can have a positive impact on Google Ads Quality Score, meaning lower CPCs and more bid wins - which translates to more sales.
20. @nochillfilter
Step 1: Start with the data and your own observations
- What seems to be working and for whom?
- What does the audience interact with, or ignore?
25. @nochillfilter
An example!
Mobile users have smaller average purchase amounts - could we nudge mobile users to larger purchases by suggesting different products, reframing our benefits, or using social pressure?
27. @nochillfilter
How to formulate a hypothesis:
Changing __benefits module__ to __include mobile app__ will impact __mobile conversions__ because it demonstrates ease of use for mobile-first consumers.
28. @nochillfilter
A well-crafted hypothesis will always yield learnings, even if the test fails. It will also help you identify the test variables and stay focused.
45. @nochillfilter
Yes, we COULD change everything, but we SHOULDN'T change everything - otherwise, how will we know what's working, so we can do more of it?
50. @nochillfilter
Sees a display ad... googles a specific product... clicks through a paid result... browses the site, reads an article... signs up for email... clicks on an email... makes a purchase.
Photo by Mark König on Unsplash
60. @nochillfilter
Even a massive difference in performance doesn't mean much if our audience isn't big enough. Going from 3 conversions to 6 is a 100% increase, but it probably isn't giving you a statistically significant result.
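To make the 3-vs-6 example concrete, here is a minimal two-proportion z-test sketch in plain Python (the 1,000 visitors per variant is my assumption, not from the slide; with counts this small the normal approximation itself is rough, which is part of the point):

```python
import math

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a difference in conversion rates
    (pooled two-proportion z-test, normal approximation)."""
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = abs(conv_a / n_a - conv_b / n_b) / se
    return math.erfc(z / math.sqrt(2))

# 3 vs. 6 conversions on 1,000 visitors each: a "100% lift"...
p = two_proportion_p_value(3, 1000, 6, 1000)
print(f"p = {p:.2f}")  # well above 0.05, so not significant
```

The doubling looks dramatic, but the p-value stays far above the usual 0.05 cutoff - exactly the trap the slide warns about.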
61. @nochillfilter
Needed sample size (per variant): https://www.optimizely.com/sample-size-calculator/
Try to let your tests run for at least 2 weeks to normalize any weirdness in your data!
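If you just want a ballpark without opening the calculator, a common rule of thumb for a two-variant test at roughly 95% confidence and 80% power is n ≈ 16·p(1−p)/δ² per variant, where p is the baseline conversion rate and δ is the absolute lift you want to detect. This is a rough approximation, not what any particular calculator implements:

```python
import math

def sample_size_per_variant(baseline_rate, relative_mde):
    """Rough visitors needed per variant: n ~= 16 * p(1-p) / delta^2,
    an approximation for alpha = 0.05, power = 0.8."""
    delta = baseline_rate * relative_mde  # absolute detectable difference
    return math.ceil(16 * baseline_rate * (1 - baseline_rate) / delta ** 2)

# e.g. 5% baseline, hoping to detect a 20% relative lift (5% -> 6%)
print(sample_size_per_variant(0.05, 0.20))  # 7600 visitors per variant
```

Note how fast the requirement grows as the detectable lift shrinks - halving δ quadruples the needed sample.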
63. @nochillfilter
Can you...
● Run the test in several campaigns instead of just in one?
● Add additional segments or channels to your experiment?
● Remove one or more variants?
● Or... can you test something where you think you'll see a bigger impact?
65. @nochillfilter
It is the most tempting... Every testing platform has a dashboard that offers a glance into how your test is doing right now - which makes it so hard not to snoop at the results and check them all the time.
66. @nochillfilter
But every time you check it... you're performing another statistical test on the results. This is called "repeat significance testing" and can cause us to find the one moment in time when the results just happen to be significant.
If we had stopped our experiment after seeing "significant" results after a few days, we would have missed that the test variant was actually the loser.
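A quick simulation shows the damage peeking does. Both arms below convert at the same rate (an A/A test), so any "significant" result is a false positive; stopping at the first peek that looks significant inflates the false-positive rate well past the nominal 5%. A sketch using only the standard library:

```python
import math
import random

def p_value(c1, n1, c2, n2):
    # Two-sided pooled two-proportion z-test (normal approximation).
    pooled = (c1 + c2) / (n1 + n2)
    if pooled in (0.0, 1.0):
        return 1.0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(c1 / n1 - c2 / n2) / se
    return math.erfc(z / math.sqrt(2))

def false_positive_rate(peek_every, n=1000, rate=0.05, sims=400, seed=7):
    """A/A test: both variants convert at the same rate, so every
    'win' is noise. peek_every=None checks only once at the end;
    otherwise we stop at the first significant peek, as an
    impatient tester would."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        conv_a = conv_b = 0
        for i in range(1, n + 1):
            conv_a += rng.random() < rate
            conv_b += rng.random() < rate
            peeking = peek_every is not None and i % peek_every == 0
            if (peeking or i == n) and p_value(conv_a, i, conv_b, i) < 0.05:
                hits += 1
                break
    return hits / sims

print("check once at the end:", false_positive_rate(None))
print("peek every 100 users: ", false_positive_rate(100))
```

Checking once keeps the false-positive rate near the 5% you signed up for; peeking ten times per test declares a winner in an A/A test far more often.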
68. @nochillfilter
Statistical Significance: 95% Confidence
This means that if we repeated the experiment over and over on the same group of people under the same conditions, 19 out of 20 times we could expect the same outcome to occur. There is some risk that random chance will favor one version over another, but by using the 95% confidence level, we reduce the risk of picking the wrong variant.
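As an illustration, here is a normal-approximation 95% confidence interval for a single conversion rate (1.96 is the z-value for 95% confidence; the 50-in-1,000 example is mine):

```python
import math

def conversion_rate_ci(conversions, visitors, z=1.96):
    """95% normal-approximation confidence interval for a conversion rate."""
    p = conversions / visitors
    margin = z * math.sqrt(p * (1 - p) / visitors)
    return p - margin, p + margin

lo, hi = conversion_rate_ci(50, 1000)   # observed 5.0% conversion rate
print(f"95% CI: {lo:.1%} to {hi:.1%}")  # roughly 3.6% to 6.4%
```

If two variants' intervals overlap heavily, random chance is still a plausible explanation for the difference you see.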
71. @nochillfilter
DATA DRIVEN OPTIMIZATION
- AUDIENCE SIZE supports the tests you want to run
- TAILOR the tests to your program; make sure it's a good fit
- ADJUST based on learnings
- BENEFIT: Can this test reasonably have a big impact, and can we measure it?
- COST: How long/$$ will it take to build and design?
- EMPHASIZE program goals (KPIs)
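One lightweight way to turn the BENEFIT/COST judgement into a prioritized backlog is a simple impact-over-effort score. The 1-5 scales, field names, and example ideas below are illustrative, not from the talk:

```python
from dataclasses import dataclass

@dataclass
class TestIdea:
    name: str
    benefit: int  # expected impact, 1 (low) to 5 (high)
    cost: int     # build/design effort, 1 (cheap) to 5 (expensive)

    @property
    def score(self) -> float:
        # Higher benefit per unit of effort floats to the top.
        return self.benefit / self.cost

backlog = [
    TestIdea("Hero image swap", benefit=2, cost=1),
    TestIdea("Checkout redesign", benefit=5, cost=5),
    TestIdea("CTA copy", benefit=3, cost=1),
]
for idea in sorted(backlog, key=lambda t: t.score, reverse=True):
    print(f"{idea.score:.1f}  {idea.name}")
```

Cheap, plausible wins run first; big expensive bets wait until the easy learnings are banked.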
73. @nochillfilter
Remember that you shouldn't run multiple tests on the same people at the same time, otherwise you won't know which test is having the impact you're seeing.
[Timeline chart: Headline Test, Default Payment Options, CTA copy, and Hero Image tests staggered across June-September so they never overlap]
74. @nochillfilter
Every test should have:
+ A unique name
+ A judgement on how difficult it would be to implement the changes
+ The number of users or clicks you'll test on before evaluating
+ The main metric(s) for success - usually CTR and Conversion Rate
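A sketch of what one entry in such a test log might look like in code - the field names and example values are mine, simply mirroring the checklist above:

```python
from dataclasses import dataclass

@dataclass
class ExperimentRecord:
    name: str              # a unique name
    difficulty: str        # judgement on implementation effort
    sample_size: int       # users/clicks to collect before evaluating
    success_metrics: list  # usually CTR and conversion rate

record = ExperimentRecord(
    name="hero-image-vs-lifestyle-shot",
    difficulty="low (swap one image asset)",
    sample_size=7600,
    success_metrics=["CTR", "conversion rate"],
)
print(record.name, record.sample_size)
```

Keeping these fields in one place makes it obvious when a test is ready to evaluate and keeps the success criteria from shifting mid-flight.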
76. @nochillfilter
#HEROCONF
Discovery, Hypotheses, and Testing
1.) Discovery
Make observations (with data!) based on what you know about your marketing and your audiences, and line them up with your goals.
2.) Research and Generate Ideas
Learn more about how people behave in general and use that to inform your hypotheses, and to generate ideas for ways to improve performance.
3.) Prioritize and Plan
Document and schedule your ideas, and begin testing.
4.) Evaluate and Repeat
Validate your hypotheses, build on what works, and go back to the observation and ideation stage as necessary.