
How many of you have a Data Science group within your organization that is starting to look at customer outcomes?

Keep your hand up if you’re completely satisfied with your working relationship with those Data Scientists, otherwise put your hand down.

I’ve gotten a handle on three groups in the audience. For some of you, Data Science hasn’t entered into the space of customer understanding in your organizations. I’ll speak to that briefly – but as a forecast, that day is coming up fast.

A bunch of you do have Data Scientists in your organization working on customer outcomes, but you aren’t satisfied with your working relationship. That’s what the bulk of this presentation will be about. The future of Mixed Methods Research requires the best of what both disciplines bring to the table, so we need to start building stronger relationships with each other.

Before I dive in, I’ll give a nod to the folks who are happy with their connection with Data Scientists. This is a new space, and what may seem like second nature to you is a challenging space for many. It’s a challenging space for me, frankly. The way forward is to have multiple points of view about how we collaborate, but I’m consistently struck by the scarcity of Data Science topics at UX conferences and vice versa. So today I’m going to be adding my perspective to the discussion, and I hope you’ll all consider doing the same at future events.

I won't go in depth on my title here. I'm showing this visualization to communicate that I consider myself a qualitative researcher at heart.

I made the switch not because I was tired of qualitative research, or because I think Data Science is better. I'm just passionate about how these two disciplines collaborate, and formalizing a hybrid role seemed like the best position from which to start bringing that passion to life.

So I’ve been talking about “Data Science” for a couple minutes now, but I want to spend a bit more time unpacking what’s happening in the Data Science space. And to tee up this discussion, I want to reference a comment that Susan made during the past, present, future panel on Wednesday – there’s a lot of buzz around UX, and we need to make sure that UX work is being represented well, and that people who lack the rigor of the discipline aren’t defining what it means for everyone. That point really resonated with me, though in terms of buzz right now, my perception is that Data Science has a lot more of it.

There’s an interesting dynamic with these programs, because the draw for them – the promise for someone who is already quantitatively minded – is that there’s so much data, you’ll be able to answer any question you have. The appropriate nuance shows up later, in the details of the course, if you’re paying attention. So these graduates will be exploding onto the scene, ready to tackle every question in their way, and quite frankly most of them may not think that we have a role to play in the questions that they’re asking.

So what do we do about that?

The good news is that this role of educating stakeholders about what we do is very familiar to us – and I know it has already been discussed at length. My goal here isn't to prescribe THE education approach, but rather to look at this process from the Data Science perspective, and to share how I addressed this education challenge while I was a qualitative user researcher.

Check out Principle #6 – what is this saying? If your Data Scientist is using the p-value by itself as a mic drop moment, that’s an opportunity for you to hook in. There’s a lot of context missing if all you’re working off of is the p-value, so don’t be afraid to jump in there.
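To make that concrete, here is a minimal Python sketch (standard library only, with invented means, standard deviations, and sample sizes) showing why a p-value alone is a poor mic drop: with a large enough sample, a practically negligible difference still produces a "significant" p-value, which is exactly the context Principle #5 says the p-value doesn't carry.

```python
import math
import random
import statistics

def cohens_d(a, b):
    """Standardized mean difference (effect size)."""
    pooled = math.sqrt((statistics.variance(a) + statistics.variance(b)) / 2)
    return (statistics.mean(b) - statistics.mean(a)) / pooled

def ztest_pvalue(a, b):
    """Two-sided large-sample z-test p-value for a difference in means."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = (statistics.mean(b) - statistics.mean(a)) / se
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(0)
# Hypothetical metric: same spread, a 0.3% shift in the mean, huge samples.
control = [random.gauss(100.0, 10.0) for _ in range(100_000)]
treatment = [random.gauss(100.3, 10.0) for _ in range(100_000)]

p = ztest_pvalue(control, treatment)
d = cohens_d(control, treatment)
print(f"p-value: {p:.2e}")    # "significant" at any conventional threshold
print(f"Cohen's d: {d:.3f}")  # yet the effect size is tiny (around 0.03)
```

The statistical significance here says almost nothing about whether the difference matters; asking "what's the effect size, and is it practically relevant?" is the hook.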

Let’s tie some thoughts together. Remember what I said up front about the recent hype around citizen data science – the idea that anyone can adapt a model and make it work for their context. These principles were a reaction to seasoned practitioners abusing some of the fundamental concepts of the discipline. What do you think will happen when there are folks engaging who don’t even have a rigorous background in the space?

The bottom line is that statistics is hard. Most statistics classes have a unit up front that emphasizes how bad our brains are at thinking in probabilistic and statistical terms. And to reinforce the point – it took the foremost experts in Statistics over a year to release a statement about this fundamental point in the discipline.

What can we do about this? I really appreciate Principle #4 – offering full reporting and transparency.

We also need to be aware of the context of the data – what the data represents and what it doesn't. The best machine learning model can only perform on the basis of the data it's given, and making sure the right data is included is a tough problem that we need to be a part of. There's a growing body of research on how machine learning models can be biased, and it traces back to this point: the person building the model has intrinsic biases that shape which data he or she considers appropriate for describing a particular space.
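As a toy illustration of that point – the segments, metric, and numbers are all hypothetical – here is a standard-library sketch in which a trivial threshold "model" trained on data that omits one user segment looks excellent on its training population and fails completely on the segment that never made it into the data:

```python
import random
import statistics

random.seed(1)

# Hypothetical scenario: session length is used to predict satisfaction,
# but the relationship differs by segment, and the training data only
# contains power users.
power_happy  = [(random.gauss(60, 5), 1) for _ in range(900)]  # long sessions, satisfied
power_churn  = [(random.gauss(20, 5), 0) for _ in range(100)]  # short sessions, unsatisfied
casual_happy = [(random.gauss(15, 5), 1) for _ in range(500)]  # short sessions, still satisfied

train = power_happy + power_churn  # casual users are absent from the data

# Trivial model: predict "satisfied" above the midpoint of the class means.
pos = [x for x, y in train if y == 1]
neg = [x for x, y in train if y == 0]
threshold = (statistics.mean(pos) + statistics.mean(neg)) / 2

def predict(x):
    return 1 if x > threshold else 0

train_acc = sum(predict(x) == y for x, y in train) / len(train)
casual_acc = sum(predict(x) == y for x, y in casual_happy) / len(casual_happy)
print(f"accuracy on training population: {train_acc:.0%}")  # near perfect
print(f"accuracy on missing segment:     {casual_acc:.0%}") # near zero
```

Nothing about the model's own metrics would flag this: the failure lives entirely in which data was collected, which is exactly where qualitative understanding of the customer space belongs in the conversation.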

A/B testing is a common example, so I’ll close out with some ideas around how to start a conversation around that.
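One concrete conversation starter is heterogeneous treatment effects: an overall result that looks flat can hide segments moving in opposite directions. A minimal sketch, with made-up conversion counts chosen purely to illustrate the pattern:

```python
from collections import defaultdict

# Hypothetical A/B test records: (segment, variant, converted 0/1).
# Counts are invented so the overall lift looks flat while segments diverge.
records = (
    [("expert", "control", 1)] * 80 + [("expert", "control", 0)] * 20 +
    [("expert", "treatment", 1)] * 60 + [("expert", "treatment", 0)] * 40 +
    [("novice", "control", 1)] * 40 + [("novice", "control", 0)] * 60 +
    [("novice", "treatment", 1)] * 62 + [("novice", "treatment", 0)] * 38
)

def conversion_rates(rows):
    """Conversion rate per variant for a list of records."""
    hits, totals = defaultdict(int), defaultdict(int)
    for _, variant, converted in rows:
        hits[variant] += converted
        totals[variant] += 1
    return {v: hits[v] / totals[v] for v in totals}

overall = conversion_rates(records)
print("overall:", overall)  # control 0.60 vs treatment 0.61 – looks flat

for segment in ("expert", "novice"):
    rows = [r for r in records if r[0] == segment]
    # experts drop (0.80 -> 0.60) while novices improve (0.40 -> 0.62)
    print(segment + ":", conversion_rates(rows))
```

Knowing which segments to cut by – the experience attributes discussed earlier – is where qualitative researchers can add context that the aggregate test result can't provide on its own.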

- 1. Mixed Methods Research in the Age of Big Data A Primer for UX Professionals
- 2. Zachary Sam Zaiss UX Data Scientist | Microsoft Cloud @zszaiss 2006 2012 2016 UX Researcher UX DS Berkeley MIDS
- 3. Gartner Hype Cycle for Emerging Technologies: 2014 http://www.gartner.com/newsroom/id/2819918
- 4. Gartner Hype Cycle for Emerging Technologies: 2015 http://www.gartner.com/newsroom/id/3114217
- 5. The Education Perspective https://whatsthebigdata.com/2012/08/09/graduate-programs-in-big-data-and-data-science/ http://uxmastery.com/resources/ux-degrees/ 84 78Graduate Degree Programs in Data Science Graduate Degree Programs in UX
- 6. http://radar.oreilly.com/2013/10/design-thinking-and-data-science.html
- 7. We need to make collaboration with Data Scientists a priority… … and it starts with a conversation
- 8. Tip #1 Stake Your Claim
- 9. Qualitative Evaluation Criteria Talking Points Quantitative basis for n values https://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/ http://www.measuringu.com/blog/five-history.php http://www.measuringu.com/blog/five-for-five.php
- 10. Qualitative Evaluation Criteria Talking Points Quantitative basis for n values Existence Proof https://www.youtube.com/watch?v=3uqZPnxG4_w
- 11. Qualitative Evaluation Criteria Talking Points Quantitative basis for n values Existence Proof Grounded Theory Inductive vs. Deductive Reasoning http://www.slideshare.net/traincroft/hcic-muller-guha-davis-geyer-shami-2015-0629 Theory from Data Data from Theory
- 12. Qualitative Evaluation Criteria Talking Points Quantitative basis for n values Existence Proof Grounded Theory Inductive vs. Deductive Reasoning Constructivism vs. Determinism https://us.sagepub.com/en-us/nam/research-design/book237357
- 13. Discussing evaluation criteria for qualitative research needs to be second nature.
- 14. What is your mic drop moment?
- 15. Tip #1 Stake Your Claim
- 16. Tip #2 Speak the Language
- 17. vs
- 18. Models + Key Aspects of Analysis Descriptive Model Descriptive Statistics Statistical Significance What is the probability of obtaining this result given the null hypothesis is true? Practical Significance Is the effect on the outcome large enough to be considered relevant?
- 19. http://fivethirtyeight.com/features/statisticians-found-one-thing-they-can-agree-on-its-time-to-stop-misusing-p-values/ The statement process was lengthier and more controversial than anticipated.
- 20. 6 Principles for p-values from ASA’s Statement 1. P-values can indicate how incompatible the data are with a specified statistical model. 2. P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone. 3. Scientific conclusions and business or policy decisions should not be based only on whether a p-value crosses a specific threshold. 4. Proper inference requires full reporting and transparency. 5. A p-value, or statistical significance, does not measure the size of an effect or the importance of a result. 6. By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis. http://amstat.tandfonline.com/doi/abs/10.1080/00031305.2016.1154108
- 21. Models + Key Aspects of Analysis Descriptive Model Descriptive Statistics Statistical Significance What is the probability of obtaining this result given the null hypothesis is true? Practical Significance Is the effect on the outcome large enough to be considered relevant? Predictive Model Supervised Machine Learning Accuracy How well does the model predict the outcome for new data cases?
- 22. https://www.captionbot.ai/
- 23. Models + Key Aspects of Analysis Descriptive Model Descriptive Statistics Statistical Significance What is the probability of obtaining this result given the null hypothesis is true? Practical Significance Is the effect on the outcome large enough to be considered relevant? Predictive Model Supervised Machine Learning Accuracy How well does the model predict the outcome for new data cases? Representation Model Unsupervised Machine Learning Optimization Criteria How will we determine that we’ve built a reasonable and appropriate representation model for our data?
- 24. vs
- 25. None of these measures get at the contextual meaning behind the model.
- 26. A Diagram for Product Manager… Source: Martin Eriksson, Mind the Product. http://www.mindtheproduct.com/2011/10/what-exactly-is-a-product-manager/
- 27. … And a Framework for Attributes UX Business Tech Experience Attributes Customer attributes that can explain how that customer will experience a product. Technology Attributes Customer attributes that can explain whether customers will have technical issues with a product. Business Attributes Customer attributes that can explain the extent to which the customer will contribute to business outcomes.
- 28. Example: Developer Tools X B T Prog Language Target Platform Project Complexity Project Audience Type of App Educational Background Keyboard Proclivity Project Complexity
- 29. Example: Freemium Games X B T Platform Used Facebook Connected Whale Status Completionist Tendencies Game Session Time
- 30. Example: Fitness Bands X B T Connected Devices Type / Version Frequency of Exercise Friends with Same Band Finger Shape (Fat Fingers) Farsightedness Skin Irritation
- 31. We are uniquely qualified to articulate the experience attributes of our products.
- 32. Tip #2 Speak the Language
- 33. Tip #3 Get Involved
- 34. A Metaphor for A/B Experiments
- 35. A Better Metaphor for A/B Experiments
- 36. How can we provide greater context to A/B test findings?
- 37. Heterogeneous Treatment Effects control treatment some KPI 0.71 0.72 product experts product novices control treatment converted didn’t convert converted didn’t convert control treatment converted didn’t convert converted didn’t convert Heterogeneous Treatment Effect
- 38. https://datadialogs.ischool.berkeley.edu/2014/schedule/experiments-action
- 39. Tip #1: Stake Your Claim Tip #2: Speak the Language Tip #3: Get Involved
- 40. Mixed Methods Research in the Age of Big Data A Primer for UX Professionals http://www.uxpa.org/sessionsurvey?sessionid=113
