
From Sentiment to Persuasion Analysis: A Look at Idea Generation Tools

Talk given at NLP Day Texas.

Note: the first section is largely the same as the talk "From Sentiment to Persuasion Analysis." The following sections, making up the vast majority of the content, present new information.



  1. 1. From Sentiment to Persuasion Analysis: A Look at Idea Generation Tools Jason Kessler Data Scientist, CDK Global @jasonkessler www.jasonkessler.com
  2. 2. Outline • Idea generation tools – Use large corpora to generate hypotheses about questions like: – How do you make a persuasive ad? – How can presidential candidates improve their rhetoric? – How do ethnicity and gender correlate to language use in online dating profiles? – How do movie reviews predict box-office success? • Technical content: – Ways of extracting category-associated words and phrases from corpora – UX around displaying and providing context for associated words and phrases
  3. 3. Customer-Written Product Reviews → Good Ad Content
  4. 4. Naïve Approach: Indicators of Positive Sentiment "If you ask a Subaru owner what they think of their car, more times than not they'll tell you they love it," -Alan Bethke, director of marketing communications for Subaru of America (via Adweek)
  5. 5. Positive sentiment. Engaging language.
  6. 6. Finding Engaging Content. Example review appearing on a 3rd-party automotive site. Car reviewed: Chevy Cruze. Rating: 4.4/5 stars. Text: "…I was very skeptical giving up my truck and buying an 'Economy Car.' I'm 6' 215lbs, but my new career has me driving a personal vehicle to make sales calls. I am overly impressed with my Cruze…" # of users who read review: 20
  7. 7. Finding Engaging Content (continued). # of users who read review: 20. # who went on to visit a Chevy dealer's website: 15
  8. 8. Finding Engaging Content (continued). Review engagement rate: 15/20 = 75%
  9. 9. Finding Engaging Content (continued). Median review engagement rate across the corpus: 22% (computed as sketched below)
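The engagement-rate arithmetic above is simple enough to pin down in code. A minimal sketch, assuming hypothetical review records with reader and dealer-site-visit counts (the slides only define the ratio and quote a 22% corpus median):

```python
from statistics import median

# Hypothetical review records; field names are illustrative, not CDK's schema.
reviews = [
    {"car": "Chevy Cruze", "readers": 20, "dealer_site_visits": 15},
    {"car": "Some SUV", "readers": 50, "dealer_site_visits": 11},
]

def engagement_rate(review):
    # Fraction of readers who went on to visit a dealer's website.
    return review["dealer_site_visits"] / review["readers"]

rates = [engagement_rate(r) for r in reviews]
print(rates[0])       # 0.75, the 15/20 example above
print(median(rates))  # corpus median; the talk reports 22% on their data
```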
  10. 10. Sentiment vs. Persuasiveness: SUV-Specific
    • Positive sentiment: Love, Comfortable, Features, Solid, Amazing
    • High engagement: Comfortable, Front [Seats], Acceleration, Free [Car Wash, Oil Change], Quiet
  11. 11. Sentiment vs. Persuasiveness: SUV-Specific (continued)
    • Negative sentiment: Transmission, Problem, Issue, Dealership, Times
    • Low engagement: Money [spend my, save], Features, Dealership, Amazing, Build Quality [typically positive]
  12. 12. Algorithm for finding word lists • We'll discuss algorithms later in the talk • Basically, we rank words and phrases by their classifier-produced feature weights • Techniques and technologies used: – Unigram and bigram features (bigrams must pass a simple key-phrase test) – Ridge classifier (a minimal sketch follows)
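A minimal sketch of the word-list ranking this slide describes, using scikit-learn's CountVectorizer and RidgeClassifier. The texts and labels are invented, and the bigram key-phrase test is omitted; this is an illustration of the ranking idea, not CDK's implementation:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import RidgeClassifier

# Hypothetical labeled reviews; labels might come from engagement above/below median.
texts = [
    "love the quiet ride and the free car wash",
    "comfortable front seats and solid acceleration",
    "transmission problem, issue after issue at the dealership",
    "spend my money elsewhere, build quality problems",
]
labels = ["high", "high", "low", "low"]

vectorizer = CountVectorizer(ngram_range=(1, 2))  # unigrams and bigrams
X = vectorizer.fit_transform(texts)
clf = RidgeClassifier().fit(X, labels)

# Binary problem: coef_ has one row; positive weights point toward classes_[1].
terms = vectorizer.get_feature_names_out()
ranked = sorted(zip(clf.coef_[0], terms), reverse=True)
print(clf.classes_)  # ['high' 'low']
print(ranked[:5])    # terms most associated with 'low'
print(ranked[-5:])   # terms most associated with 'high'
```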
  13. 13. High Sentiment Terms: Love, Awesome, Fantastic, Handled, Perfect
  14. 14. Engagement Terms: Blind (spot, alert). Contexts from high-engagement reviews: - "The techno safety features (blind spot, lane alert, etc.) are reason for buying car..." - "Side blind Zone Alert is truly wonderful…" - …
  15. 15. Can better science improve messaging? Engagement Terms: Blind, White (paint, diamond). Contexts: - "White with cornsilk interior." - "My wife fell in love with the Equinox in White Diamond" - "The white diamond paint is to die for"
  16. 16. Can better science improve messaging?
  17. 17. Can better science improve messaging? Engagement Terms: Blind, White, Climate (geography, a/c). Contexts: - "Love the front wheel drive in this northern Minn. Climate" - "We do live in a cold climate (Ontario)" - …climate control…
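Surfacing representative contexts like the ones above amounts to keyword-in-context retrieval. A small sketch; the `contexts` helper and the review text are hypothetical:

```python
import re

def contexts(term, documents, window=40):
    """Return up-to-`window`-character snippets around each occurrence of term."""
    pattern = re.compile(re.escape(term), re.IGNORECASE)
    snippets = []
    for text in documents:
        for m in pattern.finditer(text):
            lo, hi = max(0, m.start() - window), m.end() + window
            snippets.append("..." + text[lo:hi] + "...")
    return snippets

reviews = ["Side blind Zone Alert is truly wonderful and has saved me twice."]
print(contexts("blind", reviews))
```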
  18. 18. VW recently produced very similar commercials.
  19. 19. Process
  20. 20. Process
    1. Corpus collection.
    2. Label documents with the class of interest. For CDK's usage: persuasive = high engagement rate; positive = high star rating.
    3. Find linguistic elements that are associated with the class.
    4. Explain why the linguistic elements are associated: show representative contexts, a human-generated explanation, statistics supporting the association, and ideation. Complicated! Will be a major focus of this talk.
    5. Identify documents of interest.
  21. 21. Case Study 1: Language of Politics
  22. 22. NYT: 2012 Political Convention Word Use by Party Mike Bostock et al., http://www.nytimes.com/interactive/2012/09/06/us/politics/convention-word-counts.html
  23. 23. 2012 Political Convention Word Use by Party. Source: http://www.nytimes.com/interactive/2012/09/06/us/politics/convention-word-counts.html
  24. 24. Mike Bostock et al., http://www.nytimes.com/interactive/2012/09/06/us/politics/convention-word-counts.html Corpus has a class size imbalance: - Democrats: 79k words across 123 speeches - Republicans: 60k words across 66 speeches “Number of mentions by spoken words” - Normalizes imbalance (282 vs. 182) - More understandable than P(jobs|Democrat) vs. P(jobs|Republican), which are both extremely low numbers (0.36% vs. 0.30%)
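The normalization on this slide is just a per-word rate. A sketch using the counts quoted above; scaling to "per 25,000 words" is an assumption about a readable display rate, not something the slide specifies:

```python
# Mention counts of "jobs" and total spoken words, as quoted on the slide.
counts = {"democrat": 282, "republican": 182}
total_words = {"democrat": 79_000, "republican": 60_000}

for party, mentions in counts.items():
    rate = mentions / total_words[party]  # P(jobs | party), tiny and hard to read
    print(f"{party}: P('jobs' | party) = {rate:.2%}, "
          f"mentions per 25k words = {rate * 25_000:.0f}")
# democrat: P('jobs' | party) = 0.36%, mentions per 25k words = 89
# republican: P('jobs' | party) = 0.30%, mentions per 25k words = 76
```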
  25. 25. • Corpus: Political Convention Speeches • Class labels: Political Party of Speaker • Linguistic elements: – Words and phrases – Manually chosen • Explanation: – Cool bubble diagram – Selective topic narration – Click-to-view topic contexts organized by speaker and party • We’ll get back to this in a minute Summary: NYT 2012 Conventions
  26. 26. Case Study 2: Language of Self-Representation
  27. 27. OKCupid: How do gender and ethnicity affect self-presentation in online dating profiles? Christian Rudder: http://blog.okcupid.com/index.php/page/7/ Which words and phrases statistically distinguish ethnic groups and genders? Examples: hobos, almond butter, 100 Years of Solitude, Bikram yoga
  28. 28. Source: http://blog.okcupid.com/index.php/page/7/ (Rudder 2010) Words and phrases that distinguish white men. OKCupid: How do ethnicities’ self-presentation differ on a dating site?
  29. 29. Source: http://blog.okcupid.com/index.php/page/7/ (Rudder 2010) Words and phrases that distinguish Latino men. Explanation OKCupid: How do ethnicities’ self-presentation differ on a dating site?
  30. 30. Source: http://blog.okcupid.com/index.php/page/7/ Words and phrases that distinguish Latino men. OKCupid: How do ethnicities' self-presentations differ on a dating site? The explanation suggests that topic modeling may help identify the latent themes driving the distinctiveness of these words and phrases.
  32. 32. What can we do with this? • Genre of insurance or investment ads – Montage of important events in the life of a person. • With these phrase sets, the ads practically write themselves: • What if you wanted to target Latino men? – Grows up boxing – Meets girlfriend salsa dancing – Becomes a Marine – Tells a joke at his wedding – Etc…
  33. 33. The linguistic elements were found “statistically.” The exact method is unclear, but Rudder (2014) describes a novel method to identify statistically associated terms. - Let’s look closely at the algorithm and see: - how it works - and how it performs on the political convention data set. OKCupid: How do ethnicities’ self-presentation differ on a dating site?
  34. 34. [Scatter plot, not drawn to scale: each term is positioned by its frequency ranking among Democrats (vertical axis, top to bottom) and its ranking among Republicans (horizontal axis, bottom to top). Plotted terms include giraffe, olympics, ann, bipartisan, people, stand, election, auto, wealthy, bin laden, regulatory, pelosi, rancher, grandfather, public, worker, regulation, profit.] Source: Christian Rudder. Dataclysm. 2014.
  35. 35. The association between Democrats and "worker" is the Euclidean distance between the word and the top-left corner (top of the Democratic ranking, bottom of the Republican ranking). Source: Christian Rudder. Dataclysm. 2014.
  36. 36. The association between Republicans and "regulation" is the Euclidean distance between the word and the bottom-right corner. A sketch of this scoring follows. Source: Christian Rudder. Dataclysm. 2014.
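A sketch of the rank-rank scoring as the slides describe it: rank each term by frequency within each class, then score association with a class by Euclidean distance to that class's corner. The term counts are invented, and this may differ in detail from Rudder's published method:

```python
import math
from collections import Counter

def rank_by_freq(counts, vocab):
    # Rank 0 = most frequent term within this class (Counter gives 0 for unseen terms).
    ordered = sorted(vocab, key=lambda t: -counts[t])
    return {t: i for i, t in enumerate(ordered)}

# Hypothetical per-class term counts.
dem_counts = Counter({"people": 95, "worker": 90, "wealthy": 70, "olympics": 2})
rep_counts = Counter({"people": 85, "olympics": 80, "regulation": 60, "worker": 5})

vocab = sorted(set(dem_counts) | set(rep_counts))
dem_rank = rank_by_freq(dem_counts, vocab)
rep_rank = rank_by_freq(rep_counts, vocab)
bottom = len(vocab) - 1

def dem_score(term):
    # Distance to the top-left corner: top of the Democratic ranking (0),
    # bottom of the Republican ranking (`bottom`). Smaller = more Democratic.
    return math.hypot(dem_rank[term], bottom - rep_rank[term])

for term in sorted(vocab, key=dem_score):
    print(f"{term:12s} {dem_score(term):.2f}")
```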
  37. 37. Another look at the 2012 political convention data. Mike Bostock et al., http://www.nytimes.com/interactive/2012/09/06/us/politics/convention-word-counts.html - The conventions let political parties reach a broad audience, both energizing their bases and appealing to undecided voters. - How well do these terms capture rhetorical differences between the parties?
  38. 38. Another look at the 2012 political convention data. Applying the Rudder algorithm to the 2012 data reveals a number of terms associated with a party that weren't covered in the NYT viz. These can uncover party talking points. Republican top terms (in visualization?):
    • olympics (no): Gov. Romney was CEO of the Organizing Committee for the 2002 Winter Olympics.
    • ann (no): Ann Romney.
    • big government (no)
    • 16 [trillion] (no): Size of the national debt.
    • oklahoma (no): Speech by Mary Fallin, OK governor, mentioned the state numerous times.
    • elect mitt (yes)
    • next president (no)
    • the constitution (no): Mostly referring to allegedly unconstitutional actions by Pres. Obama.
    • mitt 's (yes)
    • our founding (no): Founding fathers; talk of restoring the values of the founding fathers.
    • jack (no): Republicans just seem to talk about people named Jack more.
    • 8 [percent] (no): 8% unemployment. The term "unemployment" was used in the visualization, but Democrats didn't mention the percentage.
    • they just (no): "Just don't get it" was a refrain of a Republican speaker.
    • patient (no): Discussions of the US being "patient," as well as how the ACA affects the doctor-patient relationship.
    • pipeline (no): Keystone pipeline.
  39. 39. How well do these terms capture linguistic differences between parties? Mike Bostock et al., http://www.nytimes.com/interactive/2012/09/06/us/politics/convention-word-counts.html [Before and after views of the visualization.]
  40. 40. Another look at the 2012 political convention data. Now let's look at the Democrats. • The auto bailout is Pres. Obama's 2012 Olympics. • Government is seen as a collection of programs (Pell grants, Medicare vouchers, etc.) to help middle-class families, vs. "big government." • Attacks on the wealthy. • No appeals to fundamental principles ("constitution," "founding fathers"). • Women explicitly mentioned, while Republicans talk about Ms. Romney. Democratic top terms (in visualization?):
    • auto [industry] (yes): Provided in NYT. Pres. Obama was credited with the auto industry recovery.
    • [move] america forward (yes): Only "forward" was included in the visualization.
    • insurance company (no)
    • woman 's (yes)
    • [the] wealthy (no): Never used by Republicans.
    • pell [grant] (no): Never used by Republicans.
    • last week (no): Used to talk about the RNC that happened the previous week.
    • grandmother (no): 6:1 ratio of Democratic vs. Republican usage. Dovetails with the discussion of women.
    • access (no): Access to gov't services or health care.
    • millionaire (yes)
    • platform (no): Republicans never mentioned the party platform.
    • voucher (no): Accusing Republicans of turning Medicare into a "voucher."
    • class family (yes): "Middle class" was included.
    • register (no): Voter registration. Only used once by Republicans.
  41. 41. How can this aid in messaging? • Democrats had an advantage in having their convention last – They could refute Republican talking points – The Republicans made Gov. Romney's role in the 2002 Olympics a major selling point • It went virtually unmentioned by Democrats • Republicans may be using numbers to their detriment: – 8% unemployment • Often "for 42 months" was added – $16 trillion deficit – These numbers are tough to interpret without a lot of context • Romney's "47%… …are dependent on the government, believe they are victims" comment may have been the death knell of his presidential bid • Jeb Bush's campaign point of "4% GDP growth" has been ineffective – His polling numbers are at about 4% at the time of this talk
  42. 42. Case Study 3: Movie reviews and revenue
  43. 43. Predicting Box-Office Revenue From Movie Reviews - Data: - 1,718 movie reviews from 2005-2009 across 7 different publications (e.g., Austin Chronicle, NY Times, etc.) - Various movie metadata, like rating and director - Gross revenue - Task: - Predict revenue from text, couched as a regression problem - Regressor used: Elastic Net - L1- and L2-penalized linear regression - 2009 reviews were held out as test data - Linguistic elements: - N-grams: unigrams, bigrams, and trigrams - Dependency relation triples: <dependent, relation, head> - Versions of features labeled for each publication (i.e., domain) - "Ent. Weekly: comedy_for", "Variety: comedy_for" - Essentially the same algorithm as Daumé III (2007); a sketch follows - Performed better than a naïve baseline, but worse than metadata. Joshi et al. Movie Reviews and Revenues: An Experiment in Text Regression. NAACL 2010. Daumé III. Frustratingly Easy Domain Adaptation. ACL 2007.
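A sketch of the setup summarized above: n-gram features plus per-publication copies in the style of Daumé III's "frustratingly easy" domain adaptation, fed to scikit-learn's ElasticNet. The mini-corpus and the alpha value are made up, and this is a simplification of Joshi et al.'s full feature set:

```python
import numpy as np
from scipy.sparse import hstack
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import ElasticNet

# Hypothetical mini-corpus: (review text, publication, gross revenue in $M).
reviews = ["a comedy for the whole family", "a bleak festival darling",
           "a comedy for adults only"]
publications = ["Ent. Weekly", "Variety", "Ent. Weekly"]
revenue = [80.0, 1.5, 22.0]

vec = CountVectorizer(ngram_range=(1, 3))  # unigrams, bigrams, trigrams
X_shared = vec.fit_transform(reviews)

# Daumé-style augmentation: one shared copy of each feature, plus one copy
# per publication that fires only for documents from that publication.
blocks = [X_shared]
for pub in sorted(set(publications)):
    mask = np.array([p == pub for p in publications], dtype=float)[:, None]
    blocks.append(X_shared.multiply(mask))
X = hstack(blocks).tocsr()

model = ElasticNet(alpha=0.1).fit(X, revenue)
# model.coef_ now holds shared and publication-specific weights side by side.
```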
  44. 44. Predicting Box-Office Revenue From Movie Reviews. Joshi et al. Movie Reviews and Revenues: An Experiment in Text Regression. NAACL 2010. [Table of learned coefficients with manually labeled feature categories. Each feature's weight ("Weight ($M)") in the linear model indicates how much the feature is "worth" in millions of dollars.]
  45. 45. - 2015 follow-up work: - Uses a convolutional neural network in place of the Elastic Net. Bitvai and Cohn: Non-Linear Text Regression with a Deep Convolutional Neural Network. ACL 2015
  46. 46. Predicting Box-Office Revenue From Movie Reviews. Bitvai and Cohn: Non-Linear Text Regression with a Deep Convolutional Neural Network. - Word association for a convolutional neural network regressor - Algorithm (sketched below): - Compare the regressor's prediction with the phrase zeroed out in the input against the original prediction. - Impact is the difference between the two outputs. - The impact for "Hong Kong" involves running the regressor with "Hong Kong" zeroed out in the movie representation, while the unigrams "Hong" and "Kong" are unaffected. Impact = predict({…, "Hong Kong": 1, …}) – predict({…, "Hong Kong": 0, …})
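The impact computation is easy to express directly. A minimal sketch with a toy linear predictor standing in for the trained network; the `weights` and feature dict are invented:

```python
def impact(predict, features, phrase):
    """Change in predicted revenue attributable to `phrase` in this document."""
    ablated = dict(features)
    ablated[phrase] = 0  # zero only the bigram; "hong" and "kong" stay intact
    return predict(features) - predict(ablated)

# Toy stand-in for the trained regressor: a linear scorer over a feature dict.
weights = {"hong": 0.2, "kong": 0.1, "hong kong": 3.5}
predict = lambda feats: sum(weights.get(k, 0.0) * v for k, v in feats.items())

doc = {"hong": 1, "kong": 1, "hong kong": 1}
print(impact(predict, doc, "hong kong"))  # 3.5
```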
  47. 47. Predicting Box-Office Revenue From Movie Reviews. Bitvai and Cohn: Non-Linear Text Regression with a Deep Convolutional Neural Network. - Explanation: - '#'s reflect the count of movies in the test set having a review that used the phrase - Min is the lowest impact across the scored movies and max is the highest; these are used for ordering the top positive and negative terms. - Many "top" terms appear in only one movie. - Manually selected phrases are ordered by average impact. - Open questions: - Does the increase or decrease in the prediction actually improve the regressor's performance? - Including the average decrease in MAE among movies containing the phrase would address this.
  48. 48. Univariate approach to predicting revenue from text • The corpus used in Joshi et al. 2010 is freely available. • Can we use the Rudder algorithm to find interesting associated terms? How does it compare? – The Rudder algorithm requires two or more classes. – We can partition the dataset into high- and low-revenue partitions (a sketch follows). • High being movies in the upper third of revenue • Low in the bottom third – Find words that are associated with high vs. low (throwing out the middle third) and vice versa
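A sketch of the tercile partition described above; the titles and revenue figures are invented:

```python
import numpy as np

# Hypothetical (title, revenue-in-$M) records.
movies = [("Batman Begins", 205.3), ("Tribeca Documentary", 0.4),
          ("Mid-Budget Drama", 35.0), ("Borat", 128.5),
          ("Festival Pickup", 1.2), ("Popcorn Sequel", 160.0)]

revenues = np.array([rev for _, rev in movies])
lo, hi = np.percentile(revenues, [100 / 3, 200 / 3])  # tercile boundaries

high = [title for title, rev in movies if rev >= hi]
low = [title for title, rev in movies if rev <= lo]
# Discard the middle third, then run the rank-rank association on the
# review text of `high` vs. `low`.
print(high, low)
```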
  49. 49. Univariate approach to predicting revenue-category from text • Observation definition is really important! – Recall that the same movie may have multiple reviews. – We can treat an observation as • a single review • a single movie (both definitions are sketched after the table below) – The response variable remains the same: movie revenue
  50. 50. Univariate approach to predicting revenue-category from text (continued). Top 5 high-revenue terms (Rudder algorithm):
    • Review-level observations: Batman, Borat, Rodriguez, Wahlberg, Comic book
    • Movie-level observations: Computer generated, Superhero, The franchise, Comic book, Popcorn
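A sketch of the two observation definitions, with hypothetical review records; either way, the response variable attached to an observation is the movie's revenue:

```python
from collections import defaultdict

# Hypothetical review records: (movie, review text).
reviews = [("Borat", "gleefully offensive"),
           ("Borat", "a comic landmark"),
           ("Superhero Movie", "computer generated spectacle")]

# Review-level: every review is its own observation.
review_level = [text for _, text in reviews]

# Movie-level: concatenate all of a movie's reviews into one observation.
by_movie = defaultdict(list)
for movie, text in reviews:
    by_movie[movie].append(text)
movie_level = {movie: " ".join(texts) for movie, texts in by_movie.items()}

print(len(review_level), len(movie_level))  # 3 review-level vs. 2 movie-level
```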
  52. 52. Univariate approach to predicting revenue-category from text. Top 5: Computer generated, Superhero, The franchise, Comic book, Popcorn. Bottom 5: exclusively, [Phone number], Festival, Tribeca, With English. This failed to produce term associations around content ratings (e.g., PG-13, "strong language"), and rating is strongly correlated with revenue. Let's look exclusively at PG-13 movies.
  53. 53. Only PG-13-rated movies. Selected top terms: Franchise, Computer generated, Installment, The first two, The ultimate. Selected bottom terms: [Theater-specific terms like phone numbers], A friend, Her mother, Parent, One day, Siblings. The top terms are very similar: franchises and sequels remain indicators of success. The bottom terms tell us something new: movies about friendship or family dynamics don't seem to perform well. Idea generation tools can also be idea rejection tools: for producers looking for a movie that pulls in a lot of revenue, a PG-13 family melodrama isn't a great idea. Lesson: corpus selection is important for getting actionable, interpretable results!
  54. 54. Language use and age
  55. 55. Language use over time in Facebook statuses. The best topic for each age group is listed, with a LOESS regression line for prevalence by age group (sketched below). Schwartz HA, Eichstaedt JC, Kern ML, Dziurzynski L, Ramones SM, et al. (2013) Personality, Gender, and Age in the Language of Social Media: The Open Vocabulary Approach. PLoS ONE 8(9). Nod to James Pennebaker.
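A sketch of the per-topic trend line, using statsmodels' lowess as a stand-in smoother; the prevalence-by-age points are invented, not the paper's data:

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

# Invented prevalence-by-age points for one topic.
ages = np.array([13, 16, 19, 23, 30, 45, 60], dtype=float)
prevalence = np.array([0.080, 0.090, 0.050, 0.030, 0.020, 0.015, 0.010])

smoothed = lowess(prevalence, ages, frac=0.6)  # returns (age, fitted) pairs
for age, fit in smoothed:
    print(f"age {age:4.0f}: fitted prevalence {fit:.3f}")
```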
  56. 56. Word cloud pros and cons. An alternative to the word cloud is a list ranked by phrase frequency or phrase precision. Pros: • Word clouds force you to hunt for the most impactful terms • You end up examining the long tail in the process • They compactly represent many phrases. Cons: • Longer words are more prominent ("the mullet of the Internet") • Hard to show phrase annotations • Ranking is unclear. Schwartz HA, Eichstaedt JC, Kern ML, Dziurzynski L, Ramones SM, et al. (2013) Personality, Gender, and Age in the Language of Social Media: The Open Vocabulary Approach. PLoS ONE 8(9)
  57. 57. CDK Global’s Language Visualization Tool
  58. 58. Informing dealer talk tracks. • Suppose you are selling a car to a typical person: how would you describe the car's performance? • Should you say – This car has 162 ft-lbs of torque. – OR – This car makes passing on two-lane roads easy. • Having an idea generation (and rejection) tool makes this very easy.
  59. 59. Recommendations • Corpus and document selection are important – Documents: movie-level instead of review-level – Corpus: rating-specific – Subsets of a corpus can be particularly interesting: e.g., PG-13 movies • Don't always look at extreme terms – The Rudder algorithm's extreme terms on the NYT convention data lacked many important issues, like Medicare • Use a variety of approaches – Univariate and multivariate approaches can highlight different terms • More phrase context is better than less • Phrase lists are most understandable when presented with a narrative, even if it's a bit speculative
  60. 60. Acknowledgements • Thank you! • We're hiring: talk to me (best) or, if you can't, go to CDKJobs.com • Special thanks to Joel Collymore (the concept of an "idea generation tool"), Michael Mabale (thoughts on word clouds), Michael Eggerling, Ray Littell-Herrick, Peter Huang, Peter Kahn, Iris Laband, Kyle Lo, Chris Mills, Dengyao Mo, Keith Zackarone
  61. 61. Questions? (Yes, we’re hiring!!) • Data Scientist • UI/UX Development & Design • Software Engineer – all levels • Product Manager Is this you? • Find “Jobs by Category” • Click Technology • Have your Resume ready • Click “Apply”! Head to CDKJobs.com -or- talk to me @jasonkessler
