Deception detection, or deceptive opinion detection, is the task of inferring whether a given text that carries some opinion is deceptive (or "false"). To clarify what this means, consider a hotel review site: an "adversary" may post a review deliberately written to sound authentic and to convince the reader that it is truthful. The deception we refer to in this summary is that of user reviews and opinions.
Part 1 - Topics of Interest
1. Sentiment Analysis
2. Sign Language (capture and recognition)
3. Computational Creative Naming
4. Computerized Deception Detection
5. NLP Approaches for Multiword Expressions
6. Answer Extraction
7. Natural Language Generation
8. Automatic Text Summarization
9. NLP-based Bibliometrics
10. Natural Language User Interfaces for Relational Databases
11. Truecasing - Restoring Case Information for badly/non-cased text
12. The Web as a Corpus
Part 2 - Extension on 4 Selected Topics
1 - Sign Language (capture and recognition)
American Sign Language is the primary means of communication for around 1.5 million deaf people in the United States [2]. It is a visual-gestural language that uses upper-body gestures. There is no written form of sign language - currently, corpora take the form of videos [13] - and the NLP field may need to adapt to this research field. There have been a few attempts to create sign language corpora ([13]), but they have yet to be studied from a linguistic/NLP perspective. Tools, and adaptations of existing tools, need to be developed in order to face this challenge - with regard to timing, spatial reference, inflection, and new methods of unified motion capture for use in sign language analysis.
2 - Truecasing
Truecasing is the problem of determining proper capitalization for a sentence or document that is uncapitalized or wrongly capitalized. It mainly applies to English and any other language whose script distinguishes between lower- and upper-case letters; the problem is irrelevant for languages not written in the Latin, Cyrillic, Greek, or Armenian alphabets. Besides improving readability, truecasing aids many tasks such as entity recognition, translation, and content extraction. The main aim of the process is to restore case information to raw text. ([11, 14])
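As a minimal sketch of the idea (not any published system), a truecaser can learn the most frequent surface form of each word from cased training text and reapply it to lowercased input. The tiny corpus and function names below are illustrative assumptions:

```python
from collections import Counter, defaultdict

def train_truecaser(cased_sentences):
    """Count the surface forms observed for each lowercased word."""
    forms = defaultdict(Counter)
    for sentence in cased_sentences:
        for word in sentence.split():
            forms[word.lower()][word] += 1
    # Keep only the most frequent casing seen for each word.
    return {w: c.most_common(1)[0][0] for w, c in forms.items()}

def truecase(text, model):
    """Restore case word by word; unknown words are left untouched."""
    return " ".join(model.get(w.lower(), w) for w in text.split())

corpus = [
    "The hotel in Chicago was great .",
    "Chicago has many hotels .",
    "The staff was friendly .",
]
model = train_truecaser(corpus)
print(truecase("the hotel in chicago was friendly .", model))
# prints: The hotel in Chicago was friendly .
```

A real truecaser (e.g. [11]) also models sentence position and word context, since a word like "the" is capitalized only sentence-initially; this unigram sketch ignores that.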
3 - Natural Language User Interfaces for Relational Databases
A natural language interface for a database allows the user to type natural language queries (such as: what buses leave at 16:00 from Tel-Aviv?), which are then transformed into SQL queries. This translation phase poses an NLP challenge in several regards - it requires morphological and syntactic analysis, followed by semantic analysis, in order to transform the user's question into a few intermediate-language representations that correspond to the possible interpretations of the question, before choosing the one that will be transformed into an SQL query. This architecture is formally known as a Natural Language Interface to Databases (NLIDB). A popular implementation of such an NLIDB is called Edite and is widely available. ([10, 15])
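A real NLIDB performs the full morphological/syntactic/semantic pipeline described above. As a far simpler illustration of only the final mapping step, here is a toy template-based translator; the `buses` table, its columns, and the pattern are invented for this example and are unrelated to how Edite is actually implemented:

```python
import re

# Hypothetical schema assumed for illustration: buses(origin, departure_time).
PATTERNS = [
    # "what/which buses leave on/at <time> from <city>?"
    (re.compile(r"(?:what|which) buses leave (?:on|at) (\d{1,2}:\d{2}) from (\w[\w-]*)", re.I),
     "SELECT * FROM buses WHERE departure_time = '{0}' AND origin = '{1}'"),
]

def to_sql(question):
    """Match the question against known templates and fill in an SQL query."""
    for pattern, template in PATTERNS:
        m = pattern.search(question)
        if m:
            return template.format(*m.groups())
    return None  # no template matched

print(to_sql("What buses leave on 16:00 from Tel-Aviv?"))
# prints: SELECT * FROM buses WHERE departure_time = '16:00' AND origin = 'Tel-Aviv'
```

Template matching like this handles only anticipated phrasings; the intermediate-representation approach in the text exists precisely to cover the many paraphrases a template list cannot.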
4 - Computerized Deception Detection
Computerized deception detection is the process of assessing the authenticity and truthfulness of a given text (for example, detecting that someone wrote false reviews). Methods for doing so can be simply lexical (in the sense that they rely on dictionary word counts), or can use POS tagging and n-grams for a higher success rate. Previous insights include, for example, that deceivers use verbs and pronouns more often. More complex approaches that yield better detection rates refer to the syntactic stylometry of the text, using CFG parse trees. This kind of detection can be applied to identifying fake reviews (opinion spam). ([4, 16, 17])
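The simple lexical approach can be sketched as counting word categories against small hand-made dictionaries, producing per-token rates (such as the pronoun and verb rates mentioned above) as features. The word lists below are illustrative assumptions, not taken from any published lexicon:

```python
# Illustrative category lexicons; real systems use much larger dictionaries.
PRONOUNS = {"i", "me", "my", "we", "our", "us"}
VERBS = {"stay", "stayed", "love", "loved", "recommend", "was", "is", "will"}

def lexical_features(text):
    """Return per-token rates of pronouns and verbs in the text."""
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    tokens = [t for t in tokens if t]
    n = len(tokens)
    return {
        "pronoun_rate": sum(t in PRONOUNS for t in tokens) / n,
        "verb_rate": sum(t in VERBS for t in tokens) / n,
    }

print(lexical_features("My husband and I loved it, we will be back!"))
# pronoun_rate 0.3, verb_rate 0.2 for these 10 tokens
```

Features like these would then be fed to any standard classifier; on their own they carry no context, which is why the n-gram and syntactic methods discussed later perform better.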
Part 3 - 2-Page Survey - Computerized Deception Detection
Deception detection, or deceptive opinion detection, is the task of inferring whether a given text that carries some opinion is deceptive (or false). To clarify what this means, consider a hotel review site: an adversary may post a review deliberately written to sound authentic and to convince the reader that it is truthful. The deception we refer to in this survey is that of user reviews and opinions.
The task at hand, therefore, is to decide, given some text (or review), whether it is truthful. The need for this is rather clear: preventing deceptive opinion spam [17] in mediums where reviews or opinions are posted. Nowadays, when crowdsourcing platforms such as Amazon Mechanical Turk exist, deceptive opinions can easily be generated and can bias a user for the better (or for worse).
The task poses quite a challenge, since we want as few false positives as possible, and the task itself involves many aspects of the field of natural language processing.
As with many other natural-language tasks, this task requires data - for example, from review websites (in [4], data was taken from TripAdvisor). We need some reviews that are guaranteed to be truthful and some that are guaranteed to be deceptive - that is, a gold data set that we can evaluate against. It is worth noting that even without an applicable gold data set, there exists a heuristic approach for evaluation ([19]).
Turning to the evaluation of deceptive reviews, we consider the case where such a gold set exists (in [17], the gold sets of deceptive reviews were generated using Amazon Mechanical Turk). As for the truthful part of the gold set, truthful reviews can be collected from authenticated, well-reputed users (as was also done in [17]). Such data sets, which can be domain-specific, are publicly available ([20]).
Before attempting a machine-based evaluation, it is interesting to inspect the performance of human evaluation. In [17] it is shown that human judgement of deceit is poor: in their test, the maximum average accuracy at telling truth from deceit was 61%, and the agreement between different people's decisions on a given review was almost at chance.
As for automated NLP approaches to the problem, several exist:
One approach compares the frequencies of POS tags as a basis for deciding whether a given text is deceptive or truthful. In the analysis of this method in [17], it was shown to have the lowest accuracy of all the machine-based methods.
A second approach is based on psycholinguistics, aiming to detect personality traits; a widely used tool for this exists (LIWC, [21]). It is essentially a more socially oriented variant of the POS-tagging approach. Analysis of this method yielded slightly better results than the POS-based one.
A third approach introduces n-grams into the model, along with text categorization. Using this type of classification dramatically increased detection success, yielding an accuracy of ~88%. This signifies that the context of words in the sentence (that is, n-gram-based detection) is a major contributor when detecting deceptive opinions.
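To make the n-gram classification idea concrete, here is a minimal sketch of a multinomial Naive Bayes classifier over unigram and bigram features. It is one plausible instantiation of n-gram-based categorization, not the specific classifier used in [17], and the four training "reviews" are invented toy data:

```python
import math
from collections import Counter

def ngrams(text, n=2):
    """Unigram + bigram features over a whitespace tokenization."""
    toks = text.lower().split()
    return toks + [" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)]

class NaiveBayes:
    """Multinomial Naive Bayes with add-one smoothing."""
    def fit(self, texts, labels):
        self.counts = {l: Counter() for l in set(labels)}
        self.priors = Counter(labels)
        for text, label in zip(texts, labels):
            self.counts[label].update(ngrams(text))
        self.vocab = {g for c in self.counts.values() for g in c}
        return self

    def predict(self, text):
        def score(label):
            c, v = self.counts[label], len(self.vocab)
            total = sum(c.values())
            s = math.log(self.priors[label])
            # Sum smoothed log-likelihoods of the query's n-grams.
            return s + sum(math.log((c[g] + 1) / (total + v)) for g in ngrams(text))
        return max(self.counts, key=score)

nb = NaiveBayes().fit(
    ["we loved it amazing stay", "my husband loved the amazing staff",
     "room was fine location convenient", "checked in quickly room was clean"],
    ["deceptive", "deceptive", "truthful", "truthful"],
)
print(nb.predict("the staff was amazing"))
# prints: deceptive
```

The bigram features are what capture word context here; dropping them reduces the model to the bag-of-words baseline.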
Finally, a recently published article [4] suggested an even more novel approach: taking into account the 'syntactic stylometry' (that is, evaluating the similarity of different opinions based on the 'style of writing'). According to [22], similar work on syntactic stylometry has been done for authorship attribution, and even age attribution for blogs [23].
This newer method can be achieved with techniques based on Probabilistic Context-Free Grammar (PCFG) parse trees, as this is the most prominent technique for the analysis of syntactic stylometry [17, 22, 23]. The previously mentioned methods are based only on shallow lexico-syntactic features; in [4], analysis of this method yielded strong statistical evidence of deep syntactic patterns that allow deceptive texts to be detected with very high accuracy (91.2%).
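To illustrate what "deep syntactic features" look like in practice, the sketch below extracts production rules (e.g. "S -> NP VP") from an already-parsed bracketed tree; such rule counts are the kind of feature a PCFG-based stylometry model can use. The parsing itself would be produced by a statistical parser, and the example tree is hand-written for illustration:

```python
def parse_tree(s):
    """Parse a bracketed tree like '(S (NP (PRP I)) (VP (VBD stayed)))'."""
    tokens = s.replace("(", " ( ").replace(")", " ) ").split()
    def helper(i):
        assert tokens[i] == "("
        label = tokens[i + 1]
        i += 2
        children = []
        while tokens[i] != ")":
            if tokens[i] == "(":
                child, i = helper(i)
                children.append(child)
            else:
                children.append(tokens[i])  # leaf word
                i += 1
        return (label, children), i + 1
    tree, _ = helper(0)
    return tree

def production_rules(tree, rules=None):
    """Collect internal productions such as 'S -> NP VP' as features."""
    if rules is None:
        rules = []
    label, children = tree
    if all(isinstance(c, tuple) for c in children):
        rules.append(label + " -> " + " ".join(c[0] for c in children))
        for c in children:
            production_rules(c, rules)
    return rules

tree = parse_tree("(S (NP (PRP I)) (VP (VBD stayed) (PP (IN at) (NP (DT the) (NN hotel)))))")
print(production_rules(tree))
# prints: ['S -> NP VP', 'NP -> PRP', 'VP -> VBD PP', 'PP -> IN NP', 'NP -> DT NN']
```

Feeding rule counts like these into the same kind of classifier used for n-grams is one simple way to combine shallow and deep features.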
It is also worth noting that in all the machine models discussed above, the precision and recall values were very close to each other, as can be seen in the comparison table in [17].
Further research has also been done on duplicate opinion detection (in the sense that the same writer wrote duplicate reviews, but wrote each in a 'different way'), and on deception detection techniques that are model-specific ([18]).
As a quick test for the reader, and to illustrate humans' lack of skill at detecting deceit, have a look at Figure 1 and see if you can tell which review is truthful and which is deceitful (this was taken from [17]).
Figure 1: Truthful and Deceitful Reviews/Opinions
1. I have stayed at many hotels traveling for both business and pleasure and I can honestly stay that The James is tops. The service at the hotel is first class. The rooms are modern and very comfortable. The location is perfect within walking distance to all of the great sights and restaurants. Highly recommend to both business travelers and couples.
2. My husband and I stayed at the James Chicago Hotel for our anniversary. This place is fantastic! We knew as soon as we arrived we made the right choice! The rooms are BEAUTIFUL and the staff very attentive and wonderful!! The area of the hotel is great, since I love to shop I couldn't ask for more!! We will definatly be back to Chicago and we will for sure be back to the James Chicago.
Future work obviously includes adapting the above methods to other problem domains - for example, reviews of other kinds, or any platform where user feedback and opinions are posted. Deception is a rather prevalent phenomenon ([24]) in many mediums where users can express their opinions. Another interesting direction would be to analyze deception and truthfulness on combined data from many different data sets (for example, hotel reviews, movie reviews, products, etc.) and to see whether we can come up with a valid deception criterion for text from any of these domains, rather than one trained on, and valid only for, a specific domain.
To conclude on a personal note: I found the deception detection topic and its relation to NLP quite fascinating, and very much enjoyed reading the relevant papers on the subject. It seems we are almost there in creating and streamlining a product that will be able to detect deceptive opinions on the web (or anywhere else).
Part 4 - References
[1] Becky Sue Parton, Sign Language Recognition and Translation: A Multidisciplined Approach From the Field of Artificial Intelligence, Journal of Deaf Studies, 2011
[2] Lu and Huenerfauth, Collecting a Motion-Capture Corpus of American Sign Language for Data-Driven Generation Research, NAACL HLT, 2010
[3] Ozbal and Strapparava, A Computational Approach to the Automation of Creative Naming, ACL 2012
[4] Feng, Banerjee and Choi, Syntactic Stylometry for Deception Detection, ACL 2012
[5] Sag, Baldwin et al., Multiword Expressions: A Pain in the Neck for NLP, Stanford University LinGO Project, 2001
[6] Abney, Collins and Singhal, Answer Extraction, AT&T Shannon Labs, ANLC 2000
[7] Reiter and Dale, Building Natural Language Generation Systems, Cambridge Press, 2000
[8] Hahn and Reimer, Advances in Automatic Text Summarization, MIT Press, 1999
[9] Abu-Jbara, Ezra and Radev, Purpose and Polarity of Citation: Towards NLP-based Bibliometrics, NAACL-HLT 2013
[10] Filipe and Mamede, Databases and Natural Language Interfaces, CSTC Portugal, 2007
[11] Lita, Roukos et al., tRuEcasIng, ACL 2003
[12] Kilgarriff and Grefenstette, Introduction to the Special Issue on the Web as Corpus, ACL 2003
[13] Segouat and Braffort, Toward Categorization of Sign Language Corpora, AFNLP 2009
[14] English Wikipedia, Truecasing
[15] Stratica, Kosseim and Desai, NLIDB Templates for Semantic Parsing, Concordia University, Canada
[16] Argamon, Koppel and Avneri, Style-based Text Categorization: What Newspaper Am I Reading?, AAAI 1998
[17] Ott et al., Finding Deceptive Opinion Spam by Any Stretch of the Imagination, ACL 2011
[18] Jindal and Liu, Opinion Spam and Analysis, WSDM 2008
[19] Wu et al., Distortion as a Validation Criterion in the Identification of Suspicious Reviews, SOMA 2010
[20] TripAdvisor Ireland Dataset, http://mlg.ucd.ie/datasets/trip
[21] Linguistic Inquiry and Word Count (LIWC) - http://www.liwc.net/
[22] Hollingsworth, Syntactic Stylometry: Using Sentence Structure for Authorship Attribution, University of Georgia, 2012
[23] Jagat Sastry, Blogger Age Attribution Using Syntactic Stylometry, https://bitbucket.org/jagatsastry/
[24] Ott, Cardie and Hancock, Estimating the Prevalence of Deception in Online Review Communities, WWW 2012