Georgiou, K., & Nikolaou, I. (2017). Gamification in recruitment and selection. In I. Nikolaou (2017), European Network of Selection Researchers (ENESER) Symposium: Recruitment in the Digital Era. 18th Congress of the European Association of Work and Organizational Psychology (EAWOP), Dublin, Ireland.
2. Gamification: a top trend in the field of recruitment & selection!
Game elements are applied to non-game contexts to evoke game-like experiences & behaviors
“You can discover more about a person in an hour of play than in a year of conversation” (Plato)
Effectiveness in the selection process?
EAWOP 2017
3. Improve the recruitment process
applicant pool, organizational image, positive behavioral intentions (Chow & Chapman, 2013)
Efficient in assessing candidates’ skills
difficult for test-takers to fake, elicit job-relevant behaviors, predict job performance (Armstrong, Landers, & Collmus, 2016)
4. Address calls for research on the main psychometric properties of serious games (Armstrong et al., 2016)
Identify new avenues for selection methods by developing a gamified assessment method
An empirical analysis of the efficacy of a gamified selection method based on a Situational Judgment Test (SJT)
Construct validity
5. SJTs: a popular personnel selection method (Weekley & Ployhart, 2006)
assess job-related skills, predict future job performance
low adverse impact, positive applicant reactions
video technology successfully applied
SJT development to assess candidates’ soft skills: resilience, adaptability,
flexibility and decision-making
Guidelines of Motowidlo, Dunnette, & Carter (1990)
6. A) 8 Subject Matter Experts:
Cohen’s Kappa (over .75)
7 Scenarios per skill, Scoring key
B) 130 test takers:
Hit ratio analysis
Scores per scenario from .60 to .85
(appropriate levels of congruence)
Content Validity
8. SJT’s Construct Validity:
Convergent validity
Regression coefficients for the SJT’s dimensions (resilience β=.350, adaptability β=.166, decision-making β=.389, flexibility β=.366) are significant (p<.001) and moderately correlated with well-established measures
Discriminant validity
Inter-correlations among the SJT’s dimensions are at a very low level (e.g., resilience is related to decision-making (β=.104) and flexibility (β=−.140), p<.001)
9. The SJT was converted into an online adventure gamified assessment, adapting the scenarios into fictional dilemmas
4 islands: 4 skills
https://www.owiwi.co.uk/
12. Construct validity
Convergent validity (N=97)
Regressing with well-established measures (from β= .447 to β= .570, p<.001)
Discriminant validity
CFA: Satorra‐Bentler Scaled χ2 (N=410) = 306.94, p=.05; CFI= .81, NNFI= .79; IFI =
.80; RMSEA= .046, 90% interval (.037, .085)
Paths among game’s dimensions are low to medium (from .35 to .50)
indication of discrimination between the facets (Bagozzi, Yi, & Phillips, 1991)
Congruence between the game & SJT (N=97): Pearson r from .541 to .595, p<.001
13. Preliminary support for the construct validity of a gamified selection method based on a SJT
Game elements can be applied to SJTs to effectively assess candidates’ soft skills
Extend the selection methods and gamification literature
exploring a major psychometric property of a gamified selection method
emphasizing the use of serious games that focus on behavior, an important criterion in employee selection
Further explore whether gamified assessment methods are better able to elicit behaviors than traditional selection methods
15. Organizations might improve their selection process
Benefits of gamified selection methods (ease of administration, testing a large group at once and in various locations, automatically recording answers)
Applicants perceive multimedia tests as more valid and enjoyable and are more satisfied with the selection process
Increase organizational attractiveness and positive behavioral intentions (e.g., accepting a job offer)
Recruiters might minimize the “cost” of bad hires
Obtain higher-quality candidates: serious games are more difficult to fake and better able to elicit behaviors than traditional selection methods
17. Armstrong, M. B., Landers, R. N., & Collmus, A. B. (2016). Gamifying Recruitment, Selection, Training, and Performance Management.
Bagozzi, R. P., Yi, Y., & Phillips, L. W. (1991). Assessing Construct Validity in Organizational Research. Administrative Science Quarterly, 36, 421-458.
Chow, S., & Chapman, D. (2013). Gamifying the employee recruitment process. Paper presented at the Proceedings of the First International Conference
on Gameful Design, Research, and Applications.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Lawrence Erlbaum.
Lee, K., & Ashton, M. C. (2004). Psychometric properties of the HEXACO personality inventory. Multivariate Behavioral Research, 39(2), 329-358.
Martin, A. J., Nejad, H., Colmar, S., & Liem, G. A. D. (2012). Adaptability: Conceptual and empirical perspectives on responses to change, novelty and
uncertainty. Australian Journal of Guidance and Counselling, 22(01), 58-81.
Mincemoyer, C. C., & Perkins, D. F. (2003). Assessing decision-making skills of youth. Paper presented at the The Forum for Family and Consumer
Issues.
Motowidlo, S. J., Dunnette, M. D., & Carter, G. W. (1990). An alternative selection procedure: The low-fidelity simulation. Journal of Applied Psychology,
75(6), 640.
Wagnild, G. M., & Young, H. (1993). Development and psychometric evaluation of the Resilience Scale. Journal of Nursing Measurement, 1(2).
Weekley, J. A., & Ployhart, R. E. (2006). An introduction to situational judgement testing. Situational judgment tests: Theory, measurement, and
application, 1-12.
Editor's Notes
I will present you a study about the role of serious gaming in the selection process, and in particular the development of a situational judgement test which was converted into a gamified assessment.
Our motivation for this topic comes from a new trend in the recruitment and selection process, that is, gamification.
Gamification, refers to the use of game elements in non-game contexts to evoke game like experiences and behaviors.
Why is it important to evoke game-like experiences and behaviors during the selection process? Well, as the Greek philosopher Plato said, ‘You can discover more about a person in an hour of play than in a year of conversation’. So while in a conversation or an interview candidates might be cautious and say the things that the other person wants to hear, in play they might be much more their real selves.
The question that arises here is if the use of game elements in the selection process is effective.
Game-thinking has mainly been conceptualized as gamification and serious games. Gamification refers to the use of game elements / attributes in non-game contexts to evoke game like experiences and behaviors (Deterding, Sicart et al. 2011, Chow and Chapman 2013, Hamari, Koivisto et al. 2014). Serious games, are games that are designed and used for a primary goal other than entertainment or fun (Michael and Chen 2005). They are called “serious” as they encompass video games that are used for “serious” purposes, such as education, health care and scientific exploration (Djaouti, Alvarez et al. 2011).
To the best of our knowledge, no empirical research has established the effectiveness of gamification in the recruitment and selection process.
There are some theoretical frameworks supporting that gamification can be used effectively in the recruitment process to attract a large number of candidates, improve organizational image and attractiveness and as a result, positively affect job pursuit behaviors towards an organization.
It is also suggested that serious games might improve the selection process by being more difficult for test-takers to fake since desirable behaviors may be less obvious to individuals playing the serious game, and as a result, improve prediction of job performance and hiring decisions.
However, no empirical research has established the basic psychometric properties of effective serious games in the selection process.
The purpose of our study is to identify new avenues of selection methods by developing a gamified assessment tool
In our study we examine the efficacy of a gamified selection method based on a Situational Judgment Test (SJT) by exploring first, the construct validity of the SJT and second, the construct validity of the gamified selection method we developed.
The reason we chose a SJT to form the basis for the development of the gamified assessment is that SJTs are a popular selection method: they assess job-related skills, predict applicants’ future job performance, and create positive applicant reactions, and video technology has been successfully applied to them.
Taking this evidence into consideration, we decided to develop and test a SJT to form the basis for the development of the gamified assessment.
We developed a SJT assessing 4 soft skills that employers look for when recruiting: resilience, adaptability, flexibility and decision-making.
To develop the SJT we followed the guidelines suggested by Motowidlo, Dunnette, and Carter (1990).
SJTs are a popular personnel selection method, designed to assess an applicant's judgment regarding situations encountered in the workplace (Weekley & Ployhart, 2006). Their popularity is based on the assertion that they assess soft skills and job-related skills not tapped by other measures, while they have low adverse impact and create positive applicant reactions (Weekley & Ployhart, 2006). SJTs also predict performance across a range of different organizational contexts (McDaniel, Morgeson, Finnegan, Campion & Braverman, 2001). SJT items present respondents with work-related situations and a list of plausible courses of action. Respondents are asked to evaluate each course of action for either the likelihood that they would perform it or its effectiveness (Whetzel & McDaniel, 2009).
I briefly comment that we conducted a series of interviews and developed a series of scenarios with four response options for each skill assessed.
To test the SJT’s content validity, 8 Subject Matter Experts reviewed the developed scenarios and scored the test (indicating the most and the least appropriate response to each scenario). The SMEs reached congruence, as indicated by a Cohen’s Kappa (Cohen, 1988) of over .75, and we ended up with 7 scenarios per skill and a scoring key for the best and the worst response per scenario (a procedure formally prescribed by Weekley, Ployhart, and Holtz (2006)).
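The SME agreement check can be sketched as a small Cohen's Kappa computation; the two rating vectors below are hypothetical, not the study's data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over categories of the product of marginal counts.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical "best response" keys (A-D) from two SMEs over 8 scenarios
sme1 = ["A", "C", "B", "D", "A", "B", "C", "A"]
sme2 = ["A", "C", "B", "D", "A", "B", "D", "A"]
print(round(cohens_kappa(sme1, sme2), 2))  # 0.83, above the .75 threshold
```

Kappa discounts the agreement two raters would reach by guessing alone, which is why it is preferred over raw percent agreement for keying SJT responses.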
To further content-validate the SJT, we recruited 130 business school students to complete the test and, through a Q-sort methodology (Block, 1961; Stephenson, 1953) called hit ratio analysis (Moore & Benbasat, 1991), went through 2 rounds of corrections to get acceptable scores per scenario (over .60 and below .85) and reach appropriate levels of congruence, without concerns about the easiness or inappropriateness of the scenarios.
Research has so far underlined that a score per scenario of over .60 and below .85 is fairly acceptable. Scores beyond .85 raise concern about the easiness of the scenario, while responses scoring below .60 or .55 are considered unacceptable and need to be revised or discarded. To achieve appropriate levels of congruence we went through 3 rounds of completion and corrections to get the most appropriate scoring key and finalize the content validation procedure.
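The hit-ratio screen described above amounts to the share of test-takers whose "best" pick matches the SME key, checked against the .60-.85 band; the response data below are hypothetical.

```python
def hit_ratio(picks, keyed_best):
    """Proportion of test-takers choosing the SME-keyed best response to one scenario."""
    return sum(p == keyed_best for p in picks) / len(picks)

# Hypothetical best-answer picks from 10 test-takers for one scenario; SME key = "B"
picks = ["B", "B", "A", "B", "B", "C", "B", "B", "B", "D"]
ratio = hit_ratio(picks, "B")
print(ratio)                  # 0.7
print(0.60 <= ratio <= 0.85)  # True: inside the acceptable band, so the scenario is kept
```

A ratio above .85 would flag the scenario as too easy; one below .60 would send it back for revision or discard it.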
As a next step, we conducted a construct validity survey. Participants (N=321) in this survey were business schools students and graduates.
Along with the SJT scenarios that survived the content validity process, participants completed well-established measures of resilience, adaptability, flexibility, decision-making
To establish construct validity, we explored both convergent and divergent validity.
Regression coefficients for the SJT’s dimensions indicate that the SJT’s resilience, decision-making, and flexibility were significant and moderately correlated with well-established measures, ensuring convergent validity; however, the SJT’s adaptability correlated with the adaptability measure at a very low, though significant, level. (This is a caveat of our research; perhaps not the best choice of measure.)
Moreover, we found that the inter-correlations among the SJT’s dimensions were at a very low level. So, there are signs of discriminant validity of the SJT, encouraging us to move to the next step, that is, the development of the gamified assessment.
Convergent validity refers to the degree to which two measures of constructs that theoretically should be related are in fact related. Discriminant (or divergent) validity tests whether concepts or measurements that are not supposed to be related are, in fact, unrelated. In short, we need to challenge the test as developed against well-established tools measuring resilience, flexibility, adaptability, and decision-making to ensure moderate correlations (to achieve convergent validity). Moreover, we need to identify whether the test diverges from scales that are not relevant to the SJT dimension in focus, i.e., that do not measure the same constructs, where we need low correlation scores. Content validity refers to the extent to which the test covers the content domain it is intended to measure.
The structural validity of the SJT questionnaire (Nunnally & Bernstein, 1994; Robinson, Shaver & Wrightsman, 1991) was confirmed, along with convergent and divergent validity (Campbell & Fiske, 1959), confirming the psychometric qualities of the questionnaire that was developed.
Convergent validity: the SJT resilience dimension should relate to the Owiwi resilience measure at a moderate level (.30–.50); e.g., all coefficients came out at β = .170–.389, p<.001.
Discriminant: we observed some cross inter-correlations among the SJT facets, but these were very small, so there are roughly some signs of discriminant validity of the SJT, which encourages us to move to the next step.
We mostly care about the game, so we want a basis on which to proceed; primarily the discriminant validity of the game.
If asked why we did not administer one more instrument: the sample could not handle another one, as it would take a long time to complete and would have an adverse impact.
We confirmed the basic psychometrics of the instrument at a very high level.
Subsequently, the SJT was converted into an online adventure gamified assessment.
Creative writers were employed to adapt the scenarios into fictional dilemmas while game designers and web developers constructed the functions of the game.
We can visit the Owiwi website and try the game; as you can see, there is an island for every skill assessed, where players go through several adventures.
Scenarios incorporated in the gamified assessment followed the response format of the SJT.
There is a scenario, a what would you do question and four response options.
Candidates may read or listen to the scenario and choose the best and the worst answer.
Upon completion of the game the system instantly generates an in-depth report stating the candidate’s soft skills profile
With this report in hand, managers can make faster and more informed hiring decisions while candidates get feedback to potentially work on their soft skills.
To examine whether the fact that we fictionalized the scenarios affected the basic psychometric properties of the test we are currently exploring the construct validity of the gamified assessment.
A sub-sample of the individuals (Ν=97) who completed the SJT questionnaire, has also completed the game.
To explore convergent validity, we regressed the game’s dimensions on the well-established measures of resilience, flexibility, etc. Results indicate that the game’s dimensions are all related to the well-established measures at a significant and moderate level, indicating that the game has convergent validity; for example, game resilience related to the resilience measure at .570.
We performed a CFA on 410 test-takers who completed only the game; results indicate that our model is a relatively good fit for the data.
Moreover, paths among the game’s dimensions are low to medium, which according to Bagozzi, Yi, and Phillips (1991) is an indication of discriminant validity (i.e., resilience is not measuring the same thing as flexibility, etc.).
Finally, after regressing the game’s dimensions on the SJT dimensions, the game showed significant correlations with the SJT’s dimensions, from .541 to .595. Findings are indicative of equivalence between the two measurements, meaning that the two instruments more or less measure the same thing.
Discriminant validity: Bagozzi, Yi, and Phillips (1991) said that if paths among facets in a CFA are low to medium (from .20 to .50), there is discriminant validity; we perform a first-order CFA and check the game’s facets and their correlations.
CFA: the initial format of 6 scenarios per facet (7 for adaptability) was tested against the data and showed a relatively good fit (Satorra‐Bentler Scaled χ2 [N = 410] = 306.94, p=.05; CFI= .81, NNFI= .79; IFI = .80; RMSEA= .046, RMSEA 90% interval [.037, .085]; the Satorra‐Bentler correction is appropriate for non-normal data), but several standardized coefficient loadings were low in some cases (.03 to .09).
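The RMSEA reported for the CFA follows directly from the chi-square model test; a minimal sketch of the standard formula (the χ², df, and N below are illustrative, since the model's degrees of freedom are not reported in these notes).

```python
import math

def rmsea(chi2, df, n):
    """Root Mean Square Error of Approximation from a chi-square model test.
    Values below roughly .05 are conventionally read as close fit."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Illustrative values only: chi2 = 200 on df = 100 with N = 401 gives RMSEA = .05
print(round(rmsea(200, 100, 401), 3))
```

When χ² does not exceed df, the formula is floored at zero, which is why well-fitting models can report an RMSEA of exactly 0.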
The fact that the convergent correlations were higher for the game than for the SJT means the game was “better”, perhaps due to the adventure-game context, which graduates are more familiar with than the SJT’s business context. Krumm, Lievens et al. (2014) recently supported that tests should be context non-specific.
Our results bring preliminary support of construct validity of a gamified selection method based on a SJT.
It seems that game elements can be applied to SJTs to effectively assess candidates’ soft skills
Our study contributes to research on gamification and selection methods exploring a major psychometric property of a gamified selection method emphasizing the use of serious games that focus on behavior which is an important criterion in employee selection.
Moreover, the preliminary results encourage us to explore whether serious games are better able to elicit behaviors than traditional selection methods.
To do so, we aim to explore both the predictive validity and the incremental validity of the gamified selection method. In the future we will examine whether the developed assessment tool predicts job performance, and we will challenge the tool against well-established personality measures to identify whether it predicts job performance beyond traditional selection methods. We are also in the process of examining the reliability of the serious game through a test–retest process.
Limitations: sample size, administration problems
If asked about reliability, I will say that, based on the literature, alpha cannot be reported (Weekley & Ployhart, 2005); the majority suggests using test–retest, which we are in the process of doing. It is very difficult to administer the game to the same people at different times, at least 6 weeks apart.
The current research has important practical implications for HR professionals and organisations as well.
Organizations might improve their selection process using gamified selection methods, which share several benefits with other multimedia tests (ease of administration, testing a large group of applicants at once and in various locations, automatically recording candidates’ answers).
Game elements properties might positively affect applicants’ reactions as well. Applicants perceive the multimedia tests as more valid and enjoyable and as a result they are more satisfied with the selection process (Richman-Hirsch, Olson-Buchanan & Drasgow, 2000).
Employers might also benefit from a gamified selection method by increasing organizational attractiveness and positive behavioral intentions, such as accepting a job offer.
By establishing the validity of the gamified selection method, recruiters might use a new selection tool that effectively predicts job performance, thus minimizing the “cost” of bad hires. Serious games might be used to obtain higher-quality candidates, since they are more difficult for test-takers to fake and better able to elicit behaviors than traditional selection methods (Armstrong et al., 2016).