Surveys, Response Rates, and Nonresponse
Jeffrey Stanton, Syracuse University
Abstract
The survey method is one of the most popular methods in Information Systems research. One problem that plagues most survey researchers is nonresponse. As theories get more complex and more constructs must be measured, surveys tend to get longer, and this can reduce response rates. Some traditions have developed around acceptable response rates, such as these: “In my area a 45% response rate is considered quite good...” or “that research can’t possibly be valid given a response rate under 20%.” Research shows that these and other myths about response rate are incorrect. In this tutorial, participants will learn four things: 1) the important difference between response rate and nonresponse bias, and why it is more important to minimize the latter than to maximize the former; 2) the full range of response-enhancing techniques that have had their efficacy documented in the methods literature; 3) a focus on a particular method of response enhancement: survey and scale shortening; and 4) survey design methods that allow for the detection of the presence and magnitude of nonresponse bias. At the end of the tutorial, participants will have the skills and knowledge to build assessment and control of nonresponse into their survey methods.
Why do we care if our study has a low response rate?
Low Response Rates
…cause smaller data samples, which decrease statistical power, widen confidence intervals, and may limit the types of statistical techniques that can effectively be applied to the collected data.
…undermine the perceived credibility of the collected data.
…undermine the actual generalizability of the collected data because of nonresponse bias. Where nonresponse bias exists, survey results can produce misleading conclusions that do not generalize to the entire population.
Research History
1939: F. Stanton (1939) wrote one of the first empirical pieces on the topic in the Journal of Applied Psychology, entitled “Notes on the validity of mail questionnaire returns.”
1940: Suchman and McCandless’s (1940) Journal of Applied Psychology article, “Who answers questionnaires?”
1948: A significant early event that drew interest to this topic occurred in 1948.
In the 1948 U.S. presidential election, pre-election polls by major newspapers and polling organizations predicted a victory by New York State Governor Thomas E. Dewey by margins ranging from 5 to 15 percentage points. Instead, the victory by incumbent president Harry S. Truman was an embarrassment for the emerging public opinion polling community. What caused the failure?
Research Today
Extensive literature on techniques to increase response rates: response-enhancing strategies.
Statistical methods of compensating for nonresponse through imputation of missing data: Rubin (1987) developed a book-length treatment of methods for imputing data in sample surveys.
Characteristics of nonrespondents and nonresponse bias: Rogelberg, S. G., Spitzmüller, C., Little, I. S., & Reeve, C. L. (2006). Understanding response behavior to an online special topics organizational satisfaction survey. Personnel Psychology, 59, 903-923.
Organizational Survey Response Rates
Youssefinia (2003) examined 58 organizational surveys conducted over five years by two consulting firms. Anseel, Lievens, and Schollaert (2008) analyzed 2,037 surveys, covering 1,251,651 individual respondents, published in 12 journals in I/O Psychology, Management, and Marketing during the period 1995-2008.
You predict: What is a typical response rate in an organizational survey? What is the trend over recent years for response rates in organizations?
What is an acceptable response rate for your study?
Trick Question?
Industry and academic standards only put a response rate into context. The fact that everyone else also achieves 30%, 50%, or 70% response does not help to demonstrate that the reported research is free from nonresponse bias.
In the absence of good information about the presence, magnitude, and direction of nonresponse bias, ignoring the results of a study with a 10% response rate (particularly if the research question explores a new and previously unaddressed issue) is just as foolish as assuming that one with a response rate of 80% is unassailable.
The Nature of Nonresponse Bias
Nonresponse bias = PNR × (Xres − Xpop), where ‘PNR’ refers to the proportion of nonrespondents, ‘Xres’ is the respondent mean on a survey-relevant variable, and ‘Xpop’ is the population mean on the corresponding survey-relevant variable, if it were actually known. (A worked example follows below.)
Overall, the impact of nonresponse on survey statistics depends on the percentage not responding and the extent to which those not responding differ systematically from the whole population on survey-relevant variables.
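A quick numeric illustration of the formula, using hypothetical values (a minimal sketch; note that in practice Xpop is normally unknown):

```python
# Hypothetical illustration of the nonresponse bias formula above:
# 60% nonresponse, respondent mean 3.9, population mean 3.5 (5-point scale).
p_nr = 0.60    # PNR: proportion of nonrespondents
x_res = 3.9    # Xres: respondent mean on a survey-relevant variable
x_pop = 3.5    # Xpop: population mean, if it were actually known

bias = p_nr * (x_res - x_pop)
print(f"nonresponse bias = {bias:.2f} scale points")  # prints 0.24
```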
Error/Bias Due to Nonresponse (http://www.idready.org/courses/2005/spring/survey_SamplingFrames.pdf)
[Diagram: Non-Respondents (µ0, α0, r0, F0) vs. Respondents (µ1, α1, r1, F1). How many? How different?]
Sample: N=100, 10% response (n=10): 5 say YES, 5 say NO. If non-respondents resemble respondents, then a low response rate is not a problem.
Sample: N=100, 70% response (n=70): 35 say YES, 35 say NO. Even when response rates are “high,” substantial potential for error still exists.
Worst Case Scenario Exercise
What are the two worst things that could happen? Choose one scenario below and run the numbers…
1. Sample N=100; response rate 30%; percent YES for respondents = 40% (12 YES votes)
2. Sample N=100; response rate 30%; percent YES for respondents = 90% (27 YES votes)
3. Sample N=100; response rate 90%; percent YES for respondents = 50% (45 YES votes)
4. Sample N=100; response rate 90%; percent YES for respondents = 80% (72 YES votes)
If you counted all non-respondents as YES votes or NO votes, what would be the range of results in each of these scenarios? (A small sketch for running the numbers follows.)
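A minimal sketch for running the numbers, with the scenario values from the list above. The bounds assume the two extreme cases: every non-respondent votes NO, or every non-respondent votes YES.

```python
# Worst-case bounds on the population YES percentage for each scenario.
scenarios = [
    # (sample N, response rate, YES votes among respondents)
    (100, 0.30, 12),
    (100, 0.30, 27),
    (100, 0.90, 45),
    (100, 0.90, 72),
]

for n, rate, yes in scenarios:
    respondents = round(n * rate)
    nonrespondents = n - respondents
    low = 100 * yes / n                      # every non-respondent votes NO
    high = 100 * (yes + nonrespondents) / n  # every non-respondent votes YES
    print(f"N={n}, response={rate:.0%}, YES among respondents={yes}: "
          f"population YES could range from {low:.0f}% to {high:.0f}%")
```

Scenario 1, for example, yields a range from 12% to 82% YES: a 70-point spread at a 30% response rate, versus only a 10-point spread in the 90%-response scenarios.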
Previous Examples: Proportions
Of course, rating scale means can be similarly impacted by nonresponse error. What about correlations? Word on the street is that correlations are fairly robust against nonresponse error.
We ran a simulation: 300 runs of random samples of n=500 from a larger population where rho=0.284 between a rating scale and a criterion scale. Half of the samples had biased nonresponse, where those favorable on the rating scale were twice as likely to respond. (A sketch of this kind of simulation appears below.)
Results showed that the unbiased samples had slightly suppressed correlations: a decline of r=0.038. Biased samples had more suppression: a decline of r=0.066. The difference in suppression was statistically significant, p<.001. There were no sign reversals of correlations in any of the 300 samples.
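A hedged sketch of the kind of simulation described above. The sampling details here (bivariate normal data, a median split to define “favorable” raters, response probabilities of 2/3 vs. 1/3) are illustrative assumptions, not the original study’s exact procedure.

```python
# Simulate how biased nonresponse attenuates an observed correlation.
import numpy as np

rng = np.random.default_rng(42)
rho, n, runs = 0.284, 500, 300

def sample_r(biased):
    # Draw rating and criterion scores with the target population correlation.
    cov = [[1, rho], [rho, 1]]
    rating, criterion = rng.multivariate_normal([0, 0], cov, size=n * 2).T
    if biased:
        # Favorable raters (above-median rating) respond with p = 2/3,
        # unfavorable raters with p = 1/3, i.e., twice as likely to respond.
        p = np.where(rating > np.median(rating), 2 / 3, 1 / 3)
        keep = rng.random(rating.size) < p
        rating, criterion = rating[keep], criterion[keep]
    rating, criterion = rating[:n], criterion[:n]
    return np.corrcoef(rating, criterion)[0, 1]

unbiased = np.mean([sample_r(False) for _ in range(runs // 2)])
biased = np.mean([sample_r(True) for _ in range(runs // 2)])
print(f"mean r, unbiased samples: {unbiased:.3f}")
print(f"mean r, biased samples:   {biased:.3f}")
```

The biased samples over-represent favorable raters, restricting the range of the rating scale and attenuating the observed correlation; exact magnitudes will differ from the figures reported above.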
Case Study Exercise
Instructions: Read the brief case and make some notes. Discuss with others at your table. Generate as many ideas as you can, and write your ideas on the sheets. Prepare to report back to the complete group.
Case overview: You are preparing a climate survey and are unlikely to get a response rate much above 45%. How can you prepare for possible criticism of the results?
N-BIAS Response rate alone is an inaccurate and unreliable proxy for study quality.  While improving response rates is a worthy goal, researchers’ major efforts and resources should go into understanding the magnitude and direction of bias caused by non-response, if it exists.  Rogelberg and Stanton (2006) advocate that researchers should conduct a nonresponse bias impact assessment (N-BIAS), regardless of how high a response rate is achieved.
N-BIAS Methods
N-BIAS is presently composed of nine techniques:
1. Archival Analysis
2. Follow-up Approach
3. Wave Analysis
4. Passive Nonresponse Analysis
5. Interest Level Analysis
6. Active Nonresponse Analysis
7. Worst Case Resistance
8. Benchmarking/Norms
9. Demonstrate Generalizability
N-BIAS: How it Works Similar to a test validation strategy.  In amassing evidence for validity, each of several different validation methods (e.g., concurrent validity) provides a variety of insights into validity.  Each assessment approach has strengths and limitations.  There is no one conclusive approach and no particular piece of evidence that is sufficient to ward off all threats.  Assessing the impact of nonresponse bias requires development and inclusion of different types of evidence, and the case for nugatory impact of nonresponse bias is built on multiple pieces of evidence that converge with one another.
Technique 1: Archival Analysis
The most common technique. The researcher identifies an archival database that contains the members of the whole survey sample (e.g., personnel records). That data set, usually containing demographic data, can be described: 50% female, 40% supervisors, etc.
After data collection, code numbers on the returned surveys (or access passwords) can be used to identify respondents, and by extension nonrespondents. Using this information, the archival database can be partitioned into two segments: 1) data concerning respondents; and 2) data concerning nonrespondents. The two partitions can then be compared, as in the sketch below.
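A minimal pandas sketch of this partitioning. The file and column names (personnel_records.csv, returned_surveys.csv, employee_id, code_number, female, supervisor) are hypothetical placeholders for whatever your archival database actually contains.

```python
# Partition an archival roster into respondents vs. nonrespondents and
# compare their demographic profiles.
import pandas as pd

roster = pd.read_csv("personnel_records.csv")       # the whole survey sample
respondent_ids = set(pd.read_csv("returned_surveys.csv")["code_number"])

roster["responded"] = roster["employee_id"].isin(respondent_ids)

# Means of 0/1 indicator columns give each group's proportions,
# e.g., percent female and percent supervisors among (non)respondents.
print(roster.groupby("responded")[["female", "supervisor"]].mean())
```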
So, if you found demographic differences like the above, do you have nonresponse bias?
Technique 2: Follow-up Approach
Using identifiers attached to returned surveys (or access passwords), respondents can be identified, and by extension nonrespondents. The follow-up approach involves randomly selecting and resurveying a small segment of nonrespondents using an alternative modality, often by phone. The full or abridged survey is then administered.
In the absence of identifiers, telephone a small random sample and ask whether or not they responded to the initial survey, then follow up with survey-relevant questions.
Technique 3: Wave Analysis
By noting in the data set whether each survey was returned before the deadline, after an initial reminder note, after the deadline, and so on, responses from pre-deadline surveys can be compared with those from late responders on actual survey variables (e.g., compare job satisfaction levels). A sketch of such a comparison follows.
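A small sketch of a wave comparison. The file name, column names, and wave labels (survey_data.csv, wave, job_satisfaction, pre_deadline, post_reminder) are hypothetical.

```python
# Compare early vs. late responders on an actual survey variable.
import pandas as pd
from scipy import stats

surveys = pd.read_csv("survey_data.csv")  # includes a 'wave' column
early = surveys.loc[surveys["wave"] == "pre_deadline", "job_satisfaction"]
late = surveys.loc[surveys["wave"] == "post_reminder", "job_satisfaction"]

# Welch's t-test: a large, significant gap between waves suggests that
# still-later (non)respondents may differ too.
t, p = stats.ttest_ind(early, late, equal_var=False)
print(f"early mean={early.mean():.2f}, late mean={late.mean():.2f}, p={p:.3f}")
```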
Technique 4: Passive Nonresponse Analysis Rogelberg et al. (2003) found that the vast majority of nonresponse can be classified as being passive in nature (approx. 85%). Passive nonresponse does not appear to be planned. When asked (upon receipt of the survey), these individuals indicate a general willingness to complete the survey – if they have the time.  Given this, it is not surprising that they generally do not differ from respondents with regard to job satisfaction or related variables.
Technique 5: Interest Level Analysis
Researchers have repeatedly identified that interest level in the survey topic is one of the best predictors of a respondent’s likelihood of completing the survey. As a result, if interest level is related to attitudinal standing on the topics making up the survey, the survey results are susceptible to bias. E.g., if low-interest individuals tend to be more dissatisfied on the survey constructs in question, results will be biased “high.”
Technique 6: Active Nonresponse Analysis
Active nonrespondents, in contrast to passive nonrespondents, are those who overtly choose not to respond to a survey effort. The nonresponse is volitional and a priori (i.e., it occurs when initially confronted with a survey solicitation). Active nonrespondents tend to differ from respondents on a number of dimensions typically relevant to the organizational survey researcher (e.g., job satisfaction).
Technique 7: Worst Case Resistance
Given the data collected from respondents in an actual study, one can empirically answer the question of what proportion of nonrespondents would have to exhibit the opposite pattern of responding to adversely influence the sample results. This is a similar philosophy to what occurs in meta-analyses when considering the “file-drawer problem.”
By adding simulated data to an existing data set, one can explore how resistant the data set is to worst-case responses from nonrespondents, as in the sketch below.
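A minimal sketch of a worst-case resistance check for a simple YES/NO result. The counts are hypothetical, and this is one way to operationalize the idea, not the authors’ exact procedure: append simulated non-respondents who all answer in the opposite direction and find how many it takes to flip the conclusion.

```python
# Find the smallest fraction of worst-case non-respondents that would
# overturn a majority-YES result.
observed_yes, observed_no = 180, 120   # hypothetical respondent data
nonrespondents = 200                   # hypothetical number of non-respondents

for k in range(nonrespondents + 1):
    # Simulate k worst-case non-respondents, all voting NO.
    yes_rate = observed_yes / (observed_yes + observed_no + k)
    if yes_rate <= 0.5:
        print(f"Conclusion flips once {k} of {nonrespondents} "
              f"non-respondents answer NO ({k / nonrespondents:.0%}).")
        break
else:
    print("Majority-YES conclusion survives even the full worst case.")
```

Here the majority-YES finding flips only after 60 of the 200 non-respondents (30%) are assumed to vote NO, which quantifies how resistant the result is.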
Technique 8: Benchmarking
Using measures with norms for the population under examination, compare the means and standard deviations of the collected sample to the norms, for example as sketched below.
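As a sketch, a one-sample t-test comparing the collected sample to a published norm. The scores and the norm value are hypothetical.

```python
# Benchmark the collected sample against a norm for the same measure.
import numpy as np
from scipy import stats

sample = np.array([3.9, 4.1, 3.5, 4.4, 3.8, 4.0, 3.6, 4.2])  # illustrative scores
norm_mean = 3.7                                              # hypothetical norm

t, p = stats.ttest_1samp(sample, norm_mean)
print(f"sample mean={sample.mean():.2f} (sd={sample.std(ddof=1):.2f}) "
      f"vs. norm {norm_mean}: t={t:.2f}, p={p:.3f}")
```

A sample that departs sharply from well-established norms is one signal (though not proof) that nonresponse may have distorted its composition.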
Technique 9: Demonstrate Generalizability
By definition, nonresponse bias is a phenomenon that is peculiar to a given sample under particular study conditions. Triangulating with a sample collected using a different method, or varying the conditions under which the study is conducted, should also change the composition of the nonrespondent group.
N-BIAS: Conclusion
Nonresponse can be problematic on a number of fronts. Do what you can to facilitate response. In the inevitable case of nonresponse, engage in the N-BIAS approach in an attempt to accumulate information that provides insight into the presence or absence of problematic nonresponse bias.
Engage in as many techniques as feasible: one is better than none, and two are better than one. Most published literature has none! Each approach has a different purpose, and each has positives and negatives. Use the N-BIAS information collected to decide on next steps and to educate your audience.
Key References
Armstrong, J. S., & Overton, T. S. (1977). Estimating nonresponse bias in mail surveys. Journal of Marketing Research, 14 (Special Issue: Recent Developments in Survey Research), 396-402.
Baruch, Y. (1999). Response rate in academic studies: A comparative analysis. Human Relations, 52(4), 421-438.
Bosnjak, M., Tuten, T. L., & Wittman, W. W. (2005). Unit (non)response in web-based access panel surveys: An extended planned-behavior approach. Psychology and Marketing, 22, 489-505.
Dillman, D. A. (2000). Mail and internet surveys: The tailored design method. New York, NY: John Wiley & Sons.
Groves, R., Presser, S., & Dipko, S. (2004). The role of topic interest in survey participation decisions. Public Opinion Quarterly, 68, 2-31.
Rogelberg, S. G., & Luong, A. (1998). Nonresponse to mailed surveys: A review and guide. Current Directions in Psychological Science, 7, 60-65.
Rogelberg, S. G., Luong, A., Sederburg, M. E., & Cristol, D. S. (2000). Employee attitude surveys: Examining the attitudes of noncompliant employees. Journal of Applied Psychology, 85(2), 284-293.
Rogelberg, S. G., Fisher, G. G., Maynard, D., Hakel, M. D., & Horvath, M. (2001). Attitudes toward surveys: Development of a measure and its relationship to respondent behavior. Organizational Research Methods, 4, 3-25.
Rogelberg, S. G., Conway, J. M., Sederburg, M. E., Spitzmuller, C., Aziz, S., & Knight, W. E. (2003). Profiling active and passive nonrespondents to an organizational survey. Journal of Applied Psychology, 88(6), 1104-1114.
Rubin, D. (1987). Multiple imputation for nonresponse in surveys. New York, NY: John Wiley & Sons.
Tomaskovic-Devey, D., Leiter, J., & Thompson, S. (1994). Organizational survey response. Administrative Science Quarterly, 39, 439-457.
Weiner, S. P., & Dalessio, A. T. (2006). Oversurveying: Causes, consequences, and cures. In A. I. Kraut (Ed.), Getting action from organizational surveys: New concepts, methods, and applications (pp. 294-311). San Francisco, CA: Jossey-Bass.
Yammarino, F. J., Skinner, S. J., & Childers, T. L. (1991). Understanding mail survey response behavior: A meta-analysis. Public Opinion Quarterly, 55, 613-639.
Segment 2: Methods to Facilitate Response
Quick brainstorm: I have thought of 12 response facilitation techniques. How many can you as a group come up with in three minutes?
Methods To Facilitate Response Actively publicize the survey.  Personally notify your potential participants that they will be receiving a survey in the near future.   Provide incentives, if appropriate.  Inexpensive items such as pens, key chains, or certificates for free food/drink can increase responses.
Keep the survey to a reasonable length. A theory-driven approach to survey design helps determine what is absolutely necessary to include in the survey instrument. Do not use the “kitchen sink” approach. What is a reasonable length?
Be sensitive to the actual physical design of your survey. For example, how questions are ordered may impact respondent participation. A study by Roberson and Sundstrom (1990) suggests placing the more interesting and easy questions first and demographic questions last.
Send reminder notes. Response rates may bump up 3-7% with each reminder note, but keep in mind that there is a point of diminishing returns when you irritate people who have chosen not to participate.
Give everyone the opportunity to participate (e.g., paper surveys where required, scheduling time off the phone in call centers, etc.). At one company, for example, most surveys run for 10 business days and span three work weeks.
Track response rates so that the survey coordinators can identify units with low response rates and contact the responsible manager to increase responses.
Foster commitment to the survey effort. For example, you can involve a wide range of employees (across many levels) in the survey development process. Link the content of the survey to important business outcomes.
Provide respondents with survey feedback after the project is completed. Be careful not to abandon your participants once you have gotten the data you wanted from them: you are paving the way for future survey efforts.
Other facilitators: personalization of the survey invitation, a personal signature as part of the cover letter, and topic salience.
Even when controlling for the presence of other techniques, advance notice, personalization, identification numbers, and salience are associated with higher response rates. Because of survey fatigue and declining response rates, we need to do more just to get the same results as in the past.
Target the facilitation strategy to whom you are surveying. For top executives, Anseel et al. found that salience of the survey topic mattered most and that incentives were counterproductive; incentives worked for unemployed individuals.
Segment 3: Survey Reduction Techniques in Detail
Quick brainstorm: I have thought of seven ways of reducing a survey. How many reduction methods can you think of in three minutes?
Primary goal: Reduce administration time.
Secondary goals: Reduce perceived administration time. Increase the engagement of the respondent with the experience of completing the instrument; lock in interest and excitement from the start. Reduce the extent of missing and erroneous data due to carelessness, rushing, test forms that are hard to use, etc. Increase the respondents’ ease of experience (maybe even enjoyment!) so that they will persist to the end AND respond again next year (or whenever the next survey comes out).
Conclusions? Make the survey SEEM as short and compact as possible. Streamline the WHOLE EXPERIENCE from the first call for participation all the way to the end of the final page of the instrument. Focus test-reduction efforts on the easy stuff before diving into the nitty-gritty statistical stuff.
Instruction Reduction
Comprehension of instructions only influences novice performance on surveys (Catrambone, 1990; HCI).
Instructions on average are written five grade levels above the average grade level of respondents; 23% of respondents failed to understand at least one element of the instructions (Spandorfer et al., 1993; Annals of EM).
Instruction Reduction Conclusions
Most people don’t read instructions anyway. When they do, the instructions often don’t help them respond any better!
If your response format is so novel that people require instructions, then you have a substantial burden to pilot test, in order to ensure that people comprehend the instructions and respond appropriately. Otherwise, do not take the risk!
Archival Demographics
Demographic data are critical: gender, race/ethnicity, age; managers and non-managers; exempt and non-exempt; part time and full time; unit and departmental affiliations.
Self-completed demographic data frequently contain missing fields or intentional mistakes.
Archival Demographics
Respondents should feel that demographics are not serving to identify them in their survey responses.
You could offer respondents two choices: 1) match (or automatically fill in) some or all demographic data using the code number provided in the invitation email (or on a paper letter); or 2) let them fill in the demographic data themselves (on web-based surveys, a reveal can branch respondents to the demographics page).
Forms/Interface Design
From Don Dillman’s Tailored Design Method, key form/interface design goals: non-subordinating language, no embarrassment, no drudgery, readability, simplicity.
Drudgery: Questions that require data lookup, calculation, interpolation, or recall of specific events from the distant past; the response process should give a sense of forward momentum and achievement.
Readability: Grade level should match the respondent population’s reading capability.
Simplicity: Layout should draw the eye directly to the items and response fields; the response method should fit respondents’ experience and expectations.
Discuss: Any particularly frustrating surveys? Particularly easy/streamlined ones?
Eligibility, Skip Logic, and Branching (illustration credit: Vovici.com)
Eligibility: If a survey has eligibility requirements, the screening questions should be placed at the earliest possible point in the survey. (Eligibility requirements can appear in instructions, but this should not be the sole method of screening out ineligible respondents.)
Skip logic: Skip logic actually shortens the survey by setting aside questions for which the respondent is ineligible. A toy sketch follows this list.
Branching: Branching may not shorten the survey, but it can improve the user experience by offering questions specifically focused on the respondent’s demographic or reported experience.
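A toy sketch of skip logic; the question texts, keys, and predicates are invented for illustration. Each question carries an eligibility predicate over the answers collected so far, so ineligible questions are simply set aside.

```python
# Toy skip-logic sketch: questions whose predicate is False are skipped.
questions = [
    ("own_car", "Do you own a car?", lambda a: True),
    ("car_miles", "About how many miles did you drive last year?",
     lambda a: a.get("own_car") == "yes"),
    ("commute", "How do you usually commute?", lambda a: True),
]

# Stand-in for a live respondent's answers.
simulated_responses = {"own_car": "no", "car_miles": "12000", "commute": "bus"}

answers = {}
for key, text, eligible in questions:
    if eligible(answers):
        answers[key] = simulated_responses[key]  # respondent sees only this item

print(answers)  # {'own_car': 'no', 'commute': 'bus'} -- car_miles was skipped
```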
Implications: Eligibility, Skip Logic, and Branching (illustration credit: Vovici.com)
Discuss: Ever answer a survey where you knew that your answer would predict how many questions you would have to answer after that? E.g., “How many hotel chains have you been to in the last year?”
If users can predict that their eligibility, the survey skip logic, or the survey branching will lead to longer, more complex, or more difficult or tedious responses, they may: abandon the survey, or back up and change their answer to the conditional item that entails less work (if the interface permits it).
Branch design should try not to imply what the user would have experienced in another branch. Paths through the survey should avoid causing considerably more work for some respondents than for others, if at all possible.
Panel Designs and Multiple Administration
Panel designs measure the same respondents on multiple occasions. Typically, either 1) predictors are gathered at an early point in time and outcomes are gathered at a later point in time, or 2) both predictors and outcomes are measured at every time point. (There are variations on these two themes.)
Panel designs are based on maturation and/or intervention processes that require the passage of time. Examples: career aspirations over time, person-organization fit over time, training before/after; discuss others?
Minimally, panel designs can help mitigate (though not solve) the problem of common method bias; e.g., when responding to a criterion at time 2, respondents tend to forget how they responded at time 1.
Panel Designs and Multiple Administration
Survey designers can apply the logic of panel designs to their own surveys. Sometimes you have to collect a large number of variables (no measure shortening), and it is impractical to do so in a single administration.
Generally speaking, it is better to have many short, pleasant survey administrations with a cumulative “work time lost” of an hour than one long, grinding hour-long survey. The former can get you happier and less fatigued respondents and, hopefully, better data.
In the limit, consider the implications of a “Today’s Poll” approach to measuring climate, stress, satisfaction, or other attitudinal variables: one question per day, every day…
Unobtrusive Behavioral Observation
Surveys appear convenient and relatively inexpensive in and of themselves; however, the cumulative work time lost across all respondents may be quite large. Methods that assess social variables through observations of overt behavior rather than self-report can provide indications of stress, satisfaction, organizational citizenship, intent to quit, and other psychologically and organizationally relevant variables.
Examples: cigarette breaks over time (frequency, number of incumbents per day); garbage (weight of trash before/after a recycling program); social media usage (tweets, blog posts, Facebook); wear of floor tiles; absenteeism or tardiness records; incumbent, team, and department production quality and quantity measures.
Unobtrusive Behavioral Observation
Most unobtrusive observations must be conducted over time: establish a baseline for the behavior, then examine subsequent time periods for changes/trends over time. This is generally much more labor-intensive data collection than surveys, and results should be cross-validated with other types of evidence.
Scale Reduction and One-item Measures
Standard scale construction calls for “sampling the construct domain” with items that tap into different aspects of the construct and refer to various content areas. Scales with more items can include a larger sample of the behaviors or topics relevant to the construct.
[Diagram: overlap of the Construct Domain and Item Content circles. RELEVANT: measuring what you want to measure. CONTAMINATED: measuring what you don’t want to measure. DEFICIENT: not measuring what you want to measure.]
Scale Reduction and One-item Measures
When fewer items are used, by necessity they must be either more general in wording to obtain full coverage (hopefully) or more narrow to focus on a subset of behaviors/topics. Internal consistency reliability reinforces this trade-off: as the number of items gets smaller, the inter-item correlation must rise to maintain a given level of internal consistency (see the sketch below). However, scales with fewer than 3-5 items rarely achieve acceptable internal consistency without simply becoming alternative wordings of the same questions.
Discussion: How many of you have taken a measure where you were being asked the same question again and again? Your reactions? Why was this done?
The one-item solution: A one-item measure usually “covers” a construct only if it is highly non-specific. A one-item measure has a measurable reliability (see Wanous & Hudy; ORM, 2001), but the concept of internal consistency is meaningless. Discuss: a one-item knowledge measure vs. a one-item job satisfaction measure.
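The trade-off follows from the standardized form of coefficient alpha (the Spearman-Brown relation), alpha = k·r̄ / (1 + (k − 1)·r̄), where k is the number of items and r̄ the mean inter-item correlation. Solving for r̄ shows how sharply it must rise as items are removed:

```python
# Mean inter-item correlation required to hold alpha constant as k shrinks.
def required_interitem_r(alpha, k):
    """Solve alpha = k*r / (1 + (k - 1)*r) for r."""
    return alpha / (k - alpha * (k - 1))

for k in (10, 5, 3, 2):
    print(f"{k} items need mean inter-item r = {required_interitem_r(0.80, k):.2f}")
```

For alpha = .80, ten items need r̄ ≈ .29, five items need r̄ ≈ .44, and two items need r̄ ≈ .67, which is why very short scales drift toward near-paraphrase items.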
One-item Measure Literature
Nagy (2002): Research using single-item measures of each of the five JDI job satisfaction facets found correlations between .60 and .72 with the full-length versions of the JDI scales.
Patrician (2004): Review of single-item graphical representation scales, the so-called “faces” scales.
Shamir & Kark (2004): A single-item graphic scale for organizational identification.
Oshagbemi (1999): Research finding that single-item job satisfaction scales systematically overestimate workers’ job satisfaction.
Loo (2002): Single-item measures work best on “homogeneous” constructs.
Scale Reduction: Technical Considerations
Items can be struck from a scale based on three different sets of qualities:
1. Internal item qualities refer to properties of items that can be assessed in reference to other items on the scale or the scale’s summated scores.
2. External item qualities refer to connections between the scale (or its individual items) and other constructs or indicators.
3. Judgmental item qualities refer to those issues that require subjective judgment and/or are difficult to assess in isolation from the context in which the scale is administered.
The most widely used method for item selection in scale reduction is some form of internal consistency maximization. Corrected item-total correlations provide diagnostic information about internal consistency, and in scale reduction efforts they have been employed as a basis for retaining items for a shortened scale version (see the sketch below). Factor analysis is another technique that, when used for scale reduction, can lead to increased internal consistency, assuming one chooses items that load strongly on a dominant factor.
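A small sketch of corrected item-total correlations on simulated data. The correction correlates each item with the sum of the remaining items, so an item does not inflate its own total; low-scoring items are the usual candidates for removal.

```python
# Corrected item-total correlations for a simulated 6-item scale.
import numpy as np

rng = np.random.default_rng(0)
# Six items sharing a common factor, plus item-specific noise.
items = rng.normal(size=(200, 6)) + rng.normal(size=(200, 1))

total = items.sum(axis=1)
for j in range(items.shape[1]):
    rest = total - items[:, j]                    # total excluding item j
    r = np.corrcoef(items[:, j], rest)[0, 1]
    print(f"item {j + 1}: corrected item-total r = {r:.2f}")
```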
Scale Reduction II
Despite their prevalence, there are important limitations to scale reduction techniques that maximize internal consistency. Choosing items to maximize internal consistency leads to item sets that are highly redundant in appearance, narrow in content, and potentially low in validity. High internal consistency often signifies a failure to adequately sample content from all parts of the construct domain.
To obtain high values of coefficient alpha, a scale developer need only write a set of items that paraphrase each other or are antonyms of one another. One can expect an equivalent result (i.e., high redundancy) from using the analogous approach in scale reduction, that is, excluding all items but those highly similar in content.
Scale Reduction III
IRT provides an alternative strategy for scale reduction that does not focus on maximizing internal consistency. One should retain items that are highly discriminating (i.e., moderate to large values of a) and attempt to include items with a range of item thresholds (i.e., b) that adequately cover the expected range of the trait in measured individuals. IRT analysis for scale reduction can be complex and does not provide a definitive answer to the question of which items to retain; rather, it provides evidence for which items might work well together to cover the trait range.
Relating items to external criteria provides a viable alternative to internal consistency and other internal qualities. Because correlations vary across different samples, instruments, and administration contexts, an item that predicts an external criterion best in one sample may not do so in another. Choosing items to maximize a relation with an external criterion runs the risk of a decrease in discriminant validity between the measures of the two constructs.
Scale Reduction IV
The overarching goal of any scale reduction project should be to closely replicate the pattern of relations established within the construct’s nomological network. In evaluating any given item’s relations with external criteria, one should seek moderate correlations with a variety of related scales (i.e., convergent validity) and low correlations with a variety of unrelated measures.
Researchers may also need to examine criteria beyond statistical relations to determine which items should remain in an abbreviated scale: clarity of expression, relevance to a particular respondent population, the semantic redundancy of an item’s content with other items, the perceived invasiveness of an item, and an item’s “face” validity. Items lacking apparent relevance, or that are highly redundant with other items on the scale, may be viewed negatively by respondents. To the extent that judgmental qualities can be used to select items with face validity, both the reactions of constituencies and the motivation of respondents may be enhanced.
A simple strategy for retention that does not require IRT analysis is stepwise regression: rank-ordered item inclusion in an “optimal” reduced-length scale that accounts for a nearly maximal proportion of variance in its own full-length summated scale score. Order of entry into the stepwise regression is a rank-order proxy indicating item goodness. Empirical results show that this method performs as well as a brute-force combinatorial scan of item combinations; the method can also be combined with human judgment to pick items from among the top-ranked items (but not in strict ranking order). A sketch follows.
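A minimal sketch of the stepwise strategy on simulated items: greedy forward selection against the full-length scale score, using plain least squares rather than a statistics package. The data and the eight-item scale are assumptions for illustration.

```python
# Forward stepwise selection: order of entry ranks items by "goodness".
import numpy as np

rng = np.random.default_rng(1)
n, k = 500, 8
items = rng.normal(size=(n, k)) + rng.normal(size=(n, 1))  # correlated items
full_score = items.sum(axis=1)                             # full-length scale score

selected, remaining = [], list(range(k))
while remaining:
    def r2(cols):
        # R^2 of an intercept-plus-items regression on the full-scale score.
        X = np.column_stack([np.ones(n)] + [items[:, c] for c in cols])
        beta, *_ = np.linalg.lstsq(X, full_score, rcond=None)
        resid = full_score - X @ beta
        return 1 - resid.var() / full_score.var()
    best = max(remaining, key=lambda c: r2(selected + [c]))
    selected.append(best)
    remaining.remove(best)
    print(f"step {len(selected)}: add item {best + 1}, R^2 = {r2(selected):.3f}")
```

Reading the printout top-down gives the rank order; cutting the list where R² plateaus yields a shortened scale that nearly reproduces the full-length score.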
Segment 4: Pitfalls, Trade-offs, and Justifications
Quick brainstorm: What complaints have you heard from managers when you ask them if you can survey their employees?
Evaluating Surveying Costs and Benefits
Wang and Strong’s data quality framework (1996; JoMIS): top five and bottom five data quality concerns of N = 355 managers.
Inference: A survey effort is seen as more valuable to the extent that it is completed quickly and cost-effectively, such that results make sense to managers and seem unbiased.
Trade-offs with Reduced Surveys
The shorter the survey…
…the less work time that is lost.
Mais conteúdo relacionado

Mais procurados

Inferential statistics
Inferential statisticsInferential statistics
Inferential statisticsAshok Kulkarni
 
Talk on reproducibility in EEG research
Talk on reproducibility in EEG researchTalk on reproducibility in EEG research
Talk on reproducibility in EEG researchDorothy Bishop
 
P-values the gold measure of statistical validity are not as reliable as many...
P-values the gold measure of statistical validity are not as reliable as many...P-values the gold measure of statistical validity are not as reliable as many...
P-values the gold measure of statistical validity are not as reliable as many...David Pratap
 
Sampling, Statistics and Sample Size
Sampling, Statistics and Sample SizeSampling, Statistics and Sample Size
Sampling, Statistics and Sample Sizeclearsateam
 
Research method EMBA chapter 3
Research method EMBA chapter 3Research method EMBA chapter 3
Research method EMBA chapter 3Mazhar Poohlah
 
Network meta-analysis with integrated nested Laplace approximations
Network meta-analysis with integrated nested Laplace approximationsNetwork meta-analysis with integrated nested Laplace approximations
Network meta-analysis with integrated nested Laplace approximationsBurak Kürsad Günhan
 
Refresher in statistics and analysis skill
Refresher in statistics and analysis skillRefresher in statistics and analysis skill
Refresher in statistics and analysis skillSantam Chakraborty
 
Sample determinants and size
Sample determinants and sizeSample determinants and size
Sample determinants and sizeTarek Tawfik Amin
 
Presentation research- chapter 10-11 istiqlal
Presentation research- chapter 10-11 istiqlalPresentation research- chapter 10-11 istiqlal
Presentation research- chapter 10-11 istiqlalIstiqlalEid
 
Adler clark 4e ppt 05
Adler clark 4e ppt 05Adler clark 4e ppt 05
Adler clark 4e ppt 05arpsychology
 
Inferential statistics
Inferential statisticsInferential statistics
Inferential statisticsMaria Theresa
 
Research Method for Business chapter 8
Research Method for Business chapter  8Research Method for Business chapter  8
Research Method for Business chapter 8Mazhar Poohlah
 
D. Mayo: Replication Research Under an Error Statistical Philosophy
D. Mayo: Replication Research Under an Error Statistical Philosophy D. Mayo: Replication Research Under an Error Statistical Philosophy
D. Mayo: Replication Research Under an Error Statistical Philosophy jemille6
 
Data Gathering, Sampling, Data Management and Data Integrity
Data Gathering, Sampling, Data Management and Data IntegrityData Gathering, Sampling, Data Management and Data Integrity
Data Gathering, Sampling, Data Management and Data IntegrityRyan Cloyd Villanueva
 
Meta analysis: Mega-silly or mega-useful?
Meta analysis: Mega-silly or mega-useful?Meta analysis: Mega-silly or mega-useful?
Meta analysis: Mega-silly or mega-useful?Daniel Quintana
 
Hypothesis testing
Hypothesis testingHypothesis testing
Hypothesis testingSohail Patel
 

Mais procurados (20)

Inferential statistics
Inferential statisticsInferential statistics
Inferential statistics
 
Talk on reproducibility in EEG research
Talk on reproducibility in EEG researchTalk on reproducibility in EEG research
Talk on reproducibility in EEG research
 
P-values the gold measure of statistical validity are not as reliable as many...
P-values the gold measure of statistical validity are not as reliable as many...P-values the gold measure of statistical validity are not as reliable as many...
P-values the gold measure of statistical validity are not as reliable as many...
 
Qualitative data analysis
Qualitative data analysisQualitative data analysis
Qualitative data analysis
 
Sampling, Statistics and Sample Size
Sampling, Statistics and Sample SizeSampling, Statistics and Sample Size
Sampling, Statistics and Sample Size
 
Research method EMBA chapter 3
Research method EMBA chapter 3Research method EMBA chapter 3
Research method EMBA chapter 3
 
Network meta-analysis with integrated nested Laplace approximations
Network meta-analysis with integrated nested Laplace approximationsNetwork meta-analysis with integrated nested Laplace approximations
Network meta-analysis with integrated nested Laplace approximations
 
Refresher in statistics and analysis skill
Refresher in statistics and analysis skillRefresher in statistics and analysis skill
Refresher in statistics and analysis skill
 
Marketing Research
Marketing ResearchMarketing Research
Marketing Research
 
Sample determinants and size
Sample determinants and sizeSample determinants and size
Sample determinants and size
 
Presentation research- chapter 10-11 istiqlal
Presentation research- chapter 10-11 istiqlalPresentation research- chapter 10-11 istiqlal
Presentation research- chapter 10-11 istiqlal
 
Adler clark 4e ppt 05
Adler clark 4e ppt 05Adler clark 4e ppt 05
Adler clark 4e ppt 05
 
Methodology
MethodologyMethodology
Methodology
 
Inferential statistics
Inferential statisticsInferential statistics
Inferential statistics
 
Research Method for Business chapter 8
Research Method for Business chapter  8Research Method for Business chapter  8
Research Method for Business chapter 8
 
D. Mayo: Replication Research Under an Error Statistical Philosophy
D. Mayo: Replication Research Under an Error Statistical Philosophy D. Mayo: Replication Research Under an Error Statistical Philosophy
D. Mayo: Replication Research Under an Error Statistical Philosophy
 
Data Gathering, Sampling, Data Management and Data Integrity
Data Gathering, Sampling, Data Management and Data IntegrityData Gathering, Sampling, Data Management and Data Integrity
Data Gathering, Sampling, Data Management and Data Integrity
 
Mm23
Mm23Mm23
Mm23
 
Meta analysis: Mega-silly or mega-useful?
Meta analysis: Mega-silly or mega-useful?Meta analysis: Mega-silly or mega-useful?
Meta analysis: Mega-silly or mega-useful?
 
Hypothesis testing
Hypothesis testingHypothesis testing
Hypothesis testing
 

Destaque

Survey Research (SOC2029). Seminar 10: non-response and missing data
Survey Research (SOC2029). Seminar 10: non-response and missing dataSurvey Research (SOC2029). Seminar 10: non-response and missing data
Survey Research (SOC2029). Seminar 10: non-response and missing dataDavid Rozas
 
Talk is silver, code is gold? Contribution beyond source code in Free/Libre O...
Talk is silver, code is gold? Contribution beyond source code in Free/Libre O...Talk is silver, code is gold? Contribution beyond source code in Free/Libre O...
Talk is silver, code is gold? Contribution beyond source code in Free/Libre O...David Rozas
 
Survey (Primer on Questions, Sampling + Case Study)
Survey (Primer on Questions, Sampling + Case Study)Survey (Primer on Questions, Sampling + Case Study)
Survey (Primer on Questions, Sampling + Case Study)Dada Veloso-Beltran
 
Sampling methods PPT
Sampling methods PPTSampling methods PPT
Sampling methods PPTVijay Mehta
 
Sampling design, sampling errors, sample size determination
Sampling design, sampling errors, sample size determinationSampling design, sampling errors, sample size determination
Sampling design, sampling errors, sample size determinationVishnupriya T H
 
Introduction to Survey Research
Introduction to Survey ResearchIntroduction to Survey Research
Introduction to Survey ResearchJames Neill
 
RESEARCH METHOD - SAMPLING
RESEARCH METHOD - SAMPLINGRESEARCH METHOD - SAMPLING
RESEARCH METHOD - SAMPLINGHafizah Hajimia
 

Destaque (8)

Survey Research (SOC2029). Seminar 10: non-response and missing data
Survey Research (SOC2029). Seminar 10: non-response and missing dataSurvey Research (SOC2029). Seminar 10: non-response and missing data
Survey Research (SOC2029). Seminar 10: non-response and missing data
 
Talk is silver, code is gold? Contribution beyond source code in Free/Libre O...
Talk is silver, code is gold? Contribution beyond source code in Free/Libre O...Talk is silver, code is gold? Contribution beyond source code in Free/Libre O...
Talk is silver, code is gold? Contribution beyond source code in Free/Libre O...
 
Survey (Primer on Questions, Sampling + Case Study)
Survey (Primer on Questions, Sampling + Case Study)Survey (Primer on Questions, Sampling + Case Study)
Survey (Primer on Questions, Sampling + Case Study)
 
Sampling survey - Intro to Quantitative
Sampling survey - Intro to Quantitative Sampling survey - Intro to Quantitative
Sampling survey - Intro to Quantitative
 
Sampling methods PPT
Sampling methods PPTSampling methods PPT
Sampling methods PPT
 
Sampling design, sampling errors, sample size determination
Sampling design, sampling errors, sample size determinationSampling design, sampling errors, sample size determination
Sampling design, sampling errors, sample size determination
 
Introduction to Survey Research
Introduction to Survey ResearchIntroduction to Survey Research
Introduction to Survey Research
 
RESEARCH METHOD - SAMPLING
RESEARCH METHOD - SAMPLINGRESEARCH METHOD - SAMPLING
RESEARCH METHOD - SAMPLING
 

Semelhante a PACIS Survey Workshop

Stat11t Chapter1
Stat11t Chapter1Stat11t Chapter1
Stat11t Chapter1gueste87a4f
 
How NOT to Aggregrate Polling Data
How NOT to Aggregrate Polling DataHow NOT to Aggregrate Polling Data
How NOT to Aggregrate Polling DataDataCards
 
April Heyward Research Methods Class Session - 7-29-2021
April Heyward Research Methods Class Session - 7-29-2021April Heyward Research Methods Class Session - 7-29-2021
April Heyward Research Methods Class Session - 7-29-2021April Heyward
 
Statistical and critical thinking
Statistical and critical thinkingStatistical and critical thinking
Statistical and critical thinkingRamiroGarcia103
 
What is Up With Lower Respone Rates?
What is Up With Lower Respone Rates?What is Up With Lower Respone Rates?
What is Up With Lower Respone Rates?soder145
 
SURVEY_RESEARCH.ppt
SURVEY_RESEARCH.pptSURVEY_RESEARCH.ppt
SURVEY_RESEARCH.pptRavi Kumar
 
Marketing research
Marketing researchMarketing research
Marketing researchNasir Uddin
 
A Guide To Using Qualitative Research Methodology
A Guide To Using Qualitative Research MethodologyA Guide To Using Qualitative Research Methodology
A Guide To Using Qualitative Research MethodologyJim Jimenez
 
Introduction to statistics 2013
Introduction to statistics 2013Introduction to statistics 2013
Introduction to statistics 2013Mohammad Ihmeidan
 
Drug Addiction docx
Drug Addiction docxDrug Addiction docx
Drug Addiction docxSafeer Ali
 
Survey procedures in dentistry
Survey procedures in dentistrySurvey procedures in dentistry
Survey procedures in dentistrydeepthiRagasree
 
Experimental Psychology
Experimental PsychologyExperimental Psychology
Experimental PsychologyElla Mae Ayen
 
SURVEY RESEARCH- Advance Research Methodology
SURVEY RESEARCH- Advance Research MethodologySURVEY RESEARCH- Advance Research Methodology
SURVEY RESEARCH- Advance Research MethodologyRehan Ehsan
 
Research Activity 1.docx
Research Activity 1.docxResearch Activity 1.docx
Research Activity 1.docxAsheFritz
 
practical reporting.pptx
practical reporting.pptxpractical reporting.pptx
practical reporting.pptxprimoboymante
 
LASA 1 Final Project Early Methods Section3LASA 1.docx
LASA 1 Final Project Early Methods Section3LASA 1.docxLASA 1 Final Project Early Methods Section3LASA 1.docx
LASA 1 Final Project Early Methods Section3LASA 1.docxDIPESH30
 

Semelhante a PACIS Survey Workshop (20)

1.1 statistical and critical thinking
1.1 statistical and critical thinking1.1 statistical and critical thinking
1.1 statistical and critical thinking
 
Stat11t chapter1
Stat11t chapter1Stat11t chapter1
Stat11t chapter1
 
Stat11t Chapter1
Stat11t Chapter1Stat11t Chapter1
Stat11t Chapter1
 
How NOT to Aggregrate Polling Data
How NOT to Aggregrate Polling DataHow NOT to Aggregrate Polling Data
How NOT to Aggregrate Polling Data
 
April Heyward Research Methods Class Session - 7-29-2021
April Heyward Research Methods Class Session - 7-29-2021April Heyward Research Methods Class Session - 7-29-2021
April Heyward Research Methods Class Session - 7-29-2021
 
Statistical and critical thinking
Statistical and critical thinkingStatistical and critical thinking
Statistical and critical thinking
 
What is Up With Lower Respone Rates?
What is Up With Lower Respone Rates?What is Up With Lower Respone Rates?
What is Up With Lower Respone Rates?
 
SURVEY_RESEARCH.ppt
SURVEY_RESEARCH.pptSURVEY_RESEARCH.ppt
SURVEY_RESEARCH.ppt
 
Marketing research
Marketing researchMarketing research
Marketing research
 
A Guide To Using Qualitative Research Methodology
A Guide To Using Qualitative Research MethodologyA Guide To Using Qualitative Research Methodology
A Guide To Using Qualitative Research Methodology
 
Introduction to statistics 2013
Introduction to statistics 2013Introduction to statistics 2013
Introduction to statistics 2013
 
Drug Addiction docx
Drug Addiction docxDrug Addiction docx
Drug Addiction docx
 
Survey procedures in dentistry
Survey procedures in dentistrySurvey procedures in dentistry
Survey procedures in dentistry
 
Unit 2
Unit 2Unit 2
Unit 2
 
Experimental Psychology
Experimental PsychologyExperimental Psychology
Experimental Psychology
 
SURVEY RESEARCH- Advance Research Methodology
SURVEY RESEARCH- Advance Research MethodologySURVEY RESEARCH- Advance Research Methodology
SURVEY RESEARCH- Advance Research Methodology
 
Methodology Essay
Methodology EssayMethodology Essay
Methodology Essay
 
Research Activity 1.docx
Research Activity 1.docxResearch Activity 1.docx
Research Activity 1.docx
 
practical reporting.pptx
practical reporting.pptxpractical reporting.pptx
practical reporting.pptx
 
LASA 1 Final Project Early Methods Section3LASA 1.docx
LASA 1 Final Project Early Methods Section3LASA 1.docxLASA 1 Final Project Early Methods Section3LASA 1.docx
LASA 1 Final Project Early Methods Section3LASA 1.docx
 

Mais de Syracuse University

Basic SEVIS Overview for U.S. University Faculty
Basic SEVIS Overview for U.S. University FacultyBasic SEVIS Overview for U.S. University Faculty
Basic SEVIS Overview for U.S. University FacultySyracuse University
 
Why R? A Brief Introduction to the Open Source Statistics Platform
Why R? A Brief Introduction to the Open Source Statistics PlatformWhy R? A Brief Introduction to the Open Source Statistics Platform
Why R? A Brief Introduction to the Open Source Statistics PlatformSyracuse University
 
Carma internet research module scale development
Carma internet research module   scale developmentCarma internet research module   scale development
Carma internet research module scale developmentSyracuse University
 
Carma internet research module getting started with question pro
Carma internet research module   getting started with question proCarma internet research module   getting started with question pro
Carma internet research module getting started with question proSyracuse University
 
Carma internet research module visual design issues
Carma internet research module   visual design issuesCarma internet research module   visual design issues
Carma internet research module visual design issuesSyracuse University
 
Introduction to Advance Analytics Course
Introduction to Advance Analytics CourseIntroduction to Advance Analytics Course
Introduction to Advance Analytics CourseSyracuse University
 
Mining tweets for security information (rev 2)
Mining tweets for security information (rev 2)Mining tweets for security information (rev 2)
Mining tweets for security information (rev 2)Syracuse University
 
Carma internet research module: Future data collection
Carma internet research module: Future data collectionCarma internet research module: Future data collection
Carma internet research module: Future data collectionSyracuse University
 

Mais de Syracuse University (20)

Discovery informaticsstanton
Discovery informaticsstantonDiscovery informaticsstanton
Discovery informaticsstanton
 
Basic SEVIS Overview for U.S. University Faculty
Basic SEVIS Overview for U.S. University FacultyBasic SEVIS Overview for U.S. University Faculty
Basic SEVIS Overview for U.S. University Faculty
 
Why R? A Brief Introduction to the Open Source Statistics Platform
Why R? A Brief Introduction to the Open Source Statistics PlatformWhy R? A Brief Introduction to the Open Source Statistics Platform
Why R? A Brief Introduction to the Open Source Statistics Platform
 
Chapter9 r studio2
Chapter9 r studio2Chapter9 r studio2
Chapter9 r studio2
 
Basic Overview of Data Mining
Basic Overview of Data MiningBasic Overview of Data Mining
Basic Overview of Data Mining
 
Strategic planning
Strategic planningStrategic planning
Strategic planning
 
Carma internet research module scale development
Carma internet research module   scale developmentCarma internet research module   scale development
Carma internet research module scale development
 
Carma internet research module getting started with question pro
Carma internet research module   getting started with question proCarma internet research module   getting started with question pro
Carma internet research module getting started with question pro
 
Carma internet research module visual design issues
Carma internet research module   visual design issuesCarma internet research module   visual design issues
Carma internet research module visual design issues
 
Siop impact of social media
Siop impact of social mediaSiop impact of social media
Siop impact of social media
 
Basic Graphics with R
Basic Graphics with RBasic Graphics with R
Basic Graphics with R
 
R-Studio Vs. Rcmdr
R-Studio Vs. RcmdrR-Studio Vs. Rcmdr
R-Studio Vs. Rcmdr
 
Getting Started with R
Getting Started with RGetting Started with R
Getting Started with R
 
Moving Data to and From R
Moving Data to and From RMoving Data to and From R
Moving Data to and From R
 
Introduction to Advance Analytics Course
Introduction to Advance Analytics CourseIntroduction to Advance Analytics Course
Introduction to Advance Analytics Course
 
Installing R and R-Studio
Installing R and R-StudioInstalling R and R-Studio
Installing R and R-Studio
 
Mining tweets for security information (rev 2)
Mining tweets for security information (rev 2)Mining tweets for security information (rev 2)
Mining tweets for security information (rev 2)
 
What is Data Science
What is Data ScienceWhat is Data Science
What is Data Science
 
Reducing Response Burden
Reducing Response BurdenReducing Response Burden
Reducing Response Burden
 
Carma internet research module: Future data collection
Carma internet research module: Future data collectionCarma internet research module: Future data collection
Carma internet research module: Future data collection
 

Último

Grade 9 Quarter 4 Dll Grade 9 Quarter 4 DLL.pdf
Grade 9 Quarter 4 Dll Grade 9 Quarter 4 DLL.pdfGrade 9 Quarter 4 Dll Grade 9 Quarter 4 DLL.pdf
Grade 9 Quarter 4 Dll Grade 9 Quarter 4 DLL.pdfJemuel Francisco
 
Beauty Amidst the Bytes_ Unearthing Unexpected Advantages of the Digital Wast...
Beauty Amidst the Bytes_ Unearthing Unexpected Advantages of the Digital Wast...Beauty Amidst the Bytes_ Unearthing Unexpected Advantages of the Digital Wast...
Beauty Amidst the Bytes_ Unearthing Unexpected Advantages of the Digital Wast...DhatriParmar
 
Concurrency Control in Database Management system
Concurrency Control in Database Management systemConcurrency Control in Database Management system
Concurrency Control in Database Management systemChristalin Nelson
 
Expanded definition: technical and operational
Expanded definition: technical and operationalExpanded definition: technical and operational
Expanded definition: technical and operationalssuser3e220a
 
4.11.24 Poverty and Inequality in America.pptx
4.11.24 Poverty and Inequality in America.pptx4.11.24 Poverty and Inequality in America.pptx
4.11.24 Poverty and Inequality in America.pptxmary850239
 
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)lakshayb543
 
Man or Manufactured_ Redefining Humanity Through Biopunk Narratives.pptx
Man or Manufactured_ Redefining Humanity Through Biopunk Narratives.pptxMan or Manufactured_ Redefining Humanity Through Biopunk Narratives.pptx
Man or Manufactured_ Redefining Humanity Through Biopunk Narratives.pptxDhatriParmar
 
Mythology Quiz-4th April 2024, Quiz Club NITW
Mythology Quiz-4th April 2024, Quiz Club NITWMythology Quiz-4th April 2024, Quiz Club NITW
Mythology Quiz-4th April 2024, Quiz Club NITWQuiz Club NITW
 
Scientific Writing :Research Discourse
Scientific  Writing :Research  DiscourseScientific  Writing :Research  Discourse
Scientific Writing :Research DiscourseAnita GoswamiGiri
 
Daily Lesson Plan in Mathematics Quarter 4
Daily Lesson Plan in Mathematics Quarter 4Daily Lesson Plan in Mathematics Quarter 4
PACIS Survey Workshop

  • 10. What is an acceptable response rate for your study?
  • 11. Trick Question? Industry and academic standards only put a response rate into context. The fact that everyone else also achieves 30%, 50%, or 70% response does not demonstrate that the reported research is free from nonresponse bias. In the absence of good information about the presence, magnitude, and direction of nonresponse bias, ignoring the results of a study with a 10% response rate -- particularly if the research question explores a new and previously unaddressed issue -- is just as foolish as assuming that one with a response rate of 80% is unassailable.
  • 12. The Nature of Nonresponse Bias. Bias = Xres − Xpop = PNR × (Xres − Xnonres), where ‘PNR’ refers to the proportion of non-respondents, ‘Xres’ is the respondent mean on a survey-relevant variable, ‘Xpop’ is the population mean on the corresponding survey-relevant variable, if it were actually known, and ‘Xnonres’ is the corresponding (unobserved) non-respondent mean. For example, if 40% of the sample does not respond, respondents average 3.8, and non-respondents would have averaged 3.0, the observed mean overstates the population mean by 0.40 × 0.8 = 0.32. Overall, the impact of nonresponse on survey statistics depends on the percentage not responding and the extent to which those not responding systematically differ from the whole population on survey-relevant variables.
  • 13. Error/Bias due to Non-Response (http://www.idready.org/courses/2005/spring/survey_SamplingFrames.pdf). (Diagram: non-respondents, with unknown parameters µ0, α0, r0, F0, versus respondents, with observed parameters µ1, α1, r1, F1. How many? How different?)
  • 14. Sample: N=100; 10% response (n=10): 5 say YES, 5 say NO. If non-respondents resemble respondents, then a low response rate is not a problem.
  • 15. Sample: N=100; 70% response (n=70): 35 say YES, 35 say NO. Even when response rates are “high,” substantial potential for error still exists.
  • 16. Worst Case Scenario Exercise. What are the two worst things that could happen? Choose one scenario below and run the numbers:
1. Sample N=100; response rate 30%; percent YES for respondents = 40% (12 YES votes)
2. Sample N=100; response rate 30%; percent YES for respondents = 90% (27 YES votes)
3. Sample N=100; response rate 90%; percent YES for respondents = 50% (45 YES votes)
4. Sample N=100; response rate 90%; percent YES for respondents = 80% (72 YES votes)
If you counted all non-respondents as YES votes, or all as NO votes, what would be the range of results in each of these scenarios?
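A minimal sketch (Python; not part of the original slides) of the arithmetic behind this exercise: bound the true percent YES by counting every non-respondent first as a NO vote and then as a YES vote.

```python
# Worst-case bounds on "percent YES" for each slide scenario.
scenarios = [
    # (sample N, response rate, percent YES among respondents)
    (100, 0.30, 0.40),
    (100, 0.30, 0.90),
    (100, 0.90, 0.50),
    (100, 0.90, 0.80),
]

for n, rr, pct_yes in scenarios:
    respondents = round(n * rr)
    yes_votes = round(respondents * pct_yes)
    nonrespondents = n - respondents
    low = yes_votes / n                       # all non-respondents vote NO
    high = (yes_votes + nonrespondents) / n   # all non-respondents vote YES
    print(f"N={n}, RR={rr:.0%}, observed YES={pct_yes:.0%}: "
          f"true YES could range from {low:.0%} to {high:.0%}")
```

With a 30% response rate, even a 40% observed YES could correspond to anything from 12% to 82% in the full sample; higher response rates shrink that interval.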
  • 17. Previous Examples: Proportions. Of course, rating scale means can be similarly affected by nonresponse error. What about correlations? Word on the street is that correlations are fairly robust against nonresponse error. We ran a simulation: 300 runs of random samples of n=500 from a larger population where rho=0.284 between a rating scale and a criterion scale. Half of the samples had biased nonresponse, where those favorable on the rating scale were twice as likely to respond. Results showed that the unbiased samples had slightly suppressed correlations: a decline of r=0.038. Biased samples had more suppression: a decline of r=0.066. The difference in suppression was statistically significant, p<.001. There were no sign reversals of correlations in any of the 300 samples.
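The slide does not give the simulation code, so the sketch below is a loose reconstruction under assumed details (the response model, 2:1 response odds for people above the median, and an overall ~25% response rate are all assumptions); expect directionally similar, not identical, numbers.

```python
# Sketch: how nonresponse that depends on the rating variable affects r.
import numpy as np

rng = np.random.default_rng(42)
rho, n, runs = 0.284, 500, 300

def correlated_pair(size):
    x = rng.standard_normal(size)
    y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(size)
    return x, y

unbiased, biased = [], []
for _ in range(runs):
    x, y = correlated_pair(n * 4)              # frame; only some "respond"
    # Unbiased nonresponse: everyone responds with probability 0.25.
    keep = rng.random(x.size) < 0.25
    unbiased.append(np.corrcoef(x[keep], y[keep])[0, 1])
    # Biased nonresponse: those above the median on x respond twice as often.
    p = np.where(x > np.median(x), 0.333, 0.167)
    keep = rng.random(x.size) < p
    biased.append(np.corrcoef(x[keep], y[keep])[0, 1])

print("mean r, unbiased nonresponse:", round(float(np.mean(unbiased)), 3))
print("mean r, biased nonresponse:  ", round(float(np.mean(biased)), 3))
```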
  • 18. Case Study Exercise. Instructions: Read the brief case and make some notes. Discuss with others at your table. Generate as many ideas as you can; write your ideas on the sheets. Prepare to report back to the complete group. Case overview: You are preparing a climate survey and are unlikely to get a response rate much above 45%. How can you prepare for possible criticism of the results?
  • 19. N-BIAS. Response rate alone is an inaccurate and unreliable proxy for study quality. While improving response rates is a worthy goal, researchers’ major efforts and resources should go into understanding the magnitude and direction of the bias caused by nonresponse, if it exists. Rogelberg and Stanton (2006) advocate that researchers conduct a nonresponse bias impact assessment (N-BIAS), regardless of how high a response rate is achieved.
  • 20. N-BIAS Methods. N-BIAS is presently composed of nine techniques: 1) Archival Analysis; 2) Follow-up Approach; 3) Wave Analysis; 4) Passive Nonresponse Analysis; 5) Interest Level Analysis; 6) Active Nonresponse Analysis; 7) Worst Case Resistance; 8) Benchmarking/Norms; 9) Demonstrate Generalizability.
  • 21. N-BIAS: How it Works Similar to a test validation strategy. In amassing evidence for validity, each of several different validation methods (e.g., concurrent validity) provides a variety of insights into validity. Each assessment approach has strengths and limitations. There is no one conclusive approach and no particular piece of evidence that is sufficient to ward off all threats. Assessing the impact of nonresponse bias requires development and inclusion of different types of evidence, and the case for nugatory impact of nonresponse bias is built on multiple pieces of evidence that converge with one another.
  • 22. Technique 1: Archival Analysis. This is the most common technique. The researcher identifies an archival database that contains all members of the survey sample (e.g., personnel records). That data set, usually containing demographic data, can be described: 50% female, 40% supervisors, etc. After data collection, code numbers on the returned surveys (or access passwords) can be used to identify respondents, and by extension nonrespondents. Using this information, the archival database can be partitioned into two segments: 1) data concerning respondents; and 2) data concerning nonrespondents.
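One common way to analyze this partition is a simple contingency-table comparison of respondents and nonrespondents on the archival variables; the sketch below uses hypothetical personnel-record columns and respondent IDs.

```python
# Sketch: compare respondents vs. nonrespondents on archival demographics.
import pandas as pd
from scipy.stats import chi2_contingency

frame = pd.DataFrame({
    "employee_id": range(1, 11),
    "female":     [1, 0, 1, 1, 0, 0, 1, 0, 1, 0],
    "supervisor": [0, 1, 0, 0, 1, 0, 0, 1, 0, 0],
})
responded_ids = {1, 2, 4, 7, 9}  # recovered from code numbers on returns

frame["responded"] = frame["employee_id"].isin(responded_ids)
for col in ["female", "supervisor"]:
    table = pd.crosstab(frame["responded"], frame[col])
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"{col}: chi2={chi2:.2f}, p={p:.3f}")
    print(frame.groupby("responded")[col].mean())  # group proportions
```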
  • 23. So, if you found the above, do you have nonresponse bias?
  • 24. Technique 2: Follow-up Approach. Using identifiers attached to returned surveys (or access passwords), respondents can be identified, and by extension nonrespondents. The follow-up approach involves randomly selecting and resurveying a small segment of nonrespondents using an alternative modality, often the telephone. The full or abridged survey is then administered. In the absence of identifiers, telephone a small random sample, ask whether or not they responded to the initial survey, and follow up with survey-relevant questions.
  • 25. Technique 3: Wave Analysis. By noting in the data set whether each survey was returned before the deadline, after an initial reminder note, after the deadline, and so on, early responses can be compared with late responses on actual survey variables (e.g., compare job satisfaction levels).
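A hedged sketch of that comparison: a two-sample test of early versus late responders on one survey variable (the variable and data below are invented).

```python
# Sketch: wave analysis comparing early vs. late responders.
import numpy as np
from scipy import stats

job_sat_early = np.array([4.2, 3.8, 4.0, 3.9, 4.4, 4.1])  # before deadline
job_sat_late  = np.array([3.1, 3.6, 3.3, 3.9, 3.0])       # after reminder

t, p = stats.ttest_ind(job_sat_early, job_sat_late, equal_var=False)
print(f"Welch t = {t:.2f}, p = {p:.3f}")
# If late responders (the closest observable stand-ins for nonrespondents)
# differ meaningfully, nonresponse bias on this variable is more plausible.
```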
  • 26. Technique 4: Passive Nonresponse Analysis Rogelberg et al. (2003) found that the vast majority of nonresponse can be classified as being passive in nature (approx. 85%). Passive nonresponse does not appear to be planned. When asked (upon receipt of the survey), these individuals indicate a general willingness to complete the survey – if they have the time. Given this, it is not surprising that they generally do not differ from respondents with regard to job satisfaction or related variables.
  • 27. Technique 5: Interest Level Analysis. Researchers have repeatedly found that interest in the survey topic is one of the best predictors of a respondent’s likelihood of completing the survey. As a result, if interest level is related to attitudinal standing on the topics making up the survey, the survey results are susceptible to bias. E.g., if low-interest individuals tend to be more dissatisfied on the survey constructs in question, results will be biased “high.”
  • 28. Technique 6: Active Nonresponse Analysis. Active nonrespondents, in contrast to passive nonrespondents, are those who overtly choose not to respond to a survey effort. The nonresponse is volitional and a priori (i.e., it occurs when initially confronted with a survey solicitation). Active nonrespondents tend to differ from respondents on a number of dimensions typically relevant to the organizational survey researcher (e.g., job satisfaction).
  • 29. Technique 7: Worst Case Resistance. Given the data collected from respondents in an actual study, one can empirically answer the question of what proportion of nonrespondents would have to exhibit the opposite pattern of responding to adversely influence the sample results. The philosophy is similar to consideration of the “file-drawer problem” in meta-analysis. By adding simulated data to an existing data set, one can explore how resistant the data set is to worst-case responses from nonrespondents.
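An illustrative sketch under assumed numbers (a 1-5 scale, 60 respondents from a frame of 100): append simulated scale-minimum answers and find how many it takes to overturn the conclusion that the mean sits above the scale midpoint.

```python
# Sketch: worst-case resistance by augmenting the data with hostile responses.
import numpy as np

rng = np.random.default_rng(1)
observed = rng.normal(4.1, 0.5, 60).clip(1, 5)  # 60 respondents, favorable
n_nonrespondents = 40                           # frame of 100, 60% response
midpoint = 3.0                                  # midpoint of a 1-5 scale

for k in range(0, n_nonrespondents + 1, 5):
    worst = np.full(k, 1.0)                     # simulated scale-minimum answers
    mean = np.concatenate([observed, worst]).mean()
    flag = "holds" if mean > midpoint else "overturned"
    print(f"{k:2d} worst-case additions -> mean {mean:.2f} ({flag})")
# Report the tipping point: how many worst-case nonrespondents it takes
# before the substantive conclusion changes.
```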
  • 30. Technique 8: Benchmarking Using measures with norms for the population under examination, compare means and standard deviations of the collected sample to the norms
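A minimal sketch of a norm comparison (the norm values and sample data are placeholders): a z-statistic for the sample mean against the published norm.

```python
# Sketch: benchmark the obtained sample against the instrument's norms.
import numpy as np

norm_mean, norm_sd = 3.6, 0.9  # hypothetical values from norm tables
sample = np.array([3.2, 4.0, 3.5, 3.8, 3.1, 3.9, 3.4, 3.7])

z = (sample.mean() - norm_mean) / (norm_sd / np.sqrt(sample.size))
print(f"sample mean {sample.mean():.2f} vs. norm {norm_mean}; z = {z:.2f}")
# A sample that sits far from the norms (in mean or spread) is a warning
# sign that respondents may not represent the normed population.
```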
  • 31. Technique 9: Demonstrate Generalizability. By definition, nonresponse bias is a phenomenon that is peculiar to a given sample under particular study conditions. Triangulating with a sample collected using a different method, or varying the conditions under which the study is conducted, should also change the composition of the nonrespondent group; if the findings replicate anyway, nonresponse bias becomes a less plausible explanation for them.
  • 32. N-BIAS: Conclusion. Nonresponse can be problematic on a number of fronts. Do what you can to facilitate response. In the inevitable case of nonresponse, use the N-BIAS approach to accumulate information about the presence or absence of problematic nonresponse bias. Engage in as many techniques as feasible: one is better than none, and two are better than one. Most published literature has none! Each approach has a different purpose, and each has positives and negatives. Use the N-BIAS information collected to decide on next steps and to educate your audience.
  • 33. Key References. Armstrong, J. S., & Overton, T. S. (1977). Estimating nonresponse bias in mail surveys. Journal of Marketing Research, 14 (Special Issue: Recent Developments in Survey Research), 396-402. Baruch, Y. (1999). Response rate in academic studies – A comparative analysis. Human Relations, 52(4), 421-438. Bosnjak, M., Tuten, T. L., & Wittmann, W. W. (2005). Unit (non)response in web-based access panel surveys: An extended planned-behavior approach. Psychology and Marketing, 22, 489-505. Dillman, D. A. (2000). Mail and internet surveys: The tailored design method. New York, NY: John Wiley & Sons. Groves, R., Presser, S., & Dipko, S. (2004). The role of topic interest in survey participation decisions. Public Opinion Quarterly, 68, 2-31. Rogelberg, S. G., & Luong, A. (1998). Nonresponse to mailed surveys: A review and guide. Current Directions in Psychological Science, 7, 60-65. Rogelberg, S. G., Luong, A., Sederburg, M. E., & Cristol, D. S. (2000). Employee attitude surveys: Examining the attitudes of noncompliant employees. Journal of Applied Psychology, 85(2), 284-293. Rogelberg, S. G., Fisher, G. G., Maynard, D., Hakel, M. D., & Horvath, M. (2001). Attitudes toward surveys: Development of a measure and its relationship to respondent behavior. Organizational Research Methods, 4, 3-25. Rogelberg, S. G., Conway, J. M., Sederburg, M. E., Spitzmuller, C., Aziz, S., & Knight, W. E. (2003). Profiling active and passive nonrespondents to an organizational survey. Journal of Applied Psychology, 88(6), 1104-1114. Rubin, D. (1987). Multiple imputation for nonresponse in surveys. New York, NY: John Wiley & Sons. Tomaskovic-Devey, D., Leiter, J., & Thompson, S. (1994). Organizational survey response. Administrative Science Quarterly, 39, 439-457. Weiner, S. P., & Dalessio, A. T. (2006). Oversurveying: Causes, consequences, and cures. In A. I. Kraut (Ed.), Getting action from organizational surveys: New concepts, methods, and applications (pp. 294-311). San Francisco, CA: Jossey-Bass. Yammarino, F. J., Skinner, S. J., & Childers, T. L. (1991). Understanding mail survey response behavior: A meta-analysis. Public Opinion Quarterly, 55, 613-639.
  • 34. Segment 2: Methods to Facilitate Response. Quick brainstorm: I have thought of 12 response facilitation techniques. How many can you as a group come up with in three minutes?
  • 35. Methods To Facilitate Response Actively publicize the survey. Personally notify your potential participants that they will be receiving a survey in the near future. Provide incentives, if appropriate. Inexpensive items such as pens, key chains, or certificates for free food/drink can increase responses.
  • 36. Keep the survey to a reasonable length. A theory-driven approach to survey design helps determine what is absolutely necessary to include in the survey instrument. Do not use the “kitchen sink” approach. What is a reasonable length? Be sensitive to the actual physical design of your survey. For example, how questions are ordered may affect respondent participation. A study by Roberson and Sundstrom (1990) suggests placing the more interesting and easy questions first and demographic questions last.
  • 37. Send reminder notes. Response rates may bump up 3-7% with each reminder note, but keep in mind that there's a point of diminishing returns when you irritate people who have chosen not to participate. Give everyone the opportunity to participate (e.g., paper surveys where required, scheduling time off the phone in the call centers, etc.). At one company for example, most surveys run for 10 business days and span across three work weeks. Track response rates so that the survey coordinators can identify units with low response rates and contact the responsible manager to increase responses. 
  • 38. Foster commitment to the survey effort. For example, you can involve a wide range of employees (across many levels) in the survey development process. Link the content of the survey to important business outcomes. Provide respondents with survey feedback after the project is completed. Be careful not to abandon your participants once you have the data you wanted from them; you are paving the way for future survey efforts. Personalize the survey invitation. Include a personal signature as part of the cover letter. Ensure topic salience.
  • 39. Even when controlling for the presence of other techniques, advance notice, personalization, identification numbers, and salience are associated with higher response rates. Because of survey fatigue and declining response rates, we need to do more just to get the same results as in the past. Target the facilitation strategy to whom you are surveying. For top executives, Anseel found that salience of the survey topic was most key, and incentives were counterproductive; incentives worked for unemployed individuals.
  • 40. Segment 3: Survey reduction techniques in detail. Quick brainstorm: I have thought of seven ways of reducing a survey. How many reduction methods can you think of in three minutes?
  • 41. Primary Goal: Reduce Administration Time. Secondary goals: reduce perceived administration time; increase the engagement of the respondent with the experience of completing the instrument (lock in interest and excitement from the start); reduce the extent of missing and erroneous data due to carelessness, rushing, test forms that are hard to use, etc.; increase the respondents’ ease of experience (maybe even enjoyment!) so that they will persist to the end AND respond again next year (or whenever the next survey comes out). Conclusions? Make the survey SEEM as short and compact as possible. Streamline the WHOLE EXPERIENCE from the first call for participation all the way to the end of the final page of the instrument. Focus test-reduction efforts on the easy stuff before diving into the nitty-gritty statistical stuff.
  • 43. Comprehension of instructions only influences novice performance on surveys (Catrambone, 1990; Human-Computer Interaction).
  • 44. Instructions are written, on average, five grade levels above the average grade level of respondents; 23% of respondents failed to understand at least one element of their instructions (Spandorfer et al., 1995; Annals of Emergency Medicine).
  • 46. Most people don’t read instructions anyway. When they do, the instructions often don’t help them respond any better!
  • 47. If your response format is so novel that people require instructions, then you have a substantial burden to pilot test, in order to ensure that people comprehend the instructions and respond appropriately. Otherwise, do not take the risk!
  • 49. Self-completed demographic data frequently contain missing fields or intentional mistakes.
  • 51. Respondents should feel that the demographic items are not serving to identify them in their survey responses.
  • 52. You could offer respondents two choices: 1) match (or automatically fill in) some or all demographic data using the code number provided in your invitation email (or on a paper letter); or 2) they fill in the demographic data themselves (on web-based surveys, a reveal can branch respondents to the demographics page).
  • 53. Forms/Interface Design. From Don Dillman’s Tailored Design Method, key form/interface design goals: non-subordinating language, no embarrassment, no drudgery, readability, simplicity. Drudgery – questions that require data lookup, calculation, interpolation, or recall of specific events from the distant past; the response process should give a sense of forward momentum and achievement. Readability – grade level should match the respondent population’s reading capability. Simplicity – layout should draw the eye directly to the items and response fields; the response method should fit respondents’ experience and expectations. Discuss: Any particularly frustrating surveys? Particularly easy/streamlined ones?
  • 55. Eligibility, Skip Logic, and Branching. Eligibility: if a survey has eligibility requirements, the screening questions should be placed at the earliest possible point in the survey (eligibility requirements can appear in instructions, but this should not be the sole method of screening out ineligible respondents). Skip logic: skip logic actually shortens the survey by setting aside questions for which the respondent is ineligible, as in the sketch below. Branching: branching may not shorten the survey, but it can improve the user experience by offering questions specifically focused on the respondent’s demographic or reported experience.
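A toy sketch of skip logic as a routing table (the question IDs, texts, and routing below are invented): respondents who report no work travel are routed past the hotel items, shortening their path.

```python
# Sketch: a survey as a routing table; "next" decides the following question.
questions = {
    "q1": {"text": "Did you travel for work this year? (y/n)",
           "next": lambda a: "q2" if a == "y" else "q4"},  # skip hotel items
    "q2": {"text": "How many hotel nights?",
           "next": lambda a: "q3"},
    "q3": {"text": "Rate your usual hotel chain (1-5).",
           "next": lambda a: "q4"},
    "q4": {"text": "Overall satisfaction (1-5).",
           "next": lambda a: None},
}

def run_survey(answers):
    qid, path = "q1", []
    while qid is not None:
        path.append(qid)
        qid = questions[qid]["next"](answers[qid])
    return path

print(run_survey({"q1": "n", "q4": "4"}))                        # ['q1', 'q4']
print(run_survey({"q1": "y", "q2": "12", "q3": "4", "q4": "5"}))  # all four
```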
  • 56. Implications: Eligibility, Skip Logic, and Branching. Discuss: Have you ever answered a survey where you knew that your answer would predict how many questions you would have to answer after that? E.g., “How many hotel chains have you been to in the last year?” If users can predict that eligibility screening, skip logic, or branching will lead to longer, more complex, or more difficult or tedious responses, they may: abandon the survey; or back up and change their answer to the conditional question that produces less work (if the interface permits it).
  • 57. Implications: Eligibility, Skip Logic, and Branching. Branch design should try not to imply what the user would have experienced in another branch. Paths through the survey should avoid causing considerably more work for some respondents than for others, if at all possible.
  • 58. Panel Designs and Multiple Administration Panel designs measure the same respondents on multiple occasions. Typically either predictors are gathered at an early point in time, and outcomes gathered at a later point in time, or both predictors and outcomes are measured at every time point. (There are variations on these two themes). Panel designs are based on maturation and/or intervention processes that require the passage of time. Examples: career aspirations over time, person-organization fit over time, training before/after – discuss others? Minimally, panel designs can help mitigate (though not solve) the problem of common method bias; e.g., responding to a criterion at time 2, respondents tend to forget how they responded at time 1. 51
  • 59. Panel Designs and Multiple Administration. Survey designers can apply the logic of panel designs to their own surveys: sometimes you have to collect a large number of variables (no measure shortening possible), and it is impractical to do so in a single administration. Generally speaking, it is better to have many short, pleasant survey administrations with a cumulative “work time lost” of an hour than one long, grinding hour-long survey. The former can get you happier and less fatigued respondents and, hopefully, better data. In the limit, consider the implications of a “Today’s Poll” approach to measuring climate, stress, satisfaction, or other attitudinal variables: one question per day, every day.
  • 60. Unobtrusive Behavioral Observation. Surveys appear convenient and relatively inexpensive in and of themselves; however, the cumulative work time lost across all respondents may be quite large. Methods that assess social variables through observations of overt behavior rather than self-report can provide indications of stress, satisfaction, organizational citizenship, intent to quit, and other psychologically and organizationally relevant variables. Examples: cigarette breaks over time (frequency, number of incumbents per day); garbage (weight of trash before/after a recycling program); social media usage (tweets, blog posts, Facebook); wear of floor tiles; absenteeism or tardiness records; incumbent, team, and department production quality and quantity measures.
  • 61. Unobtrusive Behavioral Observation Most unobtrusive observations must be conducted over time: Establish a baseline for the behavior. Examine subsequent time periods to examine changes/trends over time. Generally, much more labor intensive data collection than surveys. Results should be cross-validated with other types of evidence. 54
  • 62. Scale Reduction and One-item Measures. Standard scale construction calls for “sampling the construct domain” with items that tap into different aspects of the construct and refer to various content areas. Scales with more items can include a larger sample of the behaviors or topics relevant to the construct. (Diagram: overlapping circles for Construct Domain and Item Content. The overlap is RELEVANT: measuring what you want to measure. Item content outside the construct domain is CONTAMINATED: measuring what you don’t want to measure. Construct domain not covered by item content is DEFICIENT: not measuring what you want to measure.)
  • 63. Scale Reduction and One-item Measures. When fewer items are used, by necessity they must be either more general in wording to obtain full coverage (hopefully), or more narrow to focus on a subset of behaviors/topics. Internal consistency reliability reinforces this trade-off: as the number of items gets smaller, the inter-item correlation must rise to maintain a given level of internal consistency (see the sketch below). However, scales with fewer than 3-5 items rarely achieve acceptable internal consistency without simply becoming alternative wordings of the same question. Discussion: How many of you have taken a measure where you were being asked the same question again and again? Your reactions? Why was this done? The one-item solution: a one-item measure usually “covers” a construct only if it is highly non-specific. A one-item measure has a measurable reliability (see Wanous & Hudy; ORM, 2001), but the concept of internal consistency is meaningless for it. Discuss: a one-item knowledge measure vs. a one-item job satisfaction measure.
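To see the trade-off concretely, the standardized-alpha identity alpha = k·r/(1 + (k−1)·r) can be inverted for the mean inter-item correlation r that a k-item scale needs to hit a target alpha; a small sketch:

```python
# Sketch: mean inter-item r required to hold alpha at .80 as k shrinks,
# from the standardized-alpha formula: alpha = k*r / (1 + (k-1)*r).
target_alpha = 0.80
for k in (20, 10, 5, 3, 2):
    r_needed = target_alpha / (k - target_alpha * (k - 1))
    print(f"k = {k:2d} items -> mean inter-item r must be {r_needed:.2f}")
```

The required inter-item correlation climbs from about .17 at 20 items to about .67 at 2 items, which is why very short scales tend to become paraphrases of one question.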
  • 64. One-item Measure Literature. Single-item measures of each of the five JDI job satisfaction facets correlated between .60 and .72 with the full-length versions of the JDI scales (Nagy, 2002). A review of single-item graphical representation scales, the so-called “faces” scales (Patrician, 2004). A single-item graphic scale for organizational identification (Shamir & Kark, 2004). Research finding that single-item job satisfaction scales systematically overestimate workers’ job satisfaction (Oshagbemi, 1999). Single-item measures work best on “homogeneous” constructs (Loo, 2002).
  • 65. Scale Reduction: Technical Considerations. Items can be struck from a scale based on three different sets of qualities: 1. Internal item qualities refer to properties of items that can be assessed in reference to other items on the scale or the scale’s summated scores. 2. External item qualities refer to connections between the scale (or its individual items) and other constructs or indicators. 3. Judgmental item qualities refer to those issues that require subjective judgment and/or are difficult to assess in isolation from the context in which the scale is administered. The most widely used method for item selection in scale reduction is some form of internal consistency maximization. Corrected item-total correlations provide diagnostic information about internal consistency, and in scale reduction efforts they have been employed as a basis for retaining items for a shortened scale version (see the sketch below). Factor analysis is another technique that, when used for scale reduction, can lead to increased internal consistency, assuming one chooses items that load strongly on a dominant factor.
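A self-contained sketch of these internal diagnostics on placeholder data: corrected item-total correlations and alpha-if-item-deleted, the workhorse statistics for internal-consistency-based reduction.

```python
# Sketch: internal item diagnostics for a 6-item scale (random placeholder data).
import numpy as np

def cronbach_alpha(items):  # items: respondents x items
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
items = latent + rng.normal(scale=1.0, size=(200, 6))  # 6 noisy indicators

total = items.sum(axis=1)
for j in range(items.shape[1]):
    rest = total - items[:, j]                   # corrected (item-excluded) total
    r_it = np.corrcoef(items[:, j], rest)[0, 1]
    alpha_wo = cronbach_alpha(np.delete(items, j, axis=1))
    print(f"item {j + 1}: corrected item-total r = {r_it:.2f}, "
          f"alpha if deleted = {alpha_wo:.2f}")
print(f"full-scale alpha = {cronbach_alpha(items):.2f}")
```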
  • 66. Scale Reduction II. Despite their prevalence, there are important limitations to scale reduction techniques that maximize internal consistency. Choosing items to maximize internal consistency leads to item sets highly redundant in appearance, narrow in content, and potentially low in validity. High internal consistency often signifies a failure to adequately sample content from all parts of the construct domain. To obtain high values of coefficient alpha, a scale developer need only write a set of items that paraphrase each other or are antonyms of one another. One can expect an equivalent result (i.e., high redundancy) from using the analogous approach in scale reduction, that is, excluding all items but those highly similar in content.
  • 67. Scale Reduction III. IRT provides an alternative strategy for scale reduction that does not focus on maximizing internal consistency. One should retain items that are highly discriminating (i.e., moderate to large values of a) and attempt to include items with a range of item thresholds (i.e., b) that adequately cover the expected range of the trait in the measured individuals; a sketch of this logic appears below. IRT analysis for scale reduction can be complex and does not provide a definitive answer to the question of which items to retain; rather, it provides evidence about which items might work well together to cover the trait range. Relating items to external criteria provides a viable alternative to internal consistency and other internal qualities. However, because correlations vary across different samples, instruments, and administration contexts, an item that predicts an external criterion best in one sample may not do so in another. Choosing items to maximize a relation with an external criterion also runs the risk of a decrease in discriminant validity between the measures of the two constructs.
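A sketch of the underlying 2PL quantities (the item parameters below are invented): item information a^2 * P(theta) * (1 - P(theta)) peaks near theta = b and grows with a, which is why high-a items with spread-out b values cover the trait range efficiently.

```python
# Sketch: item information under a two-parameter logistic (2PL) IRT model.
import numpy as np

def info_2pl(theta, a, b):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1 - p)

theta = np.linspace(-3, 3, 7)
items = [(1.8, -1.0), (1.6, 0.0), (1.7, 1.2),  # discriminating, spread-out b
         (0.6, 0.1), (0.5, 0.3)]               # low-a items add little
for a, b in items:
    print(f"a={a}, b={b:+.1f}:", np.round(info_2pl(theta, a, b), 2))
```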
  • 68. Scale Reduction IV. The overarching goal of any scale reduction project should be to closely replicate the pattern of relations established within the construct’s nomological network. In evaluating any given item’s relations with external criteria, one should seek moderate correlations with a variety of related scales (i.e., convergent validity) and low correlations with a variety of unrelated measures. Researchers may also need to examine criteria beyond statistical relations to determine which items should remain in an abbreviated scale: clarity of expression, relevance to a particular respondent population, semantic redundancy of an item’s content with other items, the perceived invasiveness of an item, and an item’s “face” validity. Items lacking apparent relevance, or that are highly redundant with other items on the scale, may be viewed negatively by respondents. To the extent that judgmental qualities can be used to select items with face validity, both the reactions of constituencies and the motivation of respondents may be enhanced. A simple strategy for retention that does not require IRT analysis is stepwise regression: rank-ordered item inclusion yields an “optimal” reduced-length scale that accounts for a nearly maximal proportion of variance in its own full-length summated scale score. The order of entry into the stepwise regression is a rank-order proxy for item goodness. Empirical results show that this method performs as well as a brute-force combinatorial scan of item combinations; the method can also be combined with human judgment to pick items from among the top-ranked items (but not in strict ranking order). A sketch follows below.
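A sketch of that stepwise shortcut on simulated items (all data invented): forward selection predicting the full-length scale total, with entry order serving as the item ranking.

```python
# Sketch: rank items by forward (stepwise) entry predicting the full-scale score.
import numpy as np

rng = np.random.default_rng(7)
latent = rng.normal(size=(300, 1))
items = latent + rng.normal(scale=rng.uniform(0.6, 1.6, 12), size=(300, 12))
full_score = items.sum(axis=1)  # full-length summated scale score

def r2(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return 1 - (y - X1 @ beta).var() / y.var()

remaining, order = list(range(12)), []
while remaining:
    best = max(remaining, key=lambda j: r2(items[:, order + [j]], full_score))
    order.append(best)
    remaining.remove(best)
    print(f"step {len(order)}: add item {best + 1}, "
          f"R^2 = {r2(items[:, order], full_score):.3f}")
# Entry order is the rank-order proxy for item goodness; pick the top-k items
# (optionally tempered by judgment) for the short form.
```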
  • 69. Segment 4: Pitfalls, trade-offs, and justifications. Quick brainstorm: What complaints have you heard from managers when you ask them if you can survey their employees?
  • 70. Evaluating Surveying Costs and Benefits. Wang and Strong’s Data Quality Framework (1996; Journal of Management Information Systems). (Table: the top five and bottom five data quality concerns of N = 355 managers.) Inference: a survey effort is seen as more valuable to the extent that it is completed quickly and cost-effectively, such that results make sense to managers and seem unbiased.
  • 71. The shorter the survey: the less work time that is lost; the higher the chance that one or more constructs will perform poorly if the measures are not well established/developed; the less information that might be obtained about each respondent and their score on a given construct; and the more you have to sell its meaningfulness to decision makers who will act on the results.
  • 76. Pitfalls of shortening: loss of validity relationships; difficulty or inability to compare to or equate with prior time periods of data collection (e.g., if the items or measures cannot be matched).
  • 79. Justifications for shortening: reduced administration time saves employee frustration, increases response rates, and fosters good will; and when the reduction process is careful and systematic, the validity and usability of results are preserved for many applications.
  • 82. (Chart: lost-time cost as a function of administration time and pay level.) For brief surveys, lost productivity is nugatory, even for highly paid employees. As administration time goes up, the lost-time cost becomes excessive for highly paid employees.
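A back-of-envelope sketch of that cost curve (all figures invented): lost-time cost scales linearly with administration time, headcount, and wage, so long surveys of highly paid populations get expensive fast.

```python
# Sketch: survey time lost across a workforce of 1,000 respondents.
for minutes in (5, 15, 45):
    for hourly_wage in (20, 60, 150):
        cost = 1000 * (minutes / 60) * hourly_wage
        print(f"{minutes:2d} min at ${hourly_wage}/hr -> ${cost:,.0f} lost")
```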
  • 83. (Figure: IRT information functions for an 18-item scale vs. a 6-item scale.)
  • 84. Caveats on Interpretation. Validity relations do not tell the whole story. A validity coefficient will decline to the extent that there is extensive reordering of score levels between the predictor and the criterion. When comparing individual scores to a cut score or other standard, a short form can create localized mixing that is not reflected in diminished validity.
  • 85. Dos and don’ts of survey shortening:
  • 86. Do assess appropriate reliability in short-form scales (e.g., alpha).
  • 87. Do check response rate, abandoned forms, missing data levels, and intentional mal-response; compare with previous administration cycles.
  • 88. Do compare correlations between short-form and long-form scales.
  • 89. Do show off the elegance of the design of your reduced survey.
  • 90. Don’t facilitate year-over-year comparisons on scale scores or percentiles unless an equating study has been conducted.
  • 91. Don’t allow decisions about individual respondents to occur using short-form scales without studies that re-assess cut score levels and related concerns.
  • 92. Don’t assume that an inversion in the relative position of two specific respondents (or two departments) over time reflects a reliable change.
  • 93. Bibliography. Binning, J. F., & Barrett, G. V. (1989). Validity of personnel decisions: A conceptual analysis of the inferential and evidential bases. Journal of Applied Psychology, 74, 478-494. Catrambone, R. (1990). Specific versus general procedures in instructions. Human-Computer Interaction, 5, 49-93. Dillman, D. A., Smyth, J. D., & Christian, L. M. (2008). Internet, mail, and mixed-mode surveys: The tailored design method. Hoboken, NJ: Wiley. Donnellan, M. B., Oswald, F. L., Baird, B. M., & Lucas, R. E. (2006). The Mini-IPIP scales: Tiny-yet-effective measures of the Big Five factors of personality. Psychological Assessment, 18, 192-203. Emons, W. H. M., Sijtsma, K., & Meijer, R. R. (2007). On the consistency of classification using short scales. Psychological Methods, 12, 105-120. Girard, T. A., & Christiansen, B. K. (2008). Clarifying problems and offering solutions for correlated error when assessing the validity of selected-subtest short forms. Psychological Assessment, 20, 76-8. Hinkin, T. R. (1995). A review of scale development practices in the study of organizations. Journal of Management, 21, 967-988. Levy, P. (1968). Short-form tests: A methodological review. Psychological Bulletin, 69, 410-416. Loo, R. (2002). A caveat on using single-item versus multiple-item scales. Journal of Managerial Psychology, 17, 68-75. Lord, F. M. (1965). A strong true-score theory, with applications. Psychometrika, 30, 239-270. Nagy, M. S. (2002). Using a single item approach to measure facet job satisfaction. Journal of Occupational and Organizational Psychology, 75, 77-86. Novick, D. G., & Ward, K. (2006). Why don't people read the manual? Paper presented at the SIGDOC '06 Proceedings of the 24th Annual ACM International Conference on Design of Communication. Oshagbemi, T. (1999). Overall job satisfaction: How good are single versus multiple-item measures? Journal of Managerial Psychology, 14, 388-403. Patrician, P. A. (2004). Single-item graphic representational scales. Nursing Research, 53, 347-352. Shamir, B., & Kark, R. (2004). A single item graphic scale for the measurement of organizational identification. Journal of Occupational and Organizational Psychology, 77, 115-123.
  • 94. Bibliography. Smith, G. T., McCarthy, D. M., & Anderson, K. G. (2000). On the sins of short form development. Psychological Assessment, 12, 102-111. Spandorfer, J. M., Karras, D. J., Hughes, L. A., & Caputo, C. (1995). Comprehension of discharge instructions by patients in an urban emergency department. Annals of Emergency Medicine, 25, 71-74. Stanton, J. M., Sinar, E., Balzer, W. K., & Smith, P. C. (2002). Issues and strategies for reducing the length of self-report scales. Personnel Psychology, 55, 167-194. Wanous, J. P., & Hudy, M. J. (2001). Single-item reliability: A replication and extension. Organizational Research Methods, 4, 361-375. Widaman, K. F., Little, T. D., Preacher, K. J., & Sawalani, G. M. (2011). On creating and using short forms of scales in secondary research. In K. H. Trzesniewski, M. B. Donnellan, & R. E. Lucas (Eds.), Secondary data analysis: An introduction for psychologists (pp. 39-61). Washington, DC: American Psychological Association.
  • 95. About the presenter. Jeff Stanton, PhD. Jeffrey Stanton is Associate Vice President for Research at Syracuse University. Dr. Stanton’s research focuses on organizational behavior and technology. He is the author of more than 40 peer-reviewed journal articles as well as two books, The Visible Employee: Using Workplace Monitoring and Surveillance to Protect Information Assets – Without Compromising Employee Privacy or Trust and Information Nation: Educating the Next Generation of Information Professionals. Stanton’s methodological expertise is in psychometrics, including the measurement of job satisfaction and job stress, as well as research on creating abridged versions of scales; he is on the editorial board of Organizational Research Methods and is an associate editor at Human Resource Management. Dr. Stanton’s research has been supported through 15 grants and supplements, including the National Science Foundation’s CAREER award. Dr. Stanton received his Ph.D. in Industrial and Organizational Psychology from the University of Connecticut in 1997. Contact Information: Jeffrey M. Stanton, PhD, Syracuse University, School of Information Studies, 316 Hinds Hall, Syracuse NY 13244. Voice: (315)443-2979. Email: jmstanto@syr.edu. http://ischool.syr.edu/facstaff/member.aspx?id=223