1. This Photo by Unknown Author is licensed under CC BY-SA
Quantitative Research
Education 5P92 - Collier - Oct. 10, 2019
cc: Samuel Zeller - https://unsplash.com/@samuelzeller?utm_source=haikudeck&utm_medium=referral&utm_campaign=api-credit
2. Last week
•This week in the news
•Activity – sharing – final paper
•Introduction to field notes and
other qualitative approaches
•Silverman discussion
•Analysis activity/check in re
interviews
3. If you are doing research
You have options
•Quantitative
•Qualitative
•Conceptual
•Combinations and variations
4. This week
This week in the news
Quantitative Research
Critical review of research paper
Coding/theming
5. Quantitative Research Overview
Values breadth, statistical descriptions, and
generalizability.
Quantitative approaches to research center on
achieving objectivity, control, and precise
measurement.
Methodologically: deductive designs aimed at
refuting or building evidence in favor of specific
theories and hypotheses.
Commonly used in explanatory research
investigating causal relationships, associations, and
correlations.
6. Theoretical Perspective (Positivism)
Historically, positivism guided quantitative
research: reality exists independently of the
research process and can be measured via the
objective application of the scientific method.
Laws that govern the social world can
therefore be tested and proven.
7. Theoretical Perspective (Positivism)
Today, postpositivism guides quantitative research: reality exists
independently of the research process and can be studied by
employing objective methods grounded in measurement, control,
and systematic observation. Laws that govern the social world can be
predicted and tested via hypotheses that investigate causal
relationships or associations between/among variables. However,
absolute truth claims cannot be made. Postpositivism is based on
probability testing and building evidence to reject or
support hypotheses (but not conclusively prove them) (Crotty,
1998; Phillips & Burbules, 2000).
8. Quantitative Research Questions
Quantitative research typically employs either
research questions or hypotheses.
Quantitative research questions are deductive.
Questions center on how the variables under
investigation relate to each other, affect different
groups, or how they might be defined.
They may employ directional language,
including words such as cause, effect, determine,
influence, relate, associate, and correlate.
10. Descriptive Statistics
Mean, median, mode, percentage
E.g. IQ tests and normal curves
https://courses.lumenlearning.com/suny-lifespandevelopment/chapter/extremes-of-intelligence-intellectual-disability-and-giftedness/
11. Descriptive Statistics
Descriptive statistics: describe and summarize the data (Babbie,
2013; Fallon, 2016).
Frequencies: Count the number of occurrences of a category;
reported as percentages.
Measures of central tendency: Use a single value to represent
the sample.
Mean: the average
Median: the “middle” value
Mode: the most frequent value in the sample
Measures of dispersion: illustrate how spread out the individual
scores are and how they differ from each other.
The standard deviation: the most commonly used measure of dispersion
lets you see “how individual scores relate to all scores within the
distribution” (Fallon, 2016, p. 18).
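To make these concrete, here is a minimal sketch in Python (standard library only) that computes frequencies, the measures of central tendency, and the standard deviation for a small, made-up set of scores:

# A minimal sketch: descriptive statistics for a made-up set of scores.
import statistics
from collections import Counter

scores = [72, 85, 85, 90, 65, 78, 85, 92, 70, 88]  # made-up sample data

mean = statistics.mean(scores)      # the average
median = statistics.median(scores)  # the "middle" value
mode = statistics.mode(scores)      # the most frequent value
sd = statistics.stdev(scores)       # standard deviation (measure of dispersion)

# Frequencies reported as percentages
frequencies = {value: count / len(scores) * 100
               for value, count in Counter(scores).items()}

print(mean, median, mode, round(sd, 2))
print(frequencies)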
15. Survey Research
• The most widely used quantitative design in the social sciences.
• Surveys rely on asking people standardized questions that
can be analyzed statistically. They allow researchers to collect a
breadth of data from large samples and generalize to the larger
population from which the sample was drawn.
• Typically used for ascertaining individuals’ attitudes, beliefs,
opinions, or their reporting of their experiences and/or
behaviors.
• Subjective data can be ascertained only from the
respondents (Vogt, Vogt, Gardner, & Haeffele, 2014).
• Objective data are facts that can be ascertained elsewhere
(e.g., age, place of birth) (Vogt et al., 2014).
16. Survey Research Designs
• Cross-sectional designs seek information from a
sample at one point in time.
• Longitudinal designs collect data at multiple points in
time in order to measure change over time.
• Types of longitudinal designs: repeated cross-
sectional, fixed-sample panel design, and
cohort study (in which a sample that experienced the
same event or starting point completes the survey at
multiple points in time) (Ruel et al., 2016). In longitudinal
designs, attrition (respondents dropping out of the
study) is a potential issue.
17. Questionnaires
• Questionnaires (the survey instrument) are the
primary data collection tool in survey research.
• Use preexisting surveys when available.
• Survey items (questions in the questionnaire) are
designed to test your hypotheses or answer your
research questions. The questions you design
around each concept in the study are how you
operationalize your variables. They are the
indicators that a variable is or is not present.
18. Respondent Burden and Fatigue
• Respondent burden: occurs to the degree that
respondents experience their participation as too
stressful and/or time-consuming (Biemer & Lyberg,
2003; Ruel et al., 2016).
• High burden causes respondent fatigue, which
leads to a higher nonresponse rate and lower-
quality responses (Ruel et al., 2016).
19. Survey Delivery
• Balance response rate and pragmatic concerns (e.g., time, budget)
• In-person: generally occurs in group settings; highest response rate; a
researcher administers the survey
• Online: via e-mail or Web-based software (e.g., Survey Monkey); self-
administered; allow the inclusion of geographically dispersed respondents
• Mail: self-administered; low response rate; appropriate when
respondents are geographically dispersed and do not have online access
(e.g., older respondents not active online); include a self-addressed
stamped envelope (SASE)
• Telephone: administered by a researcher; low response rate; appropriate
when respondents are geographically dispersed and not accessible online
21. Validity and Reliability
Validity: the extent to which a measure is
actually tapping what we think it is tapping.
Reliability: the consistency of results (the
results are dependable).
22. Types of Validity
Face validity: a judgment call we make that, at face value,
the measure is tapping what we claim it is tapping.
Content validity: a judgment call made by experts that the
measure is valid.
Construct validity: the measure taps into the concept,
and the related concepts, that we propose it taps into.
Statistical validity: whether the statistical analysis chosen
was appropriate and whether the conclusions drawn are
consistent with that analysis and the rules of statistics.
Ecological validity: the findings are generalizable to a
real-world setting.
23. Two Major Categories of
Validity
Internal validity: centers on “factors that
affect the internal links between the independent
and dependent variables that support alternative
explanations for variations in the dependent
variable” (Adler & Clark, 2011, p. 188).
External validity: centers on whether we have
generalized our findings to populations beyond those
supported by our test.
24. Types of Reliability
Interitem reliability: the use of multiple questions or
indicators intended to measure a single variable (Fallon,
2016). Reliability tests commonly used to check the internal
consistency of scales in survey research are Cronbach’s
alpha and factor analysis.
Test–retest reliability: testing the measure twice with
the same subjects to see if the results are consistent (Fallon,
2016).
Interrater reliability: the degree of agreement between
different researchers/observers; guards against the effect of the
particular researcher/observer on the results.
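As an illustration, a minimal sketch of how Cronbach's alpha could be computed for a hypothetical four-item scale, assuming numpy is available (all item scores are made up):

# A minimal sketch: Cronbach's alpha for a hypothetical four-item scale.
# Rows are respondents, columns are items (all values made up).
import numpy as np

items = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])

k = items.shape[1]                              # number of items in the scale
item_variances = items.var(axis=0, ddof=1)      # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores

alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(round(alpha, 2))  # values around .70 or higher are conventionally read as acceptable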
26. Hypothesis
A prediction about how variables
relate to each other (the main
variables in your study have
already been identified in your
research purpose statement).
27. Experimental Research
The oldest form of quantitative research.
After the scientific revolution in the 17th century,
experiments in research came to mean “taking a
deliberate action followed by systematic observation”
(Shadish, Cook, & Campbell, 2002, p. 2).
Experiments rely on hypothesis testing
The basic use of experiments is to test how introducing
an intervention (a variable) affects what happens.
In order to be certain that you are measuring the effect
of the intervention (variable) you need to control for
all other factors.
Causal logic
28. Experiment Settings
Natural environments: used for field
experiments (when your study relies on
observing people in their normal activities,
including those who may not know the study is
occurring)
Research or scientific labs: the most
common setting; allow researchers the greatest
measure of control
Online: appropriate when you are studying
people’s beliefs and attitudes (Babbie, 2013)
29. Key Features of Experiments
Experimental groups: receive the intervention.
Control groups: do not receive the intervention.
All experiments have at least one experimental group, but not all
experiments have control groups.
Some experiments involve pretests and/or posttests in addition
to the experimental intervention.
Pretest: determines a subject’s baseline prior to introducing
the experimental intervention.
Posttest: given after the experimental intervention to assess
the impact of the intervention.
30. Categories of Experiments
There are three primary
categories of experiments:
preexperiments,
true experiments,
and quasi-experiments
31. Preexperimental Designs
Preexperimental designs are focused on
studying a single group that is given the
experimental intervention (experimental
groups only).
Campbell and Stanley (1963) identified
three types of preexperiments: one-shot
case study, one-group pretest–posttest
design, static-group comparison.
32. True Experimental Designs
Based on randomization, subjects are
randomly assigned to experimental and
control groups. These are the strongest
form of experiments.
Campbell and Stanley (1963) identified
three types of true experiments: pretest–
posttest control-group design, Solomon
four-group design, posttest-only control-
group design.
Watch short video now
33. Quasi-Experimental Designs
Quasi-experimental designs involve taking
advantage of natural settings or groups, and thus
subjects are not randomly assigned. Quasi-
experimental designs may involve experimental
groups only or experimental and control groups.
Campbell and Stanley (1963) noted that quasi-
experiments are appropriate when “better
designs are not feasible” (p. 34). Three
commonly used designs are: time-series
experiment, multiple time-series experiment,
nonequivalent control-group design.
34. Double-Blind Experiments
Whereas research subjects do not know if
they have been placed in an experimental or
a control group, in double-blind
experiments neither the subjects nor the
researchers know which subjects are in the
experimental group and which are in the
control group.
Eliminates the possibility that researchers’
observations will be skewed based on their
desire to see changes in the experimental
group.
35. Single-Subject Designs
Single-subject designs (also known as N-of-
1 designs) involve multiple observations of
one individual. First, multiple observations
are recorded to determine the individual’s
baseline; then the experimental
intervention is introduced, and additional
observations are recorded (Creswell, 2014).
36. The Hawthorne Effect
The Hawthorne effect (or testing effect): how
participation in a research study may, on its own,
impact subjects’ responses.
37. Data Analysis
Prepare the data: enter it into a spreadsheet or
statistical software program
Note response bias in survey research (the
effect of nonresponses on the results) (Creswell,
2014; Fowler, 2009)
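A minimal sketch of this preparation step, assuming pandas is available; the file name and its columns are hypothetical, and the missing-value counts give a rough view of nonresponse per item:

# A minimal sketch: loading hypothetical questionnaire data and checking nonresponse.
import pandas as pd

df = pd.read_csv("survey_responses.csv")   # hypothetical exported questionnaire data

missing_per_item = df.isna().sum()         # count of missing answers per survey item
nonresponse_rate = df.isna().mean() * 100  # percentage of missing answers per item

print(missing_per_item)
print(nonresponse_rate.round(1))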
38. Inferential Statistics
Inferential statistics: test the research
questions or hypotheses and make
inferences about the population from which
the sample was selected (Adler & Clark,
2011).
Null hypothesis significance testing
(NHST) or statistical significance
tests test the null hypothesis (which states
that there is no relationship
between/among the variables).
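As a concrete illustration, a minimal sketch of one common significance test (an independent-samples t test via scipy) on made-up scores for two groups:

# A minimal sketch: testing the null hypothesis of no difference between two groups.
from scipy import stats

experimental = [78, 85, 90, 82, 88, 91, 79]  # made-up posttest scores
control = [72, 75, 80, 70, 74, 77, 73]

t_stat, p_value = stats.ttest_ind(experimental, control)

# If p is below the chosen alpha level (commonly .05), the null hypothesis is
# rejected; this builds evidence for a relationship without proving it conclusively.
print(round(t_stat, 2), round(p_value, 3))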
39. Additional Issues in
Quantitative Research
Randomization: randomly assigning subjects to the
experimental and control groups
Matching: the process of creating pairs of research
subjects who are similar, based on either a list of
predetermined characteristics (e.g., gender, race, age)
or on their scores on a pretest. Pairs of similar subjects
are then split into different groups (one member of a
pair is placed in the experimental group and one in the
control group).
Sampling error: occurs in surveys when the sample does not
accurately represent the population from which it was drawn
(e.g., a biased sample)
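A minimal sketch of random assignment and of splitting matched pairs across groups; the subject IDs and pairs are hypothetical:

# A minimal sketch: random assignment of hypothetical subjects, plus matched pairs.
import random

subjects = ["S01", "S02", "S03", "S04", "S05", "S06", "S07", "S08"]
random.shuffle(subjects)                    # randomize the order of subjects
midpoint = len(subjects) // 2
experimental_group = subjects[:midpoint]    # first half receives the intervention
control_group = subjects[midpoint:]         # second half does not

# Matching: each pair of similar subjects is split, one member per group.
matched_pairs = [("S01", "S05"), ("S02", "S06"), ("S03", "S07"), ("S04", "S08")]
matched_experimental = []
matched_control = []
for pair in matched_pairs:
    chosen = random.choice(pair)            # randomly pick one member of the pair
    other = pair[0] if chosen == pair[1] else pair[1]
    matched_experimental.append(chosen)
    matched_control.append(other)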
40. Representation
Tables: good for clearly presenting data on any number of variables and
can be used for descriptive or inferential statistics.
Histograms: good for presenting distributions for a single variable.
Scatterplots: illustrate the relationship between two continuous
variables to help you see:
Whether the relationship is linear
The direction of the relationship
The strength of the relationship
Outlying points
If you have an intervening (moderating) variable
Bar and line graphs: good for illustrating one or more categorical
variables that are the independent variables and one continuous variable
that is the dependent variable. Typically the independent variable(s) is
on the x-axis (horizontal) and the dependent variable is on the y-axis
(vertical).
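A minimal sketch of a histogram and a scatterplot, assuming matplotlib is available; all values are made up:

# A minimal sketch: a histogram for one variable and a scatterplot for two
# continuous variables (all values made up).
import matplotlib.pyplot as plt

scores = [55, 62, 68, 70, 71, 73, 75, 78, 80, 85, 88, 90]
hours_studied = [1, 2, 2, 3, 4, 4, 5, 6, 6, 7, 8, 9]
exam_scores = [52, 58, 60, 65, 70, 68, 74, 78, 80, 83, 86, 91]

fig, (ax1, ax2) = plt.subplots(1, 2)

ax1.hist(scores, bins=6)                  # distribution of a single variable
ax1.set_xlabel("Score")
ax1.set_ylabel("Frequency")

ax2.scatter(hours_studied, exam_scores)   # direction, strength, linearity, outliers
ax2.set_xlabel("Hours studied (independent variable, x-axis)")
ax2.set_ylabel("Exam score (dependent variable, y-axis)")

plt.show()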
41. What should we include in a
critical review of an article?
Work with a partner to add ideas
to the online document about
critical review (10 min)
42. Back to coding and
thematizing
Let's look at my document
Remember to review the tutorial I
posted
43. This Photo by Unknown Author is licensed under CC BY-SA
For next class:
• Reading
• Interview assignment
• Choose article for review
cc: Samuel Zeller - https://unsplash.com/@samuelzeller?utm_source=haikudeck&utm_medium=referral&utm_campaign=api-credit