Lecture 6
Survey Research & Design in Psychology
James Neill, 2017
Creative Commons Attribution 4.0
Psychometric
Instrument Development
Image source: http://commons.wikimedia.org/wiki/File:Soft_ruler.jpg, CC-by-SA 3.0
2
Overview
1. Recap: Exploratory factor analysis
2. Concepts & their measurement
3. Measurement error
4. Psychometrics
5. Reliability & validity
6. Composite scores
7. Writing up instrument development
Recap:
Exploratory Factor Analysis
4
What is factor analysis?
• Factor analysis is:
–a family of multivariate correlational
methods used to identify clusters of
covariance (called factors)
• Two main purposes:
–Theoretical (PAF)
–Data reduction (PC)
• Two main types (extraction methods):
–Exploratory factor analysis (EFA)
–Confirmatory factor analysis (CFA)
5
EFA steps
1. Test assumptions
– Sample size
• 5+ cases x no. of variables (min.)
• 20+ cases x no. of variables (ideal)
• Another guideline: N > 200
– Outliers & linearity
– Factorability
• Correlation matrix: Some > .3?
• Anti-image correlation matrix diags > .5
• Measures of Sampling Adequacy: KMO > ~.5 to .6;
Bartlett's test significant?
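A minimal SPSS syntax sketch of the factorability checks (the item names item1 to item24 are hypothetical and assumed to be consecutive in the data file; /PRINT KMO reports the KMO statistic and Bartlett's test, AIC the anti-image matrices):

* Step 1: check factorability (correlation matrix, anti-image diagonals, KMO, Bartlett's test).
FACTOR
  /VARIABLES item1 TO item24
  /MISSING LISTWISE
  /PRINT CORRELATION AIC KMO.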
6
EFA steps
2. Select type of analysis
– Extraction
• Principal Components (PC)
• Principal Axis Factoring (PAF)
– Rotation
• Orthogonal (Varimax)
• Oblique (Oblimin)
7
Summary of EFA steps
3. Determine no. of factors
– Theory?
– Kaiser's criterion?
– Eigenvalues and scree plot?
– % variance explained?
– Interpretability of weakest factor?
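A hedged SPSS sketch of steps 2 and 3, again assuming hypothetical item names item1 to item24: /EXTRACTION selects PC or PAF, /ROTATION selects Oblimin or Varimax, /PLOT EIGEN requests a scree plot, and /CRITERIA controls the number of factors:

* Steps 2-3: choose extraction (PC or PAF) and rotation (OBLIMIN or VARIMAX).
* MINEIGEN(1) applies Kaiser's criterion; use FACTORS(n) to force n factors instead.
FACTOR
  /VARIABLES item1 TO item24
  /PRINT EXTRACTION ROTATION
  /PLOT EIGEN
  /CRITERIA MINEIGEN(1) ITERATE(25)
  /EXTRACTION PC
  /ROTATION OBLIMIN.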
8
Summary of EFA steps
4. Select items
– Use factor loadings to help identify which items
belong in which factor
– Drop items one at a time if they don't belong to any
factor e.g., consider any items for which
• primary (highest) loading is low? (< .5 ?)
• cross- (other) loading(s) are high? (> .3 ?)
• item wording doesn't match the meaning of the factor
9
Summary of EFA steps
5. Name and describe factors
6. Examine correlations amongst
factors
7. Analyse internal reliability
8. Compute composite scores
9. Check factor structure across
sub-groups
Covered in
this lecture
10
EFA example 4:
University student
motivation
11
● 271 UC students responded to 24
university student motivation
statements in 2008 using an 8-point
Likert scale (False to True)
● For example:
“I study at university … ”
– to enhance my job prospects.
– because other people have told me I should.
● EFA PC Oblimin revealed 5 factors
Example EFA:
University student motivation
12
Example EFA:
Pattern matrix
This is a pattern matrix showing factor loadings.
Primary loadings for each item are above .5.
Cross-loadings are all below .3.
It shows a simple factor structure.
13
Example EFA:
University student motivation
1. Career & Qualifications
(6 items; α = .92)
2. Self Development
(5 items; α = .81)
3. Social Opportunities
(3 items; α = .90)
4. Altruism
(5 items; α = .90)
5. Social Pressure
(5 items; α = .94)
14
Example EFA:
University student motivation factor
means and confidence intervals
(error-bar graph)
15
Example EFA:
University student motivation factor correlations

Motivation                 Self          Social      Altruism   Social
                           Development   Enjoyment              Pressure
Career & Qualifications    .26           .25         .24        .06
Self Development                         .33         .55        -.18
Social Enjoyment                                     .26        .33
Altruism                                                        .11
16
Exploratory factor analysis:
Q & A
Questions?
Image source: https://commons.wikimedia.org/wiki/File:Pleiades_large.jpg
Psychometric Instrument
Development
Image source: http://commons.wikimedia.org/wiki/File:Soft_ruler.jpg, CC-by-SA 3.0
18
1. Bryman & Cramer (1997).
Concepts and their measurement. [eReserve]
2. DeCoster, J. (2000).
Scale construction notes. [Online]
3. Howitt & Cramer (2005).
Reliability and validity: Evaluating the value of tests and
measures. [eReserve]
4. Howitt & Cramer (2014).
Ch 37: Reliability in scales and measurement: Consistency
and measurement. [Textbook/eReserve]
5. Wikiversity.
Measurement error. [Online]
6. Wikiversity.
Reliability and validity. [Online]
Readings: Psychometrics
Concepts &
their measurement
Operationalising
fuzzy concepts
Image source: https://www.flickr.com/photos/lenore-m/441579944/in/set-72157600039854497
20
Concepts & their measurement:
Bryman & Cramer (1997)
Concepts
• express common elements in the world
(to which we give a name)
• form a linchpin in the process of social
research
21
Concepts & their measurement:
Bryman & Cramer (1997)
Hypotheses
• specify expected relations between
concepts
22
Concepts & their measurement:
Bryman & Cramer (1997)
Operationalisation
• A concept needs to be operationally
defined in order for systematic research
to be conducted in relation to it.
• “An operational definition specifies the
procedures (operations) that will permit
differences between individuals in respect of
the concept(s) concerned to be precisely
specified ..."
23
Concepts & their measurement:
Bryman & Cramer (1997)
“... What we are in reality talking
about here is measurement, that is,
the assignment of numbers to the
units of analysis - be they people,
organizations, or nations - to which a
concept refers."
Operationalisation
● The act of making a
fuzzy concept
measurable.
● Social science often
uses multi-item
measures to assess
related but distinct
aspects of a fuzzy
concept.
25
Operationalisation steps
1. Brainstorm indicators of a concept
2. Define the concept
3. Draft measurement items
4. Pre-test and pilot test
5. Examine psychometric properties
- how precise are the measures?
6. Redraft/refine and re-test
Operationalising a fuzzy concept:
Example (Brainstorming indicators)
Fuzzy concepts – Mindmap:
Nurse empowerment
Factor analysis process
Image source: Figure 4.2
Bryman & Cramer (1997)
29
Measurement error
Image source: http://commons.wikimedia.org/wiki/File:Noise_effect.svg
30
Measurement error
Measurement error is statistical
deviation from the true value caused
by the measurement procedure.
• Observed score =
true score +/- measurement error
• Measurement error =
systematic error +/- random error
• Systematic error =
sampling error +/- non-sampling error
31
Systematic and random error
Systematic error – e.g., bathroom scales
aren't calibrated properly, and every measurement is
0.5 kg too high. The error occurs for each
measurement.
Random error – e.g., measure your weight 3
times using the same scales but get three slightly
different readings. The error is different for each
measurement.
Image source: http://www.freestockphotos.biz/stockphoto/17079
32
Sources of systematic error
Sampling error:
• Non-representative sample
Non-sampling error:
• Test reliability & validity (e.g., unreliable or invalid tests)
• Researcher bias (e.g., researcher favours a hypothesis)
• Respondent bias (e.g., social desirability)
• Paradigm (e.g., focus on positivism)
33
Measurement precision & noise
• The lower the precision, the more
participants are needed to make up for
the "noise" in the measurements.
• Even with a larger sample, noisy data
can be hard to interpret.
• Especially when testing and assessing
individual clients, special care is needed
when interpreting results of noisy tests.
http://www.sportsci.org/resource/stats/precision.html
34
Minimising measurement error
• Standardise administration
conditions with clear instructions
and questions
• Minimise potential demand
characteristics (e.g., train interviewers)
• Use multiple indicators for fuzzy
constructs
35
Minimising measurement error
• Obtain a representative sample:
–Use probability-sampling, if possible
–For non-probability sampling, use
strategies to minimise selection bias
• Maximise response rate:
–Pre-survey contact
–Minimise length / time / hassle
–Rewards / incentives
–Coloured paper
–Call backs / reminders
36
• Ensure administrative accuracy:
–Set up efficient coding, with well-
labelled variables
–Check data (double-check at least a
portion of the data)
Minimising measurement error
Psychometrics
Image source: http://commons.wikimedia.org/wiki/File:Information_icon4.svg
38
Psychometrics: Goal
To validly measure differences
between individuals and groups in
psychosocial qualities such as
attitudes and personality.
39
Psychometric tasks
• Develop approaches and
procedures (theory and practice)
for measuring psychological
phenomena
• Design and test psychological
measurement instrumentation
(e.g., examine and improve reliability and
validity of psychological tests)
40
Psychometrics: As test-taking
grows, test-makers grow rarer
"Psychometrics, one of the most
obscure, esoteric and cerebral
professions in America, is now also
one of the hottest.”
- As test-taking grows, test-makers grow rarer, David M. Herszenhorn, May
5, 2006, New York Times
Psychometricians are in demand due to
increased testing of educational and
psychological capacity and performance.
41
Psychometric methods
• Factor analysis
– Exploratory
– Confirmatory
• Classical test theory
–Reliability
–Validity
Reliability & Validity
Image source: http://www.flickr.com/photos/psd/17433783/in/photostream
43
Reliability and validity
(Howitt & Cramer, 2005)
Reliability and validity (“classical test
theory”) are ways of evaluating the
accuracy of psychological tests and
measures.
• Reliability is about consistency of
– items within the measure
– the measure over time
• Validity is about whether the measure
actually measures what it is intended to
measure.
Reliability vs. validity
In classical test theory, reliability is generally thought to be
necessary for validity, but it does not guarantee validity.
In practice, a test of a relatively changeable psychological
construct, such as suicide ideation, may be valid (i.e.,
accurate) but not particularly reliable over time (because
suicide ideation is likely to fluctuate).
Reliability vs. validity
Reliability
• A car which starts every time is reliable.
• A car which does not start every time is unreliable.
Validity
• A car which always reaches the desired destination is valid.
• A car which misses the desired destination is not valid.
Image source: https://commons.wikimedia.org/wiki/File:Aiga_carrental_cropped.svg
46
Reliability and validity
(Howitt & Cramer, 2005)
• Reliability and validity are not
inherent characteristics of
measures. They are affected by the
context and purpose of the
measurement → a measure that is
valid for one purpose may not be
valid for another purpose.
Reliability
Reproducibility of a measurement
48
Types of reliability
• Internal consistency: Correlation
among multiple items in a factor
– Cronbach's Alpha (α)
• Test-retest reliability: Correlation
between test at one time and another
– Product-moment correlation (r)
• Inter-rater reliability: Correlation
between one observer and another:
– Kappa
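Hedged SPSS sketches of how the latter two coefficients might be obtained (the variable names score_t1, score_t2, rater1, and rater2 are hypothetical; Cronbach's α is demonstrated with the maths teaching example later in the lecture):

* Test-retest reliability: correlate scores from two testing occasions.
CORRELATIONS
  /VARIABLES=score_t1 score_t2.

* Inter-rater reliability: Cohen's kappa for two raters' categorical ratings.
CROSSTABS
  /TABLES=rater1 BY rater2
  /STATISTICS=KAPPA.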
49
Reliability rule of thumb
< .6 = Unreliable
.6 = OK
.7 = Good
.8 = Very good, strong
.9 = Excellent
> .95 = may be overly reliable or
redundant – this is subjective and depends
on the nature of what is being measured
Reliability rule of thumb
Table 7, Fabrigar et al. (1999)
Rule of thumb: reliability coefficients should be over .70, up to approx. .95
51
Internal consistency
(or internal reliability)
Internal consistency refers to:
• How well multiple items combine as a
measure of a single concept
• The extent to which responses to multiple
items are consistent with one another
Internal consistency can be measured by:
• Split-half reliability
• Odd-even reliability
• Cronbach's Alpha (α)
52
Internal consistency
(Recoding)
If dealing with a mixture of
positively and negatively scored
items, remember to ensure that
negatively-worded items are
recoded.
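A minimal SPSS sketch of recoding, assuming a hypothetical negatively-worded item neg_item scored on a 1–8 scale (reversed score = maximum + 1 − raw score):

* Reverse-score a negatively-worded item measured on a 1-8 scale.
COMPUTE neg_item_r = 9 - neg_item.
EXECUTE.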
53
Types of internal consistency:
Split-half reliability
• Sum the scores for the first half
(e.g., 1, 2, 3) of the items.
• Sum the scores for the second
half (e.g., 4, 5, 6) of the items.
• Compute a correlation between
the sums of the two halves.
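A hedged SPSS sketch of split-half reliability for a hypothetical 6-item scale (items i1 to i6):

* Split-half reliability: correlate the totals of the two halves.
COMPUTE half1 = SUM(i1, i2, i3).
COMPUTE half2 = SUM(i4, i5, i6).
EXECUTE.
CORRELATIONS
  /VARIABLES=half1 half2.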
54
Types of internal consistency:
Odd-even reliability
• Sum the scores for the odd-numbered
items (e.g., 1, 3, 5)
• Sum the scores for the even-numbered
items (e.g., 2, 4, 6)
• Compute a correlation between
the sums of the two halves.
55
Types of internal reliability:
Alpha (Cronbach's α)
• Averages all possible split-half
reliability coefficients.
• Akin to a single score which
represents the degree of
intercorrelation amongst the items.
• Most commonly used indicator of
internal reliability
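For reference (not shown on the slide), the standard formula for Cronbach's α, where k is the number of items, s_i² the variance of item i, and s_X² the variance of the total score:

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} s_i^{2}}{s_X^{2}}\right)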
56
• More items → greater reliability
(The more items, the more ‘rounded’ the
measure)
• Minimum items to create a factor is 2.
• There is no maximum; however, there is a
law of diminishing returns: each
additional item will add less and less to
the reliability.
• Typically ~ 4 to 12 items per factor are
used.
• Final decision is subjective and
depends on research context
How many items per factor?
57
Internal reliability example:
Student-rated
quality of maths teaching
• 10-item scale measuring students’
assessment of the educational
quality of their maths classes
• 4-point Likert scale ranging from:
strongly disagree to strongly agree
58
Quality of mathematics teaching
1. My maths teacher is friendly and cares
about me.
2. The work we do in our maths class is
well organised.
3. My maths teacher expects high
standards of work from everyone.
4. My maths teacher helps me to learn.
5. I enjoy the work I do in maths classes.
+ 5 more
Internal reliability example:
Quality of maths teaching
SPSS:
Corrected Item-total correlation
Item-total correlations
should be > ~.5
SPSS: Cronbach’s α
If Cronbach's α-if-item-deleted is
higher than the overall α,
consider removing the item.
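The item-total statistics on the following slides could be produced by SPSS syntax along these lines (a sketch; the MATHS1–MATHS10 variable names match the output, the scale label is illustrative):

RELIABILITY
  /VARIABLES=maths1 maths2 maths3 maths4 maths5 maths6 maths7 maths8 maths9 maths10
  /SCALE('Quality of maths teaching') ALL
  /MODEL=ALPHA
  /SUMMARY=TOTAL.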
62
Item-total Statistics
(Columns: Scale Mean if Item Deleted, Scale Variance if Item Deleted,
Corrected Item-Total Correlation, Alpha if Item Deleted)
MATHS1 25.2749 25.5752 .6614 .8629
MATHS2 25.0333 26.5322 .6235 .8661
MATHS3 25.0192 30.5174 .0996 .9021
MATHS4 24.9786 25.8671 .7255 .8589
MATHS5 25.4664 25.6455 .6707 .8622
MATHS6 25.0813 24.9830 .7114 .8587
MATHS7 25.0909 26.4215 .6208 .8662
MATHS8 25.8699 25.7345 .6513 .8637
MATHS9 25.0340 26.1201 .6762 .8623
MATHS10 25.4642 25.7578 .6495 .8638
Reliability Coefficients
N of Cases = 1353.0 N of Items = 10
Alpha = .8790
SPSS: Reliability output
Remove this item:
MATHS3 does not correlate well with the
other items, and Cronbach's alpha
would increase without it.
63
Item-total Statistics
(Columns: Scale Mean if Item Deleted, Scale Variance if Item Deleted,
Corrected Item-Total Correlation, Alpha if Item Deleted)
MATHS1 22.2694 24.0699 .6821 .8907
MATHS2 22.0280 25.2710 .6078 .8961
MATHS4 21.9727 24.4372 .7365 .8871
MATHS5 22.4605 24.2235 .6801 .8909
MATHS6 22.0753 23.5423 .7255 .8873
MATHS7 22.0849 25.0777 .6166 .8955
MATHS8 22.8642 24.3449 .6562 .8927
MATHS9 22.0280 24.5812 .7015 .8895
MATHS10 22.4590 24.3859 .6524 .8930
Reliability Coefficients
N of Cases = 1355.0 N of Items = 9
Alpha = .9024
SPSS: Reliability output
Alpha improves.
Validity
Validity is the extent to
which an instrument actually
measures what it purports
to measure.
Validity = does the
test measure what it's
meant to measure?
66
Validity
• Validity is multifaceted and includes:
– Comparing wording of the items
with theory and expert opinion
– Examining correlations with similar
and dissimilar measures
– Testing how well the measure
predicts the future
67
Types of validity
• Face validity
• Content validity
• Criterion validity
– Concurrent validity
– Predictive validity
• Construct validity
– Convergent validity
– Discriminant validity
68
Face validity
(low level of importance overall)
• Asks:
"At face-value, do the questions
appear to measure what the test
purports to measure?"
• Important for:
Respondent buy-in
• How assessed:
Read the test items
69
Content validity
(next level of importance)
• Asks:
"Are questions measuring the
complete construct?"
• Important for:
Ensuring holistic assessment
• How assessed:
Diverse means of item generation
(lit. review, theory, interviews,
expert review)
70
Criterion validity
(high importance)
• Asks:
"Can a test score predict real
world outcomes?"
• Important for:
Test relevance and usefulness
• How assessed:
Concurrent validity: Correlate test scores with
recognised external criteria such as performance appraisal scores
Predictive validity: Correlate test scores with
future outcome e.g., offender risk rating with recidivism
71
Construct validity
(high importance)
• Asks:
Does the test assess the construct it
purports to?
("the truth, the whole truth and nothing but the truth.")
• Important for:
Making inferences from operationalisations
to theoretical constructs
• How assessed:
- Theoretical (is the theory about the construct valid?)
- Statistical
Convergent – correlation with similar measures
Discriminant – not correlated with other constructs
Construct validity
(high importance)
Composite Scores
Image source: https://commons.wikimedia.org/wiki/File:PEO-hamburger.svg
74
Composite scores
Combine item-scores into an
overall factor score which
represents individual differences in
each of the target constructs.
The new composite score can then
be used for:
• Descriptive statistics and histograms
• Correlations
• As IVs and/or DVs in inferential analyses
such as MLR and ANOVA
75
Composite scores
There are two ways of creating
composite scores:
• Unit weighting
• Regression weighting
76
Unit weighting
Average (or total) of item scores
within a factor.
(each variable is equally weighted)
X = mean(y1 … yp)
[Diagram: unit weighting – each of four items contributes equally (weight .25) to the composite X]
77
Creating composite scores:
Dealing with missing data
To maximise the sample
size, consider computing
composite scores in a
way that allows for some
missing data.
78
SPSS syntax:
COMPUTE X = MEAN(v1, v2, v3, v4, v5, v6).
COMPUTE X = MEAN.4(v1, v2, v3, v4, v5, v6).
The .n suffix specifies a minimum number of non-missing items. If the
minimum isn't available, the composite score will be set to missing.
In this example, X will be computed only for cases with responses to at
least 4 of the 6 items.
How many items can be missed? Depends on
overall reliability. A rule of thumb:
● Allow 1 missing per 4 to 5 items
● Allow 2 missing per 6 to 8 items
● Allow 3+ missing per 9+ items
Composite scores:
Missing data
A researcher may decide to be more
or less conservative depending on
the factors’ reliability, sample size,
and the nature of the study.
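For example, applying the rule of thumb to the 9-item maths teaching scale (after dropping MATHS3), up to 3 missing responses might be tolerated, i.e., at least 6 of the 9 items would be required (a sketch; the composite variable name is hypothetical):

COMPUTE maths_quality = MEAN.6(maths1, maths2, maths4, maths5, maths6, maths7, maths8, maths9, maths10).
EXECUTE.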
79
Regression weighting
Factor score regression weighting
The contribution of each
item to the composite score
is weighted to reflect
responses to some items
more than other items.
X = .20*a + .19*b + .27*c + .34*d
[Diagram: items a, b, c, and d load on composite X with weights .20, .19, .27, and .34]
This is arguably more valid, but the
advantage may be marginal, and it
makes factor scores between
studies more difficult to compare.
80
Regression weighting
Two calculation methods:
• Manual (use Compute – new variable
name = MEAN.n(list of variable names
separated by commas))
• Automatic (use Factor Analysis –
Factor Scores – Save as variables -
Regression)
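A hedged SPSS syntax sketch of the automatic method (assuming hypothetical item names item1 to item24; SPSS appends the regression-weighted factor scores as new standardised variables such as FAC1_1):

FACTOR
  /VARIABLES item1 TO item24
  /EXTRACTION PAF
  /ROTATION OBLIMIN
  /SAVE REG(ALL).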
Regression weighting – SPSS data
Data view: Data are standardised, centred around 0
Variable view: Variables auto-calculated through
SPSS factor analysis
Writing up
instrument development
83
Writing up instrument development
• Introduction
–Review previous literature about the
construct's underlying factors – consider
both theory and research
–Generate a research question e.g., “What
are the underlying factors of X?” or make
a hypothesis about the number of factors
and what they will represent.
84
Writing up instrument development
• Method
–Materials/Instrumentation – summarise the
design and development of the measures
and the expected factor structure
e.g., present a table of the expected factors and
their operational definitions.
85
Writing up instrument development
• Results
–Factor analysis
• Assumption testing
• Extraction method & rotation
• # of factors, with names and definitions
• # of items removed and rationale
• Item factor loadings & communalities
• Factor correlations
–Reliability for each factor
–Composite scores for each factor
–Correlations between factors
86
Writing up instrument development
• Discussion
–Theoretical underpinning – Was it
supported by the data? What adaptations
should be made to the theory?
–Quality / usefulness of measure – Provide
an objective, critical assessment,
reflecting the measures' strengths and
weaknesses
–Recommendations for further improvement
• Writing up a factor analysis
–Download examples: http://goo.gl/fD2qby
87
Summary
88
Summary:
What is psychometrics?
1. Science of psychological
measurement
2. Goal: Validly measure individual
psychosocial differences
3. Design and test psychological
measures e.g., using
1. Factor analysis
2. Reliability and validity
89
Summary:
Concepts & their measurement
1. Concepts name common elements
2. Hypotheses identify relations between
concepts
3. Brainstorm indicators of a concept
4. Define the concept
5. Draft measurement items
6. Pre-test and pilot test
7. Examine psychometric properties
8. Redraft/refine and re-test
90
Summary: Measurement error
1. Deviation of measure from true score
2. Sources:
1. Non-sampling (e.g., paradigm, respondent
bias, researcher bias)
2. Sampling (e.g., non-representativeness)
3. How to minimise:
1. Well-designed measures
2. Representative sampling
3. Reduce demand effects
4. Maximise response rate
5. Ensure administrative accuracy
91
Summary: Reliability
1. Consistency or reproducibility
2. Types
1. Internal consistency
2. Test-retest reliability
3. Rule of thumb
1. > .6 OK
2. > .8 Very good
4. Internal consistency
1. Split-half
2. Odd-even
3. Cronbach's Alpha
92
Summary: Validity
1. Extent to which a measure measures
what it is intended to measure
2. Multifaceted
1. Compare with theory and expert opinion
2. Correlations with similar and dissimilar
measures
3. Predicts future
93
Summary: Composite scores
Ways of creating composite (factor)
scores:
1. Unit weighting
1. Total of items, or
2. Average of items
(recommended for lab report)
2. Regression weighting
1. Each item is weighted by its
importance to measuring the underlying
factor (based on regression weights)
94
1. Introduction
1. Review constructs & previous structures
2. Generate research question or hypothesis
2. Method
1. Explain measures and their development
3. Results
1. Factor analysis
2. Reliability of factors
3. Descriptive statistics for composite scores
4. Correlations between factors
4. Discussion
1. Theory? / Measure? / Recommendations?
Summary: Writing up
instrument development
95
1. Allen, P. & Bennett, K. (2008). Reliability analysis (Ch 15) in SPSS
for the health & behavioural sciences (pp. 205-218). South
Melbourne, Victoria, Australia: Thomson.
2. Bryman, A. & Cramer, D. (1997). Concepts and their measurement
(Ch. 4). In Quantitative data analysis with SPSS for Windows: A
guide for social scientists (pp. 53-68). Routledge.
3. DeCoster, J. (2000). Scale construction notes.
http://www.stat-help.com/scale.pdf (pdf)
4. Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J.
(1999). Evaluating the use of exploratory factor analysis in
psychological research. Psychological Methods, 4(3), 272-299.
5. Fowler, F. (2002). Designing questions to be good measures. In
Survey research methods (3rd ed.)(pp. 76-103). Thousand Oaks,
CA: Sage. eReserve.
6. Howitt, D. & Cramer, D. (2005). Reliability and validity: Evaluating
the value of tests and measures (Ch. 13). In Introduction to
research methods in psychology (pp. 218-231). Harlow, Essex:
Pearson. eReserve.
References
96
Open Office Impress
● This presentation was made using
Open Office Impress.
● Free and open source software.
● http://www.openoffice.org/product/impress.html
Mais conteúdo relacionado

Mais procurados

Qualitative research methods
Qualitative research methodsQualitative research methods
Qualitative research methodsAsad Omar
 
Research methods
Research methodsResearch methods
Research methodsNitinKochar
 
ethnography Research
 ethnography Research ethnography Research
ethnography ResearchChanda Jabeen
 
Content Analysis
Content AnalysisContent Analysis
Content Analysistonitones
 
Qualitative research, types, data collection and analysis
Qualitative research, types, data collection and analysisQualitative research, types, data collection and analysis
Qualitative research, types, data collection and analysisVijayalakshmi Murugesan
 
Variables And Measurement Scales
Variables And Measurement ScalesVariables And Measurement Scales
Variables And Measurement Scalesguesta861fa
 
Grounded Theory Presentation
Grounded Theory PresentationGrounded Theory Presentation
Grounded Theory PresentationLarry Weas
 
Data Collection (Methods/ Tools/ Techniques), Primary & Secondary Data, Quali...
Data Collection (Methods/ Tools/ Techniques), Primary & Secondary Data, Quali...Data Collection (Methods/ Tools/ Techniques), Primary & Secondary Data, Quali...
Data Collection (Methods/ Tools/ Techniques), Primary & Secondary Data, Quali...Bikash Sapkota
 
Validity, its types, measurement & factors.
Validity, its types, measurement & factors.Validity, its types, measurement & factors.
Validity, its types, measurement & factors.Maheen Iftikhar
 
Qualitative Research Method
 Qualitative Research  Method  Qualitative Research  Method
Qualitative Research Method Kunal Modak
 
Data analysis and Interpretation
Data analysis and InterpretationData analysis and Interpretation
Data analysis and InterpretationMehul Gondaliya
 
Mixed research-methods (1)
Mixed research-methods (1)Mixed research-methods (1)
Mixed research-methods (1)Rakibul islam
 
A Summary on "Using Thematic Analysis in Psychology"
A Summary on "Using Thematic Analysis in Psychology"A Summary on "Using Thematic Analysis in Psychology"
A Summary on "Using Thematic Analysis in Psychology"Russel June Ramirez
 
Braun, Clarke & Hayfield Thematic Analysis Part 2
Braun, Clarke & Hayfield Thematic Analysis Part 2Braun, Clarke & Hayfield Thematic Analysis Part 2
Braun, Clarke & Hayfield Thematic Analysis Part 2Victoria Clarke
 
Qualitative research
Qualitative researchQualitative research
Qualitative researchNimra zaman
 
Research proposal 1
Research proposal 1Research proposal 1
Research proposal 1Hina Honey
 

Mais procurados (20)

Qualitative research methods
Qualitative research methodsQualitative research methods
Qualitative research methods
 
Qualitative Research
Qualitative ResearchQualitative Research
Qualitative Research
 
Research methods
Research methodsResearch methods
Research methods
 
ethnography Research
 ethnography Research ethnography Research
ethnography Research
 
Qualitative Data Analysis
Qualitative Data Analysis  Qualitative Data Analysis
Qualitative Data Analysis
 
Content Analysis
Content AnalysisContent Analysis
Content Analysis
 
Sampling
SamplingSampling
Sampling
 
Qualitative research, types, data collection and analysis
Qualitative research, types, data collection and analysisQualitative research, types, data collection and analysis
Qualitative research, types, data collection and analysis
 
Variables And Measurement Scales
Variables And Measurement ScalesVariables And Measurement Scales
Variables And Measurement Scales
 
Grounded Theory Presentation
Grounded Theory PresentationGrounded Theory Presentation
Grounded Theory Presentation
 
Data Collection (Methods/ Tools/ Techniques), Primary & Secondary Data, Quali...
Data Collection (Methods/ Tools/ Techniques), Primary & Secondary Data, Quali...Data Collection (Methods/ Tools/ Techniques), Primary & Secondary Data, Quali...
Data Collection (Methods/ Tools/ Techniques), Primary & Secondary Data, Quali...
 
Validity, its types, measurement & factors.
Validity, its types, measurement & factors.Validity, its types, measurement & factors.
Validity, its types, measurement & factors.
 
Qualitative Research Method
 Qualitative Research  Method  Qualitative Research  Method
Qualitative Research Method
 
Data analysis and Interpretation
Data analysis and InterpretationData analysis and Interpretation
Data analysis and Interpretation
 
Mixed research-methods (1)
Mixed research-methods (1)Mixed research-methods (1)
Mixed research-methods (1)
 
A Summary on "Using Thematic Analysis in Psychology"
A Summary on "Using Thematic Analysis in Psychology"A Summary on "Using Thematic Analysis in Psychology"
A Summary on "Using Thematic Analysis in Psychology"
 
Braun, Clarke & Hayfield Thematic Analysis Part 2
Braun, Clarke & Hayfield Thematic Analysis Part 2Braun, Clarke & Hayfield Thematic Analysis Part 2
Braun, Clarke & Hayfield Thematic Analysis Part 2
 
Research methodology
Research methodologyResearch methodology
Research methodology
 
Qualitative research
Qualitative researchQualitative research
Qualitative research
 
Research proposal 1
Research proposal 1Research proposal 1
Research proposal 1
 

Semelhante a Psychometric instrument development

The best research method طرق البحث
The best research method طرق البحثThe best research method طرق البحث
The best research method طرق البحثabdullah alhariri
 
Doing observation and Data Analysis for Qualitative Research
Doing observation and Data Analysis for Qualitative ResearchDoing observation and Data Analysis for Qualitative Research
Doing observation and Data Analysis for Qualitative ResearchAhmad Johari Sihes
 
Paul Gerrard - Advancing Testing Using Axioms - EuroSTAR 2010
Paul Gerrard - Advancing Testing Using Axioms - EuroSTAR 2010Paul Gerrard - Advancing Testing Using Axioms - EuroSTAR 2010
Paul Gerrard - Advancing Testing Using Axioms - EuroSTAR 2010TEST Huddle
 
1 complete research
1 complete research1 complete research
1 complete researchHemant Kumar
 
Exploratory factor analysis
Exploratory factor analysisExploratory factor analysis
Exploratory factor analysisJames Neill
 
Analytical thinking &amp; creativity
Analytical thinking &amp; creativityAnalytical thinking &amp; creativity
Analytical thinking &amp; creativityAbhishek Gupta
 
Management Research Methodology - Part 1 By Chen Hua.pdf
Management Research Methodology - Part 1 By Chen Hua.pdfManagement Research Methodology - Part 1 By Chen Hua.pdf
Management Research Methodology - Part 1 By Chen Hua.pdfyassermoulla
 
Analysis in Action 21 September 2021
Analysis in Action 21 September 2021Analysis in Action 21 September 2021
Analysis in Action 21 September 2021IIBA UK Chapter
 
OR chapter 1.pdf
OR chapter 1.pdfOR chapter 1.pdf
OR chapter 1.pdfAlexHayme
 
Business research methods
Business research methodsBusiness research methods
Business research methodsSaeed Akbar
 
Presentation on research methodologies
Presentation on research methodologiesPresentation on research methodologies
Presentation on research methodologiesBilal Naqeeb
 

Semelhante a Psychometric instrument development (20)

Methodology and IRB/URR
Methodology and IRB/URRMethodology and IRB/URR
Methodology and IRB/URR
 
TCI in primary care - SEM (2006)
TCI in primary care - SEM (2006)TCI in primary care - SEM (2006)
TCI in primary care - SEM (2006)
 
The best research method طرق البحث
The best research method طرق البحثThe best research method طرق البحث
The best research method طرق البحث
 
Doing observation and Data Analysis for Qualitative Research
Doing observation and Data Analysis for Qualitative ResearchDoing observation and Data Analysis for Qualitative Research
Doing observation and Data Analysis for Qualitative Research
 
Ressearch design - Copy.ppt
Ressearch design - Copy.pptRessearch design - Copy.ppt
Ressearch design - Copy.ppt
 
Action research in Applied Linguistics: An Introduction
Action research in Applied Linguistics: An IntroductionAction research in Applied Linguistics: An Introduction
Action research in Applied Linguistics: An Introduction
 
Paul Gerrard - Advancing Testing Using Axioms - EuroSTAR 2010
Paul Gerrard - Advancing Testing Using Axioms - EuroSTAR 2010Paul Gerrard - Advancing Testing Using Axioms - EuroSTAR 2010
Paul Gerrard - Advancing Testing Using Axioms - EuroSTAR 2010
 
1 complete research
1 complete research1 complete research
1 complete research
 
Exploratory factor analysis
Exploratory factor analysisExploratory factor analysis
Exploratory factor analysis
 
Operational research ppt
Operational research pptOperational research ppt
Operational research ppt
 
Analytical thinking &amp; creativity
Analytical thinking &amp; creativityAnalytical thinking &amp; creativity
Analytical thinking &amp; creativity
 
Business research(Rubrics)
Business research(Rubrics)Business research(Rubrics)
Business research(Rubrics)
 
Management Research Methodology - Part 1 By Chen Hua.pdf
Management Research Methodology - Part 1 By Chen Hua.pdfManagement Research Methodology - Part 1 By Chen Hua.pdf
Management Research Methodology - Part 1 By Chen Hua.pdf
 
Analysis in Action 21 September 2021
Analysis in Action 21 September 2021Analysis in Action 21 September 2021
Analysis in Action 21 September 2021
 
Rree measurement-larry-d3
Rree measurement-larry-d3Rree measurement-larry-d3
Rree measurement-larry-d3
 
Work related stress 23 oct15
Work related stress 23 oct15Work related stress 23 oct15
Work related stress 23 oct15
 
OR chapter 1.pdf
OR chapter 1.pdfOR chapter 1.pdf
OR chapter 1.pdf
 
Research design
Research designResearch design
Research design
 
Business research methods
Business research methodsBusiness research methods
Business research methods
 
Presentation on research methodologies
Presentation on research methodologiesPresentation on research methodologies
Presentation on research methodologies
 

Mais de James Neill

Personal control beliefs
Personal control beliefsPersonal control beliefs
Personal control beliefsJames Neill
 
Goal setting and goal striving
Goal setting and goal strivingGoal setting and goal striving
Goal setting and goal strivingJames Neill
 
Implicit motives
Implicit motivesImplicit motives
Implicit motivesJames Neill
 
Psychological needs
Psychological needsPsychological needs
Psychological needsJames Neill
 
Extrinsic motivation
Extrinsic motivationExtrinsic motivation
Extrinsic motivationJames Neill
 
Physiological needs
Physiological needsPhysiological needs
Physiological needsJames Neill
 
Motivated and emotional brain
Motivated and emotional brainMotivated and emotional brain
Motivated and emotional brainJames Neill
 
Motivation in historical perspective
Motivation in historical perspectiveMotivation in historical perspective
Motivation in historical perspectiveJames Neill
 
Motivation and emotion unit outline
Motivation and emotion unit outlineMotivation and emotion unit outline
Motivation and emotion unit outlineJames Neill
 
Development and evaluation of the PCYC Catalyst outdoor adventure interventio...
Development and evaluation of the PCYC Catalyst outdoor adventure interventio...Development and evaluation of the PCYC Catalyst outdoor adventure interventio...
Development and evaluation of the PCYC Catalyst outdoor adventure interventio...James Neill
 
Individual emotions
Individual emotionsIndividual emotions
Individual emotionsJames Neill
 
How and why to edit wikipedia
How and why to edit wikipediaHow and why to edit wikipedia
How and why to edit wikipediaJames Neill
 
Going open (education): What, why, and how?
Going open (education): What, why, and how?Going open (education): What, why, and how?
Going open (education): What, why, and how?James Neill
 
Introduction to motivation and emotion 2013
Introduction to motivation and emotion 2013Introduction to motivation and emotion 2013
Introduction to motivation and emotion 2013James Neill
 
Summary and conclusion - Survey research and design in psychology
Summary and conclusion - Survey research and design in psychologySummary and conclusion - Survey research and design in psychology
Summary and conclusion - Survey research and design in psychologyJames Neill
 
The effects of green exercise on stress, anxiety and mood
The effects of green exercise on stress, anxiety and moodThe effects of green exercise on stress, anxiety and mood
The effects of green exercise on stress, anxiety and moodJames Neill
 
Multiple linear regression II
Multiple linear regression IIMultiple linear regression II
Multiple linear regression IIJames Neill
 
Conclusion and review
Conclusion and reviewConclusion and review
Conclusion and reviewJames Neill
 

Mais de James Neill (20)

Self
SelfSelf
Self
 
Personal control beliefs
Personal control beliefsPersonal control beliefs
Personal control beliefs
 
Mindsets
MindsetsMindsets
Mindsets
 
Goal setting and goal striving
Goal setting and goal strivingGoal setting and goal striving
Goal setting and goal striving
 
Implicit motives
Implicit motivesImplicit motives
Implicit motives
 
Psychological needs
Psychological needsPsychological needs
Psychological needs
 
Extrinsic motivation
Extrinsic motivationExtrinsic motivation
Extrinsic motivation
 
Physiological needs
Physiological needsPhysiological needs
Physiological needs
 
Motivated and emotional brain
Motivated and emotional brainMotivated and emotional brain
Motivated and emotional brain
 
Motivation in historical perspective
Motivation in historical perspectiveMotivation in historical perspective
Motivation in historical perspective
 
Motivation and emotion unit outline
Motivation and emotion unit outlineMotivation and emotion unit outline
Motivation and emotion unit outline
 
Development and evaluation of the PCYC Catalyst outdoor adventure interventio...
Development and evaluation of the PCYC Catalyst outdoor adventure interventio...Development and evaluation of the PCYC Catalyst outdoor adventure interventio...
Development and evaluation of the PCYC Catalyst outdoor adventure interventio...
 
Individual emotions
Individual emotionsIndividual emotions
Individual emotions
 
How and why to edit wikipedia
How and why to edit wikipediaHow and why to edit wikipedia
How and why to edit wikipedia
 
Going open (education): What, why, and how?
Going open (education): What, why, and how?Going open (education): What, why, and how?
Going open (education): What, why, and how?
 
Introduction to motivation and emotion 2013
Introduction to motivation and emotion 2013Introduction to motivation and emotion 2013
Introduction to motivation and emotion 2013
 
Summary and conclusion - Survey research and design in psychology
Summary and conclusion - Survey research and design in psychologySummary and conclusion - Survey research and design in psychology
Summary and conclusion - Survey research and design in psychology
 
The effects of green exercise on stress, anxiety and mood
The effects of green exercise on stress, anxiety and moodThe effects of green exercise on stress, anxiety and mood
The effects of green exercise on stress, anxiety and mood
 
Multiple linear regression II
Multiple linear regression IIMultiple linear regression II
Multiple linear regression II
 
Conclusion and review
Conclusion and reviewConclusion and review
Conclusion and review
 

Último

ICS2208 Lecture6 Notes for SL spaces.pdf
ICS2208 Lecture6 Notes for SL spaces.pdfICS2208 Lecture6 Notes for SL spaces.pdf
ICS2208 Lecture6 Notes for SL spaces.pdfVanessa Camilleri
 
Student Profile Sample - We help schools to connect the data they have, with ...
Student Profile Sample - We help schools to connect the data they have, with ...Student Profile Sample - We help schools to connect the data they have, with ...
Student Profile Sample - We help schools to connect the data they have, with ...Seán Kennedy
 
How to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERPHow to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERPCeline George
 
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...JhezDiaz1
 
Field Attribute Index Feature in Odoo 17
Field Attribute Index Feature in Odoo 17Field Attribute Index Feature in Odoo 17
Field Attribute Index Feature in Odoo 17Celine George
 
How to Add Barcode on PDF Report in Odoo 17
How to Add Barcode on PDF Report in Odoo 17How to Add Barcode on PDF Report in Odoo 17
How to Add Barcode on PDF Report in Odoo 17Celine George
 
ANG SEKTOR NG agrikultura.pptx QUARTER 4
ANG SEKTOR NG agrikultura.pptx QUARTER 4ANG SEKTOR NG agrikultura.pptx QUARTER 4
ANG SEKTOR NG agrikultura.pptx QUARTER 4MiaBumagat1
 
ENG 5 Q4 WEEk 1 DAY 1 Restate sentences heard in one’s own words. Use appropr...
ENG 5 Q4 WEEk 1 DAY 1 Restate sentences heard in one’s own words. Use appropr...ENG 5 Q4 WEEk 1 DAY 1 Restate sentences heard in one’s own words. Use appropr...
ENG 5 Q4 WEEk 1 DAY 1 Restate sentences heard in one’s own words. Use appropr...JojoEDelaCruz
 
Inclusivity Essentials_ Creating Accessible Websites for Nonprofits .pdf
Inclusivity Essentials_ Creating Accessible Websites for Nonprofits .pdfInclusivity Essentials_ Creating Accessible Websites for Nonprofits .pdf
Inclusivity Essentials_ Creating Accessible Websites for Nonprofits .pdfTechSoup
 
Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)Mark Reed
 
4.18.24 Movement Legacies, Reflection, and Review.pptx
4.18.24 Movement Legacies, Reflection, and Review.pptx4.18.24 Movement Legacies, Reflection, and Review.pptx
4.18.24 Movement Legacies, Reflection, and Review.pptxmary850239
 
GRADE 4 - SUMMATIVE TEST QUARTER 4 ALL SUBJECTS
GRADE 4 - SUMMATIVE TEST QUARTER 4 ALL SUBJECTSGRADE 4 - SUMMATIVE TEST QUARTER 4 ALL SUBJECTS
GRADE 4 - SUMMATIVE TEST QUARTER 4 ALL SUBJECTSJoshuaGantuangco2
 
4.16.24 Poverty and Precarity--Desmond.pptx
4.16.24 Poverty and Precarity--Desmond.pptx4.16.24 Poverty and Precarity--Desmond.pptx
4.16.24 Poverty and Precarity--Desmond.pptxmary850239
 
Global Lehigh Strategic Initiatives (without descriptions)
Global Lehigh Strategic Initiatives (without descriptions)Global Lehigh Strategic Initiatives (without descriptions)
Global Lehigh Strategic Initiatives (without descriptions)cama23
 
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)lakshayb543
 
Karra SKD Conference Presentation Revised.pptx
Karra SKD Conference Presentation Revised.pptxKarra SKD Conference Presentation Revised.pptx
Karra SKD Conference Presentation Revised.pptxAshokKarra1
 
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptx
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptxINTRODUCTION TO CATHOLIC CHRISTOLOGY.pptx
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptxHumphrey A Beña
 
Keynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designKeynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designMIPLM
 

Último (20)

ICS2208 Lecture6 Notes for SL spaces.pdf
ICS2208 Lecture6 Notes for SL spaces.pdfICS2208 Lecture6 Notes for SL spaces.pdf
ICS2208 Lecture6 Notes for SL spaces.pdf
 
Student Profile Sample - We help schools to connect the data they have, with ...
Student Profile Sample - We help schools to connect the data they have, with ...Student Profile Sample - We help schools to connect the data they have, with ...
Student Profile Sample - We help schools to connect the data they have, with ...
 
LEFT_ON_C'N_ PRELIMS_EL_DORADO_2024.pptx
LEFT_ON_C'N_ PRELIMS_EL_DORADO_2024.pptxLEFT_ON_C'N_ PRELIMS_EL_DORADO_2024.pptx
LEFT_ON_C'N_ PRELIMS_EL_DORADO_2024.pptx
 
How to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERPHow to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERP
 
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
 
Field Attribute Index Feature in Odoo 17
Field Attribute Index Feature in Odoo 17Field Attribute Index Feature in Odoo 17
Field Attribute Index Feature in Odoo 17
 
How to Add Barcode on PDF Report in Odoo 17
How to Add Barcode on PDF Report in Odoo 17How to Add Barcode on PDF Report in Odoo 17
How to Add Barcode on PDF Report in Odoo 17
 
ANG SEKTOR NG agrikultura.pptx QUARTER 4
ANG SEKTOR NG agrikultura.pptx QUARTER 4ANG SEKTOR NG agrikultura.pptx QUARTER 4
ANG SEKTOR NG agrikultura.pptx QUARTER 4
 
ENG 5 Q4 WEEk 1 DAY 1 Restate sentences heard in one’s own words. Use appropr...
ENG 5 Q4 WEEk 1 DAY 1 Restate sentences heard in one’s own words. Use appropr...ENG 5 Q4 WEEk 1 DAY 1 Restate sentences heard in one’s own words. Use appropr...
ENG 5 Q4 WEEk 1 DAY 1 Restate sentences heard in one’s own words. Use appropr...
 
Inclusivity Essentials_ Creating Accessible Websites for Nonprofits .pdf
Inclusivity Essentials_ Creating Accessible Websites for Nonprofits .pdfInclusivity Essentials_ Creating Accessible Websites for Nonprofits .pdf
Inclusivity Essentials_ Creating Accessible Websites for Nonprofits .pdf
 
Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)
 
4.18.24 Movement Legacies, Reflection, and Review.pptx
4.18.24 Movement Legacies, Reflection, and Review.pptx4.18.24 Movement Legacies, Reflection, and Review.pptx
4.18.24 Movement Legacies, Reflection, and Review.pptx
 
GRADE 4 - SUMMATIVE TEST QUARTER 4 ALL SUBJECTS
GRADE 4 - SUMMATIVE TEST QUARTER 4 ALL SUBJECTSGRADE 4 - SUMMATIVE TEST QUARTER 4 ALL SUBJECTS
GRADE 4 - SUMMATIVE TEST QUARTER 4 ALL SUBJECTS
 
4.16.24 Poverty and Precarity--Desmond.pptx
4.16.24 Poverty and Precarity--Desmond.pptx4.16.24 Poverty and Precarity--Desmond.pptx
4.16.24 Poverty and Precarity--Desmond.pptx
 
Global Lehigh Strategic Initiatives (without descriptions)
Global Lehigh Strategic Initiatives (without descriptions)Global Lehigh Strategic Initiatives (without descriptions)
Global Lehigh Strategic Initiatives (without descriptions)
 
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)
 
Karra SKD Conference Presentation Revised.pptx
Karra SKD Conference Presentation Revised.pptxKarra SKD Conference Presentation Revised.pptx
Karra SKD Conference Presentation Revised.pptx
 
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptx
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptxINTRODUCTION TO CATHOLIC CHRISTOLOGY.pptx
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptx
 
Raw materials used in Herbal Cosmetics.pptx
Raw materials used in Herbal Cosmetics.pptxRaw materials used in Herbal Cosmetics.pptx
Raw materials used in Herbal Cosmetics.pptx
 
Keynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designKeynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-design
 

Psychometric instrument development

  • 1. Lecture 6 Survey Research & Design in Psychology James Neill, 2017 Creative Commons Attribution 4.0 Psychometric Instrument Development Image source: http://commons.wikimedia.org/wiki/File:Soft_ruler.jpg, CC-by-SA 3.0
  • 2. 2 Overview 1. Recap: Exploratory factor analysis 2. Concepts & their measurement 3. Measurement error 4. Psychometrics 5. Reliability & validity 6. Composite scores 7. Writing up instrument development
  • 4. 4 What is factor analysis? • Factor analysis is: –a family of multivariate correlational methods used to identify clusters of covariance (called factors) • Two main purposes: –Theoretical (PAF) –Data reduction (PC) • Two main types (extraction methods): –Exploratory factor analysis (EFA) –Confirmatory factor analysis (CFA)
  • 5. 5 EFA steps 1. Test assumptions – Sample size • 5+ cases x no. of variables (min.) • 20+ cases x no. of variables (ideal) • Another guideline: Or N > 200 – Outliers & linearity – Factorability • Correlation matrix: Some > .3? • Anti-image correlation matrix diags > .5 • Measures of Sampling Adequacy: KMO > ~ .5 to 6; Bartlett's sig?
  • 6. 6 EFA steps 2. Select type of analysis – Extraction • Principal Components (PC) • Principal Axis Factoring (PAF) – Rotation • Orthogonal (Varimax) • Oblique (Oblimin)
  • 7. 7 Summary of EFA steps 3. Determine no. of factors – Theory? – Kaiser's criterion? – Eigen Values and Scree plot? – % variance explained? – Interpretability of weakest factor?
  • 8. 8 Summary of EFA steps 4. Select items – Use factor loadings to help identify which items belong in which factor – Drop items one at a time if they don't belong to any factor e.g., consider any items for which • primary (highest) loading is low? (< .5 ?) • cross- (other) loading(s) are high? (> .3 ?) • item wording doesn't match the meaning of the factor
  • 9. 9 Summary of EFA steps 5. Name and describe factors 6. Examine correlations amongst factors 7. Analyse internal reliability 8. Compute composite scores 9. Check factor structure across sub-groups Covered in this lecture
  • 10. 10 EFA example 4: University student motivation
  • 11. 11 ● 271 UC students responded to 24 university student motivation statements in 2008 using an 8-point Likert scale (False to True) ● For example: “I study at university … ” – to enhance my job prospects. – because other people have told me I should. ● EFA PC Oblimin revealed 5 factors Example EFA: University student motivation
  • 12. 12 ExampleEFA: Patternmatrix Primary loadings for each item are above .5 It shows a simple factor structure. Cross-loadings are all below .3. This is a pattern matrix showing factor loadings.
  • 13. 13 Example EFA: University student motivation 1. Career & Qualifications (6 items; α = .92) 2. Self Development (5 items; α = .81) 3. Social Opportunities (3 items; α = .90) 4. Altruism (5 items; α = .90) 5. Social Pressure (5 items; α = .94)
  • 14. 14 Example EFA: University student motivation factor means and confidence intervals (error-bar graph)
  • 15. 15 Example EFA: University student motivation factor correlations Motivation Self Develo pment Social Enjoy ment Altruis m Social Pressure Career & Qualifications .26 .25 .24 .06 Self Development .33 .55 -.18 Social Enjoyment .26 .33 Altruism .11
  • 16. 16 Exploratory factor analysis: Q & A Questions? Image source:https://commons.wikimedia.org/wiki/File:Pleiades_large.jpg
  • 17. Psychometric Instrument Development Image source: http://commons.wikimedia.org/wiki/File:Soft_ruler.jpg, CC-by-SA 3.0
  • 18. 18 1. Bryman & Cramer (1997). Concepts and their measurement. [eReserve] 2. DeCoster, J. (2000). Scale construction notes. [Online] 3. Howitt & Cramer (2005). Reliability and validity: Evaluating the value of tests and measures. [eReserve] 4. Howitt & Cramer (2014). Ch 37: Reliability in scales and measurement: Consistency and measurement. [Textbook/eReserve] 5. Wikiversity. Measurement error. [Online] 6. Wikiversity. 7. Reliability and validity. [Online] Readings: Psychometrics
  • 19. Concepts & their measurement Operationalising fuzzy concepts Image source: https://www.flickr.com/photos/lenore-m/441579944/in/set-72157600039854497
  • 20. 20 Concepts & their measurement: Bryman & Cramer (1997) Concepts • express common elements in the world (to which we give a name) • form a linchpin in the process of social research
  • 21. 21 Concepts & their measurement: Bryman & Cramer (1997) Hypotheses • specify expected relations between concepts
  • 22. 22 Concepts & their measurement: Bryman & Cramer (1997) Operationalisation • A concept needs to be operationally defined in order for systematic research to be conducted in relation to it. • “An operational definition specifies the procedures (operations) that will permit differences between individuals in respect of the concept(s) concerned to be precisely specified ..."
  • 23. 23 Concepts & their measurement: Bryman & Cramer (1997) “... What we are in reality talking about here is measurement, that is, the assignment of numbers to the units of analysis - be they people, organizations, or nations - to which a concept refers."
  • 24. Operationalisation ● The act of making a fuzzy concept measurable. ● Social science often uses multi-item measures to assess related but distinct aspects of a fuzzy concept.
  • 25. 25 Operationalisation steps 1. Brainstorm indicators of a concept 2. Define the concept 3. Draft measurement items 4. Pre-test and pilot test 5. Examine psychometric properties - how precise are the measures? 6. Redraft/refine and re-test
  • 26. Operationalising a fuzzy concept: Example (Brainstorming indicators)
  • 27. Fuzzy concepts - MindmapNurse empowerment
  • 28. Factor analysis process Image source: Figure 4.2 Bryman & Cramer (1997)
  • 29. 29 Measurement error Image source: http://commons.wikimedia.org/wiki/File:Noise_effect.svg
  • 30. 30 Measurement error Measurement error is statistical deviation from the true value caused by the measurement procedure. • Observed score = true score +/- measurement error • Measurement error = systematic error +/- random error ● Systematic error = • sampling error +/- non-sampling error
  • 31. 31 Systemic and random error Measurement errorSystematic error – e.g., bathroom scales aren't calibrated properly, and every measurement is 0.5kg too high. The error occurs for each measurement. Random error – e.g., measure your weight 3 times using the same scales but get three slightly different readings. The error is different for each measurement. Image source: http://www.freestockphotos.biz/stockphoto/17079
  • 32. 32 Sources of systematic error Test reliability & validity (e.g., unreliable or invalid tests) Sampling (non- representative sample) Researcher bias (e.g., researcher favours a hypothesis) Paradigm (e.g., focus on positivism) Respondent bias (e.g., social desirability) Non-sampling
  • 33. 33 Measurement precision & noise • The lower the precision, the more participants are needed to make up for the "noise" in the measurements. • Even with a larger sample, noisy data can be hard to interpret. • Especially when testing and assessing individual clients, special care is needed when interpreting results of noisy tests. http://www.sportsci.org/resource/stats/precision.html
  • 34. 34 Minimising measurement error • Standardise administration conditions with clear instructions and questions • Minimise potential demand characteristics (e.g., train interviewers) • Use multiple indicators for fuzzy constructs
  • 35. 35 Minimising measurement error • Obtain a representative sample: –Use probability-sampling, if possible –For non-probability sampling, use strategies to minimise selection bias • Maximise response rate: –Pre-survey contact –Minimise length / time / hassle –Rewards / incentives –Coloured paper –Call backs / reminders
  • 36. 36 • Ensure administrative accuracy: –Set up efficient coding, with well- labelled variables –Check data (double-check at least a portion of the data) Minimising measurement error
  • 38. 38 Psychometrics: Goal To validly measure differences between individuals and groups in psychosocial qualities such as attitudes and personality.
  • 39. 39 Psychometric tasks • Develop approaches and procedures (theory and practice) for measuring psychological phenomena • Design and test psychological measurement instrumentation (e.g., examine and improve reliability and validity of psychological tests)
  • 40. 40 Psychometrics: As test-taking grows, test-makers grow rarer "Psychometrics, one of the most obscure, esoteric and cerebral professions in America, is now also one of the hottest.” - As test-taking grows, test-makers grow rarer, David M. Herszenhor, May 5, 2006, New York Times Psychometricians are in demand due to increased testing of educational and psychological capacity and performance.
  • 41. 41 Psychometric methods • Factor analysis – Exploratory – Confirmatory • Classical test theory –Reliability –Validity
  • 42. Reliability & Validity Image source: http://www.flickr.com/photos/psd/17433783/in/photostream
  • 43. 43 Reliability and validity (Howitt & Cramer, 2005) Reliability and validity (“classical test theory”) are ways of evaluating the accuracy of psychological tests and measures. • Reliability is about consistency of – items within the measure – the measure over time • Validity is about whether the measure actually measures what it is intended to measure.
  • 44. Reliability vs. validity In classical test theory, reliability is generally thought to be necessary for validity, but it does not guarantee validity. In practice, a test of a relatively changeable psychological construct such as suicide ideation, may be valid (i.e., accurate), but not particularly reliable over time (because suicide ideation is likely to fluctuate).
  • 45. Reliability vs. validity Reliability • A car which starts every time is reliable. • A car which does not start every time is unreliable. Validity • A car which always reaches the desired destination is valid. • A car which misses the desired destination is not valid. Image source: https://commons.wikimedia.org/wiki/File:Aiga_carrental_cropped.svg
  • 46. 46 Reliability and validity (Howitt & Cramer, 2005) • Reliability and validity are not inherent characteristics of measures. They are affected by the context and purpose of the measurement → a measure that is valid for one purpose may not be valid for another purpose.
  • 48. 48 Types of reliability • Internal consistency: Correlation among multiple items in a factor – Cronbach's Alpha (α) • Test-retest reliability: Correlation between test at one time and another – Product-moment correlation (r) • Inter-rater reliability: Correlation between one observer and another: – Kappa
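As a rough illustration, the latter two indices can be obtained with SPSS syntax like the following sketch (the variable names score_t1, score_t2, rater1 and rater2 are hypothetical placeholders, not from the lecture's data):
* Test-retest reliability: correlate scores from two administrations.
CORRELATIONS /VARIABLES=score_t1 score_t2.
* Inter-rater reliability: Cohen's kappa for two raters' categorical ratings.
CROSSTABS /TABLES=rater1 BY rater2 /STATISTICS=KAPPA.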
  • 49. 49 Reliability rule of thumb < .6 = Unreliable .6 = OK .7 = Good .8 = Very good, strong .9 = Excellent > .95 = may be overly reliable or redundant – this is subjective and depends on the nature of what is being measured
  • 50. Reliability rule of thumb Table 7, Fabrigar et al. (1999) Rule of thumb – reliability coefficients should be over .70, up to approx. .95
  • 51. 51 Internal consistency (or internal reliability) Internal consistency refers to: • How well multiple items combine as a measure of a single concept • The extent to which responses to multiple items are consistent with one another Internal consistency can be measured by: • Split-half reliability • Odd-even reliability • Cronbach's Alpha (α)
  • 52. 52 Internal consistency (Recoding) If dealing with a mixture of positively and negatively scored items, remember to ensure that negatively-worded items are recoded.
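A minimal SPSS syntax sketch of reverse-scoring, assuming a hypothetical negatively worded item named neg1 scored on a 4-point scale (the name and values are illustrative only):
* Reverse-score a negatively worded 4-point item into a new variable.
RECODE neg1 (1=4) (2=3) (3=2) (4=1) INTO neg1r.
EXECUTE.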
  • 53. 53 Types of internal consistency: Split-half reliability • Sum the scores for the first half (e.g., 1, 2, 3) of the items. • Sum the scores for the second half (e.g., 4, 5, 6) of the items. • Compute a correlation between the sums of the two halves.
  • 54. 54 Types of internal consistency: Odd-even reliability • Sum the scores for the odd-numbered items (e.g., 1, 3, 5) • Sum the scores for the even-numbered items (e.g., 2, 4, 6) • Compute a correlation between the sums of the two halves.
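Both split types can be obtained with SPSS's RELIABILITY split model; a sketch assuming six hypothetical items named item1 to item6 (SPSS splits the variable list into first and second halves, so list the items in the order you want them split):
* First-half vs second-half split.
RELIABILITY
  /VARIABLES=item1 item2 item3 item4 item5 item6
  /SCALE('Split-half') ALL
  /MODEL=SPLIT.
* Odd-even split: list the odd-numbered items first, then the even-numbered items.
RELIABILITY
  /VARIABLES=item1 item3 item5 item2 item4 item6
  /SCALE('Odd-even') ALL
  /MODEL=SPLIT.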
  • 55. 55 Types of internal reliability: Alpha (Cronbach's α) • Averages all possible split-half reliability coefficients. • Akin to a single score which represents the degree of intercorrelation amongst the items. • Most commonly used indicator of internal reliability
  • 56. 56 How many items per factor? • More items → greater reliability (the more items, the more ‘rounded’ the measure) • The minimum number of items to create a factor is 2. • There is no maximum; however, by the law of diminishing returns, each additional item adds less and less to the reliability. • Typically ~4 to 12 items per factor are used. • The final decision is subjective and depends on the research context.
  • 57. 57 Internal reliability example: Student-rated quality of maths teaching • 10-item scale measuring students’ assessment of the educational quality of their maths classes • 4-point Likert scale ranging from: strongly disagree to strongly agree
  • 58. 58 Quality of mathematics teaching 1. My maths teacher is friendly and cares about me. 2. The work we do in our maths class is well organised. 3. My maths teacher expects high standards of work from everyone. 4. My maths teacher helps me to learn. 5. I enjoy the work I do in maths classes. + 5 more
  • 61. SPSS: Cronbach’s α If “Cronbach's α if item deleted” is higher than the overall α, consider removing that item.
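An equivalent syntax sketch for the menu steps above, assuming the ten items are named MATHS1 to MATHS10 as in the output on the next slide (adjust the names to your own data):
RELIABILITY
  /VARIABLES=MATHS1 MATHS2 MATHS3 MATHS4 MATHS5 MATHS6 MATHS7 MATHS8 MATHS9 MATHS10
  /SCALE('Quality of maths teaching') ALL
  /MODEL=ALPHA
  /SUMMARY=TOTAL.
* /SUMMARY=TOTAL requests the item-total statistics, including "Alpha if item deleted".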
  • 62. 62 SPSS: Reliability output – Item-total statistics
Item | Scale Mean if Item Deleted | Scale Variance if Item Deleted | Corrected Item-Total Correlation | Alpha if Item Deleted
MATHS1 | 25.2749 | 25.5752 | .6614 | .8629
MATHS2 | 25.0333 | 26.5322 | .6235 | .8661
MATHS3 | 25.0192 | 30.5174 | .0996 | .9021
MATHS4 | 24.9786 | 25.8671 | .7255 | .8589
MATHS5 | 25.4664 | 25.6455 | .6707 | .8622
MATHS6 | 25.0813 | 24.9830 | .7114 | .8587
MATHS7 | 25.0909 | 26.4215 | .6208 | .8662
MATHS8 | 25.8699 | 25.7345 | .6513 | .8637
MATHS9 | 25.0340 | 26.1201 | .6762 | .8623
MATHS10 | 25.4642 | 25.7578 | .6495 | .8638
Reliability Coefficients: N of Cases = 1353.0, N of Items = 10, Alpha = .8790
Remove MATHS3: it does not correlate well with the other items, and Cronbach's alpha would increase without it.
  • 63. 63 SPSS: Reliability output (after removing MATHS3) – Item-total statistics
Item | Scale Mean if Item Deleted | Scale Variance if Item Deleted | Corrected Item-Total Correlation | Alpha if Item Deleted
MATHS1 | 22.2694 | 24.0699 | .6821 | .8907
MATHS2 | 22.0280 | 25.2710 | .6078 | .8961
MATHS4 | 21.9727 | 24.4372 | .7365 | .8871
MATHS5 | 22.4605 | 24.2235 | .6801 | .8909
MATHS6 | 22.0753 | 23.5423 | .7255 | .8873
MATHS7 | 22.0849 | 25.0777 | .6166 | .8955
MATHS8 | 22.8642 | 24.3449 | .6562 | .8927
MATHS9 | 22.0280 | 24.5812 | .7015 | .8895
MATHS10 | 22.4590 | 24.3859 | .6524 | .8930
Reliability Coefficients: N of Cases = 1355.0, N of Items = 9, Alpha = .9024
Alpha improves.
  • 65. Validity Validity is the extent to which an instrument actually measures what it purports to measure. In other words: does the test measure what it's meant to measure?
  • 66. 66 Validity • Validity is multifaceted and includes: – Comparing wording of the items with theory and expert opinion – Examining correlations with similar and dissimilar measures – Testing how well the measure predicts the future
  • 67. 67 Types of validity • Face validity • Content validity • Criterion validity – Concurrent validity – Predictive validity • Construct validity – Convergent validity – Discriminant validity
  • 68. 68 Face validity (low-level of importance overall) • Asks: "At face-value, do the questions appear to measure what the test purports to measure?" • Important for: Respondent buy-in • How assessed: Read the test items
  • 69. 69 Content validity (next level of importance) • Asks: "Are questions measuring the complete construct?" • Important for: Ensuring holistic assessment • How assessed: Diverse means of item generation (lit. review, theory, interviews, expert review)
  • 70. 70 Criterion validity (high importance) • Asks: "Can a test score predict real world outcomes?" • Important for: Test relevance and usefulness • How assessed: Concurrent validity: Correlate test scores with recognised external criteria such as performance appraisal scores Predictive validity: Correlate test scores with future outcome e.g., offender risk rating with recidivism
  • 71. 71 Construct validity (high importance) • Asks: Does the test assess the construct it purports to? ("the truth, the whole truth and nothing but the truth.") • Important for: Making inferences from operationalisations to theoretical constructs • How assessed: - Theoretical (is the theory about the construct valid?) - Statistical Convergent – correlation with similar measures Discriminant – not correlated with other constructs
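A rough sketch of how convergent and discriminant correlations might be examined in SPSS syntax (newscale, similar_measure and unrelated_measure are hypothetical variable names):
* Convergent validity: expect a substantial correlation with a similar measure.
* Discriminant validity: expect a weak correlation with a theoretically unrelated measure.
CORRELATIONS /VARIABLES=newscale similar_measure unrelated_measure.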
  • 73. Composite Scores Image source: https://commons.wikimedia.org/wiki/File:PEO-hamburger.svg
  • 74. 74 Composite scores Combine item-scores into an overall factor score which represents individual differences in each of the target constructs. The new composite score can then be used for: • Descriptive statistics and histograms • Correlations • As IVs and/or DVs in inferential analyses such as MLR and ANOVA
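Once composite scores exist, they can be explored with standard syntax; a minimal sketch assuming two hypothetical composites named career and selfdev:
* Descriptive statistics and a histogram for composite scores.
DESCRIPTIVES VARIABLES=career selfdev /STATISTICS=MEAN STDDEV MIN MAX.
FREQUENCIES VARIABLES=career /FORMAT=NOTABLE /HISTOGRAM.
* Correlation between composites (e.g., before using them as IVs/DVs).
CORRELATIONS /VARIABLES=career selfdev.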
  • 75. 75 Composite scores There are two ways of creating composite scores: • Unit weighting • Regression weighting
  • 76. 76 Unit weighting Average (or total) of item scores within a factor (each variable is equally weighted): X = mean(y1…yp) (Figure: four items each contribute an equal weight of .25 to the composite X.)
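A minimal SPSS sketch of unit weighting, assuming four hypothetical items y1 to y4 (mirroring the four equally weighted items in the figure):
* Unit-weighted composite as a mean, or alternatively as a total.
COMPUTE x_mean = MEAN(y1, y2, y3, y4).
COMPUTE x_total = SUM(y1, y2, y3, y4).
EXECUTE.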
  • 77. 77 Creating composite scores: Dealing with missing data To maximise the sample size, consider computing composite scores in a way that allows for some missing data.
  • 78. 78 Composite scores: Missing data SPSS syntax: Compute X = mean(v1, v2, v3, v4, v5, v6) Compute X = mean.4(v1, v2, v3, v4, v5, v6) – the .4 specifies a minimum number of items; if the minimum isn't available, the composite score will be missing. In this example, X will be computed for a case only when the case has responses to at least 4 of the 6 items. How many items can be missed? It depends on overall reliability. A rule of thumb: ● Allow 1 missing per 4 to 5 items ● Allow 2 missing per 6 to 8 items ● Allow 3+ missing per 9+ items A researcher may decide to be more or less conservative depending on the factors’ reliability, sample size, and the nature of the study.
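A runnable version of the syntax above (only the command terminators and EXECUTE are added; v1 to v6 are the slide's placeholder item names):
COMPUTE X = MEAN.4(v1, v2, v3, v4, v5, v6).
EXECUTE.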
  • 79. 79 Regression weighting (factor score regression weighting) The contribution of each item to the composite score is weighted to reflect responses to some items more than other items, e.g., X = .20*a + .19*b + .27*c + .34*d (Figure: items a to d loading on composite X with weights .20, .19, .27, and .34.) This is arguably more valid, but the advantage may be marginal, and it makes factor scores more difficult to compare between studies.
  • 80. 80 Regression weighting Two calculation methods: • Manual (use Compute – New variable name = MEAN.*(list of variable names separated by commas)) • Automatic (use Factor Analysis – Factor Scores – Save as variables – Regression)
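For the "Automatic" option, a hedged syntax sketch of saving regression-weighted factor scores as new variables (the item names, the two-factor criterion, and the extraction/rotation choices here are illustrative assumptions, not prescribed by the lecture):
FACTOR
  /VARIABLES=item1 item2 item3 item4 item5 item6 item7 item8
  /MISSING=LISTWISE
  /PRINT=INITIAL KMO EXTRACTION ROTATION
  /CRITERIA=FACTORS(2)
  /EXTRACTION=PAF
  /ROTATION=OBLIMIN
  /SAVE=REG(ALL).
* REG(ALL) appends standardised regression-weighted factor scores to the data file.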
  • 81. Regression weighting – SPSS data Data view: Data are standardised, centred around 0 Variable view: Variables auto-calculated through SPSS factor analysis
  • 83. 83 Writing up instrument development • Introduction –Review previous literature about the construct's underlying factors – consider both theory and research –Generate a research question e.g., “What are the underlying factors of X?” or make a hypothesis about the number of factors and what they will represent.
  • 84. 84 Writing up instrument development • Method –Materials/Instrumentation – summarise the design and development of the measures and the expected factor structure e.g., present a table of the expected factors and their operational definitions.
  • 85. 85 Writing up instrument development • Results –Factor analysis • Assumption testing • Extraction method & rotation • # of factors, with names and definitions • # of items removed and rationale • Item factor loadings & communalities • Factor correlations –Reliability for each factor –Composite scores for each factor –Correlations between factors
  • 86. 86 Writing up instrument development • Discussion –Theoretical underpinning – Was it supported by the data? What adaptations should be made to the theory? –Quality / usefulness of measure – Provide an objective, critical assessment, reflecting the measures' strengths and weaknesses –Recommendations for further improvement • Writing up a factor analysis –Download examples: http://goo.gl/fD2qby
  • 88. 88 Summary: What is psychometrics? 1. Science of psychological measurement 2. Goal: Validly measure individual psychosocial differences 3. Design and test psychological measures e.g., using 1. Factor analysis 2. Reliability and validity
  • 89. 89 Summary: Concepts & their measurement 1. Concepts name common elements 2. Hypotheses identify relations between concepts 3. Brainstorm indicators of a concept 4. Define the concept 5. Draft measurement items 6. Pre-test and pilot test 7. Examine psychometric properties 8. Redraft/refine and re-test
  • 90. 90 Summary: Measurement error 1. Deviation of measure from true score 2. Sources: 1. Non-sampling (e.g., paradigm, respondent bias, researcher bias) 2. Sampling (e.g., non-representativeness) 3. How to minimise: 1. Well-designed measures 2. Representative sampling 3. Reduce demand effects 4. Maximise response rate 5. Ensure administrative accuracy
  • 91. 91 Summary: Reliability 1. Consistency or reproducibility 2. Types 1. Internal consistency 2. Test-retest reliability 3. Rule of thumb 1. > .6 OK 2. > .8 Very good 4. Internal consistency 1. Split-half 2. Odd-even 3. Cronbach's Alpha
  • 92. 92 Summary: Validity 1. Extent to which a measure measures what it is intended to measure 2. Multifaceted 1. Compare with theory and expert opinion 2. Correlations with similar and dissimilar measures 3. Predicts future
  • 93. 93 Summary: Composite scores Ways of creating composite (factor) scores: 1. Unit weighting 1.Total of items or 2. Average of items (recommended for lab report) 2. Regression weighting 1. Each item is weighted by its importance to measuring the underlying factor (based on regression weights)
  • 94. 94 1. Introduction 1. Review constructs & previous structures 2. Generate research question or hypothesis 2. Method 1. Explain measures and their development 3. Results 1. Factor analysis 2. Reliability of factors 3. Descriptive statistics for composite scores 4. Correlations between factors 4. Discussion 1. Theory? / Measure? / Recommendations? Summary: Writing up instrument development
  • 95. 95 References 1. Allen, P. & Bennett, K. (2008). Reliability analysis (Ch. 15). In SPSS for the health & behavioural sciences (pp. 205-218). South Melbourne, Victoria, Australia: Thomson. 2. Bryman, A. & Cramer, D. (1997). Concepts and their measurement (Ch. 4). In Quantitative data analysis with SPSS for Windows: A guide for social scientists (pp. 53-68). Routledge. 3. DeCoster, J. (2000). Scale construction notes. http://www.stat-help.com/scale.pdf (pdf) 4. Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4(3), 272-299. 5. Fowler, F. (2002). Designing questions to be good measures. In Survey research methods (3rd ed., pp. 76-103). Thousand Oaks, CA: Sage. eReserve. 6. Howitt, D. & Cramer, D. (2005). Reliability and validity: Evaluating the value of tests and measures (Ch. 13). In Introduction to research methods in psychology (pp. 218-231). Harlow, Essex: Pearson. eReserve.
  • 96. 96 Open Office Impress ● This presentation was made using Open Office Impress. ● Free and open source software. ● http://www.openoffice.org/product/impress.html

Editor's Notes

  1. 7126/6667 Survey Research & Design in Psychology, Semester 1, 2017, University of Canberra, ACT, Australia. James T. Neill. Home page: http://en.wikiversity.org/wiki/Survey_research_and_design_in_psychology Lecture page: http://en.wikiversity.org/wiki/Psychometric_instrument_development Image source: http://commons.wikimedia.org/wiki/File:Soft_ruler.jpg Image author: Lite, http://commons.wikimedia.org/wiki/User:Lite Image license: GFDL 1.2+, CC-by-SA 3.0 unported Description: This lecture recaps the previous lecture on exploratory factor analysis, and introduces psychometrics, (fuzzy) concepts and their measurement (operationalisation), reliability (particularly internal consistency of multi-item measures), validity, and the creation of composite scores. This lecture is accompanied by a computer-lab based tutorial. http://en.wikiversity.org/wiki/Survey_research_and_design_in_psychology/Tutorials/Psychometrics
  2. Image source: http://commons.wikimedia.org/wiki/File:Information_icon4.svg License: Public domain
  3. Image source: http://commons.wikimedia.org/wiki/File:Information_icon4.svg License: Public domain See also http://en.wikiversity.org/wiki/Exploratory_factor_analysis/Data_analysis_tutorial
  4. Image source: http://commons.wikimedia.org/wiki/File:Information_icon4.svg License: Public domain See also http://en.wikiversity.org/wiki/Exploratory_factor_analysis/Data_analysis_tutorial
  5. Image source: http://commons.wikimedia.org/wiki/File:Information_icon4.svg License: Public domain See also http://en.wikiversity.org/wiki/Exploratory_factor_analysis/Data_analysis_tutorial
  6. Image source: http://commons.wikimedia.org/wiki/File:Information_icon4.svg License: Public domain See also http://en.wikiversity.org/wiki/Exploratory_factor_analysis/Data_analysis_tutorial
  7. Image source: http://commons.wikimedia.org/wiki/File:Information_icon4.svg License: Public domain See also http://en.wikiversity.org/wiki/Exploratory_factor_analysis/Data_analysis_tutorial
  8. Image source: http://commons.wikimedia.org/wiki/File:Information_icon4.svg License: Public domain See also http://en.wikiversity.org/wiki/Exploratory_factor_analysis/Data_analysis_tutorial
  9. Image source: http://commons.wikimedia.org/wiki/File:Information_icon4.svg License: Public domain See also http://en.wikiversity.org/wiki/Exploratory_factor_analysis/Data_analysis_tutorial
  10. Image source: http://commons.wikimedia.org/wiki/File:Information_icon4.svg License: Public domain See also http://en.wikiversity.org/wiki/Exploratory_factor_analysis/Data_analysis_tutorial
  11. Image source: http://commons.wikimedia.org/wiki/File:Information_icon4.svg License: Public domain See also http://en.wikiversity.org/wiki/Exploratory_factor_analysis/Data_analysis_tutorial
  12. Image source: http://commons.wikimedia.org/wiki/File:Information_icon4.svg License: Public domain See also http://en.wikiversity.org/wiki/Exploratory_factor_analysis/Data_analysis_tutorial
  13. Image source: https://commons.wikimedia.org/wiki/File:Pleiades_large.jpg Image author: NASA, ESA, AURA/Caltech, Palomar Observatory. The science team consists of: D. Soderblom and E. Nelan (STScI), F. Benedict and B. Arthur (U. Texas), and B. Jones (Lick Obs.) License: https://en.wikipedia.org/wiki/Public_domain
  14. Image source:http://commons.wikimedia.org/wiki/File:Soft_ruler.jpg Image author: Lite, http://commons.wikimedia.org/wiki/User:Lite Image license: GFDL 1.2+, CC-by-SA 3.0 unported
  15. Image source: https://www.flickr.com/photos/lenore-m/441579944/in/set-72157600039854497 Image author: Lenore Edman Image license: CC by 2.0, https://creativecommons.org/licenses/by/2.0/
  16. Bryman & Cramer, “Concepts & their Measurement” (1997): "Concepts form a linchpin in the process of social research. Hypotheses contain concepts which are the products of our reflections on the world. Concepts express common elements in the world to which we give a name." (p. 53)
  17. Bryman & Cramer, “Concepts & their Measurement” (1997): “Once formulated, a concept and the concepts with which it is purportedly associated, such as social class and authoritarianism, will need to be operationally defined, in order for systematic research to be conducted in relation to it...”
  18. Bryman & Cramer, “Concepts & their Measurement” (1997): “Once formulated, a concept and the concepts with which it is purportedly associated, such as social class and authoritarianism, will need to be operationally defined, in order for systematic research to be conducted in relation to it...”
  19. Bryman & Cramer, “Concepts & their Measurement” (1997, p. ?) Image (left) source: http://www.flickr.com/photos/iliahi/408971482/ By Maui in Vermont: http://www.flickr.com/photos/iliahi License: CC-by-A 2.0 Image (right) source: http://commons.wikimedia.org/wiki/File:3DTomo.jpg By Tomograph: http://de.wikipedia.org/wiki/Benutzer:Tomograph License: GFDL
  20. Image source: http://commons.wikimedia.org/wiki/Image:Die_bone.jpg Creative Commons Attribution 3.0 Unported
  21. This is a cyclical or iterative process.
  22. Image source: James Neill, Creative Commons Attribution-Share Alike 2.5 Australia, http://creativecommons.org/licenses/by-sa/2.5/au/
  23. Image source: James Neill, Creative Commons Attribution 3.0; Adapted from original image source: http://www.uwo.ca/fhs/nursing/laschinger/image002.gif (Retrieved 20 March, 2007). As of 25 March, 2008, the original image could no longer be found. It was based on research by Laschinger and colleagues on nurse empowerment.
  24. Image source: Figure 4.2, Bryman & Cramer (1997)
  25. Image source: http://commons.wikimedia.org/wiki/File:Noise_effect.svg Image license: GFDL 1.2+
  26. Image source: http://www.freestockphotos.biz/stockphoto/17079 Image author: National Cancer Institute, https://visualsonline.cancer.gov/ Image license: Public domain
  27. Image source: Unknown. Paradigm (e.g., assumptions, focus, collection method) Personal researcher bias (conscious & unconscious) Sampling (e.g., representativeness of sample) Non-sampling (e.g., non-response, inaccurate response due to unreliable measurements, misunderstanding, social desirability, faking bad, researcher expectancy, administrative error)
  28. “The lower the precision, the more subjects you'll need in your study to make up for the "noise" in your measurements. Even with a larger sample, noisy data can be hard to interpret. And if you are an applied scientist in the business of testing and assessing clients, you need special care when interpreting results of noisy tests.”
  29. Image source: http://commons.wikimedia.org/wiki/File:Information_icon4.svg License: Public domain Image source: http://www.flickr.com/photos/ajawin/2614485186/ By akawin http://www.flickr.com/photos/ajawin/ License: Creative Commons by Attribution 2.0 http://creativecommons.org/licenses/by/2.0/deed.en
  30. Psychometrics is the field of study concerned with the theory and technique of educational and psychological measurement, which includes the measurement of knowledge, abilities, attitudes, and personality traits. The field is primarily concerned with the study of differences between individuals and between groups of individuals. http://en.wikipedia.org/wiki/Psychometrics
  31. Psychometrics is the field of study concerned with the theory and technique of educational and psychological measurement, which includes the measurement of knowledge, abilities, attitudes, and personality traits. The field is primarily concerned with the study of differences between individuals and between groups of individuals. http://en.wikipedia.org/wiki/Psychometrics
  32. http://www.nytimes.com/2006/05/05/education/05testers.html In Australia, probably the largest collection of psychometricians is at the Australian Council for Educational Research: http://www.acer.edu.au/
  33. Also item response modeling
  34. Image source: http://commons.wikimedia.org/wiki/File:Information_icon4.svg License: Public domain Image source: http://www.flickr.com/photos/psd/17433783/in/photostream By psd http://www.flickr.com/photos/psd/ License: Creative Commons by Attribution 2.0 http://creativecommons.org/licenses/by/2.0/deed.en
  35. Internal consistency (single administration) and test-retest reliability (multiple administrations) are based on classical test theory. A more advanced/refined reliability technique is based on Item Response Theory, which involves examining the extent to which each item discriminates between individuals with high/low overall scores. For more info see: http://en.wikipedia.org/wiki/Reliability_(psychometric)
  36. Image source: http://www.socialresearchmethods.net/kb/relandval.php Reliability is about the consistency of a measure. Validity is about whether a test actually measures what it's meant to measure. Validity requires reliability, but reliability alone is not sufficient for validity. See the more complex discussion of this at: http://en.wikiversity.org/wiki/Reliability_and_validity
  37. Image source: https://commons.wikimedia.org/wiki/File:Aiga_carrental_cropped.svg Image author: Paucobot, https://commons.wikimedia.org/wiki/User:Paucabot Image license: CC-by-3.0 unported, https://creativecommons.org/licenses/by/3.0/deed.en
  38. Internal consistency (single administration) and test-retest reliability (multiple administrations) are based on classical test theory. A more advanced/refined reliability technique is based on Item Response Theory, which involves examining the extent to which each item discriminates between individuals with high/low overall scores. For more info see: http://en.wikipedia.org/wiki/Reliability_(psychometric)
  39. Image source: Unknown Reliable; not valid Reliable; valid
  40. Internal consistency (single administration) and test-retest reliability (multiple administrations) are based on classical test theory. A more advanced/refined reliability technique is based on Item Response Theory, which involves examining the extent to which each item discriminates between individuals with high/low overall scores. For more info see: http://en.wikipedia.org/wiki/Reliability_(psychometric)
  41. Image source: Table 7, Fabrigar et al. (1999).
  42. Reference: Howitt & Cramer (2005), p. 221
  43. Reference: Howitt & Cramer (2005), p. 221
  44. Reference: Howitt & Cramer (2005), p. 221
  46. This example is from Francis 6.1
  47. This example is from Francis 6.1
  48. This example is from Francis 6.1 Image source: James Neill, Creative Commons Attribution-Share Alike 2.5 Australia, http://creativecommons.org/licenses/by-sa/2.5/au/ SPSS – Analyze – Scale – Reliability Analysis
  49. This example is from Francis 6.1 Image source: James Neill, Creative Commons Attribution-Share Alike 2.5 Australia, http://creativecommons.org/licenses/by-sa/2.5/au/ SPSS – Analyze – Scale – Reliability Analysis
  50. This example is from Francis 6.1 Image source: James Neill, Creative Commons Attribution-Share Alike 2.5 Australia, http://creativecommons.org/licenses/by-sa/2.5/au/ SPSS – Analyze – Scale – Reliability Analysis
  51. This example is from Francis 6.1
  52. This example is from Francis 6.1
  53. This example is from Francis 6.1 Image source: James Neill, Creative Commons Attribution-Share Alike 2.5 Australia, http://creativecommons.org/licenses/by-sa/2.5/au/ Life Effectiveness Questionnaire – http://wilderdom.com/leq.html For the lab report, students won't have test-retest reliability, but they can include a column that indicates the number of items.
  54. Image source: Unknown Reliable; valid http://en.wikipedia.org/wiki/Construct_Validity
  55. From http://www.socialresearchmethods.net/kb/measval.htm
  56. From http://www.socialresearchmethods.net/kb/measval.htm
  57. Adapted from: http://www.psyasia.com/supportsuite/index.php?_m=knowledgebase&_a=viewarticle&kbarticleid=85&nav=0 Prima facie, the extent to which an item is judged to reflect the target construct. E.g., a test of ability to add two-digit numbers should cover the full range of combinations of digits. A test with only one-digit numbers, or only even numbers, would (on the face of it) not have good coverage of the content domain.
  58. Adapted from: http://www.psyasia.com/supportsuite/index.php?_m=knowledgebase&_a=viewarticle&kbarticleid=85&nav=0 Extent to which test content covers a representative sample of the domain to be measured. Compare the test with: existing literature, expert panels, qualitative interviews / focus groups with the target sample.
  59. Adapted from: http://www.psyasia.com/supportsuite/index.php?_m=knowledgebase&_a=viewarticle&kbarticleid=85&nav=0 Involves the correlation between the test and a criterion variable (or variables) taken as representative of the construct. Concurrent validity: correlation between the measure and other recognised measures of the target construct. Predictive validity: extent to which a measure predicts something that it theoretically should be able to predict. The criterion refers to the “true” score; criterion validity refers to measures of the relationship between the criterion variable and the measured variable. http://en.wikipedia.org/wiki/Concurrent_validity http://www.socialresearchmethods.net/kb/measval.htm
  60. Adapted from: http://www.psyasia.com/supportsuite/index.php?_m=knowledgebase&_a=viewarticle&kbarticleid=85&nav=0 The totality of evidence about whether a particular operationalisation of a construct adequately represents what is intended. Not distinct from support for the substantive theory of the construct. Involves empirical and theoretical support. Includes: statistical analyses of the internal structure of the test, including the relationships between responses to different test items (internal consistency), and relationships between the test and measures of other constructs. Convergent validity: extent to which a measure correlates with measures with which it theoretically should be associated. Discriminant validity: extent to which a measure does not correlate with measures with which it theoretically should not be associated.
  61. Image source: http://www.socialresearchmethods.net/kb/considea.php Image license: ©2006, William M.K. Trochim, All Rights Reserved
  62. Image source: https://commons.wikimedia.org/wiki/File:PEO-hamburger.svg Image author: PhantomOpenEmoji, https://github.com/break24/PhantomOpenEmoji/graphs/contributors Image license: CC by 3.0 unported, https://creativecommons.org/licenses/by/3.0/deed.en
  66. compute X = mean(v1, v2, v3, v4, v5, v6) (note: a missing value will be returned for a case if 1 or more of its items are missing data); compute X = mean.4(v1, v2, v3, v4, v5, v6) (note: in this case a missing value will be returned for a case if 3 or more of its items are missing data – the “.4” means calculate a mean using data from 4 or more items, otherwise make X missing)
  67. X = r1*y1 + … + rp*yp. Regression weighting will also include small weights for non-target items.
  68. Image source: James Neill, Creative Commons Attribution-Share Alike 2.5 Australia, http://creativecommons.org/licenses/by-sa/2.5/au/
  69. Image sources: James Neill, Creative Commons Attribution-Share Alike 2.5 Australia, http://creativecommons.org/licenses/by-sa/2.5/au/
  70. Image source: http://commons.wikimedia.org/wiki/File:Information_icon4.svg License: Public domain Image source: http://www.flickr.com/photos/mezone/21970578/ By http://www.flickr.com/photos/mezone/ License: Creative Commons by Attribution 2.0 http://creativecommons.org/licenses/by/2.0/deed.en