Questionnaire and Instrument Validity
1
Dr Mahmoud Danaee
Associate Prof.
Dr Anne Yee
2
What does this resemble?
3
Rorschach test
• At the end of the test, the tester says …
–you need therapy
–or you can't work for this company
4
Psychological Testing
• Occurs widely …
– in personnel selection
– in clinical settings
– in education
What constitutes a good test?
Validity and Reliability
• Validity: How well does the measure or
design do what it purports to do?
• Reliability: How consistent or stable is
the instrument?
–Is the instrument dependable?
[Overview diagram] Validity divides into a logical branch (Face, Content) and a statistical branch (Criterion: Concurrent, Predictive; Construct: Convergent, Divergent/Discriminant). Reliability covers Consistency and Objectivity.
Face Validity
– Infers that a test is valid at face value
– It is clear that the test measures what it is supposed to measure
– As a check on face validity, test/survey items are sent to experts to obtain suggestions for modification.
– Because of its vagueness and subjectivity, psychometricians abandoned this concept long ago.
Content Validity
– Infers that the test measures all aspects contributing
to the variable of interest
– Face validity Vs Content validity:
• Face validity can be established by one person
• Content validity should be checked by a panel,
and thus usually it goes hand in hand with inter-
rater reliability (Kappa!)
Example:
• Computer literacy includes skills in
operating system, word processing,
spreadsheet, database, graphics, internet,
and many others.
• It is difficult to administer a test covering
all aspects of computing. Therefore, only
several tasks are sampled from the
universe of computer skills.
• A test of computer literacy should be
written or reviewed by computer science
professors or senior programmers in the IT
industry, because it is assumed that
computer scientists know what is important
in their own discipline.
Overall:
A logically valid test is simply one that appears to
measure the right variable in its entirety.
That judgment is subjective!
The Content Validity Index
Content validity has been defined as follows:
• (1) ‘‘...the degree to which an instrument has an appropriate
sample of items for the construct being measured’’ (Polit &
Beck, 2004, p. 423);
• (2) ‘‘...whether or not the items sampled for inclusion on the
tool adequately represent the domain of content addressed
by the instrument’’ (Waltz, Strickland, & Lenz, 2005, p. 155);
• (3) ‘‘...the extent to which an instrument adequately samples the
research domain of interest when attempting to measure
phenomena’’ (Wynd, Schmidt, & Schaefer, 2003, p. 509).
Two types of CVIs.
• content validity of individual items
• content validity of the overall scale.
• Researchers use I-CVI information to guide them in revising,
deleting, or substituting items
• I-CVIs tend only to be reported in methodological studies that
focus on descriptions of the content validation process
• The index most often reported in scale development studies is the scale-level CVI (S-CVI)
CVI: the degree to which an instrument has an appropriate sample of items for the construct being measured
• I-CVI: content validity of individual items
• S-CVI: content validity of the overall scale
– S-CVI/UA: proportion of items on the scale that achieve a relevance rating of 3 or 4 by all the experts
– S-CVI/Ave: average of the I-CVIs
Example expert-review form: each question (Q1–Q5) is rated ① ② ③ ④ on four criteria, with space for comments:
• Does each item in the instrument have consistency?
• Are the items representative of concepts related to the dissertation topic?
• Are the items relevant to concepts related to the dissertation topic?
• Are the items clear in terms of wording?
Ratings
1 = not relevant 2 = somewhat relevant
3 = quite relevant 4 = highly relevant
I-CVI, item-level content validity index
S-CVI, content validity index for the scale
The recommended minimum acceptable standard for the S-CVI is .80.
If the I-CVI is higher than 79%, the item is appropriate.
If it is between 70% and 79%, the item needs revision.
If it is less than 70%, the item is eliminated.
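As a worked illustration (not from the slides), the indices above can be computed from a matrix of expert relevance ratings; the ratings, item count, and variable names below are hypothetical, following the 3-or-4 = relevant convention:

```python
import numpy as np

# Hypothetical relevance ratings (1-4) from 5 experts (columns) on 6 items (rows)
ratings = np.array([
    [4, 4, 3, 4, 4],
    [3, 4, 4, 3, 4],
    [4, 2, 3, 4, 3],
    [2, 3, 2, 3, 2],
    [4, 4, 4, 4, 4],
    [3, 3, 4, 2, 4],
])

relevant = ratings >= 3            # a rating of 3 or 4 counts as "relevant"
i_cvi = relevant.mean(axis=1)      # I-CVI: proportion of experts rating each item 3 or 4
s_cvi_ave = i_cvi.mean()           # S-CVI/Ave: average of the I-CVIs
s_cvi_ua = (i_cvi == 1.0).mean()   # S-CVI/UA: proportion of items judged relevant by ALL experts

for item, value in enumerate(i_cvi, start=1):
    verdict = "appropriate" if value > 0.79 else "revise" if value >= 0.70 else "eliminate"
    print(f"Item {item}: I-CVI = {value:.2f} -> {verdict}")
print(f"S-CVI/Ave = {s_cvi_ave:.2f}, S-CVI/UA = {s_cvi_ua:.2f}")
```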
The kappa statistic is a consensus index of inter-rater agreement that adjusts for chance agreement. It is an important supplement to the CVI because kappa provides information about the degree of agreement beyond chance.
Evaluation criteria for kappa values:
above 0.74 = excellent
between 0.60 and 0.74 = good
between 0.40 and 0.59 = fair
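A minimal sketch of that chance adjustment: one common formulation of the modified kappa paired with the I-CVI assumes each expert has a 0.5 probability of calling an item relevant by chance; the example numbers are hypothetical:

```python
from math import comb

def modified_kappa(i_cvi: float, n_experts: int) -> float:
    # Probability that exactly A of N experts agree by chance, assuming a
    # 50/50 chance per expert; A is recovered from the I-CVI.
    n_agree = round(i_cvi * n_experts)
    p_chance = comb(n_experts, n_agree) * 0.5 ** n_experts
    return (i_cvi - p_chance) / (1 - p_chance)

# Example: 4 of 5 experts rate an item relevant, so I-CVI = 0.80
kappa = modified_kappa(0.80, n_experts=5)
rating = "excellent" if kappa > 0.74 else "good" if kappa >= 0.60 else "fair" if kappa >= 0.40 else "poor"
print(f"adjusted kappa = {kappa:.2f} ({rating})")
```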
[Overview diagram repeated as a section divider before Criterion validity.]
Criterion Validity
• This type of validity measures the ability of an instrument to predict future outcomes.
• Validity is usually determined by comparing two instruments' ability to predict a similar outcome, with a single variable being measured.
• There are two major types of criterion validity: predictive and concurrent.
24
Criterion validity
• A test has high criterion validity if
– it correlates highly with some external benchmark (concurrent)
– it correlates well with outcome criteria (predictive)
• E.g.
– If your scale reported that you lost 30 pounds, you would expect that your clothes would also feel looser.
‘Warwick spider phobia questionnaire’
positive correlation with SPQ
Concurrent Criterion Validity
• Concurrent criterion validity is used when
the two instruments are used to measure the
same event at the same time.
• Example:
Predictive Criterion Validity
• Predictive validity is used when the instrument is administered, time is allowed to pass, and the results are then measured against another outcome.
• Example:
Criterion validity
• When the focus of the test is on criterion validity, we draw an inference from test scores to performance. A high score on a valid test indicates that the test taker has met the performance criteria.
• Regression analysis can be applied to establish criterion validity. An independent variable is used as the predictor variable and a dependent variable as the criterion variable. The correlation coefficient between them is called the validity coefficient.
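A minimal sketch of estimating a validity coefficient this way, with invented test and criterion scores (all names and data are illustrative):

```python
import numpy as np
from scipy import stats

# Hypothetical data: a selection-test score (predictor) and a later job
# performance rating (criterion) for the same ten people
test_score = np.array([55, 62, 70, 48, 80, 66, 73, 58, 90, 77])
performance = np.array([3.1, 3.4, 3.9, 2.8, 4.5, 3.6, 4.0, 3.2, 4.8, 4.2])

# The validity coefficient is the correlation between predictor and criterion
r, p_value = stats.pearsonr(test_score, performance)
print(f"validity coefficient r = {r:.2f} (p = {p_value:.3f})")

# Equivalently, a simple regression treats the test as predictor and the
# criterion as outcome; the slope gives the prediction rule
slope, intercept, r_reg, p_reg, stderr = stats.linregress(test_score, performance)
print(f"performance ~ {intercept:.2f} + {slope:.3f} * test_score")
```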
How is Criterion Validity Measured?
• A statistical measure, the correlation coefficient, tells the degree to which the instrument is valid against the measured criterion.
• What does it look like in an equation?
– The symbol "r" denotes the correlation coefficient.
– A positive "r" value shows that scores on the two instruments rise and fall together; a negative "r" value shows an inverse relationship.
– The larger the absolute value of "r", the stronger the relationship.
[Figure: example correlations illustrating predictive validity and concurrent validity.]
• As a rule of thumb, for absolute value of r:
• 0.00-0.19: very weak
• 0.20-0.39: weak
• 0.40-0.59: moderate
• 0.60-0.79: strong
• 0.80-1.00: very strong.
[Overview diagram repeated as a section divider before Construct validity.]
Construct validity
• Measures things that are in our theory of a domain.
• The construct is sometimes called a latent variable
– You can't directly observe the construct
– You can only measure its surface manifestations
– Because it is concerned with abstract, theoretical constructs, construct validity is also known as theoretical validity
32
What are Latent Variables?
• Most/all variables in the social world are not directly
observable.
• This makes them ‘latent’ or hypothetical constructs.
• We measure latent variables with observable
indicators, e.g. questionnaire items.
• We can think of the variance of an observable
indicator as being partially caused by:
– The latent construct in question
– Other factors (error)
• I cringe when I have to go to math class.
• I am uneasy about going to the board in
a math class.
• I am afraid to ask questions in math
class.
• I am always worried about being called
on in math class.
• I understand math now, but I worry that
it's going to get really difficult soon.
Math
anxiety
• Specifying formative versus reflective constructs is a
critical preliminary step prior to further statistical
analysis. Specification follows these guidelines:
• Formative
– Direction of causality is from measure to construct
– No reason to expect the measures are correlated
– Indicators are not interchangeable
• Reflective
– Direction of causality is from construct to measure
– Measures expected to be correlated
– Indicators are interchangeable
– An example of formative versus reflective constructs is given in the
figure below.
Factor model
• A factor model identifies the relationship between observed items and latent factors. For example, when a psychologist wants to study the causal relationship between math anxiety and job performance, he/she first has to define the constructs "math anxiety" and "job performance." To accomplish this step, the psychologist needs to develop items that measure each defined construct.
Construct, dimension,
subscale, factor, component
• This construct has eight dimensions (e.g.
Intelligence has eight aspects)
• This scale has eight subscales (e.g. the survey
measures different but weakly related things)
• The factor structure has eight
factors/components (e.g. in factor
analysis/PCA)
Exploratory Factor Analysis
• (EFA) is a statistical approach to determining the
correlation among the variables in a dataset.
• This type of analysis provides a factor structure (a
grouping of variables based on strong correlations).
• EFA is good for detecting "misfit" variables. In
general, an EFA prepares the variables to be used for
cleaner structural equation modeling. An EFA
should always be conducted for new datasets.
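As a concrete sketch (an assumption, not part of the slides), an EFA could be run with the third-party factor_analyzer package on items stored as columns of a pandas DataFrame; the file name and number of factors are hypothetical:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Respondents in rows, questionnaire items in columns (hypothetical file)
df = pd.read_csv("survey_items.csv")

# Fit an EFA with an oblique (promax) rotation and three factors
fa = FactorAnalyzer(n_factors=3, rotation="promax")
fa.fit(df)

# Loadings show which items group together on each factor
loadings = pd.DataFrame(fa.loadings_, index=df.columns,
                        columns=[f"Factor{i + 1}" for i in range(3)])
print(loadings.round(2))
```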
The Kaiser-Meyer-Olkin measure of sampling adequacy tests whether the partial correlations among variables are small.
KMO Statistics
Marvelous: .90s
Meritorious: .80s
Middling: .70s
Mediocre: .60s
Miserable: .50s
Unacceptable: <.50
Bartlett’s Test of Sphericity
• Tests hypothesis that correlation matrix is an identity matrix.
• Diagonals are ones
• Off-diagonals are zeros
• A significant result (Sig. < 0.05) indicates matrix is not an
identity matrix; i.e., the variables do relate to one another enough
to run a meaningful EFA.
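Both adequacy checks can be computed before running the EFA; a sketch using the same (assumed) factor_analyzer package and hypothetical item data:

```python
import pandas as pd
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

df = pd.read_csv("survey_items.csv")   # hypothetical item data, respondents x items

# KMO: per-item values plus an overall measure of sampling adequacy
kmo_per_item, kmo_overall = calculate_kmo(df)
print(f"Overall KMO = {kmo_overall:.2f}")   # .60s mediocre, .80s meritorious, .90s marvelous

# Bartlett's test of sphericity: H0 is that the correlation matrix is an identity matrix
chi_square, p_value = calculate_bartlett_sphericity(df)
print(f"Bartlett chi-square = {chi_square:.1f}, p = {p_value:.4f}")  # want p < .05
```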
Anti-image
• The anti-image correlation matrix contains the negatives of the partial correlation
coefficients, and the anti-image covariance matrix contains the negatives of the
partial covariances.
• In a good factor model, most of the off-diagonal elements will be small. The
measure of sampling adequacy for a variable is displayed on the diagonal of the
anti-image correlation matrix.
Communalities
• A communality is the extent to which an item correlates
with all other items.
• Higher communalities are better. If communalities for a
particular variable are low (between 0.0-0.4), then that
variable will struggle to load significantly on any factor.
• In the communalities table produced by the analysis, look for low values in the
"Extraction" column. Low values indicate candidates for
removal after you examine the pattern matrix.
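Continuing the hedged factor_analyzer sketch, communalities from a fitted EFA can be screened against the 0.4 guideline above:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

df = pd.read_csv("survey_items.csv")   # hypothetical item data, as before
fa = FactorAnalyzer(n_factors=3, rotation="promax")
fa.fit(df)

communalities = pd.Series(fa.get_communalities(), index=df.columns)
low_items = communalities[communalities < 0.4]   # candidates for removal
print(communalities.round(2))
print("Low-communality items:", list(low_items.index))
```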
Parallel analysis
• Parallel analysis is a method for determining the number of components or factors to retain from PCA or factor analysis. Essentially, the program works by creating a random dataset with the same numbers of observations and variables as the original data.
https://www.statstodo.com
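The slide describes the idea only in outline; a minimal numpy sketch of parallel analysis for PCA, keeping components whose observed eigenvalue exceeds the average eigenvalue of random data of the same shape (all names and data are illustrative):

```python
import numpy as np

def parallel_analysis(data: np.ndarray, n_iter: int = 100, seed: int = 0) -> int:
    # Keep components whose eigenvalue beats the mean eigenvalue of random data
    rng = np.random.default_rng(seed)
    n_obs, n_vars = data.shape
    observed = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]

    random_eigs = np.empty((n_iter, n_vars))
    for i in range(n_iter):
        random_data = rng.standard_normal((n_obs, n_vars))
        random_eigs[i] = np.linalg.eigvalsh(np.corrcoef(random_data, rowvar=False))[::-1]

    return int(np.sum(observed > random_eigs.mean(axis=0)))

# Fake data: 300 respondents, 10 items
fake = np.random.default_rng(1).standard_normal((300, 10))
print("Components to retain:", parallel_analysis(fake))
```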
Factor analysis for dichotomous variables
• Use the FACTOR software, which can simultaneously run parallel analysis for binary data.
Establishing construct validity
• Convergent validity
– Agrees with other measures of the same thing
• Divergent/Discriminant validity
– Different tests measure different things
– Does the test have the “ability to discriminate”?
– (Campbell & Fiske, 1959)
55
Construct validity
Construct validity is the extent to which a set of measured items actually
reflects the theoretical latent construct those items are designed to measure.
Thus, it deals with the accuracy of measurement.
Construct validity is made up of TWO important components:
1) Convergent validity: the items that are indicators of a specific construct
should converge, or share a high proportion of variance in common. The ways
to estimate the relative amount of convergent validity among item measures
are described below (factor loadings, AVE, construct reliability).
2) Discriminant validity:
the extent to which a construct is truly distinct from other
constructs. To establish discriminant validity, the AVE for two
factors should be greater than the square of the correlation
between the two factors.
• Discriminant validity can be tested by examining the AVE for
each construct against the squared correlations (shared variance)
between that construct and all other constructs in the model.
• A construct has adequate discriminant validity if its AVE
exceeds the squared correlations with the other constructs
(Fornell & Larcker, 1981; Hair et al., 2006).
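A small sketch of that Fornell-Larcker comparison, with made-up AVE values and an invented inter-construct correlation:

```python
# Hypothetical values for two constructs A and B
ave_a, ave_b = 0.62, 0.55     # average variance extracted for each construct
corr_ab = 0.48                # estimated correlation between the constructs

shared_variance = corr_ab ** 2   # squared correlation = shared variance
supported = ave_a > shared_variance and ave_b > shared_variance
print(f"shared variance = {shared_variance:.2f}; discriminant validity supported: {supported}")
```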
Factor Loading:
at a minimum, all factor loadings should be statistically significant.
A good rule of thumb is that standardized loading estimates should
be .5 or higher, and ideally .7 or higher.
Average Variance Extracted (AVE):
the average squared factor loading. An AVE of 0.5 or higher is a
good rule of thumb suggesting adequate convergence. An AVE less
than .5 indicates that, on average, more error remains in the items
than variance explained by the latent factor structure imposed on the
measure (Hair et al., 2006, p. 777).
Construct Reliability: construct reliability should be .7 or higher to
indicate adequate convergence or internal consistency.
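A hedged sketch of how AVE and construct (composite) reliability are commonly computed from standardized loadings; the loadings below are invented:

```python
import numpy as np

# Hypothetical standardized loadings for one reflective construct
loadings = np.array([0.72, 0.81, 0.68, 0.75])
error_var = 1 - loadings ** 2                    # per-item error variance

ave = np.mean(loadings ** 2)                     # average squared loading
cr = loadings.sum() ** 2 / (loadings.sum() ** 2 + error_var.sum())  # composite reliability

print(f"AVE = {ave:.2f} (aim for .50+), CR = {cr:.2f} (aim for .70+)")
```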
Structural Equation Modeling (SEM) builds up in stages: the individual model (first-order CFA), then the measurement model, then the structural model.
Developing Assessments
• Purpose: What are you trying to measure?
• Review: What assessments do you already have that purport to measure this?
• Purchase / Develop: If necessary, consider commercial assessments or create a new assessment.
Considerations
• Using what you already have
– Is it carefully aligned to your purpose?
– Is it carefully matched to your purpose?
– Do you have the funds (for assessment, equipment, training)?
• Developing a new assessment
– Do you have the in-house content knowledge?
– Do you have the in-house assessment knowledge?
– Does your team have time for development?
– Does your team have the knowledge and time needed for proper scoring?
– Identify the goal of your questionnaire.
– What kind of information do you want to gather with your questionnaire?
– What is your main objective?
– Is a questionnaire the best way to go about collecting this information?
Adopting an Instrument
• Adopting an instrument is
quite simple and requires
very little effort. Even when
an instrument is adopted,
though, there still might be
a few modifications that
are necessary
Adapting an Instrument
• Adapting an instrument
requires more substantial
changes than adopting an
instrument. In this situation,
the researcher follows the
general design of another
instrument but adds items,
removes items, and/or
substantially changes the
content of each item
• Whenever possible, it is best for an instrument to be adopted.
• When this is not possible, the next best option is to adapt an
instrument.
• However, if there are no other instruments available, then the
last option is to develop an instrument.
Step              Type of validity         Development  Adaptation  Adoption
Pretest           Logical: Face                 +           +/-        +/-
                  Logical: Content              +           +          +/-
Pilot/main study  Criterion: Concurrent         +           +          -
                  Criterion: Predictive         +           +          -
                  Construct: Convergent         +           +          +
                  Construct: Divergent          +           +          +
Reliability
Types of Reliability
• Test-Retest Reliability: Degree of temporal stability of the
instrument.
– Assessed by having instrument completed by same
people during two different time periods.
• Alternate-Forms Reliability: Degree of relatedness of
different forms of test.
– Used to minimize inflated reliability correlations due to
familiarity with test items.
Types of Reliability (cont.)
• Internal-Consistency Reliability: Overall degree of
relatedness of all test items or raters.
– Also called reliability of components.
• Item-to-Item Reliability: The reliability of any single
item on average.
• Judge-to-Judge Reliability: The reliability of any single
judge on average.
• Cronbach's alpha is used to evaluate the internal consistency
of observed items; factor analysis is then applied to
extract latent constructs from these consistent
observed variables.
• A value above 0.90 suggests the questions are asking the same thing.
• 0.7 to 0.9 is the acceptable range.
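For reference, a minimal sketch of Cronbach's alpha computed straight from its definition; the Likert responses are hypothetical:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    # items: respondents in rows, scale items in columns
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Fake Likert responses: 8 respondents x 4 items
scores = np.array([
    [4, 4, 5, 4],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
    [2, 3, 2, 2],
    [4, 5, 4, 4],
    [3, 2, 3, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```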
Remember!
An assessment that is highly reliable is
not necessarily valid. However, for an
assessment to be valid, it must also be
reliable.
Improving Validity & Reliability
• Ensure questions are based on standards
• Ask purposeful questions
• Ask concrete questions
• Use time periods based on importance of the questions
• Use conventional language
• Use complete sentences
• Avoid abbreviations
• Use shorter questions
Overall Cronbach Coefficient Alpha
• One may argue that when a high Cronbach
Alpha indicates a high degree of internal
consistency, the test or the survey must be
uni-dimensional rather than multi-
dimensional. Thus, there is no need to further
investigate its subscales. This is a common
misconception.
Performing the Pilot test
• A pilot test involves conducting
the survey on a small,
representative set of
respondents in order to reveal
questionnaire errors before the
survey is launched.
• It is important to run the pilot
test on respondents that are
representative of the target
population to be studied.
Cronbach's alpha measures the
intercorrelations among test items and is
thus known as an internal consistency
estimate of the reliability of test scores.
Test-retest reliability refers to the degree
to which test results are consistent over
time. In order to measure test-retest
reliability, we must first give the
same test to the same individuals on two
occasions and correlate the scores
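A minimal sketch of that test-retest correlation, with hypothetical total scores from two administrations of the same instrument:

```python
import numpy as np
from scipy import stats

# Hypothetical total scores for the same eight people at two time points
time1 = np.array([24, 30, 27, 35, 22, 31, 28, 26])
time2 = np.array([25, 29, 28, 34, 23, 30, 27, 27])

r, p = stats.pearsonr(time1, time2)
print(f"test-retest reliability r = {r:.2f} (p = {p:.3f})")
```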
Thank you
Dr Mahmoud Danaee
mdanaee@um.edu.my
Senior Visiting Research Fellow, Academic Enhancement and Leadership Development Center (ADeC)
Associate Prof. Dr Anne Yee
annyee17@um.edu.my
Addiction psychiatrist
Department of Psychological Medicine, University of Malaya Center of Addiction Science (UMCAS)