2. • Instrument
– “general term that researchers use for a measurement
device (survey, test, questionnaire, etc.)”
• Validity
– “The soundness or appropriateness of a test or
instrument in measuring what it is designed to
measure” (Vincent 1999)
– “Degree to which a test or instrument measures what it
purports to measure” (Thomas & Nelson 1996)
3. • Validity explains
– “How well the collected data covers the actual
area of investigation”
(Ghauri and Gronhaug, 2005)
• Validity basically means
– “measure what is intended to be measured”
(Field, 2005)
4. • The validity of a scale may be defined as the
extent to which differences in observed scale
scores reflect true differences among objects
on the characteristic being measured, rather
than systematic or random error.
• Perfect validity requires that there be no
measurement error
5. Types of Experimental Validity
1. Internal validity
– the extent to which the experimenter measures the effect
of the independent variable on the dependent
variable
2. External validity
– the extent to which the results of a study can be
generalized from a sample to a population
9. Types of Validity for
Measurement Instrument
• Generally, validity can be divided into three
categories depending upon its process or
measure:
1. Logical Validity
2. Construct Validity
3. Statistical Validity
11. Logical Validity
• The validity of a test instrument is established by
logical argument alone
• In effect,
“An argument is valid if the truth of the
premises logically guarantees the truth of the
conclusion”
12. i. Face Validity
• “The extent to which an instrument looks as if it
measures what it is intended to measure”
(Patton, 1997)
• It infers that a test is valid by its face value
• A test has face validity if its content simply looks
relevant to the person taking the test
[Diagram: logical validity comprises face validity and content validity]
13. • If one can look at an instrument and understand what is being
measured, it has face validity
• Measurement experts generally hold face validity in low
regard
– Construct validity and statistical validity are much more
preferred methods
• As a check on face validity, test/survey items are sent to
experts to obtain suggestions for modification
14. ii. Content Validity
• Content validity refers to the appropriateness
of the content of an instrument
• Content validity is a subjective but systematic
evaluation of how well the content of a scale
represents the measurement task at hand.
15. Example
• A scale designed to measure ‘store image’ will
be inadequate if it omits any of the major
dimensions, such as:
– Quality
– Variety and assortment of merchandise
– Etc.
16. • The judgmental approach to establishing content
validity involves:
– Literature reviews, and then
– Follow-up evaluation by expert judges or panels.
17. • To apply content validity, the following steps
are followed:
– An exhaustive literature review to extract the related
items
– A content validity survey is generated (each item is
assessed on a three-point scale: Not necessary, Useful
but not essential, and Essential)
– The survey is sent to experts in the same field as
the research
– The content validity ratio (CVR) is then calculated for
each item using Lawshe’s (1975) method
– Items that are not significant at the critical level are
eliminated
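The CVR step above can be sketched in Python. Lawshe’s formula is CVR = (n_e − N/2) / (N/2), where n_e is the number of panelists rating an item “Essential” and N is the panel size; the panel counts below are made-up illustration values.

```python
def content_validity_ratio(n_essential, n_panelists):
    """Lawshe (1975): CVR = (n_e - N/2) / (N/2)."""
    half = n_panelists / 2
    return (n_essential - half) / half

# Hypothetical panel: 10 experts, 8 of whom rate the item "Essential"
cvr = content_validity_ratio(8, 10)  # -> 0.6
# CVR ranges from -1 (no one rates the item essential) to +1 (everyone does);
# items whose CVR falls below Lawshe's critical value for the panel size are dropped
```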
18. Face Validity vs. Content Validity
• Face validity can be established by one person
• Content validity should be checked by a panel
20. Construct
• Construct is a proposed attribute of a
person that often cannot be measured
directly, but can be assessed using a
number of indicators.
• Construct may be:
– A concept,
– An idea, or
– A behavior
21. • Construct validity refers to
– “How well you transformed a concept, idea, or behavior
that is a construct into a functioning reality”
– “The degree to which an instrument has an appropriate
sample of items for the construct being measured”
(Polit & Beck, 2004)
• To verify construct validity (discriminant and convergent
validity), a factor analysis can be conducted using principal component analysis
(PCA) with the varimax rotation method.
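The PCA-plus-varimax step can be sketched with numpy alone. The toy data, item counts, and noise level below are assumptions for illustration, not part of the source; the varimax routine is a standard textbook iteration, not a specific library call.

```python
import numpy as np

def varimax(loadings, n_iter=50, tol=1e-6):
    """Varimax rotation of a PCA loadings matrix (numpy-only sketch)."""
    p, k = loadings.shape
    rotation = np.eye(k)
    variance = 0.0
    for _ in range(n_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3 - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p)
        )
        rotation = u @ vt
        if s.sum() < variance * (1 + tol):
            break
        variance = s.sum()
    return loadings @ rotation

# Toy data (assumed for illustration): 6 items driven by two latent constructs
rng = np.random.default_rng(0)
n = 300
f1, f2 = rng.normal(size=n), rng.normal(size=n)
X = np.column_stack(
    [f1 + 0.4 * rng.normal(size=n) for _ in range(3)]
    + [f2 + 0.4 * rng.normal(size=n) for _ in range(3)]
)
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

# PCA loadings: eigenvectors of the correlation matrix scaled by sqrt(eigenvalues)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xs, rowvar=False))
top = np.argsort(eigvals)[::-1][:2]
loadings = eigvecs[:, top] * np.sqrt(eigvals[top])

rotated = varimax(loadings)
# After rotation each item should load mainly on one factor (simple structure):
# items 1-3 on one component, items 4-6 on the other
```

Inspecting which factor each item loads on after rotation is what lets convergent items cluster together and discriminant items separate.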
22. • Construct validity addresses the question of what
construct or characteristic the scale is, in fact,
measuring
Example:
Team Rivalry
&
Sportsmanship
24. i. Discriminant/ Divergent Validity
• Discriminant validity tests that “constructs that
should have no relationship do, in fact, not have any
relationship”.
• Discriminant validity is the extent to which latent
variable (A) discriminates from other latent
variables (e.g., B, C, D).
[Diagram: construct validity comprises discriminant/divergent validity and convergent validity]
25. ii. Convergent Validity
• Convergent validity refers to
– “The degree to which two measures of constructs that
theoretically should be related, are in fact related.”
– Convergent validity tests that “constructs that are
expected to be related are, in fact, related”.
– Convergent validity is the extent to which the scale
correlates positively with other measures of the same
construct.
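Both construct-validity checks can be illustrated with simulated data (assumed, not from the source): two noisy measures of the same latent trait should correlate highly (convergent), while a measure of an unrelated construct should not (discriminant).

```python
import numpy as np

# Simulated example: two scales measuring the same latent trait,
# plus one scale measuring an unrelated construct
rng = np.random.default_rng(0)
n = 200
trait = rng.normal(size=n)
measure_a = trait + 0.3 * rng.normal(size=n)    # scale A for the trait
measure_b = trait + 0.3 * rng.normal(size=n)    # scale B for the same trait
unrelated = rng.normal(size=n)                  # scale for a different construct

r_convergent = np.corrcoef(measure_a, measure_b)[0, 1]    # should be high
r_discriminant = np.corrcoef(measure_a, unrelated)[0, 1]  # should be near zero
```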
27. • Criterion validity reflects whether a scale performs
as expected in relation to other variables selected
(criterion variables) as meaningful criteria.
• Criterion variables may include:
– Demographic or psychographic characteristics
– Attitudinal and behavioral measures, or
– Scores obtained from other scales
29. • Criterion/statistical validity is used to
– Measure the ability of an instrument to predict future
outcomes, and
– Assess the extent to which a measure is related to an outcome
• It is usually determined by comparing two
instruments’ ability to predict a similar outcome with
a single variable being measured.
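The comparison described above can be sketched as follows; the simulated instruments and criterion outcome are hypothetical illustration values.

```python
import numpy as np

# Hypothetical comparison: two instruments predicting the same criterion
# outcome; the one correlating more strongly with the outcome shows
# better criterion validity
rng = np.random.default_rng(1)
n = 150
outcome = rng.normal(size=n)                       # criterion variable
instrument_1 = outcome + 0.5 * rng.normal(size=n)  # tracks the criterion closely
instrument_2 = outcome + 2.0 * rng.normal(size=n)  # tracks it only weakly

r1 = np.corrcoef(instrument_1, outcome)[0, 1]
r2 = np.corrcoef(instrument_2, outcome)[0, 1]
# r1 > r2: instrument 1 has the stronger claim to criterion validity
```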