3. VALIDITY
• Validity is a property of an inference.
• In the context of research design, it is the
approximate truth of an inference.
• A valid tool should measure only what it is
supposed to measure.
Eg. A thermometer is supposed to measure
only the temperature.
4. DEFINITION
• Validity refers to an instrument or test actually
testing what it is supposed to be testing.
- Treece and Treece.
• Validity refers to the degree to which an
instrument measures what it is supposed to
measure. - Polit and Hungler.
• Validity is the appropriateness,
meaningfulness and usefulness of the
inferences made from the scores of the
instrument. - American Psychological
Association.
6. FACE VALIDITY
• Face validity involves an overall look of an
instrument regarding its appropriateness
to measure a particular attribute or
phenomenon.
Eg. A Likert scale is designed to measure the
attitude of the nurses towards the patients
admitted with HIV/AIDS.
7. CONTENT VALIDITY
• Content validity is concerned with the scope
of coverage of the content area to be measured.
• It is applied in tests of knowledge
measurement.
• Judgment of content validity may be
subjective, based on previous researchers'
and experts' opinions about the adequacy,
appropriateness and completeness of the
content of the instrument.
• A coefficient of at least 0.7 is typically
considered acceptable validity.
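One common way to make the expert-judgment step computable is a content validity index (CVI); the CVI and the 4-point relevance scale below are assumptions beyond the source text, which mentions only a coefficient threshold of 0.7, and all rating data are hypothetical.

```python
# Sketch: content validity index (CVI) from expert relevance ratings.
# Each item is rated by four experts on a 4-point scale
# (1 = not relevant ... 4 = highly relevant); a rating of 3 or 4
# counts as "relevant". All data are hypothetical.
ratings = {
    "item1": [4, 3, 4, 4],
    "item2": [2, 3, 4, 3],
    "item3": [1, 2, 2, 3],
}

def item_cvi(scores):
    """Proportion of experts rating the item as relevant (3 or 4)."""
    return sum(s >= 3 for s in scores) / len(scores)

i_cvis = {item: item_cvi(scores) for item, scores in ratings.items()}
scale_cvi = sum(i_cvis.values()) / len(i_cvis)  # average across items
```

Items whose index falls below the chosen threshold would be revised or dropped before the instrument is used.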
8. CRITERION VALIDITY
• This type of validity concerns the relationship
between the measurements of the instrument
and some external criterion.
• Criterion-related validity may be differentiated
into predictive and concurrent validity.
Eg. To assess the criterion validity of a tool
developed to measure professionalism among
nurses, the nurses were asked how many
research papers they had published. The tool is
considered strong if a positive correlation exists
between the tool scores and the number of
publications.
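That correlation check can be sketched as below; the scores, publication counts, and the `pearson_r` helper are all illustrative, not part of the source.

```python
import math

# Hypothetical data: professionalism tool scores and number of
# published papers for five nurses.
scores = [55, 62, 70, 74, 81]
papers = [0, 1, 2, 3, 5]

def pearson_r(x, y):
    """Pearson's correlation coefficient between two equal-length lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

r = pearson_r(scores, papers)  # r near +1 supports criterion validity
```

A strongly positive `r` on real data would support the tool's criterion validity against the publication criterion.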
9. Contd.,
• Predictive validity: It is the degree to which
an instrument forecasts future performance,
i.e. its ability to differentiate performance on
some future criterion. Eg. Some personality
tests predict the academic futures of students.
• Concurrent validity: It is the degree to which
an instrument agrees with a criterion measured
at the same time. It relates to present specific
behaviour and characteristics.
10. Contd.,
• Convergent validity: It means an
instrument is highly correlated with another
instrument measuring a similar construct.
• Divergent validity: It means an instrument
is poorly correlated with another instrument
measuring a different concept. Eg. There
should be a low correlation between an
instrument that measures depression and one
that measures self-esteem.
11. CONSTRUCT VALIDITY
• It refers to the extent to which an
instrument correlates with other instruments
measuring a similar construct.
• It encompasses both content and criterion
validity.
• Eg. The results of the instrument to measure
pain in amputated patients may be
misleading as the pain may be due to anxiety.
13. RELIABILITY
• Reliability is the degree of consistency with
which the attributes or variables are
measured by an instrument.
• A test is considered reliable if the researcher
consistently gets the same reading at
different time intervals.
Eg. A sphygmomanometer that gives the same
reading for the same subject at different
times is considered reliable.
14. DEFINITION
• Reliability is the degree of consistency with
which an instrument measures the
attribute it is designed to measure.
• Reliability is defined as the ability of an
instrument to create reproducible results.
16. STABILITY
• Stability means a research instrument
provides the same results when it is used
on two or more consecutive occasions.
• It is also known as test-retest reliability.
• The test is administered twice at two
different points of time.
• It is used for questionnaires, observation
checklists, observation rating scales and
physiological measurement tools.
17. STATISTICAL CALCULATION OF
STABILITY
• Administration of a research instrument to a
sample of subjects on two different occasions.
• The two sets of scores are compared using
Pearson's correlation coefficient.
• A score above 0.70 indicates an acceptable
level of reliability of a tool, +1.00 indicates
perfect reliability and 0.00 indicates no
reliability.
19. INTERNAL CONSISTENCY
• It is also called homogeneity. It ensures all
the subparts of a research instrument
measure the same characteristic.
• A research tool can only be considered
internally consistent if all the subparts of the
tool measure the same characteristic
or phenomenon.
• The approach used is the split-half technique.
20. Contd.,
• Divide the items of the research instrument
into two equal parts, grouping either odd-
and even-numbered items or first-half and
second-half items.
• Administer the two subparts of the tool
simultaneously, score them independently
and compute the correlation coefficient
between the two sets of scores.
22. EQUIVALENCE
• This aspect of reliability is estimated when a
researcher tests a tool that is used by two
different observers to observe a single
phenomenon simultaneously and
independently, or when two presumably
parallel instruments are administered to an
individual at about the same time.
• It is also known as interrater or
interobserver reliability.
23. Contd.,
Eg. A rating scale is developed to assess the
cleanliness of a bone marrow transplantation
unit; the scale may be administered by two
different observers who observe the cleanliness
of the unit simultaneously but independently.
• r = Number of agreements / (Number of
agreements + Number of disagreements)
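Applied to the cleanliness example, the agreement ratio can be computed as below; the two observers' ratings are hypothetical.

```python
# Hypothetical ratings by two observers of the same ten checklist
# items (1 = clean, 0 = not clean), recorded independently.
observer_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
observer_b = [1, 1, 0, 1, 0, 0, 1, 1, 1, 1]

agreements = sum(a == b for a, b in zip(observer_a, observer_b))
disagreements = len(observer_a) - agreements

# Interrater reliability: agreements over total paired observations.
r = agreements / (agreements + disagreements)
```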
24. FACTORS AFFECTING RELIABILITY
• Length of instrument
• Random error
• Training of observer
• Number of observations
• Time
• Language of items in scale
• Group homogeneity
• Duration of the scale
• Objectivity
• Extreme climate conditions
• Instruction for participants
• Difficulty index of the scale.