Course: Educational Assessment and Evaluation
B.Ed 1.5 General
Teacher: Dr. Zunaira Fatima
DEPARTMENT OF EDUCATION UNIVERSITY OF
SARGODHA
Presented by: Umaira Nasim
VALIDITY OF ASSESSMENT TOOLS
VALIDITY
• “Validity is the degree to which an instrument, selection
process, statistical technique, or test measures what it is
supposed to measure.”
• Cook and Campbell (1979) define validity as the
appropriateness or correctness of inferences, decisions, or
descriptions made about individuals, groups, or institutions
from test results.
TEST VALIDITY
In Howell’s (1992) view, a valid test must measure specifically what it is
intended to measure.
Test validity means validating the use of a test in a specific context,
such as college admission or placement into a course.
When determining the validity of a test, it is important to study the
test results in the setting in which they are used.
PURPOSE OF MEASURING
VALIDITY
• Tests are designed to measure skills, abilities, or traits that are not
directly observable.
• Validity refers to the accuracy of an assessment.
• It is an indication of how well an assessment actually measures what it is
supposed to measure.
METHODS OF MEASURING
VALIDITY
• Content Validity : Face Validity, Curricular Validity
• Construct Validity : Convergent Validity , Discriminant
Validity
• Criterion Validity
• Concurrent Validity
• Predictive Validity
CONTENT VALIDITY
• Content validity evidence involves the degree to which the content of the
test matches a content domain associated with the construct.
• Non-statistical type of validity that involves “the systematic examination
of the test content to determine whether it covers a representative sample
of the behaviour domain to be measured” (Anastasi & Urbina, 1997).
• With respect to educational achievement tests, a test is considered
content valid when the proportion of the material covered in the test
approximates the proportion of material covered in the course.
TYPES OF CONTENT VALIDITY
• Face Validity: Whether a test appears to measure a certain criterion; appearance alone does not guarantee that the test
actually measures phenomena in that domain.
• Face validity is a starting point, but a test should NEVER be assumed valid for any given purpose on appearance alone, as
the "experts" may be wrong.
• Example: Suppose you were taking an instrument reportedly measuring your attractiveness, but the questions
were asking you to identify the correctly spelled word in each list. Not much of a link between the claim of what
it is supposed to do and what it actually does.
• Advantage: If the respondent knows what information we are looking for, they can use that “context” to
help interpret the questions and provide more useful, accurate answers.
• Disadvantage: If the respondent knows what information we are looking for, they might try to “bend and
shape” their answers to what they think we want.
CONTENT VALIDITY
• Curricular Validity: The extent to which the content of the test matches the
objectives of a specific curriculum as it is formally described.
• Educational Perspective: Curricular validity means that a test used to make a
decision about whether a student should be promoted to the next level should
measure the curriculum that the student is taught in school.
• Evaluation: Curricular validity is evaluated by groups of curriculum/content
experts. The experts are asked to judge whether the content of the test parallels
the curriculum objectives and whether the test and curricular emphases are in
proper balance.
• A table of specifications may help to improve the validity of the test.
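A table of specifications can be sketched in code. The sketch below is a minimal, hypothetical example: the topics, cognitive levels, and item counts are invented for illustration, not taken from any real curriculum.

```python
# Hypothetical table of specifications: content areas crossed with
# cognitive levels, each cell holding the planned number of test items.
table_of_specs = {
    "Fractions":   {"Knowledge": 4, "Application": 3, "Analysis": 1},
    "Decimals":    {"Knowledge": 3, "Application": 3, "Analysis": 2},
    "Percentages": {"Knowledge": 3, "Application": 2, "Analysis": 1},
}

# Total number of items planned for the whole test.
total_items = sum(sum(cells.values()) for cells in table_of_specs.values())

# Share of the test devoted to each content area; comparing these
# weights with the curriculum's emphasis is how balance is checked.
weights = {topic: sum(cells.values()) / total_items
           for topic, cells in table_of_specs.items()}
```

Comparing each topic's weight against the proportion of teaching time it received gives the kind of balance check that the experts on the previous slide are asked to perform.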
CONSTRUCT VALIDITY
• Construct validity is a test’s ability to measure factors which are relevant to the field of study.
• Construct validity refers to the extent to which operationalizations of a construct (e.g.
practical tests developed from a theory) actually measure what the theory says they do. For
example, to what extent is an IQ questionnaire actually measuring "intelligence"?
• Construct validity occurs when the theoretical constructs of cause and effect accurately
represent the real-world situations they are intended to model.
• Constructs have two essential properties:
• 1. They are abstract summaries of some regularity in nature.
• 2. They are related to concrete, observable entities.
• Example: Integrity is a construct; it cannot be directly observed, yet it is useful for
understanding, describing, and predicting human behaviour.
TYPES OF CONSTRUCT VALIDITY
• Convergent Validity: Convergent validity refers to the degree to which a measure is
correlated with other measures that it is theoretically predicted to correlate with.
• Example: If scores on a specific mathematics test are similar to students’ scores on other
mathematics tests, then convergent validity is high (there is a positive correlation between
the scores from similar tests of mathematics).
• Discriminant Validity: Discriminant validity occurs when constructs that are expected not
to relate to each other indeed do not, so that it is possible to discriminate between these
constructs.
• Example: If discriminant validity is high, scores on a test designed to assess students’ skills
in mathematics should not be positively correlated with scores from tests designed to assess
intelligence.
• Convergence and discrimination are often demonstrated by correlating the measures used
within constructs.
CRITERION VALIDITY
• Criterion validity evidence involves the correlation between the test and a criterion
variable (or variables) taken as representative of the construct.
• If the test data and criterion data are collected at the same time, this is referred to as
concurrent validity evidence. If the test data is collected first in order to predict
criterion data collected at a later point in time, then this is referred to as predictive
validity evidence.
• Example: Employee selection tests are often validated against measures of job
performance (the criterion), and IQ tests are often validated against measures of
academic performance (the criterion).
CONCURRENT VALIDITY
• “Concurrent validity is determined using other existing and similar tests, which are known
to be valid, as comparisons to the test being developed.”
• Concurrent validity refers to the degree to which scores taken at one point in time correlate
with other measures (test, observation, or interview) of the same construct measured at the
same time.
• It is applicable to tests employed for the diagnosis of existing status rather than for the
prediction of future outcomes.
• Example: Concurrent validity concerns the relationship between a new measure and
measurements made with existing tests; the existing test is thus the criterion. For example, a
measure of creativity should correlate with existing measures of creativity.
PREDICTIVE VALIDITY
• Predictive validity is determined by showing how well predictions made from the
test are confirmed by evidence gathered at some subsequent time.
• The criterion measure against which this type of validity is judged is important
because the outcome for the subject is being predicted.
• This includes correlation with measurements made with different instruments.
• Examples:
• 1. If higher scores on the Board Exams are positively correlated with higher GPAs at
university (and vice versa), then the Board Exams are said to have predictive validity.
• 2. We might theorize that a measure of math ability should be able to predict how well a
person will do in an engineering-based profession.
PREDICTIVE VALIDITY
• Establishing predictive validity involves the following two steps:
• Obtain test scores from a group of respondents, but do not use
the test in making a decision.
• At some later time, obtain a performance measure for those
respondents, and correlate these measures with test scores to
obtain predictive validity.
FACTORS AFFECTING VALIDITY
• There are many factors that tend to make test results invalid for their intended use. A little careful effort
by the test developer helps to control these factors, but some of them need a systematic approach.
Factors that may affect test validity are discussed below:
1. Instructions to Take a Test
2. Difficult Language Structure
3. Inappropriate Level of Difficulty
4. Poorly Constructed Test Items
5. Ambiguity in Items Statements
6. Length of the Test
7. Improper Arrangement of Items
FACTORS
8. Identifiable Pattern of Answers
9. Inadequate Sample
10. Inappropriate Selection of Constructs or Measures
11. Items That Do Not Function as Intended
12. Improper Administration (inadequate time allowed, poorly controlled conditions)
13. Subjective Scoring
14. Insufficient Data Collected to Make Valid Conclusions
15. Too Great a Variation in Data (can't see the wood for the trees)
16. Inadequate Selection of Target Subjects
17. Complex Interaction Across Constructs
RELATIONSHIP B/W VALIDITY
AND RELIABILITY
• Reliability and validity are two different standards used to gauge the usefulness
of a test.
• Reliability is a necessary requirement for validity: establishing good reliability
is only the first part of establishing validity.
• Reliability is necessary but not sufficient for validity.
• Reliability actually puts a cap or limit on validity; if a test is not reliable, it
cannot be valid.
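In classical test theory the "cap" is quantitative: a test's validity coefficient cannot exceed the square root of its reliability coefficient. A small illustration, with the reliability value chosen hypothetically:

```python
# Classical test theory bound: validity <= sqrt(reliability).
reliability = 0.64          # hypothetical reliability coefficient
max_validity = reliability ** 0.5

# Even against a perfectly chosen criterion, the observed validity
# coefficient of this test cannot exceed about 0.8.
```

This is why improving reliability (longer tests, clearer items, consistent scoring) is a precondition for improving validity.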
CONCLUSION
Assessment is a broad term that includes measurement, testing, and valuing worth.
The validity of an assessment tool is the degree to which it measures what it is
designed to measure. It is better to follow a systematic procedure; this rigorous
approach may help to improve the validity and the reliability of tests.
Overall, we can say that in terms of assessment, validity refers to the extent to
which a test's content is representative of the actual skills learned and whether
the test allows accurate conclusions concerning achievement. Therefore, validity
is the extent to which a test measures what it claims to measure. It is vital for
a test to be valid in order for the results to be accurately applied and
interpreted.
REFERENCES
• Cambridge Assessment (2008). The Cambridge Approach. Cambridge: University of
Cambridge Local Examinations Syndicate.
• http://www.cambridgeassessment.org.uk/ca/digitalAssets/171263BB_CTdefinitionIAEA08.pdf
• http://professionals.collegeboard.com/higher-ed/validity/ces/handbook/test-validity
• http://changingminds.org/explanations/research/design/types_validity.htm
• Educational Assessment and Evaluation handbook, Allama Iqbal Open University,
Islamabad, course code 8602.
