ROBUST ABET LEARNING OUTCOMES DATA IN SHORTER TIME FRAMES
BY STREAMLINING SOFTWARE EVALTOOLS® EMPLOYING A
COMPREHENSIVE PROGRAM EVALUATION METHODOLOGY USING HIGH
RELATIVE COVERAGE UNIQUE ASSESSMENTS BY MULTIPLE RATERS FOR
MEASUREMENT OF SPECIFIC PERFORMANCE INDICATORS
Wajid Hussain, M. F. Addas
College of Engineering
Islamic University
Madinah Munawarrah, Saudi Arabia
wajidh@iu.edu.sa, mfaddas@iu.edu.sa
Fong Mak
Electrical and Computer Engineering
Gannon University
Erie, Pennsylvania
mak@gannon.edu
Abstract—This paper presents a novel methodology to collect robust assessment data by multiple raters for ABET student learning outcomes as well as course learning outcomes by comprehensively measuring a significant number of specific performance indicators in comparatively much shorter time frames, resulting in quicker cycles, a relatively comprehensive program term review and an efficient continuous improvement system. Assessments prepared by multiple raters across different courses, with a high relative coverage of roughly 70% for a single performance indicator related to the ABET student outcomes and the respective course outcomes, are used just once for a specific measurement. A novel technique of using the Assignment Setup Module of EvalTools® to split an available assessment into multiple sections to obtain high relative coverage is discussed. The known, well-adopted Faculty Course Assessment Report (FCAR), based upon learning outcomes assessment data, is utilized for documenting old and new action items, modifications and proposals for course improvement by the concerned faculty. Learning outcome assessment information for a program, course or student can be studied in great detail by selecting course outcome, student outcome or performance indicator/criteria analytics for single or multiple terms, and the results are presented using rich graphics and histograms employing an intelligent color coding system.
Keywords—Unique Assessments; High Relative Coverage;
EvalTools®; Student Outcomes; Course Outcomes; ABET;
Continuous Improvement; Performance Indicator
I. INTRODUCTION
Grade giving assessments in an engineering curriculum generally comprise single or multiple questions and cover more than one performance criterion [1-8]. Programs may choose to a) develop new assessments and/or b) use the assessments already available in their curriculum for measurement of specific performance criteria related to their program outcomes. In the first method, additional resources and faculty time would be required to measure the performance criteria of interest. The second method may pose limitations on the number of performance criteria measured in a given time frame and on the quality of data collected, depending upon the availability of streamlining electronic tools or of assessments which possess maximum relative coverage of a single performance criterion. The result of both methods is a comparatively small set of performance criteria finally measured in a given time frame by a program, using assessments that may not have maximum relative coverage of the specified criteria. Measurement of program educational objectives (PEOs), student learning outcomes and performance criteria would therefore be completed in comparatively longer cycles. This minimum number of performance criteria, measured with comparatively fewer assessments and a correspondingly smaller number of raters over a given time frame, would render the program evaluation term review less comprehensive and result in a deficiency in the eventual realization of the program's PEOs.
This paper presents an outcomes assessment methodology using the web-based streamlining software EvalTools® 6 [10]. The Assignment Setup Module within EvalTools® 6 is used to split existing assessments of interest to obtain a high relative coverage of at least 70% for a specific performance criterion. These assessments are unique since they are used just once to measure a specific performance criterion. This methodology results in realistic data, since the outcome assessment score corresponds to results from a specific performance criterion carrying the major contribution in the unique assessment and is not significantly affected by other performance criteria with a much smaller percentage of contribution in the assessment.
EvalTools® is chosen as the platform for outcomes assessment instead of Blackboard® since it employs the unique FCAR and EAMU performance vector methodology [11-13], which facilitates using existing grade giving assessments for outcomes measurement and thus a high level of automation of the data collection process, feature-rich pick-and-choose assessment/reporting tools, and the flexibility to provide customized features. The basis of assessment in the FCAR [11-13] is the EAMU performance vector.
The EAMU performance vector counts the number of students that passed the course whose proficiency for that outcome was rated Excellent, Adequate, Minimal, or Unsatisfactory. Program faculty report failing course outcomes (COs), ABET student outcomes (SOs), performance indicators (PIs), comments on student indirect assessments and other general issues of concern in the respective course reflections section of the FCAR. Based upon these course reflections, new action items are generated by the faculty. Old action items are carried over into the FCAR for the same course if it is offered again. Modifications and proposals to a course are made with consideration of the status of the old action items. The Program Term Review module of EvalTools® is focused on failing SOs and PIs for analysis and discussions relating to improvement. Average values of ABET SOs and weighted average values of PIs with a scientific color coding scheme [13] indicate failures for investigation. Courses contributing to failing PIs and SOs are examined.
By using EvalTools® 6, the entire process of outcomes assessment, evaluation and closing the loop is streamlined by systematically collecting, compiling and presenting the data at the course and program level for easy review and analysis. As a result, robust assessment data by multiple raters for ABET student learning outcomes as well as course learning outcomes can be collected by comprehensively measuring a significant number of specific performance indicators in comparatively much shorter time frames, resulting in quicker cycles, a relatively comprehensive program term review and an efficient continuous improvement system.
II. UNIQUE ASSESSMENTS FOR REALISTIC PERFORMANCE INDICATORS, OUTCOMES MEASUREMENT
A. Reinvent the wheel: design a new set of assessments specifically for realistic outcomes measurement besides existing grade giving assessments
Since grade giving assessments in an engineering curriculum comprise single or multiple questions and cover more than one performance criterion, the total score of such an assessment is generally a sum total of individual scores obtained from grading the multiple performance criteria corresponding to this assessment. Thus the assessment score does not actually reflect the grading results from a single performance criterion but rather a complex distribution of grading results from multiple performance criteria. Therefore the outcomes assessment data resulting from this approach is not realistic and does not reflect precise information relating to specific performance indicators or outcomes for quality improvement. To obtain realistic data for continuous improvement purposes, one option available to faculty is to create a new set of assessments specifically for performance criteria and outcomes measurement. Several programs [1-8] worldwide have chosen this approach for accreditation purposes, but since it is tedious and requires additional faculty time [9] and resources, the programs generally collect minimal information for a small set of outcomes and performance indicators, which is not sufficient for the implementation of a comprehensive academic improvement process. This would finally result in programs spending additional resources for maintaining independent processes for accreditation and realistic continuous improvement.
B. Why reinvent the wheel? Scientifically design grade giving assessments for realistic outcomes measurement
At the Islamic University in Madinah, College of Engineering faculty have developed, through several sessions of departmental meetings, a comprehensive list of performance criteria covering all phases of the syllabi for the different courses offered within the curriculum. A good percentage of the performance criteria have also been incorporated from the QIYAS handbook [14] of nationally standardized learning outcomes for several engineering specializations. While designing any assessment related to a specific course, the concerned faculty would consider implementation of the performance criteria suitable for that course content. The contribution of the various performance criteria to the total score of an assessment would be defined during assessment design by the concerned faculty. The performance criteria of interest would be given a nearly 70% or more share in the total score distribution, and the effect of the grading results of the other performance criteria on the total score would thus be rendered negligible. Fig. 1 shows an example where a sample unique assessment with high relative coverage is designed with maximum coverage of a specific PI mapping to a CO and ABET SO.
Fig.1. Example of design of a unique assessment with high relative coverage for a specific performance indicator, course outcomes and ABET student outcomes
For cases where it is not possible to assign a nearly 70% or more share to a certain performance criterion in an entire assessment, the Assignment Setup Module of EvalTools® 6 is used to split a question or sub question of an assessment to achieve 70% high relative coverage of a specific performance criterion. Fig. 2 indicates examples of implementation of splitting of assessments into questions and sub questions using the EvalTools® 6 Assignment Setup Module to obtain high relative coverage and measurement of a specific PI mapping to certain COs and an ABET SO. Such assessments or sets of questions are said to be unique since they are used just once for measurement of a certain PI. This methodology of implementing unique assessments with high relative coverage of PIs mapping to COs and ABET SOs would ensure realistic measurement of outcomes assessment data for comprehensive continuous improvement.
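To make the relative coverage idea concrete, the following minimal sketch (the data layout, function names and the dominance check are illustrative assumptions, not the actual Assignment Setup Module logic) computes the share of an assessment's total score that grades a given PI and, if the whole assessment falls short of the 70% target, selects the questions or sub questions dominated by that PI as the split "unique" assessment.

```python
# Minimal sketch of the "relative coverage" idea; data layout and helper names are
# illustrative assumptions, not the actual EvalTools(R) Assignment Setup Module API.

def relative_coverage(questions, pi):
    """Fraction of the total points in `questions` that grade the indicator `pi`.
    Each question: {"id": str, "points": float, "pi_weights": {pi_name: fraction}}."""
    total = sum(q["points"] for q in questions)
    covered = sum(q["points"] * q["pi_weights"].get(pi, 0.0) for q in questions)
    return covered / total if total else 0.0

def unique_assessment_for(questions, pi, target=0.70):
    """Select the questions/sub-questions dominated by `pi` so that the resulting
    split assessment reaches the target coverage; returns None if it cannot."""
    subset = [q for q in questions if q["pi_weights"].get(pi, 0.0) >= 0.5]
    return subset if subset and relative_coverage(subset, pi) >= target else None

# Example: a midterm whose Part V question 42 mostly grades one indicator.
midterm = [
    {"id": "PartV-Q42", "points": 10, "pi_weights": {"abet_PI_1_43": 0.9, "abet_PI_1_27": 0.1}},
    {"id": "PartV-Q43", "points": 20, "pi_weights": {"abet_PI_1_27": 1.0}},
]

print(round(relative_coverage(midterm, "abet_PI_1_43"), 2))  # 0.30: too low as a whole
print(unique_assessment_for(midterm, "abet_PI_1_43"))        # split to [PartV-Q42] -> 0.90 coverage
```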
Fig.2. Example of splitting existing grade giving assessments into questions and sub questions for high relative coverage of a specific performance indicator, course and ABET student outcomes
C. EvalTools® 6 EAMU vector calculation with weighting factor
Realistic outcomes measurements are also achieved by specifying weights for different assessments, either according to the course grading policy or by the type of assessment: for example, giving a higher weight to laboratory assessments over purely theoretical assessments since lab work covers all three domains of Bloom's taxonomy [15-16], or to final exams over quizzes since the final exam is more comprehensive and better designed than a quiz, students are generally more prepared for a final exam, and many student skills have matured by then.
The following steps are employed by EvalTools® 6 to calculate the EAMU vectors (a hedged code sketch of the procedure follows the list below):
1. Faculty using the EvalTools® 6 Assignment Setup Module identify an assignment with a set of specific questions, or split an assignment to use a specific question or sub question, with relatively high coverage of a certain PI mapping to a CO and ABET SO (for EAMU calculation).
2. EvalTools® 6 removes students who received DN, F, W or I in a course from the EAMU vector calculations, and enters student scores on the selected assignments and questions for the remaining students.
3. EvalTools® 6 calculates the weighted average percentage on the assignment or set of questions selected by faculty. Weights are set according to their percentage in the course grading scale, or as per the decision of the program committee, and are entered in the weighting factor section of the Assignment Setup Module.
4. EvalTools® 6 uses the average percentage to determine how many students fall into the EAMU categories using the pre-selected assessment criteria.
5. EvalTools® 6 calculates the EAMU average rating by rescaling the weighted average, which is based on a 3-point scale, to a maximum of 5 (refer to Fig. 3 for the EAMU average equation for a scale of 3).
Fig.3. Equation for EAMU average rating for a 3 point scale
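The sketch below is one reading of the five steps, not the EvalTools® implementation. The percentage cutoffs for E/A/M/U and the sample roster are illustrative assumptions; the 3-point level values and the rescaling to 5 are inferred from the CO2 example in Section II.D, where the vector (8, 12, 4, 0) yields an average of 3.61.

```python
# Hedged sketch of the five EAMU steps listed above; not the EvalTools(R) implementation.
# Percentage cutoffs and sample data are assumptions. The 3-point level values
# (E=3, A=2, M=1, U=0) and the rescaling to 5 are inferred from the CO2 example.

EXCLUDED_GRADES = {"DN", "F", "W", "I"}                         # step 2
ASSUMED_CUTOFFS = [(90, "E"), (75, "A"), (60, "M"), (0, "U")]   # step 4 (assumption)
LEVEL_VALUES = {"E": 3, "A": 2, "M": 1, "U": 0}                 # 3-point scale

def weighted_percentage(scores, weights):
    """Step 3: weighted average percentage over the selected assignments/questions."""
    return sum(scores[a] * w for a, w in weights.items()) / sum(weights.values())

def eamu_category(pct):
    """Step 4: map a weighted percentage to an E/A/M/U category."""
    for cutoff, category in ASSUMED_CUTOFFS:
        if pct >= cutoff:
            return category
    return "U"

def eamu_vector(students, weights):
    """Steps 2-5: return the (E, A, M, U) counts and the average rating rescaled to 5."""
    counts = {"E": 0, "A": 0, "M": 0, "U": 0}
    for student in students:
        if student["final_grade"] in EXCLUDED_GRADES:
            continue                                            # step 2: drop DN/F/W/I
        counts[eamu_category(weighted_percentage(student["scores"], weights))] += 1
    n = sum(counts.values())
    avg_3pt = sum(LEVEL_VALUES[c] * k for c, k in counts.items()) / n if n else 0.0
    return counts, avg_3pt * 5 / 3                              # step 5: rescale to 5

# Usage: two key assignments (say Hw3 and Hw8) weighted per the course grading scale.
roster = [
    {"final_grade": "B", "scores": {"Hw3": 95, "Hw8": 80}},
    {"final_grade": "F", "scores": {"Hw3": 40, "Hw8": 30}},     # excluded by step 2
    {"final_grade": "C", "scores": {"Hw3": 70, "Hw8": 65}},
]
print(eamu_vector(roster, {"Hw3": 0.6, "Hw8": 0.4}))            # ({'E': 0, 'A': 1, 'M': 1, 'U': 0}, 2.5)
```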
D. Course Outcomes Data
Each course has specified COs which are designed to cover each major topic of the course syllabus sequentially. The CO data, once measured, would help identify weaknesses in teaching or learning methodologies corresponding to a certain area of the course content. This would help provide real-time formative information to improve the course by appropriate, on-time modifications. At the Islamic University, the College of Engineering has decided to use 8-14 COs to cover all the major course topics. Fig. 4 shows a case where multiple assessments could be used to cover a certain course outcome. For this case, Homework 2, quiz 2 and mid-term part-V question-42 are used as key assignments for CO2. Fig. 4 also shows the color-coded EAMU vector for each key assignment: green for E, white for A, yellow for M and red for U. The course outcome CO2 group EAMU is (8, 12, 4, 0) (with students failing the course removed), which gives us an average of 3.61.
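As a worked check of the CO2 numbers above: the exact equation appears only in Fig. 3 and is not legible in this copy, so the 3-point level weights (Excellent = 3, Adequate = 2, Minimal = 1, Unsatisfactory = 0) and the rescaling to 5 used below are inferred, but they do reproduce the stated value.

$$\text{EAMU average} \;=\; \frac{5}{3}\cdot\frac{3E + 2A + 1M + 0U}{E + A + M + U} \;=\; \frac{5}{3}\cdot\frac{3(8) + 2(12) + 1(4) + 0(0)}{24} \;=\; \frac{5}{3}\cdot\frac{52}{24} \;\approx\; 3.61$$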
Fig.4. Data for a single course outcome with its multiple assessments
Fig. 4 is only a part of the analytical charts of the FCAR module under the heading Course Outcomes Assessment, where a sequential list of all the COs with their various related assessments and their histogram plots, depicting the performance of all those students who have not failed the course, are shown. At the end of the Course Outcomes Assessment section, a consolidated histogram plot displays all the COs data measured, as shown in Fig. 5. The color-coded visual results give faculty a snapshot summarized view facilitating identification of the COs which need attention. Even though course outcomes assessment is not required for ABET program accreditation, aligning COs with ABET SOs will channel faculty towards the skill sets needed for students. A direct quote from [4], "Learning outcomes that are systematically assessed at course level can be shown to contribute to program-level outcomes, and thus to information provided to students, employer groups, professional bodies and so on about graduation standards", confirms that course outcomes assessment is crucial for faculty teaching and delivery improvement along with assessing students' learning.
Fig.5. Consolidated histogram plot of all course outcome data
E. ABET Student Outcomes and Performance Indicators Data
The Islamic University College of Engineering has adopted the ABET SOs for all its programs. Using the same principle for EAMU computation for each SO, the EAMU and weighted averages are calculated. In the example shown in Table 1, assignments Hw3 and Hw8 are selected for covering a specific PI. These assignments are weighted (application of a weighting factor either according to course grading policy or any other policy selected by the specific program), added together and then normalized to 100 for each student to calculate a per-student aggregated EAMU score and EAMU classification. The weighted and normalized-to-100 scores of all students in a class are grouped together to obtain the average of the EAMU vector for the specific PI, which is computed as per the equation in Fig. 3. Fig. 6 lists all the PIs mapping to ABET SO 1 (SO 1 corresponds to ABET student outcome 'a'). All the PIs mapping to a specific ABET SO are averaged together to give the final average value of the ABET SO. Fig. 7 shows the consolidated ABET SOs histogram plot for a specific course. We see that SO 1 has an average value of 2.89, which is computed by taking the average of the weighted average values obtained for abet_PI_1_27 (1.39), abet_PI_1_43 (3.54) and abet_PI_1_44 (3.75).
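A minimal sketch of this PI-to-SO roll-up, assuming the plain arithmetic mean the text describes (the function name and dictionary layout are illustrative, not the EvalTools® API):

```python
# Aggregate a student outcome (SO) from its performance indicators (PIs), each PI
# already expressed as a weighted EAMU average on the 0-5 scale described above.

def so_average(pi_averages):
    """Average value of an ABET SO as the mean of its PI averages."""
    return sum(pi_averages.values()) / len(pi_averages)

so1_pis = {"abet_PI_1_27": 1.39, "abet_PI_1_43": 3.54, "abet_PI_1_44": 3.75}
print(round(so_average(so1_pis), 2))  # 2.89, matching the SO 1 value shown in Fig. 7
```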
Table 1: Calculation of aggregated EAMU for a PI
III. CONTINUOUS IMPROVEMENT
A. Term Review
The term review process flow for a specific program involves completion of two phases: a) PI evaluation and b) ABET SO evaluation. Fig. 8 shows that the PI evaluation begins with a snapshot consolidated view of all ABET SOs measured in the specified term, with a scientific color coding scheme to indicate failures for investigation. The aggregate value for each measured ABET SO is calculated by averaging its corresponding aggregate PIs data. The aggregate value for each PI measured for this specific ABET SO is calculated by averaging the PI's data measured by multiple raters across different courses. Indicator evaluation is focused on failing SOs and PIs for analysis and discussions relating to improvement. Courses contributing to failing PIs and SOs are examined by selection. The investigations involve study of the course reflections and generated action items in the respective FCARs. Fig. 9 shows a detailed PI analysis for a selected ABET SO listing the contributing courses and their group EAMU calculations. Action items in the respective FCARs are edited, updated or deleted as per the program chair's decision in agreement with the review members. Certain action items may be elevated from course level to program level based upon the severity of the problem or degree of importance. The ABET SO evaluation phase integrates overall comments on a specific ABET SO with the comments of review and analysis of its failing PIs taken from the Performance Indicator Evaluation module of EvalTools® 6. The following term review reports are available in printable Word or PDF format: a) SO executive summary, b) detailed SO/PI executive summary, c) SO/PI PVT summary, and d) course reflections/action items. Fig. 10 shows a snapshot of a detailed SO/PI executive summary of a sample program term review. The information from multiple term reviews for a program can be consolidated and utilized for review of the Program Educational Objectives. The action items listed in the FCARs are followed up by the concerned faculty for closure, and program-level action items mentioned in term review reports are appropriately escalated to the responsible departments for implementation.
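The two-level roll-up described in this subsection can be sketched as below; the course codes and values are illustrative, and the simple-mean aggregation is an assumption consistent with the averaging described above rather than the exact EvalTools® computation.

```python
# Hedged sketch of the term-review roll-up: each PI's aggregate value is the mean of
# that PI's measurements across the courses (raters) that assessed it, and each SO's
# aggregate value is the mean of its PIs' aggregates. All data shown are illustrative.

from statistics import mean

# {PI: {course: weighted EAMU average for that PI in that course}}
pi_measurements = {
    "abet_PI_1_27": {"EE201": 1.20, "EE305": 1.58},
    "abet_PI_1_43": {"EE201": 3.54},
    "abet_PI_1_44": {"EE310": 3.60, "EE402": 3.90},
}
so_to_pis = {"SO_1": ["abet_PI_1_27", "abet_PI_1_43", "abet_PI_1_44"]}

pi_aggregate = {pi: mean(vals.values()) for pi, vals in pi_measurements.items()}
so_aggregate = {so: mean(pi_aggregate[pi] for pi in pis) for so, pis in so_to_pis.items()}

print(pi_aggregate)  # per-PI averages across contributing courses
print(so_aggregate)  # aggregate SO values; low values are flagged for investigation
```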
Fig.6. List of performance indicators with corresponding assignments mapping to a specific ABET student outcome (PIs listed for ABET SO_1)
Fig.7. Consolidated list of ABET student outcomes SOs covered by a particular course in a given term
Fig.8. Performance Indicator Evaluation Module of EvalTools® 6 beginning page showing student outcomes covered by a program in a given term
Fig.9. Detailed performance indicator analysis for a selected ABET student outcome listing the contributing courses and their group EAMU calculations
Fig.10. Portion of detailed SO/PI executive summary of a sample program term review
B. Comprehensive Program Evaluation and Realistic Improvement with Study of Student Evaluations Based on Realistic ABET Student Outcomes and Performance Indicator Information
Both program and student performance evaluations are based on their respective measured ABET SOs and associated PIs. Study of student failing patterns in these individual student evaluations will confirm any major weaknesses observed through the collectively averaged outcome data in program evaluations, and further investigation of the respective course FCARs will help determine specific areas such as course content (breadth and/or depth), teaching materials, and/or pedagogical/assessment methodology for realistic program and student improvement. Student advising based on this information helps faculty to identify potential areas of strength in academically weak students through the observation of relatively high scores for certain ABET SO and CO related PIs, thereby facilitating ease of selection of an area of specialization in education, research or industry to focus on for enhanced learning and future industry-related prospects. Program and student evaluations, assessments and advising based on measurable ABET SOs, COs and PIs facilitate an outcome-based education system and help the student to focus not just on improvement of academic scores but on learning outcomes, since the academic scores now to a good extent reflect performance relative to learning outcomes. A direct quote from [17] concurs with the same observation: "But students' and graduates' assessment about what competencies they have gained may be one option in constructing new criteria for quality. We see two possible ways of including such output oriented measurements. The best alternative is to develop tests in line with PISA and similar surveys, an initiative now taken by OECD. This is, however, a very time and resource consuming activity…" Fig. 11 lists the ABET SOs for a student evaluation. Fig. 12 lists the PIs related to a certain ABET SO and the contributing courses.
IV. CONCLUSION
This paper presents a novel methodology to collect robust and realistic assessment data by multiple raters for ABET student learning outcomes as well as course outcomes by comprehensively measuring a significant number of specific performance indicators. Using EvalTools® 6, the assessment and analytical data are obtained in comparatively much shorter time frames, resulting in quicker cycles for comprehensive program term review. Grade giving assessments are dissected to extract and store electronically a wealth of information for program and student performance evaluations based on ABET SOs, COs and PIs, thus opening an exciting frontier in comprehensive improvement. World class standards in continuous improvement for a program, course or student can be established by an in-depth formative or summative analysis of program and student competencies digital data, especially focused on patterns and anomalies related to specific components of the educational curriculum such as teaching, learning methodologies, course content, and materials (breadth and/or depth).
Fig.11. ABET student outcomes listed in a student evaluation
Fig.12. Performance indicators associated with a specific ABET student outcome listed in a student evaluation
ACKNOWLEDGMENT
The College of Engineering at the Islamic University would especially like to thank the Civil, Mechanical and Electrical Engineering programs for the data provided.
REFERENCES
[1] J. Moon, “Linking levels, learning outcomes and assessment
criteria,” Bologna Process – European Higher Education Area.
http://www.ehea.info/Uploads/Seminars/040701-
02Linking_Levels_plus_ass_crit-Moon.pdf
[2] “Whys & hows of assessment,” Eberly Center for Teaching Excellence,
Carnegie Mellon University.
http://www.cmu.edu/teaching/assessment/howto/basics/objectives.html
[3] Biggs, J. and Tang, C. (2007). Teaching for Quality Learning at
University. 3rd edition. England and NY: Society for Research into
Higher Education and Open University Press.
[4] “Assessment Toolkit: aligning assessment with outcomes,” UNSW,
Australia. https://teaching.unsw.edu.au/printpdf/531
[5] Houghton, W. (2004). Constructive alignment: and why it is important
to the learning process. Loughborough: HEA Engineering Subject
Centre.
[6] Hounsell, D., Xu, R. and Tai, C.M. (2007). Blending Assignments and
Assessments for High-Quality Learning. (Scottish Enhancement
Themes: Guides to Integrative Assessment, no.3). Gloucester: Quality
Assurance Agency for Higher Education.
[7] D. Kennedy, A. Hyland, and N. Ryan, “Writing and using learning
outcomes: a practical guide,” Article C 3.4-1 in EUA Bologna Handbook:
Making Bologna Work, Berlin 2006: Raabe Verlag.
[8] J. Prados, “Can ABET Really Make a Difference?” Int. J. Engng Ed.
Vol. 20, No. 3, pp. 315-317, 2004
[9] M. Manzoul, “Effective assessment process,” 2007 Best Assessment
Processes IX Symposium, April 13, Terre Haute, Indiana.
[10] Information on EvalTools® available at http://www.makteam.com
[11] J. Estell, J. Yoder, B. Morrison, F. Mak, “Improving upon best practices:
FCAR 2.0,” ASEE 2012 Annual Conference, San Antonio.
[12] C. Liu, L. Chen, “Selective and objective assessment calculation and
automation,” ACMSE’12, March 29-31, 2012, Tuscaloosa, AL, USA.
[13] F. Mak, J. Kelly, “Systematic means for identifying and justifying key
assignments for effective rules-based program evaluation,” 40th
ASEE/IEEE Frontiers in Education Conference, October 27-30,
Washington, DC.
[14] Handbook of Learning Outcomes, November 2014 draft (unpublished),
QIYAS, Ministry of Education, Saudi Arabia.
[15] B. S. Bloom, B. B. Masia, and D. R. Krathwohl, Taxonomy of Educational
Objectives: The Affective Domain. New York: McKay, 1964.
[16] K. Salim, R. Ali, N. Hussain, H. Haron, “An instrument for measuring
the learning outcomes of laboratory work,” Proceeding of the IETEC’13
Conference, 2013. Ho Chi Minh City, Vietnam.
[17] P. Aamodt, E. Hovdhaugen, “Assessing higher education learning
outcomes as a result of institutional and individual characteristics,”
Outcomes of Higher Education: Quality Relevance and Impact, September
8-10, Paris, France.
More Related Content

What's hot

Stress free eu schools - final evaluation report - v01
Stress free eu schools - final evaluation report - v01Stress free eu schools - final evaluation report - v01
Stress free eu schools - final evaluation report - v01Ioannis Zarkadas
 
Chapter 6, Training Evaluation
Chapter 6, Training Evaluation Chapter 6, Training Evaluation
Chapter 6, Training Evaluation Kacung Abdullah
 
UAEU - Phase II Project Management Plan
UAEU - Phase II Project Management PlanUAEU - Phase II Project Management Plan
UAEU - Phase II Project Management PlanMichael Dobe, Ph.D.
 
Quality assuring assessment guidelines for providers, revised 2013
Quality assuring assessment   guidelines for providers, revised 2013Quality assuring assessment   guidelines for providers, revised 2013
Quality assuring assessment guidelines for providers, revised 2013Ibrahim Khleifat
 
A framework for the use of online technology and Sakai tools in assessment
A framework for the use of online technology and Sakai tools in assessmentA framework for the use of online technology and Sakai tools in assessment
A framework for the use of online technology and Sakai tools in assessmentAuSakai
 
Core Areas of Quality Assurance (QA) and Monitoring and Evaluation Mechani…
Core Areas of Quality Assurance (QA) and Monitoring and Evaluation Mechani…Core Areas of Quality Assurance (QA) and Monitoring and Evaluation Mechani…
Core Areas of Quality Assurance (QA) and Monitoring and Evaluation Mechani…Dr. Joy Kenneth Sala Biasong
 
Chapter 12 se
Chapter 12 seChapter 12 se
Chapter 12 sedx3driver
 
Quality model ppt
Quality model pptQuality model ppt
Quality model pptDelly Win
 
Technology Action Plan
Technology Action PlanTechnology Action Plan
Technology Action Planenjackson
 
Evaluation of training methods
Evaluation of training methodsEvaluation of training methods
Evaluation of training methodsSharon
 
04 course design development phase
04 course design   development phase04 course design   development phase
04 course design development phaseDr. Chetan Bhatt
 
Are evaluations contributing to learning by market development practitioners?...
Are evaluations contributing to learning by market development practitioners?...Are evaluations contributing to learning by market development practitioners?...
Are evaluations contributing to learning by market development practitioners?...MaFI (The Market Facilitation Initiative)
 
Iblc10 making an existing assessment more efficient
Iblc10   making an existing assessment more efficientIblc10   making an existing assessment more efficient
Iblc10 making an existing assessment more efficientMark Russell
 
Check, Check, Check in the Simulation Lab
Check, Check, Check in the Simulation LabCheck, Check, Check in the Simulation Lab
Check, Check, Check in the Simulation LabExamSoft
 

What's hot (20)

Stress free eu schools - final evaluation report - v01
Stress free eu schools - final evaluation report - v01Stress free eu schools - final evaluation report - v01
Stress free eu schools - final evaluation report - v01
 
Chapter 6, Training Evaluation
Chapter 6, Training Evaluation Chapter 6, Training Evaluation
Chapter 6, Training Evaluation
 
UAEU - Phase II Project Management Plan
UAEU - Phase II Project Management PlanUAEU - Phase II Project Management Plan
UAEU - Phase II Project Management Plan
 
Quality assuring assessment guidelines for providers, revised 2013
Quality assuring assessment   guidelines for providers, revised 2013Quality assuring assessment   guidelines for providers, revised 2013
Quality assuring assessment guidelines for providers, revised 2013
 
Scoringrubric
ScoringrubricScoringrubric
Scoringrubric
 
M3-Review-SLOs-13NOV13
M3-Review-SLOs-13NOV13M3-Review-SLOs-13NOV13
M3-Review-SLOs-13NOV13
 
A framework for the use of online technology and Sakai tools in assessment
A framework for the use of online technology and Sakai tools in assessmentA framework for the use of online technology and Sakai tools in assessment
A framework for the use of online technology and Sakai tools in assessment
 
Six sigma project
Six sigma projectSix sigma project
Six sigma project
 
Core Areas of Quality Assurance (QA) and Monitoring and Evaluation Mechani…
Core Areas of Quality Assurance (QA) and Monitoring and Evaluation Mechani…Core Areas of Quality Assurance (QA) and Monitoring and Evaluation Mechani…
Core Areas of Quality Assurance (QA) and Monitoring and Evaluation Mechani…
 
Chapter 12 se
Chapter 12 seChapter 12 se
Chapter 12 se
 
Labor Markets Core Course 2013: Monitoring and evaluation
Labor Markets Core Course 2013: Monitoring and evaluation Labor Markets Core Course 2013: Monitoring and evaluation
Labor Markets Core Course 2013: Monitoring and evaluation
 
Quality model ppt
Quality model pptQuality model ppt
Quality model ppt
 
Credible Credentials: Coming of Age
Credible Credentials: Coming of AgeCredible Credentials: Coming of Age
Credible Credentials: Coming of Age
 
Technology Action Plan
Technology Action PlanTechnology Action Plan
Technology Action Plan
 
Evaluation of training methods
Evaluation of training methodsEvaluation of training methods
Evaluation of training methods
 
04 course design development phase
04 course design   development phase04 course design   development phase
04 course design development phase
 
Are evaluations contributing to learning by market development practitioners?...
Are evaluations contributing to learning by market development practitioners?...Are evaluations contributing to learning by market development practitioners?...
Are evaluations contributing to learning by market development practitioners?...
 
Iblc10 making an existing assessment more efficient
Iblc10   making an existing assessment more efficientIblc10   making an existing assessment more efficient
Iblc10 making an existing assessment more efficient
 
Tpack konaspi ix
Tpack konaspi ixTpack konaspi ix
Tpack konaspi ix
 
Check, Check, Check in the Simulation Lab
Check, Check, Check in the Simulation LabCheck, Check, Check in the Simulation Lab
Check, Check, Check in the Simulation Lab
 

Viewers also liked

Shahid Lecture-1- MKAG1273
Shahid Lecture-1- MKAG1273Shahid Lecture-1- MKAG1273
Shahid Lecture-1- MKAG1273nchakori
 
PowerPoint Presentation Dublin June 2015 Final
PowerPoint Presentation Dublin June 2015 FinalPowerPoint Presentation Dublin June 2015 Final
PowerPoint Presentation Dublin June 2015 FinalMary Appleby (CPHR)
 
Phonegap android
Phonegap androidPhonegap android
Phonegap androidumesh patil
 
數位學生證/悠遊卡/電子發票整合平台
數位學生證/悠遊卡/電子發票整合平台數位學生證/悠遊卡/電子發票整合平台
數位學生證/悠遊卡/電子發票整合平台Aileen Ou
 
Ting - Un datamapper PHP sous stéroïdes
Ting - Un datamapper PHP sous stéroïdesTing - Un datamapper PHP sous stéroïdes
Ting - Un datamapper PHP sous stéroïdesXavier Leune
 
AWS Summit Chicago 2016発表のサービスアップデートまとめ
AWS Summit Chicago 2016発表のサービスアップデートまとめAWS Summit Chicago 2016発表のサービスアップデートまとめ
AWS Summit Chicago 2016発表のサービスアップデートまとめAmazon Web Services Japan
 
Patterns and OOP in PHP
Patterns and OOP in PHPPatterns and OOP in PHP
Patterns and OOP in PHPjulien pauli
 
CERTIFICATE OF ATTENDANCE
CERTIFICATE OF ATTENDANCECERTIFICATE OF ATTENDANCE
CERTIFICATE OF ATTENDANCEWAJID HUSSAIN
 
Migration PHP4-PHP5
Migration PHP4-PHP5Migration PHP4-PHP5
Migration PHP4-PHP5julien pauli
 
Câncer de Ovário - Solange Sanches
Câncer de Ovário - Solange Sanches Câncer de Ovário - Solange Sanches
Câncer de Ovário - Solange Sanches Oncoguia
 

Viewers also liked (16)

Shahid Lecture-1- MKAG1273
Shahid Lecture-1- MKAG1273Shahid Lecture-1- MKAG1273
Shahid Lecture-1- MKAG1273
 
PowerPoint Presentation Dublin June 2015 Final
PowerPoint Presentation Dublin June 2015 FinalPowerPoint Presentation Dublin June 2015 Final
PowerPoint Presentation Dublin June 2015 Final
 
Phonegap android
Phonegap androidPhonegap android
Phonegap android
 
數位學生證/悠遊卡/電子發票整合平台
數位學生證/悠遊卡/電子發票整合平台數位學生證/悠遊卡/電子發票整合平台
數位學生證/悠遊卡/電子發票整合平台
 
Fragebogen zur Nutzung von Blechdosen im Marekting
Fragebogen zur Nutzung von Blechdosen im MarektingFragebogen zur Nutzung von Blechdosen im Marekting
Fragebogen zur Nutzung von Blechdosen im Marekting
 
Tomm Moore
Tomm MooreTomm Moore
Tomm Moore
 
Ting - Un datamapper PHP sous stéroïdes
Ting - Un datamapper PHP sous stéroïdesTing - Un datamapper PHP sous stéroïdes
Ting - Un datamapper PHP sous stéroïdes
 
AWS Summit Chicago 2016発表のサービスアップデートまとめ
AWS Summit Chicago 2016発表のサービスアップデートまとめAWS Summit Chicago 2016発表のサービスアップデートまとめ
AWS Summit Chicago 2016発表のサービスアップデートまとめ
 
Egypt at the 2016 summer olympics
Egypt at the 2016 summer olympicsEgypt at the 2016 summer olympics
Egypt at the 2016 summer olympics
 
Patterns and OOP in PHP
Patterns and OOP in PHPPatterns and OOP in PHP
Patterns and OOP in PHP
 
CERTIFICATE OF ATTENDANCE
CERTIFICATE OF ATTENDANCECERTIFICATE OF ATTENDANCE
CERTIFICATE OF ATTENDANCE
 
Migration PHP4-PHP5
Migration PHP4-PHP5Migration PHP4-PHP5
Migration PHP4-PHP5
 
Câncer de Ovário - Solange Sanches
Câncer de Ovário - Solange Sanches Câncer de Ovário - Solange Sanches
Câncer de Ovário - Solange Sanches
 
8º ano b
8º ano b8º ano b
8º ano b
 
Hussain_Addas
Hussain_AddasHussain_Addas
Hussain_Addas
 
Infrastructure as Code
Infrastructure as CodeInfrastructure as Code
Infrastructure as Code
 

Similar to PID3687979

Planning Cycle and Use of Results
Planning Cycle and Use of ResultsPlanning Cycle and Use of Results
Planning Cycle and Use of ResultsBradley Vaden
 
Program Review & Planning Cycle
Program Review & Planning CycleProgram Review & Planning Cycle
Program Review & Planning CycleBradley Vaden
 
Curriculum (formative & summative) evaluation
Curriculum (formative & summative) evaluationCurriculum (formative & summative) evaluation
Curriculum (formative & summative) evaluationDrGavisiddappa Angadi
 
The Development and Usability Evaluation of a Standards-Based Grading Tool fo...
The Development and Usability Evaluation of a Standards-Based Grading Tool fo...The Development and Usability Evaluation of a Standards-Based Grading Tool fo...
The Development and Usability Evaluation of a Standards-Based Grading Tool fo...Alaa Sadik
 
Online Examination and Evaluation System
Online Examination and Evaluation SystemOnline Examination and Evaluation System
Online Examination and Evaluation SystemIRJET Journal
 
ASEE 2017 WORKSHOP SPECIFIC AND GENERIC PERFORMANCE INDICATORS FOR THE COMPRE...
ASEE 2017 WORKSHOP SPECIFIC AND GENERIC PERFORMANCE INDICATORS FOR THE COMPRE...ASEE 2017 WORKSHOP SPECIFIC AND GENERIC PERFORMANCE INDICATORS FOR THE COMPRE...
ASEE 2017 WORKSHOP SPECIFIC AND GENERIC PERFORMANCE INDICATORS FOR THE COMPRE...WAJID HUSSAIN
 
phase_1 (1).pdf
phase_1 (1).pdfphase_1 (1).pdf
phase_1 (1).pdfAnshPaul2
 
Using Nursing Exam Data Effectively in Preparing Nursing Accreditation
Using Nursing Exam Data Effectively in Preparing Nursing AccreditationUsing Nursing Exam Data Effectively in Preparing Nursing Accreditation
Using Nursing Exam Data Effectively in Preparing Nursing AccreditationExamSoft
 
CURRICULUM DEVELOPMENT CYCLE.pdf
CURRICULUM DEVELOPMENT CYCLE.pdfCURRICULUM DEVELOPMENT CYCLE.pdf
CURRICULUM DEVELOPMENT CYCLE.pdfVictor Rosales
 
Applying NEASC Best Practices to Ensure the Quality of Online Programs
Applying NEASC Best Practices to Ensure the Quality of Online ProgramsApplying NEASC Best Practices to Ensure the Quality of Online Programs
Applying NEASC Best Practices to Ensure the Quality of Online Programsmarando
 
Learning Outcomes Discussion
Learning Outcomes DiscussionLearning Outcomes Discussion
Learning Outcomes DiscussionLuke Dowden
 
SD-Session-3-The-Revised-SBM-Tool.pptx
SD-Session-3-The-Revised-SBM-Tool.pptxSD-Session-3-The-Revised-SBM-Tool.pptx
SD-Session-3-The-Revised-SBM-Tool.pptxKarlaLycaSequijorEsc
 
National university assessment process
National university assessment processNational university assessment process
National university assessment processAshley Kovacs
 
MBA760 Chapter 06
MBA760 Chapter 06MBA760 Chapter 06
MBA760 Chapter 06iDocs
 
2012 ACBSP Region 4 Conference Presentation #4 Sponsor - Peregrine
2012 ACBSP Region 4 Conference Presentation #4 Sponsor - Peregrine2012 ACBSP Region 4 Conference Presentation #4 Sponsor - Peregrine
2012 ACBSP Region 4 Conference Presentation #4 Sponsor - PeregrineACBSPregion4
 
Pamantasan ng lungsod ng valenzuela bsed f il 3-1 2013 5
Pamantasan ng lungsod ng valenzuela  bsed f il 3-1 2013 5Pamantasan ng lungsod ng valenzuela  bsed f il 3-1 2013 5
Pamantasan ng lungsod ng valenzuela bsed f il 3-1 2013 5King Ayapana
 

Similar to PID3687979 (20)

Planning Cycle and Use of Results
Planning Cycle and Use of ResultsPlanning Cycle and Use of Results
Planning Cycle and Use of Results
 
Program Review & Planning Cycle
Program Review & Planning CycleProgram Review & Planning Cycle
Program Review & Planning Cycle
 
Curriculum (formative & summative) evaluation
Curriculum (formative & summative) evaluationCurriculum (formative & summative) evaluation
Curriculum (formative & summative) evaluation
 
The Development and Usability Evaluation of a Standards-Based Grading Tool fo...
The Development and Usability Evaluation of a Standards-Based Grading Tool fo...The Development and Usability Evaluation of a Standards-Based Grading Tool fo...
The Development and Usability Evaluation of a Standards-Based Grading Tool fo...
 
Online Examination and Evaluation System
Online Examination and Evaluation SystemOnline Examination and Evaluation System
Online Examination and Evaluation System
 
Unit 9-6503.pptx
Unit 9-6503.pptxUnit 9-6503.pptx
Unit 9-6503.pptx
 
ASEE 2017 WORKSHOP SPECIFIC AND GENERIC PERFORMANCE INDICATORS FOR THE COMPRE...
ASEE 2017 WORKSHOP SPECIFIC AND GENERIC PERFORMANCE INDICATORS FOR THE COMPRE...ASEE 2017 WORKSHOP SPECIFIC AND GENERIC PERFORMANCE INDICATORS FOR THE COMPRE...
ASEE 2017 WORKSHOP SPECIFIC AND GENERIC PERFORMANCE INDICATORS FOR THE COMPRE...
 
Chap006
Chap006Chap006
Chap006
 
Chap006
Chap006Chap006
Chap006
 
phase_1 (1).pdf
phase_1 (1).pdfphase_1 (1).pdf
phase_1 (1).pdf
 
Using Nursing Exam Data Effectively in Preparing Nursing Accreditation
Using Nursing Exam Data Effectively in Preparing Nursing AccreditationUsing Nursing Exam Data Effectively in Preparing Nursing Accreditation
Using Nursing Exam Data Effectively in Preparing Nursing Accreditation
 
CURRICULUM DEVELOPMENT CYCLE.pdf
CURRICULUM DEVELOPMENT CYCLE.pdfCURRICULUM DEVELOPMENT CYCLE.pdf
CURRICULUM DEVELOPMENT CYCLE.pdf
 
Applying NEASC Best Practices to Ensure the Quality of Online Programs
Applying NEASC Best Practices to Ensure the Quality of Online ProgramsApplying NEASC Best Practices to Ensure the Quality of Online Programs
Applying NEASC Best Practices to Ensure the Quality of Online Programs
 
Learning Outcomes Discussion
Learning Outcomes DiscussionLearning Outcomes Discussion
Learning Outcomes Discussion
 
SD-Session-3-The-Revised-SBM-Tool.pptx
SD-Session-3-The-Revised-SBM-Tool.pptxSD-Session-3-The-Revised-SBM-Tool.pptx
SD-Session-3-The-Revised-SBM-Tool.pptx
 
National university assessment process
National university assessment processNational university assessment process
National university assessment process
 
MBA760 Chapter 06
MBA760 Chapter 06MBA760 Chapter 06
MBA760 Chapter 06
 
Quality Evaluation of the Higher Education Programmes
Quality Evaluation of the Higher Education ProgrammesQuality Evaluation of the Higher Education Programmes
Quality Evaluation of the Higher Education Programmes
 
2012 ACBSP Region 4 Conference Presentation #4 Sponsor - Peregrine
2012 ACBSP Region 4 Conference Presentation #4 Sponsor - Peregrine2012 ACBSP Region 4 Conference Presentation #4 Sponsor - Peregrine
2012 ACBSP Region 4 Conference Presentation #4 Sponsor - Peregrine
 
Pamantasan ng lungsod ng valenzuela bsed f il 3-1 2013 5
Pamantasan ng lungsod ng valenzuela  bsed f il 3-1 2013 5Pamantasan ng lungsod ng valenzuela  bsed f il 3-1 2013 5
Pamantasan ng lungsod ng valenzuela bsed f il 3-1 2013 5
 

More from WAJID HUSSAIN

PRESENTATION_FIE2016
PRESENTATION_FIE2016PRESENTATION_FIE2016
PRESENTATION_FIE2016WAJID HUSSAIN
 
ABET_WORKSHOP_CERTIFICATION2
ABET_WORKSHOP_CERTIFICATION2ABET_WORKSHOP_CERTIFICATION2
ABET_WORKSHOP_CERTIFICATION2WAJID HUSSAIN
 
IU_WORKSHOP_DEC31_2014
IU_WORKSHOP_DEC31_2014IU_WORKSHOP_DEC31_2014
IU_WORKSHOP_DEC31_2014WAJID HUSSAIN
 
IU_WORKSHOP_DEC30_2014
IU_WORKSHOP_DEC30_2014IU_WORKSHOP_DEC30_2014
IU_WORKSHOP_DEC30_2014WAJID HUSSAIN
 
IU_WORKSHOP_DEC29_2014
IU_WORKSHOP_DEC29_2014IU_WORKSHOP_DEC29_2014
IU_WORKSHOP_DEC29_2014WAJID HUSSAIN
 

More from WAJID HUSSAIN (9)

PRESENTATION_FIE2016
PRESENTATION_FIE2016PRESENTATION_FIE2016
PRESENTATION_FIE2016
 
WORKSHOPS_FIE2016
WORKSHOPS_FIE2016WORKSHOPS_FIE2016
WORKSHOPS_FIE2016
 
ABET_WORKSHOP_CERTIFICATION2
ABET_WORKSHOP_CERTIFICATION2ABET_WORKSHOP_CERTIFICATION2
ABET_WORKSHOP_CERTIFICATION2
 
faculty_cpsc_award1
faculty_cpsc_award1faculty_cpsc_award1
faculty_cpsc_award1
 
FACULTY_CPSC_AWARD
FACULTY_CPSC_AWARDFACULTY_CPSC_AWARD
FACULTY_CPSC_AWARD
 
certification_qiyas
certification_qiyascertification_qiyas
certification_qiyas
 
IU_WORKSHOP_DEC31_2014
IU_WORKSHOP_DEC31_2014IU_WORKSHOP_DEC31_2014
IU_WORKSHOP_DEC31_2014
 
IU_WORKSHOP_DEC30_2014
IU_WORKSHOP_DEC30_2014IU_WORKSHOP_DEC30_2014
IU_WORKSHOP_DEC30_2014
 
IU_WORKSHOP_DEC29_2014
IU_WORKSHOP_DEC29_2014IU_WORKSHOP_DEC29_2014
IU_WORKSHOP_DEC29_2014
 

PID3687979

  • 1. ROBUST ABET LEARNING OUTCOMES DATA IN SHORTER TIME FRAMES BY STREAMLINING SOFTWARE EVALTOOLS® EMPLOYING A COMPREHENSIVE PROGRAM EVALUATION METHODOLOGY USING HIGH RELATIVE COVERAGE UNIQUE ASSESSMENTS BY MULTIPLE RATERS FOR MEASUREMENT OF SPECIFIC PERFORMANCE INDICATORS Wajid Hussain, M. F. Addas College of Engineering Islamic University Madinah Munawarrah, Saudi Arabia wajidh@iu.edu.sa, mfaddas@iu.edu.sa Fong Mak Electrical and Computer Engineering Gannon University Erie, Pennsylvania mak@gannon.edu Abstract—This paper presents a novel methodology to collect robust assessment data by multiple raters for ABET student learning outcomes as well as course learning outcomes by comprehensively measuring a significant number of specific performance indicators in comparatively much shorter time frames resulting in quicker cycles and relatively comprehensive program term review resulting in an efficient continuous improvement system. Assessments prepared by multiple raters across different courses with high relative coverage of roughly 70% for a single performance indicator related to the ABET student outcomes and respective course outcomes are used just once for a specific measurement. A novel technique of using the Assignment Setup Module of EvalTools® for splitting an available assessment into multiple sections to obtain high relative coverage is discussed. A known, well adopted Faculty Course Assessment Report (FCAR) based upon learning outcomes assessment data is utilized for documenting old, new action items, modifications and proposals for course improvement by the concerned faculty. Learning outcome assessment information for a program, course or student can be studied in great detail by selecting course outcome, student outcome or performance indicator/criteria analytics for single or multiple terms and are presented using rich graphics and histograms employing an intelligent color coding system. Keywords—Unique Assessments; High Relative Coverage; EvalTools®; Student Outcomes; Course Outcomes; ABET; Continuous Improvement; Performance Indicator I. INTRODUCTION Generally grade giving assessments in an engineering curriculum are comprised of single or multiple questions and cover more than one performance criteria [1-8]. Programs may choose to a) develop new assessments and/or b) use the assessments available in their curriculum for measurement of specific performance criteria related to their program outcomes. In the first method, additional resources and faculty time would be required to measure the performance criteria of interest. The second method may pose limitations on the number of performance criteria measured in a given time frame and the quality of data collected depending upon the availability of streamlining electronic tools or assessments which possess maximum relative coverage of a single performance criterion. The result of both methods is a comparatively small set of performance criteria finally measured in a given time frame by a program using assessments that may not have maximum relative coverage of the specified criteria. Measurement of program educational objectives, student learning outcomes and performance criteria would therefore be completed in comparatively longer cycles. This minimum number of performance criteria measured with comparatively fewer assessments and obviously lesser number of raters over a given time frame would render the program evaluation term review less comprehensive and result in a deficiency in the eventual realization of its PEOs. 
In this paper, is presented outcomes assessment methodology using web based streamlining software EvalTools® 6 [10]. The Assignment Setup Module within EvalTools® 6 is used to split existing assessments of interest to obtain high relative coverage of minimally 70% for a specific performance criterion. These assessments are unique since they are used just once to measure a specific performance criterion. This methodology would result in realistic data since the outcome assessment score would correspond to results from a specific performance criterion with major contribution in the unique assessment and not be significantly affected by other performance criteria with a much lesser percentage of contribution in the assessment. EvalTools® is chosen as the platform for outcomes assessment instead of Blackboard® since it employs the unique FCAR and EAMU performance vector methodology [11-13] which facilitates using existing grade giving assessments for outcomes measurement and thus a high level of automation of the data collection process, feature-rich pick- and-choose assessment/reporting tools, and the flexibility to provide customized features. The basis of assessment in FCAR [11-13] is the EAMU performance vector. The EAMU
  • 2. performance vector counts the number of stu the course whose proficiency for that ou Excellent, Adequate, Minimal, or Unsatis faculty report failing course outcomes (CO outcomes (SOs), performance indicators (P student indirect assessments and other g concern in the respective course reflections Based upon these course reflections, new generated by the faculty. Old action items ar the FCAR for a same course if offered aga and proposals to a course are made with co status of the old action items. The Progr module of EvalTools® is focused on failing analysis and discussions relating to impro values of ABET SOs and weighted average scientific color coding scheme [13] indi investigation. Courses contributing to failing examined. By using EvalTools® 6 the entire proc assessment, evaluation and closing the loop systematically collecting, compiling and pre the course and program level for an easy rev As a result, robust assessment data by m ABET student learning outcomes as well a outcomes is possibly collected by comprehe a significant number of specific performa comparatively much shorter time frames re cycles and relatively comprehensive prog resulting in an efficient continuous improvem II. UNIQUE ASSESSMENTS FOR REALISTI INDICATORS, OUTCOMES MEASUR A. Reinvent the wheel. Design a new set of a specifically for realistic outcomes measur existing grade giving assessments Since grade giving assessments in curriculum are comprised of single or multi cover more than one performance criteria, such an assessment is generally a sum total o obtained from grading multiple perf corresponding to this assessment. Thus the does not actually reflect the grading resu performance criteria but rather a comple grading results from multiple performance the outcomes assessment data resulting from not realistic and does not reflect precise infor specific performance indicators or outco improvement. To obtain realistic data improvement purposes one option available create a new set of assessments specifically criteria, outcomes measurement. Several worldwide have chosen this approach purposes but since it is tedious and requires time [9], resources the programs generally information for small set of outcomes, perfo which are not sufficient for the impl comprehensive academic improvement pro udents that passed utcome was rated sfactory. Program Os), ABET student PIs), comments on general issues of section of FCAR. action items are re carried over into ain. Modifications onsideration of the ram Term Review g SOs and PIs for ovement. Average values of PIs with icate failures for g PIs and SOs are cess of outcomes is streamlined by senting the data at view and analysis. multiple raters for as course learning nsively measuring ance indicators in esulting in quicker gram term review ment system. IC PERFORMANCE REMENT assessments rement besides an engineering iple questions and the total score of of individual scores formance criteria assessment score ults from a single ex distribution of criteria. Therefore m this approach is rmation relating to omes for quality for continuous e for faculty is to y for performance programs [1-8] for accreditation additional faculty y collect minimal ormance indicators lementation of a ocess. This would finally result in programs spe maintaining independent pro realistic continuous improveme B. 
Why reinvent the wheel?Sci assessments for realistic ou At the Islamic Universi Engineering faculty have devel departmental meetings a com criteria covering all phases of offered within the curriculu performance criteria have also handbook [14] of nationally s for several engineering specia assessment related to a specifi would consider implementatio suitable for that course conten performance criteria to the tota be defined during assessment d The performance criteria of in 70% or more share in the total of grading results of the other p score would be thus rendered example where a sample uniqu coverage is designed with max mapping to a CO, ABET SO. Fig.1. Example of design of a unique for specific performance indic outcomes For cases where it is not po more share to a certain perf assessment, the Assignment Se used to split a question or sub achieving 70% high relati performance criteria. Fig. implementation of splitting of questions using EvalTools® 6 obtain high relative coverage an mapping to a certain COs and A set of questions are said to be ending additional resources for ocesses for accreditation and ent ientifically design grade giving utcomes measurement. ity in Madinah, College of oped through several sessions of mprehensive list of performance the syllabi for different courses um. A good percentage of been incorporated from QIYAS standardized learning outcomes alizations. While designing any fic course the concerned faculty on of the performance criteria nt. The contribution of various al score of an assessment would design by the concerned faculty. nterest would be given a nearly score distribution and the effect performance criteria on the total d negligible. Fig. 1 shows an ue assessment with high relative ximum coverage of a specific PI assessment with high relative coverage cator, course outcomes, ABET student ossible to assign a nearly 70% or formance criteria in an entire etup Module of EvalTools® 6 is b question of an assessment for ive coverage of a specific 2 indicates examples of f assessments to questions, sub 6 Assignment Setup Module to nd measurement of a specific PI ABET SO. Such assessments or unique since they are just used
  • 3. once for measurement of a certain PI. Thi implementing unique assessments with high of PIs mapping to COs and ABET SOs wou measurement of outcomes assessment data f continuous improvement. Fig.2. Example of splitting existing grade giving ass sub questions for high relative coverage of a indicator, course, ABET student outcomes C. EvalTools® 6 EAMU Vector calculation factor Realistic outcomes measurements are specifying weights to different assessments e course grading policy or by the type of asse higher weight to laboratory assessments over assessments since lab work covers all t Bloom’s taxonomy [15-16] or final exams o final exam is more comprehensive and well quiz and students are generally more prepare and many student skills have matured by then The following steps are employed by calculate the EAMU vectors: 1.Faculty using EvalTools® 6 Assignme identify an assignment with a set of spe split an assignment to use a specific questi with relative high coverage of a certain P ABET SO (for EAMU calculation). 2. EvalTools® 6 removes students who recei in a course from EAMU vector calcula student scores on the selected assignme remaining students. 3. EvalTools® 6 calculates the weighted aver the assignment, set of questions selected b are set according to their percentage in t scale or as per the decision of the progra entered in the weighting factor section o Setup Module. 4. EvalTools®6 uses average percentage t many students fall into the EAMU catego selected assessment criteria. is methodology of h relative coverage uld ensure realistic for comprehensive sessments to questions, specific performance with weighting also achieved by either according to ssment like giving r purely theoretical three domains of over quiz since the l-designed than a ed for a final exam n. EvalTools® 6 to nt Setup Module ecific questions or on or sub question PI mapping to CO, ived DN, F, W or I ations, and enters nts, questions for rage percentage on y faculty. Weights the course grading am committee and of the Assignment to determine how ries using the pre- 5. EvalTools® 6 calculates t rescaling to 5 for a weighted (refer to Fig. 3 for EAMU av Fig.3. Equation for EAMU a D. Course Outcomes Data Each course has specified C each major topic of the course data once measured would help or learning methodologies cor the course content. This would information to improve the c modifications. At the Isla engineering has decided to use course topics. Fig. 4 shows a c could be used to cover a certain Homework 2, quiz 2 and mid-t as key assignments for CO2. F EAMU vector for each key ass A, yellow for M and red for U. EAMU is (8,12,4,0) (with stud which gives us an average of 3. Fig.4. Data for a single course ou Fig. 4 is only a part of the ana under the heading Course O sequential list of all the COs w and their histogram plots depi students who have not failed th 3 the EAMU average rating by average based on a 3 point scale verage for scale of 3). average rating for a 3 point scale COs which are designed to cover e syllabus sequentially. The CO p identify weakness in teaching rresponding to a certain area of help provide real time formative course by appropriate on time amic University, college of 8-14 COs to cover all the major case where multiple assessments n course outcome. For this case, erm part-V question-42 are used Fig. 2 shows also the color-coded signment, green for E, white for The course outcome CO2 group ents failing the course removed) 61. 
utcome with its multiple assessments alytical charts of FCAR module Outcomes Assessment where a with various related assessments icting performance of all those he course are shown. At the end 2 1 0
At the end of the Course Outcomes Assessment section, a consolidated histogram plot displays all the measured CO data, as shown in Fig. 5. The color-coded visual results give faculty a summarized snapshot view, facilitating identification of the COs which need attention. Even though course outcomes assessment is not required for ABET program accreditation, aligning COs with ABET SOs will channel faculty towards the skill sets needed by students. A direct quote from [4], "Learning outcomes that are systematically assessed at course level can be shown to contribute to program-level outcomes, and thus to information provided to students, employer groups, professional bodies and so on about graduation standards", confirms that course outcomes assessment is crucial for improving faculty teaching and delivery along with assessing students' learning.

Fig.5. Consolidated histogram plot of all course outcome data

E. ABET Student Outcomes and Performance Indicators Data

The Islamic University college of engineering has adopted the ABET SOs for all its programs. Using the same principle as the EAMU computation for each SO, the EAMU and weighted averages are calculated. In the example shown in Table 1, assignments Hw3 and Hw8 are selected as covering a specific PI. These assignments are weighted (applying a weighting factor set either according to the course grading policy or any other scheme selected by the specific program), added together and then normalized to 100 for each student to calculate the per-student aggregated EAMU score and EAMU classification. The weighted, normalized-to-100 scores of all students in a class are grouped together to obtain the average of the EAMU vector for a specific PI, which is computed as per the equation in Fig. 3. Fig. 6 lists all the PIs mapping to ABET SO 1 (SO 1 corresponds to ABET student outcome 'a'). All the PIs mapping to a specific ABET SO are averaged together to give the final average value of that ABET SO. Fig. 7 shows the consolidated ABET SOs histogram plot for a specific course. We see that SO 1 has an average value of 2.89, which is computed by taking the average of the weighted average values obtained for abet_PI_1_27 (1.39), abet_PI_1_43 (3.54) and abet_PI_1_44 (3.75).
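As an illustrative sketch of this aggregation (hypothetical assignment names, weights and maximum points, not the EvalTools® implementation), the selected assignments for a PI are weighted, summed and normalized to 100 per student before classification, and an ABET SO value is simply the mean of the averages of its mapped PIs, which reproduces the SO 1 value quoted above.

```python
# Illustrative sketch of rolling assignment scores up to a per-student PI score
# and PI averages up to an ABET SO average. Assignment names, weights and
# maximum points are hypothetical placeholders.


def student_pi_percentage(scores, weights, max_points):
    """Weight the selected assignments (e.g. Hw3, Hw8), add them, and
    normalize to 100 for one student."""
    earned = sum(scores[a] * weights[a] for a in weights)
    possible = sum(max_points[a] * weights[a] for a in weights)
    return 100 * earned / possible


def so_average(pi_averages):
    """An ABET SO value is the mean of the averages of its mapped PIs."""
    return sum(pi_averages) / len(pi_averages)


# Example per-student aggregation with assumed weights and maximum points.
print(round(student_pi_percentage({"Hw3": 8, "Hw8": 18},
                                  {"Hw3": 1.0, "Hw8": 2.0},
                                  {"Hw3": 10, "Hw8": 20}), 1))  # 88.0

# SO 1 from its three PI averages, as quoted in the text.
print(round(so_average([1.39, 3.54, 3.75]), 2))  # 2.89
```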
Table 1: Calculation of the aggregated EAMU for a PI

III. CONTINUOUS IMPROVEMENT

A. Term Review

The term review process flow for a specific program involves completion of two phases: a) PI evaluation and b) ABET SO evaluation. Fig. 8 shows that the PI evaluation begins with a snapshot consolidated view of all ABET SOs measured in the specified term, with a scientific color coding scheme to indicate failures for investigation. The aggregate value for each measured ABET SO is calculated by averaging its corresponding aggregate PI data, and the aggregate value for each PI measured for that ABET SO is calculated by averaging the PI data measured by multiple raters across different courses. Indicator evaluation is focused on failing SOs and PIs for analysis and discussions relating to improvement. Courses contributing to failing PIs and SOs are examined by selection. The investigations involve study of the course reflections and generated action items in the respective FCARs. Fig. 9 shows the detailed PI analysis for a selected ABET SO, listing the contributing courses and their group EAMU calculations. Action items in the respective FCARs are edited, updated or deleted as per the program chair's decision in agreement with the review members. Certain action items may be elevated from course level to program level based upon the severity of the problem or its degree of importance. The ABET SO evaluation phase integrates overall comments on a specific ABET SO with the comments from the review and analysis of its failing PIs taken from the Performance Indicator Evaluation module of EvalTools® 6. The following term review reports are available in printable Word or PDF format: a) SO executive summary, b) detailed SO/PI executive summary, c) SO/PI PVT summary, and d) course reflections/action items. Fig. 10 shows a snapshot of a detailed SO/PI executive summary of a sample program term review. The information from multiple term reviews for a program can be consolidated and utilized for review of the Program Educational Objectives. The action items listed in the FCARs are followed up by the concerned faculty for closure, and program level action items mentioned in term review reports are appropriately escalated to the responsible departments for implementation.
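The term review pass itself can be sketched in the same spirit. In the fragment below the failure cut-off and the per-course PI records are assumed placeholders, but the flow mirrors the description above: each PI aggregate is the average of the values reported across contributing courses, each SO aggregate is the average of its PI aggregates, and anything below the cut-off is flagged for investigation.

```python
# Hypothetical term-review aggregation: per-course PI values are averaged,
# rolled up per SO, and flagged when below an assumed program cut-off.

FAIL_THRESHOLD = 3.0  # assumed cut-off on the 0-5 rating scale


def term_review(so_to_pis):
    """so_to_pis maps an SO id to {pi_id: [values measured across courses]}.
    Returns (so, pi, value) tuples flagged for investigation."""
    flagged = []
    for so, pis in so_to_pis.items():
        pi_avgs = {pi: sum(vals) / len(vals) for pi, vals in pis.items()}
        for pi, avg in pi_avgs.items():
            if avg < FAIL_THRESHOLD:
                flagged.append((so, pi, round(avg, 2)))
        so_avg = sum(pi_avgs.values()) / len(pi_avgs)
        if so_avg < FAIL_THRESHOLD:
            flagged.append((so, "overall", round(so_avg, 2)))
    return flagged


example = {"SO_1": {"abet_PI_1_27": [1.39],
                    "abet_PI_1_43": [3.54],
                    "abet_PI_1_44": [3.75]}}
print(term_review(example))
# [('SO_1', 'abet_PI_1_27', 1.39), ('SO_1', 'overall', 2.89)]
```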
Fig.6. List of performance indicators with corresponding assignments mapping to a specific ABET student outcome (PIs listed for ABET SO_1)

Fig.7. Consolidated list of ABET student outcomes (SOs) covered by a particular course in a given term
Fig.8. Performance Indicator Evaluation Module EvalTools® 6 beginning page showing student outcomes covered by a program in a given term

Fig.9. Detailed performance indicator analysis for a selected ABET student outcome listing the contributing courses and their group EAMU calculations
Fig.10. Portion of a detailed SO/PI executive summary of a sample program term review

B. Comprehensive Program Evaluation and Improvement with Study of Student Evaluations Based on Realistic ABET Student Outcomes and Performance Indicator Information

Both program and student performance evaluations are based on their respective measured ABET SOs and associated PIs. Study of student failing patterns in these individual student evaluations will confirm any major weakness observed through the collectively averaged outcome data in program evaluations, and further investigation of the respective course FCARs will help determine specific areas such as course content (breadth and/or depth), teaching materials, and/or pedagogical/assessment methodology for realistic program and student improvement. Student advising based on this information helps faculty identify potential areas of strength in academically weak students through the observation of relatively high scores for certain ABET SO and CO related PIs, thereby facilitating the selection of an area of specialization in education, research or industry to focus on for enhanced learning and future industry-related prospects. Program and student evaluations, assessments and advising based on measurable ABET SOs, COs and PIs facilitate an outcome based education system and help the student focus not just on improvement of academic scores but on learning outcomes, since the academic scores now, to a good extent, reflect performance relative to learning outcomes. A direct quote from [17] concurs with this observation: "But students' and graduates' assessment about what competencies they have gained may be one option in constructing new criteria for quality. We see two possible ways of including such output oriented measurements. The best alternative is to develop tests in line with PISA and similar surveys, an initiative now taken by OECD. This is, however, a very time and resource consuming activity…" Fig. 11 lists the ABET SOs for student evaluation. Fig. 12 lists the PIs related to a certain ABET SO and the contributing courses.

IV. CONCLUSION

This paper presents a novel methodology to collect robust and realistic assessment data by multiple raters for ABET student learning outcomes as well as course outcomes by comprehensively measuring a significant number of specific performance indicators. Using EvalTools® 6, the assessment and analytical data are obtained in comparatively much shorter time frames, resulting in quicker cycles for comprehensive program term review. Grade giving assessments are dissected to extract and store electronically a wealth of information for program and student performance evaluations based on ABET SOs, COs and PIs, thus opening an exciting frontier in comprehensive improvement. World class standards in continuous improvement for a program, course or student can be established by an in-depth formative or summative analysis of program and student competencies digital data, especially focused on patterns and anomalies related to specific components of the educational curriculum such as teaching, learning
methodologies, course content, and materials (breadth and/or depth).

Fig.11. ABET student outcomes listed in a student evaluation

Fig.12. Performance Indicators associated to a specific ABET student outcome listed in a student evaluation
ACKNOWLEDGMENT

The College of Engineering at the Islamic University would like to specially thank the Civil, Mechanical and Electrical engineering programs for the data provided.

REFERENCES
[1] J. Moon, "Linking levels, learning outcomes and assessment criteria," Bologna Process – European Higher Education Area. http://www.ehea.info/Uploads/Seminars/040701-02Linking_Levels_plus_ass_crit-Moon.pdf
[2] "Whys & hows of assessment," Eberly Center for Teaching Excellence, Carnegie Mellon University. http://www.cmu.edu/teaching/assessment/howto/basics/objectives.html
[3] J. Biggs and C. Tang, Teaching for Quality Learning at University, 3rd ed. England and NY: Society for Research into Higher Education and Open University Press, 2007.
[4] "Assessment Toolkit: aligning assessment with outcomes," UNSW, Australia. https://teaching.unsw.edu.au/printpdf/531
[5] W. Houghton, Constructive Alignment: and Why It Is Important to the Learning Process. Loughborough: HEA Engineering Subject Centre, 2004.
[6] D. Hounsell, R. Xu, and C. M. Tai, Blending Assignments and Assessments for High-Quality Learning (Scottish Enhancement Themes: Guides to Integrative Assessment, no. 3). Gloucester: Quality Assurance Agency for Higher Education, 2007.
[7] D. Kennedy, A. Hyland, and N. Ryan, "Writing and using learning outcomes: a practical guide," Article C 3.4-1 in EUA Bologna Handbook: Making Bologna Work. Berlin: Raabe Verlag, 2006.
[8] J. Prados, "Can ABET really make a difference?" Int. J. Engng Ed., vol. 20, no. 3, pp. 315-317, 2004.
[9] M. Manzoul, "Effective assessment process," 2007 Best Assessment Processes IX Symposium, April 13, Terre Haute, Indiana.
[10] Information on EvalTools® available at http://www.makteam.com
[11] J. Estell, J. Yoder, B. Morrison, and F. Mak, "Improving upon best practices: FCAR 2.0," ASEE 2012 Annual Conference, San Antonio.
[12] C. Liu and L. Chen, "Selective and objective assessment calculation and automation," ACMSE'12, March 29-31, 2012, Tuscaloosa, AL, USA.
[13] F. Mak and J. Kelly, "Systematic means for identifying and justifying key assignments for effective rules-based program evaluation," 40th ASEE/IEEE Frontiers in Education Conference, October 27-30, Washington, DC.
[14] Handbook of Learning Outcomes, November 2014 draft (unpublished), QIYAS, Ministry of Education, Saudi Arabia.
[15] B. S. Bloom, B. B. Masia, and D. R. Krathwohl, Taxonomy of Educational Objectives: The Affective Domain. New York: McKay, 1964.
[16] K. Salim, R. Ali, N. Hussain, and H. Haron, "An instrument for measuring the learning outcomes of laboratory work," Proceedings of the IETEC'13 Conference, 2013, Ho Chi Minh City, Vietnam.
[17] P. Aamodt and E. Hovdhaugen, "Assessing higher education learning outcomes as a result of institutional and individual characteristics," Outcomes of Higher Education: Quality Relevance and Impact, September 8-10, Paris, France.