ROBUST ABET LEARNING OUTCOMES DATA IN SHORTER TIME FRAMES
BY STREAMLINING SOFTWARE EVALTOOLS® EMPLOYING A
COMPREHENSIVE PROGRAM EVALUATION METHODOLOGY USING HIGH
RELATIVE COVERAGE UNIQUE ASSESSMENTS BY MULTIPLE RATERS FOR
MEASUREMENT OF SPECIFIC PERFORMANCE INDICATORS
Wajid Hussain, M. F. Addas
College of Engineering
Islamic University
Madinah Munawarrah, Saudi Arabia
wajidh@iu.edu.sa, mfaddas@iu.edu.sa
Fong Mak
Electrical and Computer Engineering
Gannon University
Erie, Pennsylvania
mak@gannon.edu
Abstract—This paper presents a novel methodology to collect robust assessment data by multiple raters for ABET student learning outcomes as well as course learning outcomes by comprehensively measuring a significant number of specific performance indicators in comparatively much shorter time frames, yielding quicker cycles, a relatively comprehensive program term review and an efficient continuous improvement system. Assessments prepared by multiple raters across different courses with high relative coverage of roughly 70% for a single performance indicator related to the ABET student outcomes and respective course outcomes are used just once for a specific measurement. A novel technique of using the Assignment Setup Module of EvalTools® for splitting an available assessment into multiple sections to obtain high relative coverage is discussed. The well adopted Faculty Course Assessment Report (FCAR), based upon learning outcomes assessment data, is utilized for documenting old and new action items, modifications and proposals for course improvement by the concerned faculty. Learning outcome assessment information for a program, course or student can be studied in great detail by selecting course outcome, student outcome or performance indicator/criteria analytics for single or multiple terms, and is presented using rich graphics and histograms employing an intelligent color coding system.
Keywords—Unique Assessments; High Relative Coverage;
EvalTools®; Student Outcomes; Course Outcomes; ABET;
Continuous Improvement; Performance Indicator
I. INTRODUCTION
Generally grade giving assessments in an engineering
curriculum are comprised of single or multiple questions and
cover more than one performance criteria [1-8]. Programs may
choose to a) develop new assessments and/or b) use the
assessments available in their curriculum for measurement of
specific performance criteria related to their program
outcomes. In the first method, additional resources and faculty
time would be required to measure the performance criteria of
interest. The second method may pose limitations on the
number of performance criteria measured in a given time
frame and the quality of data collected depending upon the
availability of streamlining electronic tools or assessments
which possess maximum relative coverage of a single
performance criterion. The result of both methods is a
comparatively small set of performance criteria finally
measured in a given time frame by a program using
assessments that may not have maximum relative coverage of
the specified criteria. Measurement of program educational
objectives, student learning outcomes and performance criteria
would therefore be completed in comparatively longer cycles.
This minimum number of performance criteria measured with
comparatively fewer assessments and obviously lesser number
of raters over a given time frame would render the program
evaluation term review less comprehensive and result in a
deficiency in the eventual realization of its PEOs.
This paper presents an outcomes assessment methodology using the web based streamlining software EvalTools® 6 [10]. The Assignment Setup Module within EvalTools® 6 is used to split existing assessments of interest to obtain a high relative coverage of minimally 70% for a specific performance criterion. These assessments are unique since they are used just once to measure a specific performance criterion. This methodology would yield realistic data, since the outcome assessment score would correspond to results from a specific performance criterion with the major contribution in the unique assessment, and would not be significantly affected by other performance criteria with a much smaller percentage of contribution in the assessment.
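The relative coverage idea can be sketched in code. This is an illustrative sketch only, not the EvalTools® API: the data layout and function names are invented for the example.

```python
# Illustrative sketch (not the EvalTools(R) API): estimate each performance
# indicator's relative coverage of an assessment from the per-question
# point allocation defined at assessment-design time.

def relative_coverage(questions):
    """questions: list of (points, pi_id) pairs for one assessment.
    Returns each PI's share of the assessment's total score."""
    total = sum(points for points, _ in questions)
    coverage = {}
    for points, pi in questions:
        coverage[pi] = coverage.get(pi, 0.0) + points / total
    return coverage

def needs_split(questions, pi, threshold=0.70):
    """True when the whole assessment gives `pi` less than `threshold`
    coverage, so a question or sub-question split would be required."""
    return relative_coverage(questions).get(pi, 0.0) < threshold

# A 100-point exam: questions worth 30 and 50 points map to PI "a1",
# and a 20-point question maps to PI "b2".
exam = [(30, "PI_a1"), (50, "PI_a1"), (20, "PI_b2")]
```

Here `PI_a1` reaches 80% coverage, so the exam as a whole can serve as a unique assessment for it, while `PI_b2` would have to be split out at the question level, where its coverage becomes 100%.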
EvalTools® is chosen as the platform for outcomes assessment instead of Blackboard® since it employs the unique FCAR and EAMU performance vector methodology [11-13], which facilitates using existing grade giving assessments for outcomes measurement and thus a high level of automation of the data collection process, feature-rich pick-and-choose assessment/reporting tools, and the flexibility to provide customized features. The basis of assessment in the FCAR [11-13] is the EAMU performance vector. The EAMU
performance vector counts the number of students that passed the course whose proficiency for that outcome was rated Excellent, Adequate, Minimal, or Unsatisfactory. Program faculty report failing course outcomes (COs), ABET student outcomes (SOs), performance indicators (PIs), comments on student indirect assessments and other general issues of concern in the respective course reflections section of the FCAR. Based upon these course reflections, new action items are generated by the faculty. Old action items are carried over into the FCAR for the same course if offered again. Modifications and proposals to a course are made with consideration of the status of the old action items. The Program Term Review module of EvalTools® is focused on failing SOs and PIs for analysis and discussions relating to improvement. Average values of ABET SOs and weighted average values of PIs with a scientific color coding scheme [13] indicate failures for investigation. Courses contributing to failing PIs and SOs are examined.
By using EvalTools® 6, the entire process of outcomes assessment, evaluation and closing the loop is streamlined by systematically collecting, compiling and presenting the data at the course and program level for an easy review and analysis. As a result, robust assessment data by multiple raters for ABET student learning outcomes as well as course learning outcomes can be collected by comprehensively measuring a significant number of specific performance indicators in comparatively much shorter time frames, resulting in quicker cycles, a relatively comprehensive program term review and an efficient continuous improvement system.
II. UNIQUE ASSESSMENTS FOR REALISTIC PERFORMANCE INDICATORS, OUTCOMES MEASUREMENT
A. Reinvent the wheel. Design a new set of assessments specifically for realistic outcomes measurement besides existing grade giving assessments
Since grade giving assessments in an engineering curriculum are comprised of single or multiple questions and cover more than one performance criterion, the total score of such an assessment is generally a sum total of individual scores obtained from grading multiple performance criteria corresponding to this assessment. Thus the assessment score does not actually reflect the grading results from a single performance criterion but rather a complex distribution of grading results from multiple performance criteria. Therefore the outcomes assessment data resulting from this approach is not realistic and does not reflect precise information relating to specific performance indicators or outcomes for quality improvement. To obtain realistic data for continuous improvement purposes, one option available for faculty is to create a new set of assessments specifically for performance criteria, outcomes measurement. Several programs [1-8] worldwide have chosen this approach for accreditation purposes, but since it is tedious and requires additional faculty time [9] and resources, the programs generally collect minimal information for a small set of outcomes, performance indicators which are not sufficient for the implementation of a comprehensive academic improvement process. This would
finally result in programs spending additional resources for maintaining independent processes for accreditation and realistic continuous improvement.
B. Why reinvent the wheel? Scientifically design grade giving assessments for realistic outcomes measurement.
At the Islamic University in Madinah, College of Engineering faculty have developed, through several sessions of departmental meetings, a comprehensive list of performance criteria covering all phases of the syllabi for different courses offered within the curriculum. A good percentage of performance criteria have also been incorporated from the QIYAS handbook [14] of nationally standardized learning outcomes for several engineering specializations. While designing any assessment related to a specific course, the concerned faculty would consider implementation of the performance criteria suitable for that course content. The contribution of various performance criteria to the total score of an assessment would be defined during assessment design by the concerned faculty. The performance criteria of interest would be given a nearly 70% or more share in the total score distribution, and the effect of grading results of the other performance criteria on the total score would thus be rendered negligible. Fig. 1 shows an example where a sample unique assessment with high relative coverage is designed with maximum coverage of a specific PI mapping to a CO, ABET SO.
Fig.1. Example of design of a unique assessment with high relative coverage for specific performance indicator, course outcomes, ABET student outcomes
For cases where it is not possible to assign a nearly 70% or more share to a certain performance criterion in an entire assessment, the Assignment Setup Module of EvalTools® 6 is used to split a question or sub question of an assessment for achieving 70% high relative coverage of a specific performance criterion. Fig. 2 indicates examples of implementation of splitting of assessments to questions, sub questions using the EvalTools® 6 Assignment Setup Module to obtain high relative coverage and measurement of a specific PI mapping to certain COs and ABET SO. Such assessments or set of questions are said to be unique since they are just used
once for measurement of a certain PI. This methodology of implementing unique assessments with high relative coverage of PIs mapping to COs and ABET SOs would ensure realistic measurement of outcomes assessment data for comprehensive continuous improvement.
Fig.2. Example of splitting existing grade giving assessments to questions, sub questions for high relative coverage of a specific performance indicator, course, ABET student outcomes
C. EvalTools® 6 EAMU Vector calculation with weighting factor
Realistic outcomes measurements are also achieved by specifying weights to different assessments, either according to course grading policy or by the type of assessment: for example, giving higher weight to laboratory assessments over purely theoretical assessments since lab work covers all three domains of Bloom's taxonomy [15-16], or to final exams over quizzes since the final exam is more comprehensive and well-designed than a quiz, students are generally more prepared for a final exam, and many student skills have matured by then.
The following steps are employed by EvalTools® 6 to calculate the EAMU vectors:
1. Faculty using the EvalTools® 6 Assignment Setup Module identify an assignment with a set of specific questions, or split an assignment to use a specific question or sub question with high relative coverage of a certain PI mapping to CO, ABET SO (for EAMU calculation).
2. EvalTools® 6 removes students who received DN, F, W or I in a course from EAMU vector calculations, and enters student scores on the selected assignments, questions for the remaining students.
3. EvalTools® 6 calculates the weighted average percentage on the assignment, set of questions selected by faculty. Weights are set according to their percentage in the course grading scale or as per the decision of the program committee, and entered in the weighting factor section of the Assignment Setup Module.
4. EvalTools® 6 uses the average percentage to determine how many students fall into the EAMU categories using the pre-selected assessment criteria.
5. EvalTools® 6 calculates the EAMU average rating by rescaling to 5 for a weighted average based on a 3 point scale (refer to Fig. 3 for the EAMU average for a scale of 3).
Fig.3. Equation for EAMU average rating for a 3 point scale
D. Course Outcomes Data
Each course has specified COs which are designed to cover each major topic of the course syllabus sequentially. The CO data, once measured, would help identify weakness in teaching or learning methodologies corresponding to a certain area of the course content. This would help provide real time formative information to improve the course by appropriate on time modifications. At the Islamic University, the college of engineering has decided to use 8-14 COs to cover all the major course topics. Fig. 4 shows a case where multiple assessments could be used to cover a certain course outcome. For this case, Homework 2, quiz 2 and mid-term part-V question-42 are used as key assignments for CO2. Fig. 2 also shows the color-coded EAMU vector for each key assignment: green for E, white for A, yellow for M and red for U. The course outcome CO2 group EAMU is (8,12,4,0) (with students failing the course removed), which gives us an average of 3.61.
Fig.4. Data for a single course outcome with its multiple assessments
Fig. 4 is only a part of the analytical charts of the FCAR module under the heading Course Outcomes Assessment, where a sequential list of all the COs with various related assessments and their histogram plots depicting performance of all those students who have not failed the course are shown. At the end of the Course Outcomes Assessment section, a consolidated histogram plot displays all the COs data measured as shown in Fig. 5. The color-coded visual results give faculty a snapshot summarized view facilitating identification of the COs which need attention. Even though course outcomes assessment is not required for ABET program accreditation, aligning COs with ABET SOs will channel faculty towards the skill sets needed for students. A direct quote from [4], "Learning outcomes that are systematically assessed at course level can be shown to contribute to program-level outcomes, and thus to information provided to students, employer groups, professional bodies and so on about graduation standards", confirms that course outcomes assessment is crucial for faculty teaching and delivery improvement along with assessing students' learning.
Fig.5. Consolidated histogram plot of all course outcome data
E. ABET Student Outcomes and Performance Indicators Data
The Islamic University college of engineering has adopted ABET SOs for all its programs. Using the same principle for EAMU computation for each SO, the EAMU and weighted averages are calculated. In the example shown in Table 1, assignments Hw3 and Hw8 are selected for covering a specific PI. These assignments are weighted (application of a weighting factor either according to course grading policy or any other selected by the specific program), added together and then normalized to 100 for each student to calculate a per student aggregated EAMU score and EAMU classification. The weighted and normalized to 100 scores of all students in a class are grouped together to obtain the average of the EAMU vector for a specific PI, which is computed as per the equation in Fig. 3. Fig. 6 lists all the PIs mapping to ABET SO 1 (SO 1 corresponds to ABET student outcome 'a'). All the PIs mapping to a specific ABET SO are averaged together to give the final average value of the ABET SO. Fig. 7 shows the consolidated ABET SOs histogram plot for a specific course. We see that SO 1 has an average value of 2.89, which is computed by taking the average of the weighted average values obtained for abet_PI_1_27 (1.39), abet_PI_1_43 (3.54) and abet_PI_1_44 (3.75).
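The EAMU computation of Sections II.C-II.E can be sketched as follows. The exact equation of Fig. 3 is not reproduced in the text, so the category weights E=3, A=2, M=1, U=0 on the 3-point scale, rescaled to 5, are an inference; they do reproduce both worked examples (3.61 for CO2 and 2.89 for SO 1).

```python
# Sketch of the EAMU average rating and SO roll-up described in
# Sections II.C-II.E. The E=3/A=2/M=1/U=0 weights on the 3-point scale,
# rescaled to 5, are an inferred reading of the Fig. 3 equation.

def eamu_average(e, a, m, u):
    """Average rating for an EAMU vector (students who received DN, F, W
    or I grades are removed before the counts are formed)."""
    n = e + a + m + u
    return (3 * e + 2 * a + 1 * m + 0 * u) / n * (5 / 3)

def so_average(pi_averages):
    """An ABET SO's average is the mean of its PIs' weighted averages."""
    return sum(pi_averages) / len(pi_averages)

# CO2 example (Section II.D): EAMU vector (8, 12, 4, 0)
print(round(eamu_average(8, 12, 4, 0), 2))       # 3.61

# SO 1 example (Section II.E): PI averages 1.39, 3.54 and 3.75
print(round(so_average([1.39, 3.54, 3.75]), 2))  # 2.89
```

Agreement with both published numbers supports this reading, but it remains a sketch of the rating formula, not the EvalTools® implementation.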
Table 1: Calculation of aggregated EAMU for a PI
III. CONTINUOUS IMPROVEMENT
A. Term Review
The term review process flow for a specific program involves completion of two phases: a) PI evaluation and b) ABET SO evaluation. Fig. 8 shows that the PI evaluation begins with a snapshot consolidated view of all ABET SOs measured in the specified term, with a scientific color coding scheme to indicate failures for investigation. The aggregate value for each measured ABET SO is calculated by averaging its corresponding aggregate PIs data. The aggregate value for each PI measured for this specific ABET SO is calculated by averaging this PI's data measured by multiple raters across different courses. Indicator evaluation is focused on failing SOs and PIs for analysis and discussions relating to improvement. Courses contributing to failing PIs and SOs are examined by selection. The investigations involve study of the course reflections and generated action items in the respective FCARs. Fig. 9 shows detailed PI analysis for a selected ABET SO listing the contributing courses and their group EAMU calculations. Action items in respective FCARs are edited, updated or deleted as per the program chair's decision in agreement with review members. Certain action items may be elevated to program level from course level based upon the severity of the problem or degree of importance. The ABET SO evaluation phase integrates overall comments on a specific ABET SO with the comments of review and analysis of its failing PIs taken from the Performance Indicator Evaluation module of EvalTools® 6. The following term review reports: a) SO executive summary, b) Detailed SO/PI executive summary, c) SO/PI PVT summary, d) Course reflections/Action items, are available in printable word or pdf format. Fig. 10 shows a snapshot of a detailed SO/PI executive summary of a sample program term review. The information from multiple term reviews for a program can be consolidated and utilized for review of the Program Educational Objectives. The action items listed in the FCARs are followed up by the concerned faculty for closure, and program level action items mentioned in term review reports are appropriately escalated to the responsible departments for implementation.
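The two-level aggregation of the PI evaluation phase can be sketched as below. The data shapes and the failing threshold are hypothetical, and the paper's color coding scheme [13] is not reproduced.

```python
# Illustrative sketch of the Section III.A aggregation: a PI's term
# aggregate averages its measurements by multiple raters across courses,
# and an SO's aggregate averages its PIs' aggregates. Data shapes and the
# failure threshold are hypothetical, not the EvalTools(R) internals.

def aggregate_pis(measurements):
    """measurements: {pi_id: [per-course PI averages]} -> {pi_id: aggregate}"""
    return {pi: sum(vals) / len(vals) for pi, vals in measurements.items()}

def aggregate_sos(so_to_pis, pi_aggregates):
    """so_to_pis: {so_id: [pi_id, ...]} -> {so_id: aggregate}"""
    return {so: sum(pi_aggregates[pi] for pi in pis) / len(pis)
            for so, pis in so_to_pis.items()}

def flag_for_investigation(aggregates, threshold=2.0):
    """Indicators or outcomes whose aggregate falls below a (hypothetical)
    failing threshold, sorted for a stable review listing."""
    return sorted(k for k, v in aggregates.items() if v < threshold)
```

The flagged PIs and SOs would then drive the course-by-course examination of FCAR reflections and action items described above.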
Fig.6. List of performance indicators with corresponding assignments mapping to a specific ABET student outcome (PIs listed for ABET SO_1)
Fig.7. Consolidated list of ABET student outcomes SOs covered by a particular course in a given term
Fig.8. Performance Indicator Evaluation Module EvalTools® 6 beginning page showing student outcomes covered by a program in a given term
Fig.9. Detailed performance indicator analysis for a selected ABET student outcome listing the contributing courses and their group EAMU calculations
Fig.10. Portion of detailed SO/PI executive summary of a sample program term review
B. Comprehensive Program Evaluation and Realistic Improvement with Study of Student Evaluations Based On Realistic ABET Student Outcomes and Performance Indicator Information
Both program and student performance evaluations are based on their respective measured ABET SOs and associated PIs. Study of student failing patterns in these individual student evaluations will confirm any major weaknesses observed through the collectively averaged outcome data in program evaluations, and further investigations of the respective course FCARs will help determine specific areas such as course content (breadth and/or depth), teaching materials, and/or pedagogical/assessment methodology for realistic program and student improvement. Student advising based on this information helps faculty to identify potential areas of strength in academically weak students through the observation of relatively high scores for certain ABET SO, CO related PIs, and thereby facilitates ease of selection of an area of specialization of education, research or industry to focus on for enhanced learning and future industry related prospects. Program, student evaluations, assessments and advising based on measurable ABET SOs, COs and PIs facilitate an outcome based education system and help the student to focus not just on improvement of academic scores but on learning outcomes, since the academic scores now to a good extent reflect performance relative to learning outcomes. A direct quote from [17] concurs with the same observation: "But students' and graduates' assessment about what competencies they have gained may be one option in constructing new criteria for quality. We see two possible ways of including such output oriented measurements. The best alternative is to develop tests in line with PISA and similar surveys, an initiative now taken by OECD. This is, however, a very time and resource consuming activity…" Fig. 11 lists the ABET SOs for student evaluation. Fig. 12 lists the PIs related to a certain ABET SO and the contributing courses.
IV. CONCLUSION
This paper presents a novel methodology to collect robust and realistic assessment data by multiple raters for ABET student learning outcomes as well as course outcomes by comprehensively measuring a significant number of specific performance indicators. Using EvalTools® 6, the assessment and analytical data are obtained in comparatively much shorter time frames, resulting in quicker cycles for comprehensive program term review. Grade giving assessments are dissected to extract and store electronically a wealth of information for program, student performance evaluations based on ABET SOs, COs and PIs, thus opening an exciting frontier in comprehensive improvement. World class standards in continuous improvement for program, course or student can be established by an in depth formative or summative analysis of program and student competencies digital data, especially focused on patterns, anomalies related to specific components of the educational curriculum such as teaching, learning methodologies, course content, materials (breadth and/or depth).
Fig.11. ABET student outcomes listed in a student evaluation
Fig.12. Performance indicators associated with a specific ABET student outcome listed in a student evaluation
ACKNOWLEDGMENT
The College of Engineering at the Islamic University would
like to specially thank the Civil, Mechanical and Electrical
engineering programs for data provided.
REFERENCES
[1] J. Moon, “Linking levels, learning outcomes and assessment criteria,” Bologna Process – European Higher Education Area.
http://www.ehea.info/Uploads/Seminars/040701-02Linking_Levels_plus_ass_crit-Moon.pdf
[2] “Whys & hows of assessment,” Eberly Center for Teaching Excellence,
Carnegie Mellon University.
http://www.cmu.edu/teaching/assessment/howto/basics/objectives.html
[3] Biggs, J. and Tang, C. (2007). Teaching for Quality Learning at
University. 3rd edition. England and NY: Society for Research into
Higher Education and Open University Press.
[4] “Assessment Toolkit: aligning assessment with outcomes,” UNSW,
Australia. https://teaching.unsw.edu.au/printpdf/531
[5] Houghton, W. (2004). Constructive alignment: and why it is important
to the learning process. Loughborough: HEA Engineering Subject
Centre.
[6] Hounsell, D., Xu, R. and Tai, C.M. (2007). Blending Assignments and
Assessments for High-Quality Learning. (Scottish Enhancement
Themes: Guides to Integrative Assessment, no.3). Gloucester: Quality
Assurance Agency for Higher Education.
[7] D. Kennedy, A. Hyland, and N. Ryan, “Writing and using learning
outcomes: a practical guide,” Article C 3.4-1 in EUA Bologna Handbook:
Making Bologna Work, Berlin 2006: Raabe Verlag.
[8] J. Prados, “Can ABET Really Make a Difference?” Int. J. Engng Ed.
Vol. 20, No. 3, pp. 315-317, 2004
[9] M. Manzoul, “Effective assessment process,” 2007 Best Assessment
Processes IX Symposium, April 13, Terre Haute, Indiana.
[10] Information on EvalTools® available at http://www.makteam.com
[11] J. Estell, J. Yoder, B. Morrison, F. Mak, “Improving upon best practices:
FCAR 2.0,” ASEE 2012 Annual Conference, San Antonio.
[12] C. Liu, L. Chen, “Selective and objective assessment calculation and
automation,” ACMSE’12, March 29-31, 2012, Tuscaloosa, AL, USA.
[13] F. Mak, J. Kelly, “Systematic means for identifying and justifying key
assignments for effective rules-based program evaluation,” 40th
ASEE/IEEE Frontiers in Education Conference, October 27-30, 2010, Washington, DC.
[14] Handbook of Learning Outcomes, November 2014 draft (unpublished), QIYAS Ministry of Education, Saudi Arabia.
[15] Bloom, B.S., Masia, B.B. and Krathwohl, D.R. (1964). Taxonomy of Educational Objectives: The Affective Domain. New York: McKay.
[16] K. Salim, R. Ali, N. Hussain, H. Haron, “An instrument for measuring
the learning outcomes of laboratory work,” Proceeding of the IETEC’13
Conference, 2013. Ho Chi Minh City, Vietnam.
[17] P. Aamodt, E. Hovdhaugen, “Assessing higher education learning
outcomes as a result of institutional and individual characteristics,”
Outcomes of Higher Education: Quality Relevance and Impact, September 8-10, Paris, France.