A workshop on the ins and outs of assessment, covering formative vs. summative assessment and each of Kirkpatrick's four levels as applied in education, to help produce higher-quality courses and programs that truly measure what they set out to measure.
2. What do you like most about assessment in your classes right now?
What do you think could be going better?
Do you really know how well your students are learning what they need to?
3. One of the common assessment models is Kirkpatrick’s four levels:
Reaction (e.g., student evaluations)
Learning (assignments and tests)
Behavior (how does knowledge transfer to new situations/classes?)
Impact (how does your class relate to the whole program?)
4. What do you do with those student evaluation surveys each semester?
Ever tried alternative evaluation forms or midterm evaluations?
Reaction surveys of course have their issues and shouldn’t be your only data source, but they can be helpful in showing you where you can improve.
Some other ideas to gauge reaction:
Peer evaluations after group projects
Open discussion forums to allow students to express ideas
Private meetings with each student once or twice a semester
5. This is the big one, and where most professors spend their time when it comes to assessment.
There are many ideas, and many ways to do effective assessment in any discipline.
Consider the level of the students and the class, as well as the subject, when deciding on an assessment strategy.
6. Bloom’s Taxonomy helps you find and create the most appropriate assessments!
7. Some examples:
Comprehension: After a unit on the poetry of the Romantic period, students in an English literature class will complete a 25-question multiple-choice test on their understanding of the key figures and works from this era.
▪ Assessment: test tool (paper, Blackboard)
Analysis: After a unit on the poetry of the Romantic period, students in an English literature class will engage in an online discussion regarding the themes of this period and how they were influenced by the political and societal trends of the time.
▪ Assessment: discussion board with rubric
8. Application/Physical Activities: After a lesson on determining a patient’s blood pressure, nursing students will obtain the blood pressure of a partner using a traditional cuff and stethoscope, and report it with 100% accuracy.
Assessment: observation with rubric or checklist
9. Evaluation/Characterization: When planning the construction of a new residential building, construction engineering students will be able to make judgments regarding the use of brick, steel, or concrete in various structural applications, and will revise those judgments based on observation and new information.
Assessment: observation and product evaluation, with rubric
10. Center for Teaching at Vanderbilt:
http://cft.vanderbilt.edu/guides-sub-pages/writing-good-multiple-choice-test-questions/
UNC Charlotte Center for Teaching:
http://teaching.uncc.edu/learning-resources/articles-books/best-practice/assessment-grading/designing-test-questions
Kansas State booklet on “writing trick questions”:
http://www.k-state.edu/ksde/alp/resources/Handout-Module6.pdf
11. At higher levels of Bloom’s Taxonomy, tests are often not the best measure.
Consider other ways of gauging learning:
Research projects or papers
Presentations or debates
Concept maps
Student-created media (video, audio, etc.)
Performances
Other products that show learning in action
13. You are not alone – rubrics are not always easy to create.
Try rubric-makers that make your job easier!
Rubistar:
http://rubistar.4teachers.org
iRubric:
http://www.rcampus.com/indexrubric.cfm
14. Levels 3 and 4 of Kirkpatrick’s model are often difficult for us to achieve in college.
We may not see our students again, or have a good idea of how our class fits in with the whole of a degree program.
Some ideas for doing Levels 3 & 4:
Consult with those who teach the courses just before and after yours in the program and review objectives
Plan case studies, experiential learning, and internships at specific points in a program and work with other faculty to support them
Useful article: http://eric.ed.gov/?id=ED477023
15. Carnegie Mellon assessment resource:
http://www.cmu.edu/teaching/assessment/howto/basics/index.html
Online assessment strategies:
http://jolt.merlot.org/vol6no1/sewell_0310.pdf
PNW Center for Teaching and Learning resources:
http://centers.pnw.edu/teaching/assessment/
Tool for providing good assessment feedback:
http://www.purdue.edu/passnote/
Computational thinking rubric (for assessing critical thinking and other process skills):
https://sites.google.com/site/workshopctandsblresourcesite/home
16. Reach us at:
atrekles@pnw.edu
http://centers.pnw.edu/teaching for all workshop notes, links, and training needs
Editor’s Notes
Welcome to Assessment – all about managing tests, projects, and the grade center within Blackboard.
As we talk about the ins and outs of assessment, consider some of these questions:
What do you like most about assessment in your classes right now?
What do you think could be going better?
Do you really know how well your students are learning what they need to?
We often reflect and observe where we’d like to change things up or improve how learning is going in our course. And because students are human, after all (and so are we), it’s not always that easy to determine how well learning is going in the first place. We may use various types of measures to do assessment, but sometimes it’s difficult to decide whether one of our objectives would be better served with a project, a discussion, a test, or some other means entirely.
The approach we should take is one where both formative and summative assessment methods are used to find out whether students are learning, and where they might yet be struggling. In this way, we can take continual “snapshots” of how they are doing over time in the class, and adjust accordingly when we find there are gaps in student knowledge. If we can diagnose issues while students are still with us for the term, there is a better chance that we will be able to help them be their best when it comes time to complete final projects and tests at the end of the semester.
Don Kirkpatrick devised a model now very common in training, known as the four levels. You may have heard of this before, but in higher education, we tend to stop at Learning, and often don’t consider Reaction except at the very end of the semester. Are there ways, however, that we can include the Behavior and Impact levels in our assessment of our students and our courses? After all, we don’t want our students to walk into their next classes, or even worse, out the door with a degree in hand, without being able to do certain things. The skills (and attitudes) that our students must have at the end of their programs are what we can think about in terms of those Level 3 and 4 assessments. It may even be worthwhile to speak with the instructors of the courses that come after yours to find out whether students seem prepared now, where they can do better, and how you can help them get there while you have them. This conversation could even extend to your department or program chair so that you can all work together to assess how well the students are doing in their command of the discipline overall.
When it comes to reaction, what do you do with those student evaluations every semester, other than put them in your faculty annual review documentation? Hopefully, you have been able to gain something from the surveys you receive, to the point where you might even want to get feedback from students at other times in the semester, when you might be able to address problems more directly. Many faculty do a midterm evaluation, and the one linked here is the midterm evaluation that I use, created in GoFormative.com. GoFormative allows you to create various types of interesting assessments that can be taken either anonymously (as mine is set up for) or with students entering a name so that you can track them. This choice, of course, depends on the type of evaluation you’re doing.
Reaction surveys aren’t perfect – which is why we need three more levels! We get a sense of what students think about their learning and the class, but these opinions are individual and subjective. There’s no getting around that. While it’s usually very good information and you’ll be happy to have a good amount of it to work from, different people can interpret things very differently. I’m sure we can all tell stories about “horror evaluations” and students who seemed to do more complaining than learning. This does happen, and it’s not your fault. Sometimes you just won’t get along with a student, and they will say so on their evaluation of you. The best thing to do is learn from the situation and move on.
There are other ways to gauge reaction at different points in the semester. One is to offer reaction surveys of some sort with each assignment, especially group projects: peer evaluations keep people honest and give you a better idea of what really happened in the group. You might also open up a discussion forum, either public or anonymous, to allow students to post their ideas about the course at any time in the semester. Another idea is to hold private meetings with each student once or twice a semester to talk one-on-one and get their feedback about how things are going. Whether you teach online or face to face, these meetings can be very valuable both to you and the student.
When it comes to learning, this is where we’re experts. We spend the bulk of our time each semester assessing what students are learning and how well they’re doing it. There are many approaches, and many ways to do it well, but it all comes down to the objectives we have for the course, as well as the students themselves and the level they are at. For example, it may not be appropriate to ask freshmen in a business class to engage in a community-based practicum experience with little guidance; these students may need knowledge and skills within the discipline first to be able to function well in an authentic setting.
Remember Bloom’s Taxonomy and its three different domains? It will most definitely come in handy as you try to determine what kinds of assessments will go along with each of your objectives and related activities. Go back to your objectives and use them as a guide when developing your assessments – you will always be right on target if you use those objectives as your blueprint.
See the following documents for additional help and explanation of each area:
http://assessment.uconn.edu/docs/LearningTaxonomy_Affective.pdf
http://assessment.uconn.edu/docs/LearningTaxonomy_Cognitive.pdf
http://assessment.uconn.edu/docs/LearningTaxonomy_Psychomotor.pdf
Let’s look at some examples of matching assessments with tools and Bloom’s levels. In the first example, we have a basic, Comprehension-level objective designed to get at core knowledge in literature. An objective quiz in BlackBoard works well to gauge this knowledge.
Once we get to analysis, a test might not cut it any longer. We may need to hear more about what students have to say about the subject, and allow them to engage in the topic from several angles. For this, we might use a rubric to score their performance in a discussion forum, where they may debate back and forth about themes and influences on the literature of the period.
How about this one, looking at the Psychomotor domain? Sometimes the skill-level objectives are both the easiest and hardest to assess. We often know what doing a skill “correctly” looks like, but explaining it to students may be another matter. A rubric or checklist that can be given to them to show which specific aspects of the performance need improvement is a good idea. With the example of taking blood pressure, there may be a series of behaviors involved in the steps of taking a patient’s blood pressure that should be taken into account; just reporting the pressure accurately may not mean that the student has fulfilled the objective.
How about in the affective domain? This is often cited as being the hardest area to assess, as it is involved with feelings and emotions regarding a subject. Evaluation, judgment, taking a stance, and making decisions are all part of the affective domain, and these things also correspond to the evaluation level of the cognitive domain – the highest level. In some classes, students may never be asked to engage at this level, and that’s okay. Early-level classes in any subject should not always expect that students would be able to reach this point. But, toward the end of a program or unit of study, you may need to be able to assess how well students can make value judgments, such as in this example. In construction, it’s not enough to just build a house; we also need to know that our students know what “good” buildings look like. The composition and use of materials, as well as the angles and measurements used, among other things, are all part of this, and all of these skills can be assessed with a thorough rubric used during an observation or evaluation from the professor.
Here are some very useful resources on designing good tests and test questions at all levels of Bloom’s and more.
Sometimes, of course, an objective test does not really match up with what is being measured. This is particularly true when you’re dealing with complex tasks and skills that require higher-level thinking. While you can certainly develop test questions that correspond to every level of Bloom’s Taxonomy, it is not always easy to do so, or to grade such questions. Introducing projects, papers, presentations, debates, and other types of performances into your classroom also allows for more variety and flexibility, which helps engage students at different and often deeper levels.
Some cool tools that may be useful to you and to your students include Blackboard’s various tools, WebEx for live collaboration, Google Docs, Prezi, new templates from Microsoft that add more spice to your presentations or papers, Screencast-o-matic or Jing/Camtasia, Wordpress, or SimpleBooklet. These are all tools that students can use freely to promote their own learning, and they often require minimal knowledge or effort from you. Students can figure these tools out pretty easily, so giving them some options and letting them “run with it” is not a bad way to go about things. Try it and see what happens!
In order to grade things like papers, projects, and presentations, you’ll probably want a rubric. A rubric is a comprehensive way to grade, because it provides you with objectives and observations to look for in a student’s work and compare to a standard view of what is “good.” This makes the act of grading much less subjective than it can often be with things like presentations and multimedia. When students know what you’re looking for, they are much more likely to perform where you want them to, and it takes the mystery out of learning. No student likes to be held to standards that they don’t understand. Providing rubrics and good objectives will help with this.
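For those who like to tinker, the idea can be made concrete: a rubric is essentially a small data structure of criteria, described performance levels, and point values. The sketch below is purely illustrative (the criteria, descriptions, and point values are all made up for this example, and no particular tool works exactly this way); it shows how a rubric’s fixed structure keeps scoring consistent from student to student:

```python
# A hypothetical three-criterion presentation rubric: each criterion maps
# point values to a description of that level of performance.
RUBRIC = {
    "content accuracy": {4: "thorough and correct", 2: "minor errors", 0: "major errors"},
    "organization":     {4: "clear structure",      2: "some structure", 0: "unstructured"},
    "delivery":         {4: "engaging and clear",   2: "adequate",       0: "hard to follow"},
}

def score(ratings):
    """Total a student's ratings, validating each against the rubric."""
    total = 0
    for criterion, points in ratings.items():
        if criterion not in RUBRIC:
            raise ValueError(f"unknown criterion: {criterion}")
        if points not in RUBRIC[criterion]:
            raise ValueError(f"invalid point value for {criterion}: {points}")
        total += points
    return total

print(score({"content accuracy": 4, "organization": 2, "delivery": 4}))  # prints 10
```

Because every score must come from a defined level of a defined criterion, two graders using the same rubric are describing student work in the same terms, which is exactly what makes rubric grading less subjective.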
Some great example rubrics and information about them can be found here. Blackboard even has a nice tool for creating rubrics and applying them to assignments, discussions, and other items students can turn in to you. Here are some more useful resources on assessments and rubrics, including Rubistar, which will help fill in the blanks for you when creating rubrics and take even more pain out of the process. Yes, creating rubrics and authentic assessments does take a little more time than the traditional testing route. However, it is well worth it when you consider how much more your students will be able to do as a result of your course in the long run!
Beyond your course, students will go on to other things – other courses, and eventually graduation. You may not always get a good idea of what happens when they leave your classroom, but it may be helpful to get together with your colleagues and see how you can help each other. What foundational skills do students need to brush up on? What essential skills do they need before they start a new semester?
Levels 3 and 4 are not always that easy for us to assess no matter how much we work with our colleagues, though, because students don’t always stay in a program, or they take courses out of sequence, or programs change midstream. For example, in Education I only teach one course – I have some idea of how the skills my students get come into play in their other courses, but there are many variables including which section of a course they’re taking, new research to consider in a field, and any number of other things.
In order to approach Level 3 and 4 evaluation, consider some of these suggestions:
Consult with the colleagues who teach before and after you in your program. What can you learn from each other in order to better prepare students throughout the whole of their program?
Consider working together to plan a case study, practicum, or experiential learning opportunity at key points within a program, giving all faculty involved a chance to work together to determine objectives and necessary skills and knowledge that students will need before, during and after their experience.
Here are some additional resources and helpful articles for understanding and improving your assessment practices in your classroom.
Please contact us and visit http://pnc.edu/distance for all workshop notes, links, and training needs. Thank you!