2. OBJECTIVES OF THE CHAPTER
• Describe the purposes for and various stages of formative evaluation of
instructor-developed materials, instructor-selected materials, and instructor-
presented instruction.
• Describe the instruments used in a formative evaluation.
• Develop an appropriate formative evaluation plan and construct instruments
for a set of instructional materials or an instructor presentation.
• Collect data according to a formative evaluation plan for a given set of
instructional materials or instructor presentation.
3. BACKGROUND
• If you had been developing instructional materials in the early 1970s, it is
likely that your initial draft, or perhaps a revised draft, of those materials
would have been put into final production and distributed to the target
population. The problems that would almost certainly have occurred as a result of
the limited effectiveness of first-draft instructional materials would probably
have been blamed on poor teaching and poor learning when, in fact, the
materials were not sufficient to support the instructional effort.
4. CONCEPTS
• The major concept underlying this chapter is formative evaluation, which is
the process designers use to obtain data for revising their instruction to make
it more efficient and effective. Its emphasis is on the collection and analysis
of data and the revision of the instruction. When a final version of the
instruction is produced, other evaluators may collect data to determine its
effectiveness. This latter type of evaluation is often referred to as summative
evaluation, summative in that the instruction is now in its final form, and it is
appropriate to compare it with other similar forms of instruction.
5. Five questions directly related to the decisions
you made while developing the materials are
appropriate for all materials:
• Are the materials appropriate for the type of learning outcome? Specific prescriptions for the
development of materials were made based on whether the objectives were intellectual or motor skills,
attitudes, or verbal information. You should be concerned whether the materials you produced are
indeed congruent with suggestions for learning each type of capability. The best evaluator of this
aspect of the materials is undoubtedly an expert in the type of learning involved.
• Do the materials include adequate instruction on the subordinate skills, and are these skills sequenced
and clustered logically? The best evaluator for this area of questions is an expert in the content area.
• Are the materials clear and readily understood by representative members of the target group?
Obviously, only members of the target group can answer these questions. Instructors familiar with
target learners may provide you with preliminary information, but only learners can ultimately judge
the clarity of the materials.
6. CONTINUED…
• What is the motivational value of the materials? Do learners find the
materials relevant to their needs and interests? Are they confident as they
work through the materials? Are they satisfied with what they have learned?
Again, the most appropriate judges of these aspects of the materials are
representative members of the target group.
• Can the materials be managed efficiently in the manner in which they are mediated?
Both target learners and instructors are appropriate to answer these
questions.
7. At a minimum, the types of data to collect
include the following:
• Reactions of the subject-matter expert, whose responsibility it is to verify that the content of the
module is accurate and current.
• Reactions of a manager or supervisor who has observed the learner using the skills in the performance
context.
• Test data collected on entry skills tests, pretests, and posttests.
• Comments or notations made by learners to you or marked on the instructional materials about
difficulties encountered at particular points in the materials.
8. CONTINUED…
• Data collected on attitude questionnaires or debriefing comments in which
learners reveal their overall reactions to the instruction and their perceptions
of where difficulties lie with the materials and the instructional procedures in
general.
• The time required for learners to complete various components of the
instruction.
9. The three main criteria and the decisions designers make
during the evaluation are as follows:
• Clarity: Is the message, or what is being presented, clear to individual target
learners?
• Impact: What is the impact of the instruction on individual learners’
attitudes and achievement of the objectives and goals?
• Feasibility: How feasible is the instruction given the available resources
(time/context)?
10. Examples of questions of interest include the
following:
• How should the maturity, independence, and motivation of the learner
influence the general amount of time required to complete the instruction?
• Can a learner such as this one operate, or easily learn to operate, any
specialized equipment required?
• Is the learner comfortable in this environment?
• Is the cost of delivering this instruction reasonable given the time
requirements?
11. When you cannot select your learners at random or when the group you have
available to draw from is relatively small, you should ensure that you include in
your sample at least one representative of each type of subgroup that exists in
your population, possibly including the following:
• Low-, average-, and high-achieving students
• Learners with various native languages
• Learners who are familiar with a particular procedure (e.g., web-based
instruction) and learners who are not
• Younger or inexperienced learners as well as more mature learners
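• As a rough illustration, a purposive sample like this could be assembled
programmatically. The Python sketch below assumes learner records are
available as simple dictionaries; every field name and value is hypothetical.

    import random

    # Hypothetical learner records; all field names are invented for the example.
    learners = [
        {"id": 1, "achievement": "low", "language": "Spanish", "experienced": True},
        {"id": 2, "achievement": "high", "language": "English", "experienced": False},
        {"id": 3, "achievement": "average", "language": "English", "experienced": True},
    ]

    def one_per_subgroup(records, key):
        """Randomly pick one representative from each subgroup defined by `key`."""
        groups = {}
        for record in records:
            groups.setdefault(record[key], []).append(record)
        return [random.choice(members) for members in groups.values()]

    # One low-, one average-, and one high-achieving learner.
    sample = one_per_subgroup(learners, "achievement")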
12. The questions should therefore reflect various
components of the strategy.
• Was the instruction interesting?
• Did you understand what you were supposed to learn?
• Were the materials directly related to the objectives?
• Were sufficient practice exercises included?
• Were the practice exercises relevant?
• Did the tests really measure your knowledge of the objectives?
• Did you receive sufficient feedback on your practice exercises?
• Did you feel confident when answering questions on the tests?
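• One way responses to such a questionnaire might be tallied is sketched below
in Python, assuming each question is rated on a 1–5 agreement scale; the
questions shown and all response data are invented for the example.

    # Hypothetical ratings (1 = strongly disagree, 5 = strongly agree), one list per question.
    responses = {
        "Was the instruction interesting?": [5, 4, 4, 3],
        "Were the practice exercises relevant?": [2, 3, 2, 3],
    }

    for question, ratings in responses.items():
        mean = sum(ratings) / len(ratings)
        # Low mean ratings point to strategy components that may need revision.
        print(f"{question} mean={mean:.1f} (n={len(ratings)})")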
13. Data Summary and Analysis
• Both the quantitative and descriptive information gathered during the trial
should be summarized and analyzed. Quantitative data consist of test scores
as well as time requirements and cost projections. Descriptive information
consists of comments collected from attitude questionnaires, interviews, or
evaluator’s notes written during the trial.
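• To make the quantitative side concrete, here is a minimal Python sketch of
the kind of summary described above, using invented test-score and
completion-time data.

    # Hypothetical trial data: percent correct and minutes needed per learner.
    posttest_scores = [85, 70, 90, 60, 75]
    completion_minutes = [42, 55, 38, 61, 47]

    def describe(values):
        """Return the mean, minimum, and maximum of a list of measurements."""
        return sum(values) / len(values), min(values), max(values)

    mean, low, high = describe(posttest_scores)
    print(f"Posttest: mean {mean:.1f}%, range {low}-{high}%")
    mean, low, high = describe(completion_minutes)
    print(f"Completion time: mean {mean:.1f} min, range {low}-{high} min")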
14. Outcomes
• The goal of the small-group trial and instructional revisions is refined
instruction that should be effective with most target learners in the intended
setting. Refinements required in instruction may be simple, such as changing
examples and vocabulary in test items or increasing the amount of time
allocated for study. Modifications might also require major changes in the
instructional strategy (e.g., motivational strategies, sequence of objectives,
instructional delivery format) or in the nature of information presented to
learners. Once instruction is refined adequately, the field trial can be initiated.
15. FIELD TRIAL
• In the final stage of formative evaluation, the designer attempts to use a
learning context that closely resembles the intended context for the ultimate
use of the instructional materials. One purpose of this final stage of
formative evaluation is to determine whether the changes in the instruction
made after the small-group stage were effective; another purpose is to see
whether the instruction can be used in the context for which it was
intended—that is, is it administratively possible to use the instruction in its
intended setting?
16. Location of Evaluation
• In picking the site for a field evaluation, you are likely to encounter one of
two situations: First, if the material is tried out in a class that is currently
using large-group, lockstep pacing, then using self-instructional materials may
be a very new and different experience for the learners. It is important to lay
the groundwork for the new procedure by explaining to the learners how the
materials are to be used and how they differ from their usual instruction. In
all likelihood, you will obtain an increase in interest, if not in performance,
simply because of the break in the typical classroom instructional pattern.
17. Procedure for Conducting a Field Trial
• The procedure for conducting the field trial is similar to that for the small
group, with only a few exceptions.
• The primary change is in the role of the designer, who should do no more
than observe the process.
• The instruction should be administered or delivered by a typical instructor,
for whom the designer may need to develop and deliver special training so that
he or she knows exactly how to use the instruction.
18. Data Summary and Interpretation
• Data summary and analysis procedures are the same for the small-group and
field trials. Achievement data should be organized by instructional objective,
and attitudinal information from both learners and instructors should also be
anchored to specific objectives whenever possible. Summarizing the data in
these formats aids in locating particular areas where the instruction was and
was not effective. This information from the field trial is used to plan and
make final revisions in the instruction.
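• A minimal sketch of this kind of by-objective organization, in Python, with
invented field-trial scores and learner comments anchored to the objectives
they concern.

    # Hypothetical percent-correct scores per learner, keyed by objective.
    field_trial = {
        "Objective 1": [95, 88, 72, 90],
        "Objective 2": [60, 55, 48, 70],
    }
    # Attitudinal comments anchored to specific objectives, as recommended above.
    comments = {
        "Objective 2": ["Practice directions were unclear.", "Feedback came too late."],
    }

    def mastery_rate(scores, cutoff=80):
        """Fraction of learners scoring at or above the mastery cutoff."""
        return sum(score >= cutoff for score in scores) / len(scores)

    for objective, scores in field_trial.items():
        rate = mastery_rate(scores)
        print(f"{objective}: {rate:.0%} mastery" + (" <- revise" if rate < 0.8 else ""))
        for note in comments.get(objective, []):
            print(f"  comment: {note}")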
19. OUTCOMES
• The goal of the field trial and final revisions is effective instruction that
yields desired levels of learner achievement and attitudes and that functions
as intended in the learning setting.
• Using data about problem areas gathered during the field trial, appropriate
revisions are made in the instruction.
20. Concerns Influencing Formative Evaluation
• The formative evaluation component distinguishes the instructional design
process from a philosophical or theoretical approach. Rather than
speculating about the instructional effectiveness of your materials, you test
them with learners; therefore, you should do the best possible job of
collecting data that truly reflect the effectiveness of your materials.
• There are several concerns about the formative evaluation context and the
learners who participate in the evaluation that the designer should keep in
mind when planning and implementing data-collection procedures.