Evaluation in Instructional Design

In Chapter 10, “Evaluation in Instructional Design: A Comparison of the Major Evaluation Models,” Johnson and Bendolph (2018) define evaluation, explain its underlying logic, and unpack several of the field’s most widely used evaluation models. Their detailed descriptions of these industry-standard models provide a useful reference for practitioners.

The authors first distinguished formative from summative evaluation: formative evaluation focuses on improving the evaluation object, while summative evaluation centers on determining the object’s overall effectiveness, usefulness, or worth. They defined evaluation as “the process of determining the merit, worth, and value of things, and evaluations are the products of that process” (p. 87). They noted that the evaluation process described in instructional design models has two key features: (1) testing should focus on the learning objectives (criterion-referenced or objective-referenced testing), and (2) learners are the focus of, and the data source for, decisions about the instruction.

Johnson and Bendolph (2018) also explained the logic of evaluation, following Scriven (1980). The four steps are as follows:

  1. Select the criteria of merit or worth. 
  2. Set specific performance standards (i.e., the level of performance required) for your criteria.
  3. Collect performance data and compare the level of observed performance with the level of required performance dictated by the performance standards.
  4. Make the evaluative (i.e., value) judgment(s).
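The four-step logic above can be sketched in code. The criteria, performance standards, and observed scores below are illustrative assumptions for the sake of the example, not values from the chapter:

```python
# Illustrative sketch of Scriven's four-step logic of evaluation.
# All criteria, standards, and observed values are hypothetical.

# Step 1: select the criteria of merit or worth.
# Step 2: set a performance standard (required level) for each criterion.
standards = {
    "learner_satisfaction": 4.0,   # mean rating on a 5-point scale
    "objective_mastery": 0.80,     # proportion of objectives mastered
    "completion_rate": 0.90,       # proportion of learners completing
}

# Step 3: collect performance data for each criterion.
observed = {
    "learner_satisfaction": 4.3,
    "objective_mastery": 0.72,
    "completion_rate": 0.95,
}

def evaluate(observed, standards):
    """Compare observed performance with the required standards (step 3)
    and make a value judgment per criterion and overall (step 4)."""
    results = {c: observed[c] >= level for c, level in standards.items()}
    overall = all(results.values())
    return results, overall

results, overall = evaluate(observed, standards)
for criterion, met in results.items():
    print(f"{criterion}: {'meets' if met else 'below'} standard")
print("Overall judgment:", "satisfactory" if overall else "needs improvement")
```

With these example numbers, the program falls below the mastery standard, so the overall summative judgment is “needs improvement” even though two of the three criteria are met.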

Johnson and Bendolph (2018) discussed several major evaluation models:

  - Stufflebeam’s CIPP model (Context, Input, Process, and Product) focuses on program context (for planning decisions), inputs (for program structuring decisions), process (for implementation decisions), and product (for summative decisions).
  - Rossi’s five-domain evaluation model conceives of evaluation more broadly, tailoring each evaluation to local needs and emphasizing one or more evaluation domains: needs assessment, theory assessment, implementation assessment, impact assessment, and efficiency assessment.
  - Chen’s Theory-Driven Evaluation (TDE) model focuses on articulating a program theory so that stakeholders know how and why a program works.
  - Kirkpatrick’s training evaluation model focuses on four levels of evaluation: reactions, learning, transfer of learning, and business results.
  - Brinkerhoff’s success case method (SCM) focuses on locating and understanding successes in organizational initiatives, with the aim of cascading success across the organization.
  - Patton’s Utilization-Focused Evaluation (U-FE) model focuses on conducting evaluations with their intended use by intended users in mind, so that findings are actually used.

The definitions and concepts discussed in this chapter are important to the field of instructional technology because evaluation is a core component of instructional design models. For ID practitioners, evaluation is a critical skill: it provides a systematic procedure for making value judgments about instructional programs and products.

References

Johnson, R. B., & Bendolph, A. (2018). Evaluation in instructional design: A comparison of the major evaluation models. In R. A. Reiser & J. V. Dempsey (Eds.), Trends and issues in instructional design and technology (4th ed., pp. 87–96). New York, NY: Pearson Education.
