ID-8920 v1

Week 5 Required Resources

Textbook(s)

The Instructional Design Knowledge Base: Theory, Research, and Practice

Richey, R.C., Klein, J.D., & Tracey, M.W. (2011). The instructional design knowledge base: Theory, research, and practice. Routledge. 

  • Chapter 9 Performance Improvement Theory (pp. 146-166): Chapter 9 covers Performance Improvement (PI) Theory, which extends the scope of instructional design (ID) to address performance problems and improvement opportunities. The chapter explains the theoretical underpinnings of PI, surveys models and approaches for evaluating PI, and examines current trends in applying PI to ID practice. It also suggests directions for future research, which doctoral students in instructional design may find useful when selecting research topics worthy of investigation.

Kirkpatrick’s Four Levels of Training Evaluation

Kirkpatrick, J. D., & Kirkpatrick, W. K. (2016). Kirkpatrick's four levels of training evaluation. Association for Talent Development. 

  • Chapter 2 of Kirkpatrick’s Four Levels is required reading; the other sections of the book are optional. However, instructional designers who want to become experts at all four levels will benefit from reading the entire text.
  • The New World Kirkpatrick Model—An Overview: Chapter 2 introduces the four levels, which form the basis of the most widely used training evaluation model. This chapter also introduces the New World Kirkpatrick Model, the framework through which the four levels are applied in today’s work environment. The model presents the four levels in reverse order, starting with Level 4, which is the order in which the levels are considered when planning a program.

Optional Resources

Kirkpatrick’s Four Levels of Training Evaluation

Kirkpatrick, J. D., & Kirkpatrick, W. K. (2016). Kirkpatrick's four levels of training evaluation. Association for Talent Development. 

  • Part 1 Basics of Evaluation: Part 1 begins by emphasizing the need to evaluate training programs thoroughly in order to maximize their effectiveness and demonstrate their value to the organization. This section introduces the four levels of evaluation. Originally developed by the late Dr. Donald Kirkpatrick in the 1950s, the levels constitute the most widely used training evaluation model in the world. The New World Kirkpatrick Model, introduced in 2009, builds on and modernizes the four levels to enhance their applicability in today’s business landscape. This model, which serves as the foundational framework for the guidance in this book, is detailed in Chapter 2 of Part 1. The other chapters in Part 1 include:
    • Chapter 1: Reasons for Evaluating 
    • Chapter 3: Developing an Effective Evaluation Strategy 
    • Chapter 4: The Kirkpatrick Foundational Principles
  • Part 2 Data Collection, Guidelines, Methods, and Tools: Part 2 contains guidelines, methods, and tools for gathering program data so that meaningful analysis can be performed. These methods are practical and flexible, so they can be adapted to effectively evaluate any type of program in any organization. Part 2 chapters include:
    • Chapter 5: Evaluating Level 1: Reaction 
    • Chapter 6: Evaluating Level 2: Learning 
    • Chapter 7: Evaluating Level 3: Behavior 
    • Chapter 8: Evaluating Level 4: Results 
    • Chapter 9: Evaluating Beyond Traditional Classroom Training 
    • Chapter 10: Evaluation Instrument Creation Basics 
    • Chapter 11: Blended Evaluation® Items and Sample Tools
  • Part 3 Data Analysis and Reporting: Part 3 of this book provides practical, nonscientific methods for analyzing data, making sound decisions to guide the program process, and reporting findings and outcomes in plain language for all stakeholder groups.
    • Chapter 12: Making Data-Based Decisions 
    • Chapter 13: Using the Success Case Method to Drive Performance and Results 
    • Chapter 14: So What? Now What? 
    • Chapter 15: Reporting Progress and Demonstrating Program Value 
    • Chapter 16: Avoiding Common Evaluation Pitfalls
  • Part 4 Case Studies