Extension Evaluation Matters: 2nd Edition


Extension Evaluation Matters: A Professional Development Offering of the Extension Foundation Impact Collaborative (2nd Edition)

Editors: Dr. John Diaz and Dr. Teresa McCoy

ATTRIBUTION

Extension Evaluation Matters (2nd Edition).

Copyright © Diaz, J., McCoy, T. 2023, Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0). Published by Extension Foundation.

ISBN: 978-1-7340417-2-9

Publish Date: 3/27/2024

Citations for this publication may be made using the following:

Diaz, J., and McCoy, T. (2024). Extension Evaluation Matters (2nd ed.). Kansas City: Extension Foundation. ISBN: 978-1-7340417-2-9

Producer: Ashley S. Griffin

Peer Review Coordinator: Heather Martin

Technical Implementer and Editing: Dr. Rose Hayden-Smith

Welcome to Extension Evaluation Matters, a resource created for the Cooperative Extension Service and published by the Extension Foundation. We welcome feedback and suggested resources for this publication, which could be included in any subsequent versions. This work is supported by New Technologies for Agriculture Extension grant no. 2015-41595-24254 from the USDA National Institute of Food and Agriculture. Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the view of the U.S. Department of Agriculture.

For more information please contact:

Extension Foundation
c/o Bryan Cave LLP
One Kansas City Place
1200 Main Street, Suite 3800
Kansas City, MO 64105-2122
https://extension.org


TABLE OF CONTENTS

Attribution .......... 2
Table of Contents .......... 3
Meet the Authors .......... 5
Editorial Review Board .......... 7
Introduction .......... 9
Chapter 1: Standards and Competencies .......... 10
    Evaluation Guiding Principles .......... 11
    Evaluation Cultural Competence .......... 12
    Evaluator Competencies .......... 13
    Human Subjects Research .......... 14
Chapter 2: Evaluation Planning .......... 15
    Evaluation Frameworks .......... 16
    Program Theory and Logic Models .......... 23
        Program Theory .......... 23
        Logic Models .......... 24
        Logic Model Training .......... 24
        Love Your Logic Model .......... 25
    Programs, Life Cycles, and Evaluation .......... 26
        Program Life Cycles .......... 26
        Program Evaluation .......... 27
    Evaluation Purpose and Scope .......... 29
        Guidelines for Establishing Purpose and Scope .......... 29
        Tools for Establishing Purpose and Scope .......... 30
    Asset Mapping, Needs Assessment and Stakeholder Analysis .......... 31
        Asset Mapping .......... 32
        Needs Assessment .......... 34
Chapter 3: Evaluation Implementation .......... 38
    Evaluation Implementation .......... 39
        Primary and Secondary Data - Overview .......... 39
    Evaluation Techniques .......... 42
        Introduction .......... 42
        Surveys: An Introduction .......... 43
        Direct Observation .......... 49
        Focus Groups .......... 51
    Quantitative Methods .......... 54
    Qualitative Methods .......... 56
Chapter 4 – Using Evaluation Data .......... 66
    Stage 1: Feedback and Utilization .......... 66
        Utilization .......... 66
        Feedback .......... 67
    Stage 2: Reflection, Synthesis, and Revision .......... 68
References and Resources .......... 69


MEET THE AUTHORS

The lead curators of this publication are Dr. Teresa McCoy, Director, Learning and Organizational Development, Ohio State University Extension and Dr. John Diaz, Extension Professor and Specialist, University of Florida. They served as the 2019 and 2020 Fellows, respectively, for the National Association of Extension Program and Staff Development Professionals.

Dr. John Diaz, 2020 NAEPSDP Extension Fellow

Dr. John Diaz is an Associate Professor and Extension Specialist in the Department of Agricultural Education and Communication (AEC) at the University of Florida. He focuses on program evaluation, community development, behavior change, social marketing, and local resilience. Dr. Diaz has responsibilities in research, Extension, and teaching. Dr. Diaz's research focuses on understanding the factors that influence behavior change, practice adoption, and measures of equity and inclusion among various stakeholder groups in extension activities related to local food systems, nutrition and wellness, natural resource management, conservation, and community capacity building.

Dr. Diaz's Extension program supports the mission of the University of Florida by developing strategies for building resilient communities and measuring the impacts of Extension programs. He focuses on enhancing the community development and evaluation competencies of Extension and outreach professionals across a breadth of program areas. Dr. Diaz also provides key leadership for IFAS in his role as the president of the Coalition of Florida Extension Educators for Latino Communities (CAFE Latino). He led the development of this organization, which now provides key services to Florida Cooperative Extension so that educators have the capacity and competency to serve the needs of Florida's multicultural audiences. He teaches courses in Program Development and Evaluation in Extension Education and Culturally Responsive Pedagogy.

Image Credit: University of Florida


Dr. Teresa McCoy, 2019 NAEPSDP Extension Fellow

Dr. Teresa McCoy is the Director of Learning and Organizational Development at The Ohio State University. Previously, she was employed by University of Maryland Extension (UME), where she had served as assistant director for Evaluation & Assessment since August 2008. Dr. McCoy served as a member of the UME leadership team with responsibilities in program development and evaluation, situational analysis, strategic planning, and organization development. In Maryland, she worked closely with Extension educators to teach them program development and evaluation practices that resulted in evidence to demonstrate outcomes.

Prior to moving to Maryland, Dr. McCoy was in Lorain County, Ohio, for eight years. While there, she was executive director of the Public Services Institute at Lorain County Community College. The Institute provided research-based information and other services to improve nonprofits and local government in the county.

She started her career in Virginia Cooperative Extension, first as a research assistant while in school and later as an Extension specialist in leadership and program development. Dr. McCoy's evaluation interest was kindled in 1988, when she served as an evaluation assistant on a Kellogg-funded technology project. She also served as an assistant editor for the Journal of Extension during the Journal's first two years in electronic format.

She holds a doctorate in public administration from the University of Baltimore, and a BA and Master’s in Public Administration from Virginia Tech.

Image Credit: Ohio State University


EDITORIAL REVIEW BOARD

An expert team of reviewers from Extension has reviewed the material and added resources. The review team is made up of:

Dr. Celeste Allgood, Accountability and Impact Agent, Fort Valley State University

Ms. Kit Alviz, Program Planning & Evaluation Specialist, University of California ANR

Dr. Virginia Brown, Senior Agent and FCS Evaluator, University of Maryland Extension

Dr. Scott Cummings, Professor and Extension Specialist, Texas A&M University

Dr. Vikram Koundinya, Evaluation Specialist, University of California, Davis

Dr. Alda Norris, Evaluation Specialist, University of Alaska

INTRODUCTION

Extension and evaluation both center on getting useful information to people. — Michael Quinn Patton, Journal of Extension, 1983

Welcome to Extension Evaluation Matters (or E2M). The name, E2M, has a double meaning. This publication is a resource for all Extension professionals about the matters of Extension evaluation, such as planning and implementing an evaluation project. The name also emphasizes that evaluation matters in our work, because evaluation is about getting information to people for decision-making. The resources have been specifically chosen so that they can be put to work in your program right away. The publication has four chapters. The first chapter is about standards and competencies in the practice of evaluation. The foundation of all evaluation practice is built upon ethical standards, integrity and honesty, and respect for people. The second chapter concerns evaluation planning: how to make sure you are clear about your purpose and the information you need. The third chapter is about evaluation implementation, when we get to collect and analyze data and answer our evaluation questions. The fourth chapter explores how to use evaluation data.


Chapter 1: Standards and Competencies

Just as evaluation standards provide guidance for making decisions when conducting program evaluation studies, evaluator competencies that specify the knowledge, skills, and dispositions central to effectively accomplishing those standards have the potential to further increase the effectiveness of evaluation efforts. — Stevahn, L., King, J. A., Ghere, G., & Minnema, J. (2005).

This chapter delves into the evaluation principles and competencies developed by the American Evaluation Association (AEA) and through research studies conducted over time. Another critical issue for researchers and/or evaluators is to make sure that they adhere to the ethical principles of respect for persons, beneficence, and justice established by the Belmont Report.

The chapter contents include:

Evaluation Guiding Principles

Evaluation Cultural Competence

Evaluator Competencies

Human Subjects Research

Standards and Competencies: Guiding principles, cultural competence, competencies, and working with human subjects.

Image by Gerd Altmann from Pixabay.


EVALUATION GUIDING PRINCIPLES

The American Evaluation Association (AEA) developed a set of guiding principles for evaluators. These principles address the ethical and professional standards that evaluators should follow in all aspects of the evaluation process. Extension professionals, whether or not they describe themselves as evaluators, should adhere to these principles in their work.

The five principles are:

Systematic inquiry

Competence

Integrity

Respect for people

Common good and equity

For Extension, our evaluations should adhere to these principles by:

 Being conducted in a thorough, systematic way that takes into account our context, our clientele, and the limitations of the evaluation.

 Using evaluation skills and knowledge that will enable the evaluation to be carried out.

 Practicing honesty, communicating clearly, disclosing any conflicts of interest, and operating with transparency.

 Treating people fairly and reducing risks or harm, ensuring that people are fully informed about the evaluation work, protecting confidentiality, and appreciating the different experiences and perspectives that people bring to the evaluation.

 Making sure the evaluation advances the common good of Extension, clientele, and the community.

To read about each of the principles in depth, visit the AEA’s web page here.

This is a PDF of the full version of AEA Guiding Principles.


EVALUATION CULTURAL COMPETENCE

The American Evaluation Association (AEA) defines a culturally competent evaluator as a person who “is prepared to engage with diverse segments of communities to include cultural and contextual dimensions important to the evaluation. Evaluators who strive for cultural competence: acknowledge the complexity of cultural identity; recognize the dynamics of power; recognize and eliminate bias in language; employ culturally appropriate methods.” AEA’s Public Statement on Cultural Competence in Evaluation (seen to the right) provides a set of practices that you can use to integrate cultural competence into your evaluation work. The Centers for Disease Control (CDC) developed a set of cultural competence standards that accompany their evaluation competencies. Along with the standards, CDC has identified “Practical Strategies for Culturally Competent Evaluation.”

With its emphasis on stakeholder engagement, this version of CDC’s Framework for Program Evaluation (see Figure 1) reflects an even greater commitment to cultural competence than do less participatory evaluation approaches. Evaluations guided by the CDC framework actively engage a range of stakeholders throughout the entire process, and cultural competence is essential for ensuring truly meaningful engagement. As evaluators, we have an ethical obligation to create an inclusive climate in which everyone invested in the evaluation — from agency head to program client — can fully participate. At the same time, meaningfully engaging stakeholders, particularly in the planning stage, will enhance the evaluation’s cultural competence.

Figure 1: CDC’s Framework for Program Evaluation in Public Health


EVALUATOR COMPETENCIES

The AEA evaluator competencies, adopted in 2018, frame “the important characteristics of professional evaluation practice.” The five competency domains are:

Professional practice,

Methodologies,

Context,

Planning and management, and

Interpersonal.

In 2012, Rodgers, Hillaker, Haas, and Peters identified a set of Extension evaluation competencies that was based on a taxonomy developed by Ghere et al. (2006). These Extension competencies are:

Project management,

Systematic inquiry (quantitative, qualitative, and mixed-methods knowledge and skills), and

Situational analysis.

In 2020, Dr. Diaz and colleagues explored the overlap between general program evaluation competencies and the Extension education context and content, since Extension educators may need unique competencies to answer evaluation questions. Rodgers et al. (2012) represented the sole exploration of the essential competencies required by professionals who use evaluation as one part of their job portfolio, which left unanswered questions regarding the applicability of current evaluator competency models in such settings. A national expert panel of evaluation specialists identified 36 competencies necessary for Extension educators, organized into the five competency domains proposed by the American Evaluation Association (professional practice, methodologies, context, planning and management, and interpersonal).


HUMAN SUBJECTS RESEARCH

The Common Rule for the protection of research participants is codified in the Code of Federal Regulations (45 CFR Part 46). These regulations are grounded in the Belmont Report of 1979. A PDF of the report is available here. The report was written in response to abuses of people in the name of research, such as the medical experiments in Nazi Germany, the infamous Tuskegee Syphilis Study, and the case of Henrietta Lacks, whose cells were harvested without her knowledge.

The Belmont Report provides three ethical guidelines that researchers should adhere to:

1. Respect for persons: Individuals should be treated as autonomous agents, and persons with diminished autonomy are entitled to protection. This ethical guideline is applied through informed consent and voluntary participation in the research.

2. Beneficence: Persons are treated in an ethical manner not only by respecting their decisions and protecting them from harm, but also by making efforts to secure their well-being. This guideline is applied through participants understanding the risks and benefits of the research.

3. Justice: Who ought to receive the benefits of research and bear its burdens? This is a question of justice, in the sense of "fairness in distribution" or "what is deserved." This principle is applied through the fair selection of research participants.

Most universities require that researchers participate in some type of training about human subjects research. A common training program is the Collaborative Institutional Training Initiative (CITI).

Your university has an Institutional Review Board (IRB) that ensures that researchers are following these regulations. This video provides a short introduction to the work of IRBs. Link: https://youtu.be/U8fme1boEbE


Chapter 2: Evaluation Planning

To achieve great things, two things are needed: a plan, and not quite enough time. — Leonard Bernstein.

Evaluation planning takes forethought and time to achieve the intended purposes. This chapter explores program frameworks, models, and life cycles, and provides resources to help evaluators/researchers scope the evaluation purpose and identify key stakeholders.

The chapter’s contents include:

Evaluation Frameworks

Program Theory and Logic Models

Programs, Life Cycles, and Evaluation

Evaluation Purpose and Scope

Stakeholder Analysis

Evaluation Planning: Using frameworks, applying program theory and logic models, and understanding programs, life cycles, and the purpose of evaluations.


EVALUATION FRAMEWORKS

Scriven's (1991) definition of evaluation is the most commonly cited and used:

Evaluation is the process of determining the merit, worth, and value of things, and evaluations are the products of that process.

An evaluation framework is made up of the distinct steps involved in the overall evaluation process. While there may be some differences in various models, there are also similarities across the models.

CDC Evaluation Framework for Public Health Programs

The Centers for Disease Control (CDC) Evaluation Framework for Public Health Programs provides an excellent overall framework from which to start your evaluation work. The website, which contains a range of resources, is available here: https://www.cdc.gov/eval/framework/index.htm

The framework is made up of six steps and includes a set of four standards that should guide the evaluation. Here are the six steps:

Engage stakeholders,

Describe the program,

Focus the evaluation design,

Gather credible evidence,

Justify conclusions, and

Ensure use and share lessons learned.

The steps are described in greater detail below.

Figure 2: CDC’s Framework for program evaluation.


Step 1: Engaging Stakeholders

The evaluation cycle begins by engaging stakeholders (i.e., the persons or organizations having an investment in what will be learned from an evaluation and what will be done with the knowledge). Public health work involves partnerships; therefore, any assessment of a public health program requires considering the value systems of the partners. Stakeholders must be engaged in the inquiry to ensure that their perspectives are understood. When stakeholders are not engaged, an evaluation might not address important elements of a program’s objectives, operations, and outcomes. Therefore, evaluation findings might be ignored, criticized, or resisted because the evaluation did not address the stakeholders’ concerns or values. After becoming involved, stakeholders help to execute the other steps. Identifying and engaging the following three principal groups of stakeholders are critical:

 Those involved in program operations (e.g., sponsors, collaborators, coalition partners, funding officials, administrators, managers, and staff);

 Those served or affected by the program (e.g., clients, family members, neighborhood organizations, academic institutions, elected officials, advocacy groups, professional associations, skeptics, opponents, and staff of related or competing organizations); and

 Primary users of the evaluation.

Step 2: Describing the Program

Program descriptions convey the mission and objectives of the program being evaluated. Descriptions should be sufficiently detailed to ensure understanding of program goals and strategies. The description should discuss the program’s capacity to effect change, its stage of development, and how it fits into the larger organization and community. Program descriptions set the frame of reference for all subsequent decisions in an evaluation. The description enables comparisons with similar programs and facilitates attempts to connect program components to their effects. Moreover, stakeholders might have differing ideas regarding program goals and purposes. Evaluations done without agreement on the program definition are likely to be of limited use. Sometimes, negotiating with stakeholders to formulate a clear and logical description will bring benefits before data are available to evaluate program effectiveness. Aspects to include in a program description are need, expected effects, activities, resources, stage of development, context, and logic model.

Step 3: Focusing the Evaluation Design

The evaluation must be focused to assess the issues of greatest concern to stakeholders while using time and resources as efficiently as possible. Not all design options are equally well-suited to meeting the information needs of stakeholders. After data collection begins, changing procedures might be difficult or impossible, even if better methods become obvious. A thorough plan anticipates intended uses and creates an evaluation strategy with the greatest chance of being useful, feasible, ethical, and accurate. Among the items to consider when focusing an evaluation are purpose, users, uses, questions, methods, and agreements.


Step 4: Gathering Credible Evidence

An evaluation should strive to collect information that will convey a well-rounded picture of the program so that the information is seen as credible by the evaluation’s primary users. Information (i.e., evidence) should be perceived by stakeholders as believable and relevant for answering their questions. Such decisions depend on the evaluation questions being posed and the motives for asking them. For certain questions, a stakeholder’s standard for credibility might require having the results of a controlled experiment, whereas for another question, a set of systematic observations (e.g., interactions between an outreach worker and community residents) would be the most credible. Consulting specialists in evaluation methodology might be necessary in situations where concern for data quality is high or where serious consequences exist associated with making errors of inference (i.e., concluding that program effects exist when none do, concluding that no program effects exist when in fact they do, or attributing effects to a program that has not been adequately implemented).

Step 5: Justifying Conclusions

The evaluation conclusions are justified when they are linked to the evidence gathered and judged against agreed-upon values or standards set by the stakeholders. Stakeholders must agree that conclusions are justified before they will use the evaluation results with confidence. Justifying conclusions on the basis of evidence includes standards, analysis and synthesis, interpretation, judgment, and recommendations.

Step 6: Ensuring Use and Shared Lessons Learned

Lessons learned in the course of an evaluation do not automatically translate into informed decision-making and appropriate action. Deliberate effort is needed to ensure that the evaluation processes and findings are used and disseminated appropriately. Preparing for use involves strategic thinking and continued vigilance, both of which begin in the earliest stages of stakeholder engagement and continue throughout the evaluation process. Five elements are critical for ensuring use of an evaluation: design, preparation, feedback, follow-up, and dissemination.

The four standards in the center of the framework – utility, feasibility, propriety, and accuracy – help to ensure the quality and effectiveness of the evaluation.


Rainbow Framework

BetterEvaluation, a nonprofit collaborative organization, developed the Rainbow Framework, which is made up of seven steps. Each step of the framework is assigned a color. Together, they comprise a “rainbow,” as seen in the diagram to the right. The BetterEvaluation Rainbow Framework prompts you to think about a series of key questions. It is important to consider all of these issues, including reporting, at the beginning of an evaluation. The Framework can be used to plan an evaluation or to locate information about particular types of methods. We describe the seven steps below.

1. MANAGE an evaluation or evaluation system

Manage an evaluation (or a series of evaluations), including deciding who will conduct the evaluation and who will make decisions about it.

 Understand and engage stakeholders: Who needs to be involved in the evaluation? How can they be identified and engaged?

Figure 3: BetterEvaluation Rainbow Framework

 Establish decision making processes: Who will have the authority to make what type of decisions about the evaluation? Who will provide advice or make recommendations about the evaluation? What processes will be used for making decisions?

 Decide who will conduct the evaluation: Who will actually undertake the evaluation?

 Determine and secure resources: What resources (time, money, and expertise) will be needed for the evaluation and how can they be obtained? Consider both internal (e.g. staff time) and external (e.g. previous participants’ time) resources.

 Define ethical and quality evaluation standards: What will be considered a high quality and ethical evaluation? How should ethical issues be addressed?

 Document management processes and agreements: How will the evaluation’s management processes and agreements be documented?

 Develop planning documents for the evaluation: What needs to be done to design, plan and implement the evaluation? What planning documents need to be created (evaluation framework, evaluation plan, evaluation design, evaluation work plan)?


 Review evaluation (do meta-evaluation): How will the evaluation itself be evaluated, including the plan, process, and report?

 Develop evaluation capacity: How can the ability of individuals, groups and organizations to conduct and use evaluations be strengthened?

2. DEFINE what is to be evaluated

Develop a description (or access an existing version) of what is to be evaluated and how it is understood to work.

 Develop initial description: What exactly is being evaluated?

 Develop program theory / logic model: How is the intervention understood to work (program theory, theory of change, logic model)?

 Identify potential unintended results: What are possible unintended results (both positive and negative) that will be important to address in the evaluation?

3. FRAME the boundaries for an evaluation

Set the parameters of the evaluation – its purposes, key evaluation questions and the criteria and standards to be used.

 Identify primary intended users: Who are the primary intended users of this evaluation?

 Decide purpose: What are the primary purposes and intended uses of the evaluation?

 Specify the key evaluation questions: What are the high-level questions the evaluation will seek to answer? How can these be developed?

 Determine what “success” looks like: What should the criteria and standards for judging performance be? Whose criteria and standards matter? What process should be used to develop agreement about these?

4. DESCRIBE activities, outcomes, impacts and context

Collect and retrieve data to answer descriptive questions about the activities of the project/program/policy, the various results it has had, and the context in which it has been implemented.

 Sample: What sampling strategies will you use for collecting data?


 Use measures, indicators or metrics: What measures or indicators will be used? Are there existing ones that should be used or will you need to develop new measures and indicators?

 Collect and/or retrieve data: How will you collect and/or retrieve data about activities, results, context and other factors?

 Manage data: How will you organize and store data and ensure its quality?

 Combine qualitative and quantitative data: How will you combine qualitative and quantitative data?

 Analyze data: How will you investigate patterns in the numeric or textual data?

 Visualize data: How will you display data visually?

5. UNDERSTAND CAUSES of outcomes and impacts

Collect and analyze data to answer causal questions about what has produced outcomes and impacts that have been observed.

 Check that results support causal attribution: How will you assess whether the results are consistent with the theory that the intervention produced them?

 Compare results to the counterfactual: How will you compare the factual with the counterfactual – what would have happened without the intervention? (A toy numeric sketch follows this list.)

 Investigate possible alternative explanations: How will you investigate alternative explanations?
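As a toy illustration of comparing program results with a counterfactual, the sketch below uses a simple difference-in-differences calculation. All of the numbers, the scenario, and the variable names are invented for illustration and are not drawn from this publication or the Rainbow Framework materials.

```python
# A toy difference-in-differences sketch for reasoning about counterfactuals.
# All values are fabricated for illustration only.
program_pre, program_post = 40.0, 55.0        # mean skill score of participants, before and after
comparison_pre, comparison_post = 41.0, 46.0  # a similar non-participant group over the same period

program_change = program_post - program_pre            # observed change among participants
comparison_change = comparison_post - comparison_pre   # an estimate of what might have happened anyway

estimated_effect = program_change - comparison_change  # change beyond the counterfactual trend
print(f"Estimated program effect: {estimated_effect:.1f} points")
```

Whether such a comparison is credible depends on how similar the comparison group really is; the sketch is only meant to make the counterfactual idea concrete.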

6. SYNTHESIZE data from one or more evaluations

Combine data to form an overall assessment of the merit or worth of the intervention, or to summarize evidence across several evaluations.

 Synthesize data from a single evaluation: How will you synthesize data from a single evaluation?

 Synthesize data across evaluations: Do you need to synthesize data across evaluations? If so, how should this be done?

 Generalize findings: How can the findings from this evaluation be generalized to the future, to other sites and to other programs?


7. REPORT AND SUPPORT USE of findings

Develop and present findings in ways that are useful for the intended users of the evaluation, and support them in making use of the findings.

 Identify reporting requirements: What timeframe and format is required for reporting?

 Develop reporting media: What types of reporting formats will be appropriate for the intended users?

 Ensure accessibility: How can the report be easy to access and use for different users?

 Develop recommendations: Will the evaluation include recommendations? How will these be developed and by whom?

 Support use: In addition to engaging intended users in the evaluation process, how will you support the use of evaluation findings?


PROGRAM THEORY AND LOGIC MODELS

Program Theory

Program theory is at the foundation of any evaluation because it articulates what the program is supposed to accomplish: the outcomes. Program theory is also called a theory of change or a pathway of change. The intervention that is used to cause behavior change is the program. Watch this brief video (less than three minutes) for a simple explanation of theory of change that is relevant to Extension work. Link: https://youtu.be/gAkajtmYnNg


Logic Models

A logic model is used in program theory to show the chain of events or causal links that will produce the outcomes. For a brief explanation of logic models (a little over three minutes), take a look at this YouTube video produced by the North Carolina Coalition Against Domestic Violence. Link: https://youtu.be/wFaJo6FF_yA
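To make the idea of a causal chain concrete, here is a small, hypothetical logic model for an imagined nutrition-education workshop, written out as a plain Python data structure. Every entry (the program, the numbers, and the outcome statements) is invented for illustration and is not taken from this publication or the linked video.

```python
# A hypothetical logic model for an imagined nutrition-education workshop,
# expressed as a simple data structure. All entries are invented for illustration.
logic_model = {
    "inputs": ["two Extension educators", "curriculum", "teaching kitchen", "grant funds"],
    "activities": ["deliver six weekly hands-on cooking and meal-planning classes"],
    "outputs": ["60 participants complete at least five of six sessions"],
    "short_term_outcomes": ["participants can plan a balanced weekly menu"],
    "medium_term_outcomes": ["participants prepare more meals at home"],
    "long_term_outcomes": ["improved household diet quality"],
}

# Reading the stages in order shows the assumed chain from resources to results.
for stage, items in logic_model.items():
    print(f"{stage}: {'; '.join(items)}")
```

Reading the stages left to right is the "if-then" chain that a logic model is meant to make explicit.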

Logic Model Training

The University of Wisconsin Extension Program Development and Evaluation unit provides a comprehensive online logic model training module. The module is made up of seven sections:

What is a logic model?

More about outcomes

More about your program "logic"

What does a logic model look like?

How do I draw a logic model?

How good is my logic model?

Using logic models in Evaluation: Indicators and Measures


Love Your Logic Model

If you want to learn how to love your logic model, watch this 75-minute video featuring Tom Chapel, Chief Evaluation Officer at the Centers for Disease Control. Link: https://youtu.be/2HrG5ButP_g


PROGRAMS, LIFE CYCLES, AND EVALUATION

In Extension, we talk about programs a great deal - AND we assume that we are all talking about the same thing. That may not be the case. Some people may call a field tour a program; others may call it an activity.

For Extension educational work, Israel, Harder, & Brodeur (2015) define a program as "a comprehensive set of activities that includes an educational component that is intended to bring about a sequence of outcomes among targeted clients." Review their fact sheet, What is an Extension Program? for an introduction to Extension programs.

Program Life Cycles

Programs have life cycles. Trochim et al. (2016), in The Guide to the Systems Evaluation Protocol, identify four main life-cycle stages that a program moves through (and sometimes back and forth in the cycle):

 Initiation: The program is just getting started and may be in a pilot phase. Major changes are generally taking place as trial and error occur.

 Development: The program is implemented successfully, and minor revisions occur.

 Stability: The program is producing consistent results, and the curriculum and protocols are in place.

 Dissemination: The program is being adopted at multiple locations and within different contexts.


Figure 4: Characterizing a Program. Extracted from Trochim et al. (2016) with permission of the author.

Program Evaluation

The program's life-cycle stage determines the level of evaluation that should be implemented. It would not make sense, in terms of expectations and resources, to plan a sophisticated outcome evaluation for a program that is in the initiation phase. Trochim et al. recommend aligning the program life cycle with these evaluation strategies:

 Initiation: Process evaluation for rapid feedback, such as post-only reaction surveys and open-ended questions.

 Development: Change in knowledge, attitudes, skills, and aspirations (KASA) outcomes because of the program, measured with tools such as pre-tests and post-tests (see the sketch after this list).

 Stability: Program effectiveness in causing the intended change, such as with control groups and quasi-experimental designs.

 Dissemination: Program effectiveness across multiple sites to determine generalizability through statistical analysis.
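To make the Development-stage strategy above more concrete, the sketch below runs a paired pre-test/post-test comparison on fabricated knowledge scores. It assumes the numpy and scipy libraries are available, and a paired t-test is only one reasonable choice (a Wilcoxon signed-rank test is a common non-parametric alternative).

```python
# A minimal pre/post knowledge-change sketch for a Development-stage program.
# Scores are fabricated for illustration; requires numpy and scipy.
import numpy as np
from scipy import stats

pre = np.array([12, 15, 9, 14, 11, 13, 10, 16])    # pre-test scores for eight learners
post = np.array([16, 18, 13, 17, 15, 16, 14, 19])  # post-test scores for the same learners

mean_gain = (post - pre).mean()
t_stat, p_value = stats.ttest_rel(post, pre)  # paired t-test on the matched scores

print(f"Mean gain: {mean_gain:.1f} points, t = {t_stat:.2f}, p = {p_value:.3f}")
```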


McCoy and Braun (2014), in the Program Assessment Tool (PAT), provide another view of program life cycles that is based on the work of Boyle (1981). While developed specifically for the University of Maryland Extension, their comprehensive rubric can be used across Extension organizations. You can download a PDF of the PAT (updated in 2022) here.

Figure 5: Program Evaluation Rubric. Developed by McCoy and Braun (2014) for the University of Maryland Extension; updated in 2022. Graphic from the guide is used with permission of the authors. Accessed from https://extension.umd.edu/about/program-and-organizational-development/program-planning-evaluation-and-assessment/program-evaluation/ume-program-assessment-tool-pat/


EVALUATION PURPOSE AND SCOPE

Limited resources in terms of time and money require that evaluation projects be clearly targeted in terms of the purpose and scope (or boundary) of what is most needed and will be used. Michael Quinn Patton (2013), in his book Utilization-Focused Evaluation, says that evaluations should "be judged by their utility and actual use" (p. 37). Patton defines use as "how real people in the real world apply evaluation findings and experience and learn from the evaluation process" (p. 1). A U-FE checklist developed by Patton provides the details on how to plan and carry out a useful evaluation.

Guidelines for Establishing Purpose and Scope

Step-by-step guidance on how to determine your evaluation purpose or "frame the boundaries for an evaluation" is explained by BetterEvaluation. This resource walks through the following four question categories:

 Who are the primary intended users of this evaluation?

 What are the primary purposes and intended uses of the evaluation?

 What are the high-level questions the evaluation will seek to answer? How can these be developed?

 What should be the criteria and standards for judging performance? Whose criteria and standards matter? What process should be used to develop agreement about these?

The Centers for Disease Control (CDC) checklist for focusing an evaluation is based on the four evaluation standards:

 Utility: Who needs the information from this evaluation and how will they use it?

 Feasibility: How much money, time, skill, and effort can be devoted to this evaluation?


 Propriety: Who needs to be involved in the evaluation for it to be ethical?

 Accuracy: What design will lead to accurate information?

Tools for Establishing Purpose and Scope

The CDC's Developing an Effective Evaluation Plan: Setting the Course for Effective Program Evaluation is a comprehensive manual with multiple useful tools.

The Community Tool Box from the Center for Community Health & Development at the University of Kansas identified four main steps to develop an evaluation plan:

Clarify program objectives and goals.

Develop evaluation questions.

Develop evaluation methods.

Set up a timeline for evaluation activities.

This How to Develop an Evaluation Plan PowerPoint from the Community Tool Box walks through each of these four steps. A PDF of the PowerPoint is located here.


ASSET MAPPING, NEEDS ASSESSMENT AND STAKEHOLDER ANALYSIS

When ownership is local and national, and various stakeholders work together, program innovations have a greater chance to take root and survive.

— Dr. Ruth Simmons, President, Prairie View A&M University

Just like programs, program evaluations have stakeholders. Often, these are individuals who intend to use the evaluation results (see the section on evaluation purpose and scope and the discussion of Utilization-Focused Evaluation), who provide resources to support the program, who are involved in the program implementation, and who are beneficiaries of the program.

The Centers for Disease Control's (CDC) guide, Developing an Effective Evaluation Plan: Setting the Course for Effective Program Evaluation (2011) gives the following ways that stakeholders can help the evaluation:

 Determine and prioritize key evaluation questions.

 Pre-test data collection instruments.

 Facilitate data collection.

 Implement evaluation activities.

 Increase credibility of analysis and interpretation of evaluation information.

 Ensure evaluation results are used.

Asset Mapping

Community Asset Mapping refers to the process of creating an inventory of the skills, talents, and resources that exist within a community or neighborhood. Identifying the assets and skills possessed by residents, businesses, organizations, and institutions can support neighborhoods in reaching their optimum potential.

Understanding Community Assets

A community asset or resource is anything that improves the quality of a community. Community assets can include:

 Expertise and skills of individuals in the community

 Citizen groups

 Natural and built environments

 Physical spaces in the community (schools, churches, libraries, recreation centers)

 Local businesses and services

 Local institutions and organizations (private, public, nonprofit)

Why Use an Asset Map?

The process of asset mapping illuminates connections between people and places; it can foster a greater sense of community pride and ownership; it can build capacity for turning common ideas into positive actions. The knowledge, skills and resource information amassed through mapping can inform organizing and facilitating activities on topics that reflect the pulse of community-thinking.

There are many reasons that you may decide to do an Asset Map of your community or neighborhood. You may want to develop:

 A Community Map to paint a broad picture of community assets


 A Community Involvement Directory to showcase activities of formal and informal groups, including ways to get involved in their efforts

 A Neighborhood Business Directory listing neighborhood businesses and services

 An Individual Asset Bank featuring the gifts, talents, interests, and resources of individuals

In addition, you may want to create inventories or maps based on interests or specific topics. For example, you may decide to put together an inventory of:

 Transportation: public transportation stops, bike routes, flex car sites, carpooling opportunities, taxi services

 Childcare: individuals who provide childcare, are interested in swapping child care or collaborating on play dates

 Open Spaces: meeting spaces, parks, playgrounds, walking paths

 Food: community gardens, individual/family gardens, fruit trees, urban edibles, farmers markets

 Emergency Preparedness: water lines, gas lines, trucks, cell phones, ladders, fire extinguishers

 Local Economy: goods and services provided by individuals within the community

 Bartering: skills and stuff that neighbors are willing to barter for and share with other neighbors

The Asset Mapping Process

Identifying and mapping assets in your neighborhood or community can be as simple or as in-depth as you like. While each asset mapping project will ultimately involve different steps and outcomes, there are several key elements to consider in the development of your project:

 Identify and involve partners

 Define your community or neighborhood boundaries

 Define the purpose

 Determine what types of assets to include

 Identify the methods

 Report back

More resources are available at:

 UCLA Center for Health Policy Research – Asset Mapping

 University of Kansas Community Tool Box: Identifying Community Assets and Resources

o Website

o PowerPoint PDF

 Participatory Asset Mapping – A Community Research Lab Toolkit

 University of Wisconsin - Collecting Group Data Techniques Quick Tips

Needs Assessment

An integral step in the program development process is identifying the needs of a community. Formal and nonformal educators seeking to develop and deliver an educational program must first be informed of what their audience lacks in order to develop the right curriculum or training (Etling & Maloney, 1995). A need is the “discrepancy or gap between ‘what is’ and ‘what should be’” (Witkin & Altschuld, 1995, p. 4). The “what is” is the current state, the “what should be” is the desired or expected outcome, and the gap is the identified need(s). Extension professionals must understand which needs to target with educational programming in order to help achieve the desired situation (the “what should be”). A needs assessment is “a systematic set of procedures undertaken for the purpose of setting priorities and making decisions about program or organizational improvement and allocation of resources” (Witkin & Altschuld, 1995, p. 4).

A Three-Phase Plan for Assessing Needs

Each phase is laid out to guide the assessor through the entire needs assessment process (Witkin & Altschuld, 1995). The first phase, Pre-assessment, is exploratory by nature and seeks to help you prepare the needs assessment for implementation. Assessment is the second phase; data gathering and analysis occur here. During the last phase, Post-assessment, the Extension professional sets priorities, communicates results, and evaluates the needs assessment for effectiveness.


Figure 6: Three-phase plan for assessing needs (Witkin & Altschuld, 1995).

Needs Assessment Tools and Techniques

As mentioned earlier in this publication, the needs assessment should follow a systematic set of procedures. Using methods and protocols that have demonstrated high reliability and validity ensures the results of your own needs assessment are viable and trustworthy (Witkin & Altschuld, 1995). Both quantitative and qualitative tools and techniques exist, but be careful when determining which to use. Survey methods such as the Borich Model are useful when you already know the needs or set of skills required but aren’t sure which to focus on first. Other methods, such as interviews and the nominal group technique, are useful when you do not have any determined or identified needs.
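As a rough illustration of how Borich-style ratings are often summarized, the sketch below computes a mean weighted discrepancy score (MWDS) for a single competency from fabricated importance and ability ratings. This is one common formulation of the Borich calculation, not the only one, and the rating scale and data are assumptions made for the example.

```python
# A sketch of one common Borich-style calculation: the mean weighted discrepancy
# score (MWDS) for a single competency. All ratings are fabricated (1-5 scale).
importance = [5, 4, 5, 3, 4]   # how important respondents rate the competency
ability = [2, 3, 3, 2, 4]      # how able respondents feel they currently are

mean_importance = sum(importance) / len(importance)

# Per-respondent discrepancy (importance minus ability), weighted by the
# competency's mean importance, then averaged across respondents.
weighted = [(imp - ab) * mean_importance for imp, ab in zip(importance, ability)]
mwds = sum(weighted) / len(weighted)

print(f"Mean importance: {mean_importance:.2f}  MWDS: {mwds:.2f}")
# Competencies with higher MWDS values are typically prioritized for programming.
```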

The University of Florida Institute of Food and Agricultural Sciences (UF/IFAS) Extension has published an 11-part series on needs assessment. You can access that resource here.

You may also want to read the CDC’s Community Needs Assessment Workbook, available here. North Carolina State University’s Evaluation team has developed several county needs assessment tools; they are available here.

The CDC's checklist for engaging stakeholders can help you think through who the stakeholders of your evaluation are.

