
Extension Evaluation Matters A Professional Development Offering of the eXtension Foundation Impact Collaborative

Extension and evaluation both center on getting useful information to people. - Michael Quinn Patton, Journal of Extension, 1983

Attribution Extension Evaluation Matters eFieldbook Copyright © Diaz, J, McCoy, T. 2021, Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0). Published by eXtension Foundation.

e-pub: 978-1-7340417-2-9

Publish Date: 9/18/2021

Citations for this eFieldbook may be made using the following:

Diaz, J., & McCoy, T. (2020). Extension Evaluation Matters eFieldbook (2nd ed., 1st rev.). Kansas City: eXtension Foundation. ISBN: 978-1-7340417-2-9.

Producer: Ashley S. Griffin

Welcome to the Extension Evaluation Matters eFieldbook, a resource created for the Cooperative Extension Service and published by the eXtension Foundation. We welcome feedback and suggested resources for this eFieldbook, which could be included in any subsequent versions. This work is supported by New Technologies for Agriculture Extension grant no. 2015-41595-24254 from the USDA National Institute of Food and Agriculture. Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the view of the U.S. Department of Agriculture. For more information please contact:

eXtension Foundation
c/o Bryan Cave LLP
One Kansas City Place
1200 Main Street, Suite 3800
Kansas City, MO 64105-2122
https://impact.extension.org/

Table of Contents

Meet the Authors

The lead curators of this e-fieldbook are Teresa McCoy, Director, Learning and Organizational Development, Ohio State University Extension, and John Diaz, Extension Professor and Specialist, University of Florida. They have served as the 2019 and 2020 Fellows, respectively, for the National Association of Extension Program and Staff Development Professionals.

Dr. John Diaz

Teresa McCoy

2020 NAEPSDP eXtension Fellow
Assistant Professor and Extension Specialist
University of Florida

2019 NAEPSDP eXtension Fellow

Director, Learning and Organizational Development

Ohio State University Extension

Editorial Review Board

An expert team of reviewers from Extension has reviewed and continues to review the material and add resources. The reviewer team is made up of:

Dr. Celeste Allgood

Accountability and Impact Agent

Fort Valley State University

eFieldbook Reviewer

Ms. Kit Alviz

Program Planning and Evaluation Analyst

University of California


eFieldbook Reviewer

Dr. Virginia Brown

Senior Agent and FCS Evaluator

University of Maryland Extension

eFieldbook Reviewer

Dr. Scott Cummings

Professor and Extension Specialist

Texas A & M University


eFieldbook Reviewer

Dr. Vikram Koundinya

Evaluation Specialist

University of California-Davis


eFieldbook Reviewer

Dr. Alda Norris

Evaluation Specialist

University of Alaska

eFieldbook Reviewer

Introduction

Extension and evaluation both center on getting useful information to people. - Michael Quinn Patton, Journal of Extension, 1983

Welcome to Extension Evaluation Matters (or E2M). The name, E2M, has a double meaning. This e-fieldbook is a resource for all Extension professionals about the matters of Extension evaluation, such as planning and implementing an evaluation project. The name also emphasizes that evaluation matters in our work because it is about getting information to people for decision-making. Resources have been specifically chosen that can be put to work in your program right away. The e-fieldbook has three chapters. The first chapter is about standards and competencies in the practice of evaluation. The foundation of all evaluation practice is built upon ethical standards, integrity and honesty, and respect for people. The second chapter concerns evaluation planning: how to make sure you are clear about your purpose and the information you need. The third chapter is about evaluation implementation, when we collect and analyze data and answer our evaluation questions.

Chapter 1: Standards and Competencies

Just as evaluation standards provide guidance for making decisions when conducting program evaluation studies, evaluator competencies that specify the knowledge, skills, and dispositions central to effectively accomplishing those standards have the potential to further increase the effectiveness of evaluation efforts. — Stevahn, L., King, J. A., Ghere, G., & Minnema, J. (2005)

This chapter delves into the evaluation principles and competencies developed by the American Evaluation Association (AEA) and through research studies conducted over time. Another critical issue for researchers and/or evaluators is to make sure that they adhere to the ethical principles of respect for persons, beneficence, and justice established by the Belmont Report.

Chapter Contents:

1. Evaluation Guiding Principles
2. Evaluation Cultural Competence
3. Evaluator Competencies
4. Human Subjects Research

Standards and Competencies

Guiding principles, cultural competence, competencies, and working with human subjects. Image by Gerd Altmann from Pixabay

Evaluation Guiding Principles

The American Evaluation Association (AEA) developed a set of guiding principles for evaluators. These principles address the ethical and professional standards that evaluators should follow in all aspects of the evaluation process. Extension professionals, whether they describe themselves as evaluators or not, should adhere to these principles in their work.

The five principles are:

● Systematic inquiry
● Competence
● Integrity
● Respect for people
● Common good and equity

For Extension, our evaluations should adhere to these principles by:

● Being conducted in a thorough, systematic way that takes into account our context, our clientele, and the limitations of the evaluation.
● Using evaluation skills and knowledge that will enable the evaluation to be carried out.
● Practicing honesty, communicating clearly, disclosing any conflicts of interest, and operating with transparency.
● Treating people fairly and reducing risks or harm, ensuring that people are fully informed about the evaluation work, protecting confidentiality, and appreciating the different experiences and perspectives that people bring to the evaluation.
● Making sure the evaluation advances the common good of Extension, clientele, and the community.

To read about each of the principles in-depth, visit the AEA’s web page here.

This is a PDF of the full version of AEA Guiding Principles.

Resources:

American Evaluation Association (2011). Public statement on cultural competence in evaluation. Washington, DC: Author.

Evaluation Cultural Competence

The American Evaluation Association (AEA) defines a culturally competent evaluator as a person who “is prepared to engage with diverse segments of communities to include cultural and contextual dimensions important to the evaluation. Evaluators who strive for cultural competence: acknowledge the complexity of cultural identity; recognize the dynamics of power; recognize and eliminate bias in language; employ culturally appropriate methods.” AEA’s Public Statement on Cultural Competence in Evaluation provides a set of practices that you can use to integrate cultural competence into your evaluation work. The Centers for Disease Control and Prevention (CDC) developed a set of cultural competence standards that accompany their evaluation competencies. Along with the standards, CDC has identified “Practical Strategies for Culturally Competent Evaluation.”

With its emphasis on stakeholder engagement, this version of CDC’s Framework for Program Evaluation (see Figure 1) reflects an even greater commitment to cultural competence than do less participatory evaluation approaches. Evaluations guided by the CDC framework actively engage a range of stakeholders throughout the entire process, and cultural competence is essential for ensuring truly meaningful engagement. As evaluators, we have an ethical obligation to create an inclusive climate in which everyone invested in the evaluation, from agency head to program client, can fully participate. At the same time, meaningfully engaging stakeholders, particularly in the planning stage, will enhance the evaluation’s cultural competence.

Reflection


What strategies have you used to create culturally competent program evaluations?

References:

American Evaluation Association. (2011). Public statement on cultural competence in evaluation. Fairhaven, MA: Author. Retrieved from https://www.eval.org/p/cm/ld/fid=92

Centers for Disease Control and Prevention. (2014). Practical strategies for culturally competent evaluation. Atlanta, GA: U.S. Department of Health and Human Services. Retrieved from: https://www.cdc.gov/asthma/program_eval/cultural_competence_guide.pdf

Evaluator Competencies

Developed by AEA in 2018, these competencies frame “the important characteristics of professional evaluation practice.” The five competency domains are:

● Professional practice,
● Methodologies,
● Context,
● Planning and management, and
● Interpersonal.

In 2012, Rodgers, Hillaker, Haas, and Peters identified a set of Extension evaluation competencies that was based on a taxonomy developed by Ghere et al. (2006). These Extension competencies are:

● Project management,
● Systematic inquiry: quantitative, qualitative, and mixed-methods knowledge and skills, and
● Situational analysis.

In 2020, Diaz et al. explored the overlap between general program evaluation competencies and the context and content of Extension education, since Extension educators may need unique competencies to answer evaluation questions. Rodgers et al. (2012) represents the sole exploration of the essential competencies required by professionals who use evaluation as one part of their job portfolio, which leaves unanswered questions regarding the applicability of current evaluator competency models in such settings. A national expert panel of evaluation specialists identified 36 competencies necessary for Extension educators, organized into the five competency domains proposed by the American Evaluation Association:

● Professional practice,
● Methodologies,
● Context,
● Planning and management, and
● Interpersonal.

Read more: https://www.sciencedirect.com/science/article/pii/S0149718920300276

References:

Ghere, G., King, J. A., Stevahn, L., & Minnema, J. (2006). A professional development unit for reflecting on program evaluator competencies. American Journal of Evaluation, 27(1), 108-123.

Rodgers, M. S., Hillaker, B. D., Haas, B. E., & Peters, C. (2012). Taxonomy for assessing evaluation competencies in Extension. Journal of Extension [On-line], 50(4), Article 4FEA2. Available at https://joe.org/joe/2012august/a2.php

Human Subjects Research

The Common Rule for the protection of research participants is codified in the Code of Federal Regulations (45 CFR Part 46). These regulations are grounded in the Belmont Report of 1979. The report was written in response to abuses of people in the name of research, such as the medical experiments in Nazi Germany and infamous cases like the Tuskegee Syphilis Study and that of Henrietta Lacks, whose cells were harvested without her knowledge.

The Belmont Report provides three ethical guidelines that researchers should adhere to:

1. Respect for persons: First, individuals should be treated as autonomous agents, and second, persons with diminished autonomy are entitled to protection. This ethical guideline is applied through informed consent and voluntary participation in the research.

2. Beneficence: Persons are treated in an ethical manner not only by respecting their decisions and protecting them from harm, but also by making efforts to secure their well-being. This guideline is applied through participants understanding the risks and benefits of the research.

3. Justice: Who ought to receive the benefits of research and bear its burdens? This is a question of justice, in the sense of "fairness in distribution" or "what is deserved." This principle is applied through the fair selection of research participants.

Reflection

Your university has an Institutional Review Board (IRB) that ensures that researchers are following these regulations. This video provides a short introduction to the work of IRBs.

[embed]https://youtu.be/U8fme1boEbE[/embed]

Reflection

Most universities require that researchers participate in some type of training about human subjects research. A common training program is the Collaborative Institutional Training Initiative (CITI).

References:

Centers for Disease Control and Prevention (nd). U.S. Public Health Service syphilis study at Tuskegee. Retrieved from: https://www.cdc.gov/tuskegee/timeline.htm

Collaborative Institutional Training Initiative (CITI Program). (nd). Retrieved from: https://about.citiprogram.org/en/mission-and-history/

Department of Health, Education, and Welfare (1979). The Belmont Report: Ethical principles and guidelines for the protection of human subjects of research. Retrieved from: https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/read-the-belmont-report/index.html#xethical

Johns Hopkins Medicine (nd). The legacy of Henrietta Lacks. Retrieved from: https://www.hopkinsmedicine.org/henriettalacks/immortal-life-of-henrietta-lacks.html

Chapter 2: Evaluation Planning

To achieve great things, two things are needed: a plan, and not quite enough time. — Leonard Bernstein

Evaluation planning takes forethought and time to achieve the intended purposes. This chapter explores program frameworks, models, and life cycles, and provides resources to help evaluators/researchers scope the evaluation purpose and identify key stakeholders.

Chapter Contents :

1. Evaluation Frameworks
2. Program Theory and Logic Models
3. Programs, Life Cycles, and Evaluation
4. Evaluation Purpose and Scope
5. Stakeholder Analysis

Evaluation Planning:

Using frameworks, applying program theory and logic models, and understanding programs, life cycles, and purpose of evaluations.

Image by Gerd Altmann from Pixabay

Evaluation Framework Models

Scriven's (1991) definition of evaluation is the most commonly cited and used:

Evaluation is the process of determining the merit, worth, and value of things, and evaluations are the products of that process.

An evaluation framework is made up of the distinct steps involved in the overall evaluation process. While there may be some differences among various models, there are also similarities across them.

CDC Evaluation Framework for Public Health Programs

The Centers for Disease Control (CDC) Evaluation Framework for Public Health Programs provides an excellent overall framework from which to start your evaluation work. It is made up of six steps and includes a set of four standards that should guide the evaluation:

1. Engage stakeholders,
2. Describe the program,
3. Focus the evaluation design,
4. Gather credible evidence,
5. Justify conclusions, and
6. Ensure use and share lessons learned.

Step 1: Engaging Stakeholders. The evaluation cycle begins by engaging stakeholders (i.e., the persons or organizations having an investment in what will be learned from an evaluation and what will be done with the knowledge). Public health work involves partnerships; therefore, any assessment of a public health program requires considering the value systems of the partners. Stakeholders must be engaged in the inquiry to ensure that their perspectives are understood. When stakeholders are not engaged, an evaluation might not address important elements of a program’s objectives, operations, and outcomes. Therefore, evaluation findings might be ignored, criticized, or resisted because the evaluation did not address the stakeholders’ concerns or values. After becoming involved, stakeholders help to execute the other steps. Identifying and engaging the following three principal groups of stakeholders is critical:

● those involved in program operations (e.g., sponsors, collaborators, coalition partners, funding officials, administrators, managers, and staff);
● those served or affected by the program (e.g., clients, family members, neighborhood organizations, academic institutions, elected officials, advocacy groups, professional associations, skeptics, opponents, and staff of related or competing organizations); and
● primary users of the evaluation.

Step 2: Describing the Program. Program descriptions convey the mission and objectives of the program being evaluated. Descriptions should be sufficiently detailed to ensure understanding of program goals and strategies. The description should discuss the program’s capacity to effect change, its stage of development, and how it fits into the larger organization and community. Program descriptions set the frame of reference for all subsequent decisions in an evaluation. The description enables comparisons with similar programs and facilitates attempts to connect program components to their effects. Moreover, stakeholders might have differing ideas regarding program goals and purposes. Evaluations done without agreement on the program definition are likely to be of limited use. Sometimes, negotiating with stakeholders to formulate a clear and logical description will bring benefits before data are available to evaluate program effectiveness. Aspects to include in a program description are need, expected effects, activities, resources, stage of development, context, and logic model.

Step 3: Focusing the Evaluation Design. The evaluation must be focused to assess the issues of greatest concern to stakeholders while using time and resources as efficiently as possible. Not all design options are equally well-suited to meeting the information needs of stakeholders. After data collection begins, changing procedures might be difficult or impossible, even if better methods become obvious. A thorough plan anticipates intended uses and creates an evaluation strategy with the greatest chance of being useful, feasible, ethical, and accurate. Among the items to consider when focusing an evaluation are purpose, users, uses, questions, methods, and agreements.

Step 4: Gathering Credible Evidence. An evaluation should strive to collect information that will convey a well-rounded picture of the program so that the information is seen as credible by the evaluation’s primary users. Information (i.e., evidence) should be perceived by stakeholders as believable and relevant for answering their questions. Such decisions depend on the evaluation questions being posed and the motives for asking them. For certain questions, a stakeholder’s standard for credibility might require the results of a controlled experiment, whereas for another question a set of systematic observations (e.g., interactions between an outreach worker and community residents) would be the most credible. Consulting specialists in evaluation methodology might be necessary in situations where concern for data quality is high or where serious consequences are associated with making errors of inference (i.e., concluding that program effects exist when none do, concluding that no program effects exist when in fact they do, or attributing effects to a program that has not been adequately implemented).

Step 5: Justifying Conclusions. The evaluation conclusions are justified when they are linked to the evidence gathered and judged against agreed-upon values or standards set by the stakeholders. Stakeholders must agree that conclusions are justified before they will use the evaluation results with confidence. Justifying conclusions on the basis of evidence involves standards, analysis and synthesis, interpretation, judgment, and recommendations.

Step 6: Ensuring Use and Sharing Lessons Learned. Lessons learned in the course of an evaluation do not automatically translate into informed decision-making and appropriate action. Deliberate effort is needed to ensure that the evaluation processes and findings are used and disseminated appropriately. Preparing for use involves strategic thinking and continued vigilance, both of which begin in the earliest stages of stakeholder engagement and continue throughout the evaluation process. Five elements are critical for ensuring use of an evaluation: design, preparation, feedback, follow-up, and dissemination.

The four standards in the center of the framework help to ensure the quality and effectiveness of the evaluation.

Rainbow Framework

Better Evaluation, a nonprofit collaborative organization, developed the Rainbow Framework, which is made up of seven steps. Each step of the framework is assigned a color that makes up the "rainbow."

The Better Evaluation Rainbow Framework prompts you to think about a series of key questions. It is important to consider all these issues, including reporting, at the beginning of an evaluation. The Framework can be used to plan an evaluation or to locate information about particular types of methods.

1. MANAGE an evaluation or evaluation system
Manage an evaluation (or a series of evaluations), including deciding who will conduct the evaluation and who will make decisions about it.
● Understand and engage stakeholders: Who needs to be involved in the evaluation? How can they be identified and engaged?
● Establish decision making processes: Who will have the authority to make what type of decisions about the evaluation? Who will provide advice or make recommendations about the evaluation? What processes will be used for making decisions?
● Decide who will conduct the evaluation: Who will actually undertake the evaluation?
● Determine and secure resources: What resources (time, money, and expertise) will be needed for the evaluation and how can they be obtained? Consider both internal (e.g., staff time) and external (e.g., previous participants’ time) resources.
● Define ethical and quality evaluation standards: What will be considered a high quality and ethical evaluation? How should ethical issues be addressed?
● Document management processes and agreements: How will the evaluation’s management processes and agreements be documented?
● Develop planning documents for the evaluation: What needs to be done to design, plan and implement the evaluation? What planning documents need to be created (evaluation framework, evaluation plan, evaluation design, evaluation work plan)?
● Review evaluation (do meta-evaluation): How will the evaluation itself be evaluated, including the plan, process, and report?
● Develop evaluation capacity: How can the ability of individuals, groups and organizations to conduct and use evaluations be strengthened?

2. DEFINE what is to be evaluated
Develop a description (or access an existing version) of what is to be evaluated and how it is understood to work.
● Develop initial description: What exactly is being evaluated?
● Develop program theory / logic model: How is the intervention understood to work (program theory, theory of change, logic model)?
● Identify potential unintended results: What are possible unintended results (both positive and negative) that will be important to address in the evaluation?

3. FRAME the boundaries for an evaluation
Set the parameters of the evaluation – its purposes, key evaluation questions and the criteria and standards to be used.
● Identify primary intended users: Who are the primary intended users of this evaluation?
● Decide purpose: What are the primary purposes and intended uses of the evaluation?
● Specify the key evaluation questions: What are the high-level questions the evaluation will seek to answer? How can these be developed?
● Determine what ‘success’ looks like: What should be the criteria and standards for judging performance? Whose criteria and standards matter? What process should be used to develop agreement about these?

4. DESCRIBE activities, outcomes, impacts and context
Collect and retrieve data to answer descriptive questions about the activities of the project/program/policy, the various results it has had, and the context in which it has been implemented.
● Sample: What sampling strategies will you use for collecting data?
● Use measures, indicators or metrics: What measures or indicators will be used? Are there existing ones that should be used or will you need to develop new measures and indicators?
● Collect and/or retrieve data: How will you collect and/or retrieve data about activities, results, context and other factors?
● Manage data: How will you organize and store data and ensure its quality?
● Combine qualitative and quantitative data: How will you combine qualitative and quantitative data?
● Analyze data: How will you investigate patterns in the numeric or textual data?
● Visualize data: How will you display data visually?

5. UNDERSTAND CAUSES of outcomes and impacts
Collect and analyze data to answer causal questions about what has produced the outcomes and impacts that have been observed.
● Check that the results support causal attribution: How will you assess whether the results are consistent with the theory that the intervention produced them?
● Compare results to the counterfactual: How will you compare the factual with the counterfactual - what would have happened without the intervention?
● Investigate possible alternative explanations: How will you investigate alternative explanations?

6. SYNTHESISE data from one or more evaluations
Combine data to form an overall assessment of the merit or worth of the intervention, or to summarize evidence across several evaluations.
● Synthesize data from a single evaluation: How will you synthesize data from a single evaluation?
● Synthesize data across evaluations: Do you need to synthesize data across evaluations? If so, how should this be done?
● Generalize findings: How can the findings from this evaluation be generalized to the future, to other sites and to other programs?

7. REPORT AND SUPPORT USE of findings
Develop and present findings in ways that are useful for the intended users of the evaluation, and support them to make use of the findings.
● Identify reporting requirements: What timeframe and format is required for reporting?
● Develop reporting media: What types of reporting formats will be appropriate for the intended users?
● Ensure accessibility: How can the report be easy to access and use for different users?
● Develop recommendations: Will the evaluation include recommendations? How will these be developed and by whom?
● Support use: In addition to engaging intended users in the evaluation process, how will you support the use of evaluation findings?

References:

BetterEvaluation (2014). Using the BetterEvaluation Rainbow Framework. Retrieved from: www.betterevaluation.org

Centers for Disease Control and Prevention. (1999). Framework for program evaluation in public health. MMWR, 48(No. RR-11). Retrieved from: https://www.cdc.gov/eval/framework/index.htm

Scriven, M. (1991). Evaluation thesaurus (4th ed.). Newbury Park, CA: Sage.

Program Theory and Logic Models

Program Theory

Program theory is at the foundation of any evaluation because it articulates what the program is supposed to accomplish: the outcomes. Program theory is also called a theory of change or a pathway of change. The program is the intervention that is used to cause behavior change. Watch this brief video (less than three minutes) for a simple explanation of theory of change that is relevant to Extension work:

[embed]https://youtu.be/gAkajtmYnNg[/embed]

Logic Models

A logic model is used in program theory to show the chain of events or causal links that will produce the outcomes.
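If it helps to see that chain written out explicitly, here is a minimal, hypothetical sketch of a logic model captured as a simple data structure. The program name, activities, and outcomes below are invented for illustration only; they are not a prescribed template.

# Illustrative sketch: a logic model for a hypothetical Extension nutrition program,
# following the common inputs -> activities -> outputs -> outcomes chain.
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    program: str
    inputs: list = field(default_factory=list)                 # resources invested
    activities: list = field(default_factory=list)             # what the program does
    outputs: list = field(default_factory=list)                # direct products (sessions, participants)
    short_term_outcomes: list = field(default_factory=list)    # changes in knowledge, attitudes, skills
    medium_term_outcomes: list = field(default_factory=list)   # changes in behavior or practice
    long_term_outcomes: list = field(default_factory=list)     # changes in condition

model = LogicModel(
    program="Family Nutrition Series (hypothetical)",
    inputs=["educator time", "curriculum", "county funding"],
    activities=["six hands-on cooking and budgeting workshops"],
    outputs=["6 workshops delivered", "40 participants completing at least 4 sessions"],
    short_term_outcomes=["increased knowledge of meal planning"],
    medium_term_outcomes=["participants prepare more meals at home"],
    long_term_outcomes=["improved household food security"],
)

# Print the causal chain in order, which is essentially what a logic model diagram shows.
for component in ("inputs", "activities", "outputs",
                  "short_term_outcomes", "medium_term_outcomes", "long_term_outcomes"):
    print(f"{component}: {getattr(model, component)}")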

Reflection

Logic Model Training

The University of Wisconsin Extension Program Development and Evaluation unit provides a comprehensive on-line logic model training module. The module is made up of seven sections:

1. What is a logic model?
2. More about outcomes
3. More about your program "logic"
4. What does a logic model look like?
5. How do I draw a logic model?
6. How good is my logic model?
7. Using logic models in Evaluation: Indicators and Measures

Logic Model Overview

For a short explanation of logic models (a little over three minutes), take a look at this YouTube video produced by the North Carolina Coalition Against Domestic Violence:

[embed]https://youtu.be/wFaJo6FF_yA[/embed]

Love your Logic Model

If you want to learn how to love your logic model (about an hour and 15 minutes), listen to Tom Chapel, Chief Evaluation Officer at the Centers for Disease Control and Prevention.

[embed]https://youtu.be/2HrG5ButP_g[/embed]

References:

University of Wisconsin Extension Program Development & Evaluation (nd). Welcome to enhancing program performance with logic models. Retrieved from: http://lmcourse.ces.uwex.edu/interface/coop_M1_Overview.htm

Programs, Life Cycles, and Evaluation

Programs

In Extension, we talk about programs a great deal, AND we assume that we are all talking about the same thing. That may not be the case. Some people may call a field tour a program; others may call it an activity. For Extension educational work, Israel, Harder, & Brodeur (2015) define a program as "a comprehensive set of activities that includes an educational component that is intended to bring about a sequence of outcomes among targeted clients." Review their fact sheet, What is an Extension Program?, for an introduction to Extension programs.

Program Life Cycles

Programs have life cycles. Trochim et al. (2016), in The Guide to the Systems Evaluation Protocol, identify four main life-cycle stages that a program moves through (and sometimes back and forth in the cycle):

1. Initiation: The program is just getting started and may be in a pilot phase. Major changes are generally taking place as trial and error occur.
2. Development: The program is implemented successfully and minor revisions occur.
3. Stability: The program is producing consistent results and the curriculum and protocols are in place.
4. Dissemination: The program is being adopted at multiple locations and within different contexts.

Characterizing a Program Extracted from Trochim et al (2016), with permission of the author

Program Evaluation

The program life cycle determines the level of evaluation that should be implemented. It would not make sense, in terms of expectations and resources, to plan a sophisticated outcome evaluation for a program that is in the initiation phase. Trochim et al. recommend that the program life cycle be aligned with these evaluation strategies (a minimal pre/post analysis sketch follows below):

1. Initiation: Process evaluation for rapid feedback, such as post-only reaction surveys and open-ended questions.
2. Development: Change in knowledge, attitudes, skills, and aspirations (KASA) outcomes because of the program, such as pre-tests and post-tests.
3. Stability: Program effectiveness in causing the intended change, such as with control groups and quasi-experimental designs.
4. Dissemination: Program effectiveness across multiple sites to determine generalizability through statistical analysis.

McCoy and Braun (2014), in the Program Assessment Tool, provide another view of program life cycles that is based on the work of Boyle (1981). While developed specifically for the University of Maryland Extension, their comprehensive rubric can be used across Extension organizations.

Program Evaluation Rubric

Developed by McCoy and Braun (2014) for the University of Maryland Extension
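To make the "Development" stage strategy above concrete (measuring KASA change with pre-tests and post-tests), here is a minimal analysis sketch. The scores are fabricated for illustration, and the paired t-test shown assumes roughly interval-level, approximately normal score data; a nonparametric alternative such as the Wilcoxon signed-rank test is often used instead.

# Minimal sketch: paired pre/post analysis of knowledge scores (fabricated data).
# Assumes each participant has a matched pre-test and post-test score.
from statistics import mean
from scipy import stats  # requires scipy

pre  = [12, 15, 9, 14, 11, 13, 10, 16, 12, 14]   # scores before the program
post = [16, 18, 12, 17, 15, 15, 13, 19, 14, 18]  # scores for the same participants afterward

result = stats.ttest_rel(post, pre)               # paired t-test on the matched scores
mean_change = mean(b - a for a, b in zip(pre, post))

print(f"Mean change: {mean_change:.1f} points")
print(f"Paired t-test: t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
# A small p-value suggests the average pre-to-post change is unlikely to be chance alone,
# but without a comparison group it says nothing about what caused the change.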

References:

Boyle, P. A. (1981). Planning better programs. New York: McGraw-Hill.

Israel, G., Harder, A., & Brodeur, C. W. (2015). What is an Extension program? Gainesville, FL: University of Florida/Institute of Food and Agricultural Sciences.

McCoy, T., & Braun, B. (2014). The program assessment tool. College Park, MD: University of Maryland Extension.

Trochim, W., Urban, J. B., Hargraves, M., Hebbard, C., Buckley, J., Archibald, T., Johnson, M., & Burgermaster, M. (2016). The guide to the systems evaluation protocol (V3.1). Ithaca, NY: Cornell.

Graphic from The Guide is used with permission of the author.

Image by Ewa Urban from Pixabay

Evaluation Purpose and Scope

Limited resources in terms of time and money require that evaluation projects be clearly targeted in terms of the purpose and scope (or boundary) of what is most needed and will be used. Michael Quinn Patton (2013), in his book Utilization-Focused Evaluation, says that evaluations should "be judged by their utility and actual use" (p. 37). Patton defines use as "how real people in the real world apply evaluation findings and experience and learn from the evaluation process" (p. 1). A UFE checklist developed by Patton provides the details on how to plan and carry out a useful evaluation.

Guidelines for Establishing Purpose and Scope

Step-by-step guidance on how to determine your evaluation purpose, or "frame the boundaries for an evaluation," is provided by BetterEvaluation in this two-page document. The resource walks through the following four question categories:

1. Who are the primary intended users of this evaluation?
2. What are the primary purposes and intended uses of the evaluation?
3. What are the high-level questions the evaluation will seek to answer? How can these be developed?
4. What should be the criteria and standards for judging performance? Whose criteria and standards matter? What process should be used to develop agreement about these?

The Centers for Disease Control (CDC) checklist for focusing an evaluation is based on the four evaluation standards:

1. Utility: Who needs the information from this evaluation and how will they use it?
2. Feasibility: How much money, time, skill, and effort can be devoted to this evaluation?
3. Propriety: Who needs to be involved in the evaluation for it to be ethical?
4. Accuracy: What design will lead to accurate information?

Tools for Establishing Purpose and Scope

The CDC's Developing an Effective Evaluation Plan: Setting the Course for Effective Program Evaluation is a comprehensive manual with multiple useful tools. The Community Tool Box from the Center for Community Health & Development at the University of Kansas identified four main steps to develop an evaluation plan:

1. Clarify program objectives and goals.
2. Develop evaluation questions.
3. Develop evaluation methods.
4. Set up a timeline for evaluation activities.

This How to Develop an Evaluation Plan Power Point from the Tool Box walks through each of these four steps.

References:

BetterEvaluation (2013). Frame the boundaries for an evaluation. Retrieved from: https://www.betterevaluation.org/sites/default/files/Frame%20-%20Compact.pdf

Centers for Disease Control and Prevention (2018). CDC program evaluation framework checklist for step 3: Focus the evaluation. Retrieved from: https://www.cdc.gov/eval/steps/step3/Step-3-Checklist-Final.pdf

Patton, M. Q. (2013). Utilization-focused evaluation (4th ed.). Thousand Oaks, CA: Sage.

University of Kansas Center for Community Health and Development (nd). Community tool box. Retrieved from: https://ctb.ku.edu/en/table-of-contents/evaluate/evaluation/evaluation-plan/main

Image by 3D Animation Production Company from Pixabay

Asset Mapping, Needs Assessment, and Stakeholder Analysis

When ownership is local and national, and various stakeholders work together, program innovations have a greater chance to take root and survive. — Dr. Ruth Simmons, President, Prairie View A&M University

Just like programs, program evaluations have stakeholders. Often, these are individuals who intend to use the evaluation results (see the section on evaluation purpose and scope and the discussion of Utilization-Focused Evaluation), who provide resources to support the program, who are involved in the program implementation, and who are beneficiaries of the program. The Centers for Disease Control's (CDC) guide, Developing an Effective Evaluation Plan: Setting the Course for Effective Program Evaluation (2011) gives the following ways that stakeholders can help the evaluation:

● Determine and prioritize key evaluation questions.
● Pre-test data collection instruments.
● Facilitate data collection.
● Implement evaluation activities.
● Increase credibility of analysis and interpretation of evaluation information.
● Ensure evaluation results are used (p. 7).

Asset Mapping

Community Asset Mapping refers to the process of creating an inventory of the skills, talents, and resources that exist within a community or neighborhood. Identification of the assets and skills possessed by residents, businesses, organizations, and institutions can support neighborhoods in reaching their optimum potential.

Understanding Community Assets

A community asset or resource is anything that improves the quality of a community. Community assets can include:

● Expertise and skills of individuals in the community
● Citizen groups
● Natural and built environments
● Physical spaces in the community (schools, churches, libraries, recreation centers)
● Local businesses and services
● Local institutions and organizations (private, public, nonprofit)

Why Use an Asset Map?

The process of asset mapping illuminates connections between people and places; it can foster a greater sense of community pride and ownership; and it can build capacity for turning common ideas into positive actions. The knowledge, skills, and resource information amassed through mapping can inform organizing and facilitating activities on topics that reflect the pulse of community thinking. There are many reasons that you may decide to do an asset map of your community or neighborhood. You may want to develop:

● A Community Map to paint a broad picture of community assets
● A Community Involvement Directory to showcase activities of formal and informal groups, including ways to get involved in their efforts
● A Neighborhood Business Directory listing neighborhood businesses and services
● An Individual Asset Bank featuring the gifts, talents, interests, and resources of individuals

In addition, you may want to create inventories or maps based on interests or specific topics. For example, you may decide to put together an inventory of:

● Transportation: public transportation stops, bike routes, flex car sites, carpooling opportunities, taxi services
● Child care: individuals who provide child care or are interested in swapping child care or collaborating on play dates
● Open Spaces: meeting spaces, parks, playgrounds, walking paths
● Food: community gardens, individual/family gardens, fruit trees, urban edibles, farmers markets
● Emergency Preparedness: water lines, gas lines, trucks, cell phones, ladders, fire extinguishers
● Local Economy: goods and services provided by individuals within the community
● Bartering: skills and stuff that neighbors are willing to barter for and share with other neighbors

The Asset Mapping Process

Identifying and mapping assets in your neighborhood or community can be as simple or as in-depth as you like. While each asset mapping project will ultimately involve different steps and outcomes, there are several key elements to consider in the development of your project:

● Identify and involve partners
● Define your community or neighborhood boundaries
● Define the purpose
● Determine what types of assets to include
● Identify the methods
● Report back

Read more at: https://naaee.org/sites/default/files/assetmappingworkbook2013.pdf

More resources:

● https://healthpolicy.ucla.edu/programs/health-data/trainings/Documents/tw_cba20.pdf
● https://ctb.ku.edu/en/table-of-contents/assessment/assessing-community-needs-and-resources/identify-community-assets/main

● https://www.communityscience.com/knowledge4equity/AssetMappingToolkit.pdf
● https://fyi.extension.wisc.edu/programdevelopment/files/2017/07/Tipsheet3.pdf
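If you collect asset information with a form or spreadsheet, a few lines of code can turn the raw entries into the kind of topic-based inventory described above. This is only an illustrative sketch; the categories and entries are invented.

# Illustrative sketch: grouping collected community assets into a topic-based inventory.
# The entries are invented examples of the kinds of assets listed above.
from collections import defaultdict

raw_entries = [
    {"asset": "County library meeting room", "category": "Open Spaces"},
    {"asset": "Saturday farmers market", "category": "Food"},
    {"asset": "Community garden on Elm St.", "category": "Food"},
    {"asset": "Retired nurse willing to teach first aid", "category": "Individual Skills"},
    {"asset": "Church van available weekdays", "category": "Transportation"},
]

inventory = defaultdict(list)
for entry in raw_entries:
    inventory[entry["category"]].append(entry["asset"])

# Print a simple inventory by topic, the starting point for a community asset map.
for category in sorted(inventory):
    print(category)
    for asset in inventory[category]:
        print(f"  - {asset}")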

Needs Assessment

An integral step in the program development process is identifying the needs of a community. Formal and nonformal educators seeking to develop and deliver an educational program must first be informed of what their audience lacks in order to develop the right curriculum or training (Etling & Maloney, 1995). A need is the “discrepancy or gap between ‘what is’ and ‘what should be’” (Witkin & Altschuld, 1995, p. 4). The “what is” is the current state, the “what should be” is the desired or expected outcome, and the gap is the identified need(s). Extension professionals must understand which needs to target with educational programming in order to help achieve the desired situation (the “what should be”). A needs assessment is “a systematic set of procedures undertaken for the purpose of setting priorities and making decisions about program or organizational improvement and allocation of resources” (Witkin & Altschuld, 1995, p. 4).

A Three-Phase Plan for Assessing Needs

Each phase is laid out to guide the assessor through the entire needs assessment process (Witkin & Altschuld, 1995). The first phase, pre-assessment, is exploratory by nature and seeks to help you prepare the needs assessment for implementation. Assessment is the second phase; data gathering and analysis occur here. During the last phase, post-assessment, the Extension professional sets priorities, communicates results, and evaluates the needs assessment for effectiveness.

Needs Assessment Tools and Techniques

As mentioned earlier in this publication, the needs assessment should follow a systematic set of procedures. Using methods and protocols that have demonstrated high reliability and validity ensures the results of your own needs assessment are viable and trustworthy (Witkin & Altschuld, 1995). Both quantitative and qualitative tools and techniques exist, but be careful when determining which to use. Survey methods such as the Borich Model are useful when you already know the needs or set of skills required but aren’t sure which to focus on first (a minimal calculation sketch follows below). Other methods, such as interviews and the nominal group technique, are useful when you do not have any determined or identified needs.

Read more at: https://edis.ifas.ufl.edu/topic_series_conducting_the_needs_assessment

Additional resources:

● https://www.cdc.gov/globalhealth/healthprotection/fetp/training_modules/15/community-needs_pw_final_9252013.pdf
● https://evaluation.ces.ncsu.edu/county-needs-assessment/
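The Borich-style calculation mentioned above is straightforward arithmetic: each respondent rates both the importance of a competency and their own ability, and competencies are ranked by a mean weighted discrepancy score (MWDS). The sketch below uses fabricated ratings and the commonly cited weighting (discrepancy multiplied by the item's mean importance, averaged across respondents); confirm the exact convention in the protocol you adopt.

# Minimal sketch of a Borich-style needs assessment calculation (fabricated ratings).
# For each respondent: discrepancy = importance - ability;
# weighted discrepancy = discrepancy * mean importance for that item;
# MWDS = average weighted discrepancy across respondents. Higher MWDS = higher-priority need.
from statistics import mean

ratings = {
    # competency: list of (importance, ability) pairs on a 1-5 scale, one pair per respondent
    "Developing logic models":    [(5, 2), (4, 3), (5, 3), (4, 2)],
    "Writing survey questions":   [(4, 4), (5, 4), (4, 3), (4, 4)],
    "Analyzing qualitative data": [(4, 2), (3, 2), (4, 1), (5, 2)],
}

def mwds(pairs):
    mean_importance = mean(i for i, _ in pairs)
    weighted = [(i - a) * mean_importance for i, a in pairs]
    return mean(weighted)

for competency, score in sorted(((c, mwds(p)) for c, p in ratings.items()),
                                key=lambda x: x[1], reverse=True):
    print(f"{competency}: MWDS = {score:.2f}")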

Reflection

The CDC's checklist for engaging stakeholders can help you think through who the stakeholders of your evaluation are. The Systems Evaluation Protocol (SEP) guide provides a graphic that can also help you identify your evaluation stakeholders. The graphic has the program at the center, but you can replace that with "program evaluation" and start to define your stakeholders.

Evaluation Stakeholder Analysis Graphic from the SEP guide, with permission of the author

Another useful way to think about stakeholders and their influence on the evaluation is to use an onion diagram, a graphic of concentric circles. In this image, the person at the center of the onion is the most influential and important to the project. As you move toward the outer rings, the stakeholders have less influence and impact on the evaluation. It's easy to use the basic model provided in this Power Point Stakeholder Map to create your own stakeholder analysis map.

References:

Centers for Disease Control and Prevention. Program evaluation framework checklist for Step 1: Engage stakeholders. Retrieved from: https://www.cdc.gov/eval/steps/step1/index.htm

Trochim, W., Urban, J. B., Hargraves, M., Hebbard, C., Buckley, J., Archibald, T., Johnson, M., & Burgermaster, M. (2016). The guide to the systems evaluation protocol (V3.1). Ithaca, NY: Cornell.

Stakeholder graphic from The Guide is used with permission of the author.

Images by Gerd Altmann from Pixabay

Chapter 3: Implementation

A surplus of effort could overcome a deficit of confidence. — Sonia Sotomayor

Systematic inquiry forms the foundation of evaluation and research. Evaluators and researchers must have a working knowledge of methods and data analysis, and of the rigorous approach required to follow designs and protocols.

Evaluation Implementation: Using and analyzing primary and secondary data collected with quantitative and qualitative methods.

Chapter Contents:

1. Evaluation Implementation
2. Primary and Secondary Data
3. Quantitative Methods
4. Qualitative Methods
5. Data Analysis

Evaluation Implementation

The University of Minnesota Children, Youth, and Families at Risk (CYFAR) provides a plethora of evaluation resources. An introduction to evaluation implementation and choosing an approach is provided by Dr. Mary Marczak, Research and Evaluation Specialist, Center for Family Development, University of Minnesota.


A table summarizing Mary's information is on this web page.

References:

University of Minnesota Children, Youth, and Families at Risk (nd). Quantitative or qualitative approaches. Retrieved from: https://cyfar.org/quantitative-or-qualitative-approaches

Primary and Secondary Data

In evaluation, there are two general types of data: primary data and secondary data. Primary data is collected by the researcher using such means as surveys, interviews, or direct observation. Secondary data is obtained from other existing sources, such as databases (for example, Census data), records (such as financial statements or enrollment charts), or other types of written material (books, publications). The University of Minnesota Children, Youth, and Families at Risk (CYFAR) provides a summary of the differences between the two types of data.

Primary Data:

● Data collected by the evaluator using methods such as observations, surveys, or interviews.
● Can be more expensive and time-consuming, but allows for more targeted data collection.
● Offers an opportunity to review any and all available secondary data before collecting primary data (saving time).

Secondary Data:

● Provides information if existing data on a topic or project is not current or directly applicable to the chosen evaluation questions.
● Information that has already been collected, processed, and reported by another researcher or entity.
● Will reveal which questions still need to be addressed and what data has yet to be collected.

There are many sources of secondary data that Extension educators can access to help inform program development and evaluation. A rich source of data is the U.S. Census. Over time, the Census has added more data, more search capability, and more data visualizations. If you have 1.5 hours to watch a webinar that goes in-depth, take a look at this video:

[embed]https://youtu.be/5aUMspCxJik[/embed]

Census.gov has a library in the main menu where you can access many infographics that could be used in reports, grant applications, Power Points, and webinars. Take a look at this video (less than two minutes) to learn more about the data visualizations in Census.gov:

[embed]https://youtu.be/QovimkK74-s[/embed]
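Beyond the point-and-click tools shown in the videos above, the Census Bureau also publishes a public data API, which is convenient when you need the same figure for many counties at once. The sketch below is hedged: the dataset vintage, the variable code (B01003_001E is assumed here to be the ACS 5-year total population estimate), and the state FIPS code are assumptions to verify at api.census.gov before relying on the results.

# Hedged sketch: pulling county-level ACS figures from the Census Bureau's public data API.
# The year, variable code, and state FIPS code below are assumptions for illustration;
# verify current dataset names and variable codes at https://api.census.gov before use.
import requests

URL = "https://api.census.gov/data/2021/acs/acs5"  # ACS 5-year estimates (assumed vintage)
params = {
    "get": "NAME,B01003_001E",   # county name + total population (assumed variable code)
    "for": "county:*",           # every county...
    "in": "state:12",            # ...in one state (12 = Florida FIPS code)
    # "key": "YOUR_API_KEY",     # an API key is optional for low request volumes
}

response = requests.get(URL, params=params, timeout=30)
response.raise_for_status()
rows = response.json()           # first row is the header, remaining rows are data

header, data = rows[0], rows[1:]
for name, population, *_ in sorted(data, key=lambda r: int(r[1]), reverse=True)[:5]:
    print(f"{name}: {int(population):,}")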

Here are suggested resources by major program areas:

Agriculture:

● United States Department of Agriculture Census of Agriculture
● United States Department of Agriculture Census of Horticulture
● United States Department of Agriculture Census of Aquaculture
● United States Department of Agriculture Organic Agriculture
● United States Department of Agriculture-Economic Research Service Data Products

Family and Consumer Sciences (including health):

● County Health Rankings & Roadmaps
● Centers for Disease Control and Prevention: Data & Statistics
● HealthData.gov
● The United States Census Bureau-Health
● National Financial Educators Council: Financial Literacy Statistics
● National Financial Capability Study
● Prosperity Now Scorecard
● Consumer Financial Protection Bureau

4-H Youth Development:

● Annie E. Casey Foundation
● Youth.gov
● Centers for Disease Control and Prevention-Adolescent and School Health: Data and Statistics
● Health and Human Services Office of Population Affairs Facts & Stats

Environment & Natural Resources:

● Natural Resources Conservation Service Soils Survey

In addition, the Pew Research Center maintains "Fact Tank: News in the Numbers."


References:

University of Minnesota Children, Youth, and Families at Risk (nd). Quantitative or qualitative approaches. Retrieved from: https://cyfar.org/quantitative-or-qualitative-approaches

Image by Harish Sharma from Pixabay

Evaluation Techniques

Introduction

When collecting primary data for the purposes of program evaluation, there are multiple techniques to consider. These include:

● Surveys
● Direct observation
● Focus groups and interviews

Selecting the right technique depends on the evaluation purpose and the type of data of interest (quantitative vs. qualitative). The comparison below summarizes what each method is useful for providing, along with its advantages and limitations.

Key Informant Interviews
Useful for providing: general descriptive data; understanding of attitudes and behaviors; suggestions and recommendations; information to interpret quantitative data.
Advantages: provides in-depth, inside information; flexibility permits exploring unanticipated topics; easy to administer; relatively inexpensive; takes 4-6 weeks.
Limitations: does not generate quantitative data; susceptible to interviewer and selection bias.

Focus Group Interviews
Useful for providing: customer views on services, products, benefits; information on implementation problems; suggestions and recommendations for improving activities.
Advantages: can be completed rapidly (5 weeks); very economical; group discussion may reduce inhibitions, allowing free exchange of ideas.
Limitations: does not provide quantitative data; discussion may be dominated by a few individuals; susceptible to moderator bias.

Direct Observation
Useful for providing: data on physical infrastructure, supplies, conditions; information about an agency’s delivery systems, services; insights into behaviors or events.
Advantages: phenomenon can be examined in its natural setting; may reveal conditions or problems informants are unaware of; can be completed in 3-4 weeks.
Limitations: susceptible to observer bias; act of observing can affect behaviors; distortions can occur if sites selected are not representative.

Surveys
Useful for providing: quantitative data on narrowly focused questions; data when probability sampling is difficult; data on attitudes, beliefs, behaviors of customers or partners.
Advantages: can generate quantitative data; reduces non-random sampling errors; requires limited personnel.
Limitations: susceptible to sampling bias; requires statistical analysis skills; inappropriate for gathering in-depth, qualitative information.

Additional Resources:

● https://www.racialequitytools.org/resourcefiles/G3658_4.pdf
● https://pdf.usaid.gov/pdf_docs/pnaby209.pdf
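Because the comparison above notes that survey data "requires statistical analysis skills," here is a minimal sketch of a first-pass summary of Likert-style post-program items. The responses and the 1-5 agreement scale are fabricated, and pandas is assumed to be available.

# Minimal sketch: first-pass summary of Likert-style survey items (fabricated responses).
import pandas as pd

responses = pd.DataFrame({
    "learned_something_new": [5, 4, 4, 5, 3, 4, 5, 4],  # 1 = strongly disagree ... 5 = strongly agree
    "will_change_practice":  [4, 3, 4, 5, 3, 3, 4, 4],
    "would_recommend":       [5, 5, 4, 5, 4, 4, 5, 5],
})

# Item-level means and spread give a quick read on where the program landed.
print(responses.describe().loc[["mean", "std"]].round(2))

# Frequency counts per item are often more honest to report than means for ordinal scales.
for item in responses.columns:
    print(f"\n{item}")
    print(responses[item].value_counts().sort_index())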
