If you’re trying to focus your evaluation plan to measure what matters, these five basic questions can help you clarify, sharpen, and prioritize!
- What’s the purpose of evaluating?
- Who needs to be involved in evaluating?
- Which of your intended outcomes can you observe and therefore measure?
- Which approach to evaluation is best suited to what you want to learn?
- Who conducts evaluation?
1. What’s the purpose of evaluating?
What compels you to evaluate? Common reasons include the following.
Understand social change.
You want to understand how your arts-based programs are moving the needle to achieve intended social or civic outcomes. Thinking about what outcomes you are focusing on, and what will offer meaningful evidence of change can sharpen the direction of your evaluation efforts.
If you’re most interested in understanding the social change that occurs and how your efforts may have contributed to such change, then outcomes evaluation (also called summative evaluation) is where your evaluation should focus.
Improve practice and programs.
Civically and socially engaged art by nature is rooted in process as much as, if not more than, product. Evaluation can help you understand the efficacy of implementation strategies and creative methodologies. It can help you know how art “tipped the needle” to effect certain change. It can help clarify capacity needs and issues, sharpen roles, and enhance partnerships, among other things.
If you’re most concerned about improving practice or programs or understanding effectiveness of program design and implementation, you will want to consider process evaluation (also called formative evaluation). Process evaluation can help answer questions about how change occurs, what needs to change, and what contextual factors impacted the work.
Be accountable.
Whether it’s being accountable to your own organization, the goals of a partnership, or scrutiny from public and private funders, demands are increasing for greater accountability in reporting results. Ultimately, though, the intrinsic reasons to evaluate are to continually improve practice and programs in order to effect social change.
2. Who needs to be involved in evaluating?
Who cares about the effectiveness of your work? In arts and social change work, there are often many different people who have a stake in evaluation—participants, program staff, the board, participating artists, partners, sponsors, funders, the public, and other stakeholders who are affected by the project. Engaging stakeholders in defining the purpose and focus of evaluation is important to measuring what matters.
The scope of evaluation could be huge if you were to try to evaluate for all interested parties’ purposes! It’s important to ask: Which stakeholders’ interests are most important to focus on? What information will matter to whom?
Different stakeholders may want to learn different things from evaluation and value different kinds of evidence. Selected stakeholders can help ensure that you measure what matters and also help determine suitable evaluation approaches, for example, an approach that is sensitive to a culturally specific group. Stakeholders may also be involved in designing or testing instruments, actually conducting evaluation interviews or focus groups, as well as interpreting results. Participatory evaluation is an approach that is grounded in involving stakeholders in these ways.
3. Which of your intended outcomes can you observe and therefore measure?
If you can observe it, you can measure it. So say many evaluators and researchers. Some changes are easier to observe and measure than others. The challenge is to select the most important and valuable outcome(s) to learn about and those for which you are able to measure change.
The Metropolitan Group, a service agency that supports social change endeavors, has outlined a useful “Measuring What Matters” framework that divides measures of change between those that measure action and those that measure result.
Action measures are those that seek to quantify inputs (time, resources, activities you put in) and outputs (what you create, such as collaborations, donations, news stories, community engagement activities).
Result measures quantify the outcomes (what results) and ultimately impact (what difference you make)—in a program to eradicate food insecurity, an outcome could be measured in the number of families moved beyond food insecurity. An ultimate impact would be healthier children and families, and even stronger communities.
You’ll need to ask how easy or difficult it will be to gather the data you need, where information already exists, and whether benchmark data exist. The Metropolitan Group advises to “walk before you run.” Based on capacity and resources, you may only be able to focus on action measures, often considered the basics and most easily measured.
You may need to invest resources and enlist professionals in order to address result measures. These are typically harder to obtain but likely of greater value to understanding actual social change.
For examples of typical social and civic outcomes of arts-based work and examples of related indicators that measure such results, visit the Social Impact Indicators section.
For the full article and framework by The Metropolitan Group, go to Measuring What Matters.
For more on defining what is feasible while planning for evaluation, read: Shifting Expectations: An Urban Planner’s Reflections on Evaluation of Community-Based Arts, by Maria Rosario Jackson
4. Which approach to evaluation is best suited to what you want to learn?
There are options beyond the typical outcomes-based approach. Outcomes-based approaches to evaluation are prevalent, in part, because they are perpetuated in conventional nonprofit management training and by funders. However, practitioners of arts and social change work note that outcomes sometimes cannot, and sometimes should not, be predetermined, but should instead be allowed to emerge as a project evolves. Other evaluation approaches are worth considering. In this section, explore:
Outcomes-based evaluation — Probably the most familiar approach to evaluation is an outcomes-based one. Outcomes-based evaluation requires that you define the results you’re after and how you’ll get there through your activities. This is often done by articulating a theory of change and creating a logic model that diagrams the relationship between goals, resources, activities, and intended outcomes. For more on outcomes-based evaluation, check out:
Making Measures Work for You: Outcomes and Evaluation, by Craig McGarvey, published by GrantCraft.
Developmental evaluation — is used when goals and outcomes are not pre-set but rather evolve as learning occurs. It supports continuous progress and rapid response to complex situations with multiple variables. The evaluator is often an integral member of the program design team. Developmental evaluation acknowledges that a program might be only one factor contributing to change and is designed to capture the dynamics of systems, interdependencies, and emergent interconnections. It is best suited for initiatives that are at an initial stage of development or undergoing significant change, and can benefit from careful tracking of the process. Developmental evaluation is especially appropriate for organizations and programs focused on innovation and social change. For more on developmental evaluation, check out:
A Developmental Evaluation Primer by Jamie A.A. Gamble
Ethnographic evaluation — collects qualitative data. Ethnographic evaluation emphasizes listening carefully and observing real-life actions to understand how people make sense of their lives (and making those understandings comprehensible to others outside the particular groups under study). As a tool for evaluation, ethnographic approaches favor firsthand observation, writing and documentation of stories, and community dialogue. An ethnographic evaluation produces data of a distinct kind: subjective accounts of how people actually interact with systems, programs, and policies.
This data is collected through the experiences of the evaluator in the field, side by side with participants. More than simply a way to glean information about how many people received services or how efficiently a program runs, ethnographic data attempts to “measure” what is meaningful to people: how they see themselves in relationship to the social dynamics that surround them. It’s important to note that qualitative data can be gathered and interpreted systematically to have credibility. This summary is from Dr. Maribel Alvarez. For more on ethnographic evaluation, read:
Getting Inside the Story: Ethnographic Approaches to Evaluation, published by GrantCraft
Participatory evaluation — is a process that involves key participants in planning and implementing the evaluation, including setting goals, developing research questions, interpreting data, making decisions, and using the information. The participatory approach is designed to increase participation in and ownership of collective inquiry on the part of stakeholders, as well as the usefulness of the information gathered. For more on Participatory Evaluation, check out:
Participatory Action Research: Involving “All the Players” in Evaluation and Change by Craig McGarvey and published by GrantCraft
Deliberative Democratic Evaluation Checklist, from the Evaluation Center at Western Michigan University
5. Who conducts evaluation?
A common question is whether you need an outside evaluator. In deciding whether internal staff or an outside evaluator will conduct evaluation, Craig Dreeszen in Fundamentals of Arts Management observes:
Often program managers implement evaluation of their programs. Internal evaluations have the advantages of economy, first-hand knowledge, and expedience. In addition, evaluation skills are cultivated within the organization, and lessons learned can be implemented immediately. A disadvantage is that self-evaluation presents the risk of bias. An evaluation consultant may be perceived by funders and other stakeholders as more reliable and unbiased. This is more costly, but may yield more credible results.
The choice is not necessarily either/or. It’s possible for external evaluators to consult with staff and key stakeholders in their design and data collection. And staff may opt to contract with an outside evaluator to help shape evaluation, create instruments, and facilitate discussions.
Increasingly, the notion that an outside evaluator is ideal in all circumstances is being challenged. The complexity of arts and social change work suggests the value of multiple perspectives from various stakeholders. It can be an advantage for evaluators to have knowledge of the community, the program, and even direct experience in the program itself.
There are times when outside evaluation expertise is important, if not essential. Longitudinal studies that follow a population over time to understand change may require outside support in order to find and track original respondents and then compare and analyze data. Community mapping, for example, can reveal correlations between cultural engagement and social/civic outcomes. Professional researchers can gather or provide extant data regarding ethnic or social class profiles or indicators of neighborhood well-being, and their expertise is important in credibly analyzing possible links and correlations between such outcomes and cultural activity.
For external expertise, cultural organizers can:
- Identify evaluators through peer networks or the American Evaluation Association
- Partner with regional data and mapping centers based at: colleges and universities, urban or regional planning agencies, public agencies, or foundation centers.
To learn more about reasons for and ways to work with outside professionals to gauge social impact, check out:
Civic Engagement and the Arts: Issues of Conceptualization and Measurement, by Mark Stern and Susan Seifert, Social Impact of the Arts Project, University of Pennsylvania.