Speaking for Themselves: Advocates’ Perspectives on Evaluation
This 20-page report was produced by Innovation Network – a nonprofit organization that shares planning and evaluation tools and know-how by providing consulting, training, and online tools to help organizations create lasting change in their communities – with support from the Annie E. Casey Foundation and The Atlantic Philanthropies. The report examines the current state of advocacy strategy and evaluation practice. It includes a section on the importance of interim measures of success and a list of indicators for advocacy activity. The report gives readers a better understanding of advocates' views on evaluation, the advocacy strategies and capacities they find effective, and the practices used to evaluate advocacy work. More than 200 nonprofit advocacy staff responded to the survey from which the publication draws its data, and the report offers numerous recommendations for advocates, funders, and evaluators based on Innovation Network's research. One key finding is that only one in four responding nonprofit organizations engaged in advocacy has evaluated its work; the remaining 75 percent do not systematically collect information to inform their advocacy strategy.
The piece opens with three brief sections on the methodology used in the initiative. Information was gathered via an online survey completed by 211 advocates. Key terminology – advocacy, evaluation, and advocates – is defined. The organizational characteristics of the survey respondents are then presented: the geographic focus of the organizations (mostly state and community/local), their budget size (a well-distributed cross-section ranging from under $50,000 to over $10 million), programmatic focus (mostly human services and public/societal benefit), level of advocacy experience (anywhere from one year to more than ten years), percentage of resources dedicated to advocacy (an evenly distributed cross-section from 0% to over 75%), and funding sources (approximately 80% private funding).
The next section reports the initiative's findings on advocacy approaches. The report presents what respondents felt were effective strategies (e.g., community/grassroots organizing, coalition building, public education, and legislative advocacy); capacities beyond creative strategies necessary for success, identified in a previous study (e.g., visibility/media savvy, visionary leadership, an engaged board, ample staffing with skillful management, a strong network, polished technical skills, and a collaborative culture); and specific capacities valuable for advocacy, identified in this study (e.g., research and communications, organizational support for advocacy, collaboration with external parties, and resources and staffing for advocacy).
The following section presents the initiative's findings on evaluation practices, outlining the amount and nature of respondents' experience with evaluation; only 24.6% of respondents' advocacy work had been evaluated. It then describes, as reported by the quarter of respondents whose work has been evaluated, the benefits and challenges of evaluation and how respondents have used what they learned from it (e.g., program design or redesign, changes in staffing or staffing procedures, development of strategic plans, fundraising, planning for organizational or program changes, affirmation of effective strategies, strengthened stakeholder support, support for next steps, and refinement of evaluation practices). The report then presents an extensive list of respondents' interim measures of success, arranged into eight categories: building the base, decision-maker support, strengthened infrastructure or position within an issue movement, communications, an opening window of opportunity, issue campaign sustainability or strengthening, interim progress tied to legislative victory, and general mention of evaluation.
The next part of this findings subsection offers readers a number of resources for documenting progress, such as the work of Organizational Research Services and the Advocacy and Policy Change Composite Logic Model. The remaining part of the evaluation practices subsection presents advocates' ideas on communicating advocacy success and the many obstacles encountered.
The final major section offers recommendations based on the initiative's findings. Funders, evaluators, and advocates working together are advised to: define terms; demonstrate contribution, not attribution; plan outcomes; and improve interim outcomes. Evaluators are told to: distill and share know-how; be flexible; design and use interim outcomes; find a balance between numbers and stories; and explore creative designs. Advocates are advised to: start evaluating if they are not already doing so; articulate and document assumptions; develop measures of success; and infuse evaluation into their work. Funders are told to: champion the cause; support evaluation and capacity building; and emphasize strategic learning.
Two appendices at the end of the piece present tools used in this initiative. The first shows the survey questions that advocates completed. The second breaks down source data on the focus of respondents' work into several subcategories in addition to the four primary categories of research and communications, organizational support for advocacy, collaboration with external parties, and resources and staffing for advocacy.