Community Tool Box


Resource Details

An online service based at the University of Kansas, this extensive site aims to promote community health and development by connecting people, ideas, and resources. With over 7,000 pages presented in user-friendly language, the Community Tool Box (CTB) hopes to build capacity for those who wish to change their communities for the better.

Users can approach the site in five different ways:
- To read about specific skills in community work, click the Table of Contents to locate the 46 chapters and nearly 300 distinct CTB sections.
- For guidance in doing key tasks, click Do the Work to reach Toolkits that support 16 core competencies.
- For help with a problem, click Solve a Problem to access a troubleshooting guide that presents common dilemmas in this work and supports for addressing them.
- To explore best practices, click Use Promising Approaches to find supports for implementing best processes and links to databases of best practices.
- To connect with people, click Connect with Others to reach others doing this work through online forums, by asking an advisor, and through links to other online resources and websites.

From these five entry points, users can navigate the 7,000+ pages as they see fit.

Chapter 36 focuses on program evaluation. The framework presented in this chapter comes from a working group assembled by the U.S. Centers for Disease Control and Prevention (CDC). The chapter aims to discuss the advantages of evaluating specific programs; explain how to evaluate specific programs; offer standards for “good” evaluation; and discuss how a group would go about applying the framework presented. On pages 2-3 (if one prints the material) are several good definitions of general terms adapted to community programs, such as evaluation, program (with many examples), program evaluation, and stakeholders. These standard and somewhat general definitions may help those with little evaluation experience.
The first section speaks to why the authors feel evaluation should be done: to clarify plans, to improve communication, to gather feedback, and to improve program effectiveness. The next section delves into instructions on how to do program evaluation. It delineates areas of evaluation, presents the differences between each area, and lays out the CDC working group’s framework in a useful graphic (an adaptation of a common learning-cycles diagram) along with step-by-step instructions. The six steps are:
1. Engage stakeholders.
2. Describe the program (with a statement of need, expectations, activities, resources, the program’s stage of development, context, and a logic model).
3. Focus the evaluation design (considering purpose, users, uses, questions, methods, and agreements).
4. Gather credible evidence (such as indicators, sources, quality, quantity, and logistics).
5. Justify conclusions (based on standards, analysis/synthesis, interpretation, judgment, and recommendations).
6. Ensure use and share lessons learned (focusing on design, preparation, feedback, follow-up, and dissemination).

In particular, note the section on indicators on page 15, which offers a nice general list of examples along with some discussion of indicator selection. The following subsection discusses standards for “good” evaluation, which relate to the four elements at the center of the framework graphic on page 5. The standards are presented along four dimensions: utility (how useful is it?), feasibility (does it make sense, and is it possible?), propriety (is it ethical?), and accuracy (is it correct?). The final section is about applying the framework to conduct optimal evaluations, and it essentially stresses that users should rely upon the framework entirely, even when obstacles arise. The entire chapter relates program evaluation to a community development setting, and thus all examples are presented through the lens of community-based work.
This chapter, as with most in the CTB, has a separate section of sample tools and checklists. Chapter 36’s Tool #1 is a copy of the framework itself. Tool #2 is an interactive, 19-item checklist for ensuring effective evaluation reports. Tool #3 presents, in table form, four of the six steps in evaluation practice (engage stakeholders, describe the program, focus the evaluation design, and gather credible evidence) along with the most relevant standards for each step. Tool #4 is an evaluation standards checklist that presents 30 standards organized around the four dimensions discussed in the chapter (utility, feasibility, propriety, and accuracy). One additional checklist summarizes the major points of the chapter and checks for reader comprehension.

One important note: Chapters 27-29 cover Cultural Competence, Spirituality, and Arts; however, Chapter 29, the arts-specific chapter, is under construction and therefore unavailable. Despite this unfinished chapter, the site is an extensive web resource that makes a good case for planning and could be useful in guiding users. Each chapter ends with tools and checklists, and language throughout the site is user-friendly, so it may serve both the novice and the established practitioner.
