Forum Date: January 28, 2011
NYTAP developed this program in response to requests from attendees of previous programs, as well as numerous discussions among the board and Program Work Group, to explore the evaluation of capacity building, technical assistance, and management assistance. This program was the first of three held on this topic in 2011.
Peter J. York, Senior Vice President and Director of Research, TCC Group, www.tccgrp.com
Peter York specializes in: designing and implementing evaluations of foundation-supported multi-site initiatives, community building initiatives and social programs; conducting strategic reviews of best practices to inform the development and implementation of foundation grant making strategies; and providing technical assistance to grantees around evaluation, program design and strategic planning. A current focus of York’s work is on assisting private foundations, corporations and nonprofit organizations with developing and using “evaluative learning” approaches, designs, methods and tools that can best answer, in a formative manner, the questions “what works” and “why?” At TCC, York leads the firm’s evaluation work with private and corporate philanthropies and nonprofit organizations. Recent work includes facilitating evaluation capacity building for grantee organizations for funders like the Howard Hughes Medical Institute, the Massachusetts Cultural Council, the California Endowment, and the Ontario Arts Council. He also designed “evaluative learning” systems for organizations like the Philadelphia Zoo, grantee organizations of the Virginia G. Piper Charitable Trust and Sun Valley United Way, and the New York Boys Club. He led, designed, and conducted cluster evaluations of foundation-developed program initiatives like the Wachovia Foundation’s Teachers and Teaching Initiative, the Deaconess Foundation’s Impact Partnership initiative, and the Flint Funders’ Collaborative (Charles Stewart Mott Foundation, Ruth Mott Foundation, Community Foundation of Greater Flint, and the United Way of Genesee County) “BEST” capacity building initiative, and also led the development of cost-effective evaluative learning tools like the Core Capacity Assessment Tool.
York has conducted numerous presentations and workshops throughout the country to funders and nonprofit leaders on the topic of evaluation, and written reports and articles about evaluation approaches and substantive lessons learned. In 2005, Fieldstone Alliance published his book A Funder’s Guide to Evaluation: Leveraging Evaluation to Improve Nonprofit Effectiveness. York conducted his graduate studies at Case Western Reserve University’s Mandel School of Applied Social Sciences, where he earned his Master’s Degree in Social Service Administration and is “all but dissertation” in his Ph.D. work.
Judy Levine, Executive Director, Cause Effective, www.causeeffective.org
Judy Levine has over two decades of experience as a non-profit management advisor. As a Cause Effective staff member since 1993, and through her work as an independent consultant, Ms. Levine has trained, consulted and authored materials for over 2,000 nonprofit organizations. Her expertise is in cultivating support from individual donors, fundraising planning, and Board and organizational development. Ms. Levine has experience working in a multitude of organizational cultures and is particularly skilled in understanding how to tailor her assistance to the relevant audience in order to meet their needs. Ms. Levine participated in the development of many of the methodologies that Cause Effective uses today, specifically in the areas of resource and organizational development. Ms. Levine also brings her own experience as a Board member to the work she does for clients of Cause Effective. She has provided fundraising and financial oversight in her roles as Chair of the Church Street School for Music and Art and former President of Taste of Tribeca. In addition, she is the former Board Chair of Pepatian (a Bronx-based CBO that promotes new Latino performance), and served as Fundraising Chair of Transportation Alternatives. Prior to joining Cause Effective, Ms. Levine worked as an independent consultant and trainer in the area of strategic fundraising for a diverse set of clients, and was also Director of Programs for the Cultural Council Foundation where she focused on fundraising consulting and strategic planning. Ms. Levine holds a Ph.D. in Performance Studies from New York University and has also published several articles on topics in the arts and in non-profit administration.
Wayne Ho, MPP, Executive Director, Coalition for Asian American Children and Families, www.cacf.org/
Wayne is responsible for leading the nation’s only pan-Asian children’s advocacy organization by overseeing agency administration, program oversight, board relations, staff supervision, community partnerships, and fundraising to improve the health and well-being of Asian Pacific American children and families. He serves on the board of directors of Coro New York Leadership Center, Human Services Council, New York Foundation, and Partnership for After School Education (PASE). Wayne is a member of the NYS Governor’s Children’s Cabinet Advisory Board, NYS Office of Children and Family Services’ Internal Review Board, NYC Citizen Review Panel, Immigration Advisory Board of the NYC Administration for Children’s Services (ACS), and Board of Directors of the Metropolitan Museum of Art’s Multicultural Audience Development Initiative. He is also an Adjunct Professor at the Leonard N. Stern School of Business of New York University. Wayne has conducted policy analysis for ACS on options for public and non-profit agencies to expand childcare and worked with the Blue Ridge Foundation New York on performance management systems for start-up non-profits. In the San Francisco Bay Area, Wayne founded several volunteer-based programs to empower youth of color to pursue higher education and to become community advocates. Wayne received his bachelor’s degree from UC Berkeley and his Master in Public Policy from Harvard University’s Kennedy School of Government. He also completed the New American Leaders Fellowship Program of the Coro New York Leadership Center and New York Immigration Coalition. Wayne received a Making a Difference Award from the Family Health Project in 2008.
Peter York, Senior Vice President and Director of Research, TCC Group
1. What questions are being asked?
2. How are these questions being answered?
3. Why aren’t we learning from evaluation?
4. The outcomes we should measure
5. Key Take-Aways
The challenges of answering questions about capacity-building:
1. It is difficult to develop measurements for assessing organizational effectiveness and management assistance success
2. Determining the causal relationship between the capacity building interventions and client and community impact is not easy
3. How one measures success varies greatly in relation to the type of capacity building intervention that is provided
4. Evaluation can be multi-layered: focusing on individual, organizational, programmatic, client and/or community impact
Why evaluation isn’t achieving learning:
1. Makes uncontrollable/unattainable community impact and/or long-term outcomes the metric of success
2. Typically assesses the “whole” program/strategic effort, not its component parts
3. Aspires to a scientific research design ideal that is appropriate for large-scale population studies to achieve generalizability, but not for context-specific, real-time learning
4. Gathers data from the wrong source – from implementers and secondary data sources rather than the direct recipients/targets
1. Expect “Ready, Set” Services to Achieve “Readiness” Outcomes; specifically:
Awareness and Knowledge
Attitude and Motivational Changes
2. Expect “Go” Services to Achieve “Action” Outcomes; specifically:
Opportunities for action
3. If you want to combine learning and honest accountability, measure the achievable outcome for the “real” target of the intervention (i.e., individuals and/or organizations/groups)
4. Don’t just learn if it worked, but spend time with your capacity builders figuring out what worked, for whom, and under what conditions
Judy Levine, Executive Director, Cause Effective
Overview – Cause Effective is a nonprofit organization that assists community-based organizations to build their fundraising capacity through individual donor development, board development and special events. The organization has recently been working on tools to measure the outcomes it achieves with client organizations.
CE begins its evaluation practice with an awareness of the “red herrings”: with fundraising especially, clients and funders will often judge the success of a consultation by asking how much money was raised. While this is one indicator to be measured, it is not necessarily a meaningful measure of success in building fundraising capacity during a short-term client engagement.
CE works to build the capacity of the organization to develop individual donor bases over the long term, and evaluates the outcome of a consultation based on the following criteria:
(a) Money raised
(b) Number of solicitations
(c) Number of “askers”
(d) Current funder mix
(b) Number of markets touched
(a) Executive Director
(c) Board committees
(a) Integration of development cycle over 12 months
(b) Strong relationship between programming and fundraising
(a) Clarity of brand and message
(b) Range of ambassadors
(c) Variety of tools
VII) Fundraising Planning and Evaluation
(a) Meaningful self-assessment
(b) Strategic framework guides actions
(c) Reflection and course correction indicate a learning culture
Wayne Ho, Executive Director, Coalition for Asian American Children and Families
Background: CACF is a coalition that advocates to raise awareness, bring resources and effect policy changes to benefit the diverse Asian communities, children and families in the New York City region.
CACF obtained a contract from the Compassion Capital Fund Demonstration Project (www.acf.hhs.gov/programs/fbci/progs/fbci_ccf.html), a federal program to provide capacity-building resources, training and small grants to community- and faith-based organizations seeking to alleviate poverty in their communities.
The evaluation criteria set out in the contract did not provide any innovative methods for evaluating outcomes for this project. Instead, they only measured outputs, such as the number of people attending trainings and the number of organizations applying for and receiving capacity-building funding.
One insight revealed through this program was the different self-perceptions that many of these organizations hold. Many do not see themselves as “non-profits” but instead as part of the communities they serve. In other words, they do not self-identify as service agencies but as members of, for example, the Chinese, Hmong, or Korean communities.
The presentations were followed by a lively question-and-answer session. Many of the questions reflected the difficulty and complexity of measuring outcomes for technical assistance. Some of the topics raised in the Q&A session included:
Can & should consultants try to measure the impact of interventions such as board development on the quality of direct service provision? If so, how?
The length of time it takes for real organizational change to take place makes it difficult to measure the impact of many types of consultations.
Funders of technical assistance do not generally include funding for long-term evaluation of a consultation. Providers who are interested in doing so must invest their own resources in long-term follow-up.
Providers are limited in the evaluation they can do because they need to respond to funders’ criteria for success.
Funders are forced to develop their own outcome measurement criteria because there is a lack of consensus in the field regarding meaningful outcomes for management assistance.
Among the 32 evaluation forms collected from attendees, 22 (69%) stated that the program met their expectations, 5 (16%) stated that it exceeded their expectations, 4 (13%) stated that they were not sure, and 1 (3%) stated that it did not meet their expectations.
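The reported percentages follow directly from the raw counts, though rounding pushes their total slightly past 100%. A minimal sketch of the arithmetic (counts taken from the summary above; round-half-up so that 4/32 = 12.5% reports as 13%):

```python
import math

# Responses from the 32 evaluation forms collected from attendees.
counts = {
    "met expectations": 22,
    "exceeded expectations": 5,
    "not sure": 4,
    "did not meet expectations": 1,
}
total = sum(counts.values())  # 32 forms

# Round half up (math.floor(x + 0.5)) to match the reported figures.
percentages = {k: math.floor(n / total * 100 + 0.5) for k, n in counts.items()}
```

Note that the rounded percentages (69 + 16 + 13 + 3) sum to 101, a normal artifact of rounding each category independently.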
Among future evaluation-related topics attendees would like to see covered are:
Identifying evaluation criteria for management assistance
Integration of data collection for evaluation among service providers
How to measure “softer” TA services
Ways to utilize existing research to improve capacity
Examples of metrics used by existing service providers
Dialogue with private and government funders regarding their expectations and needs for evaluation
Evaluating advocacy & community organizing
Measuring qualitative outcomes/involving staff and stakeholders in evaluation
Alternatives to experimental and quasi-experimental designs for research
Evaluation for NPOs that do work other than direct service, e.g. research
Session on “Go” services (as described by Peter York)