
Motivated to #ShiftThePower in nonprofit evaluations

17 May 2018

This blog is part of a series that provides reflections on a new GrantCraft Leadership Paper, “How Community Philanthropy Shifts Power: What Donors Can Do to Help Make That Happen”, published with the GFCF and the Global Alliance for Community Philanthropy in April 2018.

Dana R.H. Doan, IU Lilly Family School of Philanthropy & LIN Center for Community Development

As I read GrantCraft’s latest report, How Community Philanthropy Shifts Power: What Donors Can Do to Help Make That Happen, I found myself scribbling down thoughts relating to nonprofit performance evaluations. Less than a week earlier, I had completed my first year in a doctoral program at the Indiana University Lilly Family School of Philanthropy, after working in the nonprofit sector for almost two decades, including nearly ten years with a community philanthropy organization. The GrantCraft report, co-written by Jenny Hodgson and Anna Pond, provides a wealth of practical advice for funders seeking to contribute to positive development outcomes. I particularly welcome the report’s guidance on how funders can use metrics and due diligence to empower local people.

In my second semester, I spent a good chunk of my time researching past efforts to measure nonprofit effectiveness – the history, the ethical dilemmas, current trends, and ongoing challenges. In that study, I encountered a LOT of scholarship on the ineffectiveness, even harmfulness, of many (still!) common approaches to measuring nonprofit performance. As the report makes clear, institutions in positions of power, such as funders and policy makers, play a key role in enforcing or heavily influencing many of the unfortunate practices we see in performance measurement today.

Consider the dizzying array of tools and platforms designed to help donors compare or evaluate nonprofits. These persist despite a half century of scholars and practitioners trying to raise awareness about the complexity of nonprofit work and the need for context in any determination of effectiveness. Such cautions appear to be swept aside in the popular pursuit of standardized, quantifiable, and comparable measures. Aside from prioritizing efficiency at the expense of impact, these tools and platforms raise an important ethical question: who decides what is (and what is not) to be measured?

According to principal-agent theory, a more powerful “principal” can influence a less powerful “agent” to act against its best interest (Eisenhardt 1989). Applying this theory to development, a community philanthropy organization must guard against playing the role of agent to its funders and principal to its grantees. Hodgson and Pond’s guidance alludes to principal-agent theory, calling on funders to think about power dynamics when establishing values, determining metrics, and communicating with grantees. I appreciated the language they use in the report, replacing power-laden terminology, such as “beneficiary” and “downward accountability”, with “constituent” and “outward accountability”.

Funders need to think about the incentive structures they have helped create – intentionally or not – for their nonprofit partners. When a key goal is to build relationships, shift power, and promote collaboration, then quantifiable outputs, speed, and efficiency are not likely to be the right indicators. In fact, those indicators can be detrimental to the long-term goals. As the saying goes, “not everything that can be counted counts, and not everything that counts can be counted” (Cameron 1963).

One approach to performance measurement that appears to be gaining credibility for its focus on ensuring accountability to the individuals and communities that nonprofits are meant to serve is Constituent Voice™. Interestingly, funders are coming together to test its potential. One example is the Resilient Roots initiative, coordinated by CIVICUS with technical support from Keystone Accountability and Accountable Now. The initiative aims to study the resilience of nonprofits that are accountable and responsive to their primary constituents. Meanwhile, in the United States, the Fund for Shared Insight’s Listen for Good initiative is working with U.S.-based human service organizations (and their funders) to set up constituent feedback loops.

During GEO’s 2018 National Conference, held last month in San Francisco, Valerie Threlfall, Director of Listen for Good, shared preliminary findings from working with 46 nonprofits. The results are promising and enlightening. When the constituent feedback data was disaggregated, there were notable differences in satisfaction ratings across age groups, gender, and racial identity. Specifically, adults, females, and Caucasians reported higher satisfaction levels than youth, non-female constituents, and constituents of colour.
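To make the idea of disaggregation concrete, here is a minimal sketch of the kind of analysis involved. The data, column names, and groupings below are invented for illustration; they are not Listen for Good’s data or method.

```python
# Minimal sketch: disaggregating constituent feedback by demographic group.
# All records and column names are hypothetical.
import pandas as pd

# Hypothetical feedback records: one row per survey response.
feedback = pd.DataFrame({
    "satisfaction": [9, 7, 8, 5, 6, 9, 4, 8, 7, 5],  # 0-10 rating
    "age_group": ["adult", "youth", "adult", "youth", "adult",
                  "adult", "youth", "adult", "youth", "youth"],
    "gender": ["female", "female", "non-female", "non-female", "female",
               "female", "non-female", "female", "non-female", "female"],
})

# An aggregate average can look healthy on its own...
print("Overall mean satisfaction:", round(feedback["satisfaction"].mean(), 2))

# ...while masking the group-level gaps that disaggregation surfaces.
for column in ("age_group", "gender"):
    print(feedback.groupby(column)["satisfaction"].agg(["mean", "count"]))
```

The point of the group-by step is exactly the one Threlfall’s findings illustrate: a single overall score can conceal systematically lower satisfaction among particular groups of constituents.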

Listen for Good’s findings mirror scholarly research in the public administration discipline, which reveals differences in levels of constituent satisfaction with public services across gender, race, and location. Scholars compared constituent satisfaction with administrative records of service outputs and efficiency (e.g., number of people served, number of issues resolved, cost per unit served). When satisfaction and service levels converged, scholars generally trusted the data; a debate would emerge, however, whenever satisfaction and service levels did not correlate. In that situation, some scholars would discredit constituent feedback as unreliable or biased. Other scholars, however, discovered that constituent feedback reveals important information that cannot be captured by “objective” data. For these scholars, contextual information such as differences in culture, values, life experience, and expectations could only be revealed through perceptual measures…

“…official performance measures…tend to be labeled as objective simply because they reflect the perspective of administrators as opposed to citizens…what’s the difference between expert (agency) and citizen feedback?…the distinction is actually between measures developed by a relatively small group of experts vs. individual judgements of large numbers of citizens.” (Schachter 2010)
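As an illustration of the convergence check described above, here is a minimal sketch with invented per-site numbers and a hypothetical administrative metric; it is not drawn from any of the studies cited.

```python
# Minimal sketch: do perceptual and administrative measures converge?
# All figures below are invented for illustration.
import pandas as pd

# Hypothetical per-site records: an administrative efficiency measure
# alongside average constituent satisfaction (0-10 scale).
sites = pd.DataFrame({
    "issues_resolved_per_staff": [12, 18, 9, 15, 21, 7],
    "mean_satisfaction":         [6.1, 7.4, 5.2, 6.8, 6.0, 5.9],
})

# Pearson correlation between the two measures.
r = sites["issues_resolved_per_staff"].corr(sites["mean_satisfaction"])
print(f"Correlation between service output and satisfaction: {r:.2f}")

# On the reading above, a weak correlation need not mean the feedback is
# "unreliable": it may signal context (expectations, experience, culture)
# that administrative records alone cannot capture.
```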

Since reading the GrantCraft report on community philanthropy, I have been thinking that while constituent voice holds great potential to provide agency, it can also be manipulated by people in positions of power. It requires guiding principles and examples, such as those laid out in the report by Hodgson and Pond and in Keystone’s Ethical Framework for Feedback Exercises. Only in that way will we be able to design better methods for collecting, learning, improving, and reporting in ways that promote responsiveness, equity, and agency for constituents; reliable data to inform decisions by nonprofit staff; and assurances to funders. As Hodgson and Pond indicate, a movement to prioritize feedback that empowers will require “A blending of systems…A loosening of the reins…A shift in power.”

By: Dana R.H. Doan, Doctoral Student, Indiana University Lilly Family School of Philanthropy and Advisor to the LIN Center for Community Development

 

Sources 

Cameron, W. B. (1963). Informal Sociology: A Casual Introduction to Sociological Thinking. New York: Random House (p. 13).

Eisenhardt, K. M. (1989). Agency theory: An assessment and review. Academy of Management Review, 14(1), 57–74.

Schachter, H. L. (2010). Objective and subjective performance measures: A note on terminology. Administration & Society, 42(5), 550–567.

 
