Posted by: Tayo Akinyemi | April 18, 2011

Atonement: The Social Responsibility Assumed by MNCs and SMEs

While reading Creating Shared Value by Mark Kramer, it occurred to me that the concept, which “involves creating economic value in a way that also creates value for society by addressing its needs and challenges,” applies mainly to large companies.  Kramer does not say this, but that’s what I infer.  Actually, I think most of the dialogue around the triple bottom line, BoP, CSR, social innovation, development through enterprise, etc. centers on the need for big companies to do things differently.

For whatever reason, small companies don’t attract the same level of ire and finger-pointing that the mega corporate leviathans do.  On some level, this makes sense.  No one pickets outside the neighborhood bookstore.  After all, the guy or gal is just trying to make a living.

Why does this matter?  To be honest, I’m not sure that it does.  Although I doubt anyone would contest the assertion that small companies need to toe the aforementioned line, their harm/value profiles differ from those of their larger cousins.  Unless a small business has obviously nefarious intentions, most people assume that the damage it does is limited.  Perhaps more importantly, its benefit to society is unlikely to be questioned.  Additionally, the economics are different.  One might believe that small companies lack the financial resources to make upfront investments that could reduce costs or drive revenue in the future.  Unless, of course, the business model is designed to create value this way.

Certainly, I think there’s some value in being clear about who and what we’re talking about.  But beyond that, given the sheer number of SMEs out there, it might be worthwhile to expand the conversation beyond their oft-cited role in job creation.  Just a thought.

Note:  I know that there are many small businesses, social businesses, social enterprises, and other hybrids that take a shared value perspective.  It’s just that there’s not a lot being said about their roles and responsibilities as a whole.

Posted by: Tayo Akinyemi | April 14, 2011

Is There Anything Really New about Collective Impact?

Collective impact. Isn’t this just a sexy new moniker for what we used to call collaboration and partnership? Sounds like a case of history repeating to me.  Or is it?

As defined by Mark Kramer and John Kania in an article published in the Winter 2011 edition of the Stanford Social Innovation Review (SSIR), collective impact is “the commitment of a group of important actors from different sectors to a common agenda for solving a specific social problem.”
From what I can tell, the defining characteristics include the aforementioned common agenda and “a single set of goals, measured in the same way.”  Kramer and Kania describe the USP as “…a centralized infrastructure, a dedicated staff, and a structured process that leads to a common agenda, shared measurement, continuous communication, and mutually reinforcing activities among all participants.”  In the same breath, they acknowledge that “Collaboration is nothing new. The social sector is filled with examples of partnerships, networks, and other types of joint efforts.”

Frankly, I don’t understand how or why a run-of-the-mill network or public-private partnership couldn’t have all of the characteristics listed above, but perhaps this misses the point. For all I know, collective impact is a difficult phenomenon to understand until you’re in it, getting things done on the ground.  This perspective seems consistent with the war stories shared by a few participants at the Collective Impact conference hosted by FSG Social Impact Consultants and SSIR a couple of weeks ago.  (Fortunately for me, it was streamed live.)  Many of them thanked Mark Kramer for giving a name to the work that they’d been doing for years.

What’s interesting from my perspective is the way that collective impact describes a mechanism for systems-based problem-solving. As Kramer and Kania contend, social problems aren’t created by a single organization, so they can’t be solved by one.  There are other variables—people, institutions, behaviors, etc.—in the mix as well.  It’s actually this distinction—between adaptive problems, which are complicated enough to require non-formulaic solutions and the attention of many different people and resources, and technical problems, which are straightforward to solve—that makes the case for a multi-faceted problem-solving approach.

Now, the reason this rings my bell is that I’ve never quite understood why social problems are often disaggregated and tackled by silo-ed experts.  I mean it’s not as if providing potable water to a rural village is unrelated to challenges with agriculture, sustainable livelihoods, health, and education.  It seems more sensible, albeit incredibly more difficult and complicated, to view these issues as interconnected elements of the same faulty system.
For that reason, talking about collective impact makes some sense to me.

Kramer and Kania seem to agree.  As they put it, “…large-scale social change comes from better cross-sector coordination rather than from the isolated intervention of individual organizations.”  However, they also assert that:

The power of collective action comes not from the sheer number of participants or the uniformity of their efforts, but from the coordination of their differentiated activities through a mutually reinforcing plan of action. Each stakeholder’s efforts must fit into an overarching plan if their combined efforts are to succeed.

Hmm… makes me wonder how this jibes with Bill Easterly’s discussion of planning vs. searching as problem-solving methods in development.  Does using a top-down approach to solve an adaptive problem really make sense?  Although Kramer and Kania contend that searching for single solutions from single organizations is the wrong way to address certain types of issues, it seems like using a bottom-up, market-based methodology (and I use this term very loosely) to find discrete solutions is a precursor to engaging in collective impact.  After all, if you don’t have a number of organizations with unique perspectives and theories of change, how do you pool resources?

In a way, that’s why Esther Duflo’s work with randomized control trials to discover solutions to “micro-level” problems  is so important.   (I blogged about this yesterday.)   She’s trying to figure out what works for discrete challenges, because making seemingly small improvements “at the margin” can make a significant difference in the lives of the poor.

But then again, I suppose if you held a shotgun wedding between collective impact and discrete impact, what you could end up with is a pool of long-term, collective experimentation, i.e. a license to “search” for answers in a way that leverages the respective strengths of the players and respects the complexity of the problem.  Pretty nifty, right?

I can’t say that I’m entirely convinced that collective impact describes a new paradigm for the social sector.  Again, that may not be the point.  However, to the extent that it names (or re-names) a pre-existing phenomenon, one must beware the dangers of “buzzword bingo.”  When new terminology is introduced, dilution of meaning and significance begins almost immediately.  It’s not enough to name-check collective impact like a mediocre rapper with dubious lyrical ability. (Example: We put two and two together and get five.  Collective impact.)  It’ll be important to be crystal clear about what it is, what it isn’t, and what it looks and feels like when it’s being done.

I haven’t blogged for a long time–longer than I care to admit.  But I have been roused from hibernation by some truly compelling ideas shared by Esther Duflo.  (I’d started to wonder if spring had left me behind, which is embarrassing given how late and timid its arrival has been.)  In any case, I just watched Duflo deliver the 6th annual Richard H. Sabot lecture, entitled “Policies and Politics: Can Evidence Play a Role in the Fight Against Poverty?”, at the Center for Global Development (CGD).  I was not disappointed; she continues to pose and answer questions that totally tickle the brain.

Esther’s primary contention was that institutions are not as immutable as they seem.  In fact, significant policy changes can be made at the margin, even when the “politics” and the policy environment are poor.  This insight strikes me as incredibly important given the dichotomous nature of the debate around the power of government institutions to make change.   On one side, it’s government vs. corporation.  On the other, and I contend that this perspective is more useful, the private sector, NGOs, and government all have roles to play.  In any case, Duflo’s point of view gives us a reason to escape unilateral thinking with respect to government’s role in development.

What really blew me away though, was Duflo’s response to the query posed by Nancy Birdsall, Founding President of CGD, as a follow-up to Duflo’s formal remarks.  Nancy took the opportunity to couch Esther’s micro-level work in the context of a macro-level debate about the efficacy of aid.  The response made my lazy little neurons crackle with delight.  “Whether aid is good or bad is not the most important question,” Esther asserted.  “Given that aid is a small percentage of capital flows, the more important consideration is how we make policy work; make aid more effective; and spend it in a way that maximizes its ability to teach us something.”

For me this is hugely ironic, although appropriately so.  When I first started the struggle to find my vocational “true north,” one of the first questions I posed to myself centered on aid effectiveness.  Why?  Because, quite selfishly, I didn’t want to devote my time and energy to pursuing useless interventions.  Eight years later, I am happy to report that this was probably the wrong way to frame the problem.  As a gentleman from USAID aptly expressed it, “VCs can be successful even if large numbers of interventions fail.” Had I been in the room, I would have asked the congregation for a resounding “Amen!”  It is truly exciting to consider that the “right answer” may in fact be searching for it…the right answer, that is.  Were this underlying principle put in place, imagine the impact on the struggle against fear of failure, and arguably accountability, in the development community.
If nothing else, this perspective helps de-emphasize the “conflation of opinion” that happens when aid is summarily declared good or evil.  Dambisa Moyo’s Dead Aid illustrates the tone of that discourse.

It got even better when Ngozi Okonjo-Iweala, Managing Director of the World Bank and Nigeria’s first female Finance Minister, supported Duflo’s perspective, indicating that “the aid debate is a bit passé,” because people are starting to focus on developing effective programs and policies in order to catalyze and/or scale change.  Two words: L-O-V-E it.

Equally interesting is the notion, again espoused by the USAID representative, that aid-based financing can be used to align incentives (in addition to catalyzing and scaling change) at the margin.   Perhaps rather conveniently, this reminds me of the Acumen Fund’s use of philanthropic dollars as risk capital to fund business models that would not otherwise have seen the light of day.  I like the idea of aid money being leveraged in a similar way.

Happily, Duflo’s talk concluded with a comment that addressed a fundamental question in the debate about when and how to evaluate.  Essentially, she argued that it’s better to conduct a select number of high-quality evaluations when doing so will tell you something that you really want to know.  Otherwise, spend the rest of the money on smaller, process-based assessments. Now doesn’t that sound sensible?  Yup, I thought so too.

Thank you to Esther Duflo for inspiring me to write again, and encouraging all of us to think just a bit differently.

Posted by: Tayo Akinyemi | August 27, 2010

Transparency International Weighs in on Bruckner Debate

AidWatch published this commentary from Transparency International, defending Till Bruckner’s search for transparency.

Also, be sure to check out the list of relevant posts at the bottom of the page.

The ever-dutiful reader, I try to catch as many of the comments associated with a blog post as possible, because they often add richness and color to the discussion.  This was especially true of a metrics conversation that occurred a few months ago on Social Edge.  Charles (Hipbone) Cameron (extra points for the nickname) kicks off the discussion by drawing a line in the sand between Quants, or “people who rely on quantitative analysis [and] use numbers to figure out what’s what,” and Qualits, “the folk who know there’s more to human life than numbers can possibly capture.”  Given Cameron’s rather strong stance on the issue, i.e. “the value of social entrepreneurship is not reducible to economic or socioeconomic terms,” we can safely surmise that Cameron is a Qualit.  He then poses a series of questions to invite readers to debate the merits of “quality over quantity,” the utility (or lack thereof) of metrics, and why people support different approaches.

So, I’m going to spill the beans.  Yes, I’m a Quanlit!  (Thanks, Autumn Walden, for the terminology.)  Truth be told, I don’t buy the “people vs. numbers” dichotomy, and I believe that a Quant perspective has a place in communicating social impact. But this bias may be driven by my ignorance about what it really means to be a Quant.  I’ve never conducted a randomized control trial or used quasi-experimental methodology, so perhaps I’ve got a “don’t look at the man behind the curtain” mentality.

Nonetheless, I can’t help but think that compelling stories aren’t enough.  Why?  It comes down to accountability, which I wrote about yesterday.  I  really think we have the responsibility to demonstrate, to the greatest extent possible, that we’re making a difference, and invite others to participate in the process.  Shared measurement/metrics, if designed well, should have the power to do that.  Also, the reality of the matter is that we’re operating in a constrained world.  Not all NGOs, social entrepreneurs, and initiatives get funded.  Ideally, we want to support those that make the change we want to see in the world.  To do that, we’ve got to get better at identifying them.

Finally, I’m skeptical of talk about how a metrics-heavy focus can be “dehumanizing” because we should be able to trust what we see happening with our own eyes.  I agree with the sentiment to a certain extent, but is the act of “seeing” really synonymous with witnessing the truth? Seeing (or feeling, for that matter) is a cognitive process subject to mental models, biases, and interpretation, as are data.  That’s not to say that these ways of knowing are invalid.  It just seems to me that the more (coordinated) mechanisms we have to interpret what we’re experiencing, the better.  But that doesn’t mean we can’t agree that some things can’t be measured well, or shouldn’t be measured at all.  Sometimes numbers aren’t enough either.

Perhaps not surprisingly, this leads me to wonder about who’s saying what and why.  In my limited experience, practitioners seem to favor story- and experience-based evidence, while onlookers (funders, capacity builders, and armchair professors like me) seem to prefer “hard numbers.” Of course, this may also be a false dichotomy, but the point is that it’s worth examining incentives, i.e. why people have certain preferences and how we reconcile differences to meet people’s needs more effectively.

The Quant vs. Qualit dilemma also raises interesting questions about how social change should work.  I’ve suggested before that there seems to be a growing level of consensus around the possibility that social and economic value are connected.  This is clearly still up for debate.  However, it seems fair to say that at minimum, a large number of people share the notion that an economic system that leaves people out in the cold ain’t quite cutting it.  In reaction to that realization, I have encountered two  responses:

  1. a rejection of the notion that social value can be expressed in numeric/economic terms.  “People, not numbers” is a common refrain.
  2. an attempt to integrate social value into the existing economic system via techniques like SROI analysis, so that externalities are internalized, and consequently accounted for.

The first suggests that while resisting the Borg might be futile, charting one’s own path, preferably to a different star system, isn’t.  The second advocates that we infiltrate the Borg and encourage it to be more accepting of diversity. What I haven’t heard a lot about is comprehensive system change.  Now, I know that mentioning this elusive third option (which, just to be clear, is not a closeted call for socialism) is contentious and a little ridiculous.  It’s not like I have plans for a new economic system hidden under my bed.  Even if I did, implementation would require one heck of an instruction manual.  But I raise it for the sake of intellectual provocation.  You know, because I’m a bit of a trouble-maker.

Now here’s where my thoughts end and the real fun begins.  Although I’ve vowed both publicly and privately to reduce the length of my posts, I’m reluctant to leave out the goodies.  So, in an effort to take baby steps toward recovery, I’ve decided to split the difference. I’d like to share several of the stellar comments that appeared in the aforementioned Social Edge exchange, but don’t want to force you to keep reading.  That means if you’re tired, stop here.  If you’re keen for some tasty nuggets of knowledge, carry on.  I’ve enlisted some basic HTML to help you along.  Click on a link to get to the comment you’d like to read.  Enjoy!

  • John Piermont Montilla comments on the need to empower communities to lead and measure their own change;
  • Autumn Walden talks about how to be a good Quanlit;
  • Jim Kucher comments on the virtues of holistic measurement;
  • Daniel Bassill writes about the need to put numbers into an appropriate context;
  • Hugh Davidson introduces 1000 Minds and the use of preference modeling;
  • Mr. Hipbone himself emphasizes the utility of preserving the link between numbers and context;
  • Jon Griffith describes the need to be clear about what you’re trying to do and how you can do it better;
  • Derek Link explains the need for a balanced approach to evaluation;
  • Hildy Gottlieb raises questions to clarify what the ultimate goal(s) is and how evaluation helps us get there;

John Piermont Montilla
…Measuring outcomes has been made for the use of external donors leaving communities (like young people) with unsustainable behavior change interventions without any tools of their own to assess their own risk, benchmark their own life goals, measure their own progress and celebrate own change. Our own mandates, causes for being and dreams were not achieved and seemingly we are running out from our own tracks to meet somebody’s else goals. Somewhat like a modern form of slavery where beneficiaries depends on NGOs and government subsidies and NGOs and Government depends on grants, and donor countries (while money from tax payers goes to corruption) while communities are left without the necessary skill to generate their own income to subsidize own needs (especially among impoverished communities) – of course to keep them poor in order for them to keep electing corrupt officials and keep them dependent on rich entities.

Their (funders’) metrics are numbers: how many condoms distributed, how many peer educators trained, how many trainings conducted, how much money spent, how many peers reached, etc. How about the communities? Do they measure their own progress or manage to sustain the change they want to be? After the project ends, these funders will wave their call for proposals (for someone to achieve their goals, because they have no manpower to meet their mandates).

That’s why I am an emerging social entrepreneur: Transforming youth beneficiaries into social entrepreneurs who share their responsibility to generate financial and social returns while leading and sustaining change that gives them the power over their lives and make a difference. And my metric is the “self-measurement of change”.

PS. I do not want to be a quant, I’m a qualit because a qualit is someone who is honest and transparent, who demonstrate that the impact of a development intervention is not uniform to all target audience. A quant is fond of demonstrating achieved intended outcomes to please funders but not the community he/she serves. A qualit is a risk taker that includes demonstration of both intended and unintended outcomes, the failures and success that sometimes critique the way funders impose their policies that are not relevant to the needs and priorities of the communities being served (as per Paris Declaration of Aid Effectiveness). A qualit put funders, co-performers of a development intervention and beneficiaries into the same level partnering to achieve a common goal.

Autumn Walden
I must say I’m a bit of both: qualit/quant…quanlit? 🙂 I think that if we can eventually get past the superficial terms of “numbers” and “metrics,” then what we’re left with is our human desire to measure if we are really helping. A lot of us need proof, which in the traditional sense, means numbers and data. I work in a University where assumptions and hypothesis of change or impact also require research, data, trials, and validity. However, I can see how one might get caught up in the frenzy of a systematic approach. In the end, I believe having a system and infrastructure to work within can be a wonderful foundation for human beings to bring about change and also be confident that they are making a difference.

Jim Kucher
IMHO, This is the single biggest challenge that we as a movement will face, and must answer.  The reality is that you need to show that the venture works, that it is firing on all eight cylinders.  You must be able to demonstrate positive, lasting social impact as well as sustainable revenue and efficient operations. If not, you will never be taken seriously.  And, yes, qualitative factors can be measured and analyzed (ask any sociologist).  In reading and studying Greg Dees for a number of years now, I don’t think that this quote means that we should not be trying to measure.  I think it means that we measure differently, we measure holistically, and we balance the measurements against each other to develop a scorecard of overall effectiveness.

Since the members of this movement come from a number of different perspectives, each of us views metrics slightly differently. I’m hopeful that we can struggle together to build a dashboard that is relevant for all.

Daniel Bassill
I agree. I think numbers can help give more precise understanding to define what we’re doing and progress we’re making toward goals. However, I think they are more useful to the CEO of an enterprise than to the evaluator, donor, or customer. That’s because as CEO, I understand the context of the number. I know what the challenges were, what resources I could bring to bear, and what I’m trying to achieve. If my numbers are put in a column of numbers comparing me to organizations doing similar work, they don’t mean much because the context for each org is different, and the donor, volunteer, customer, probably has not spent nearly as much time as I have in understanding what I’m doing, what resources I have, and what my challenges are.

Hugh Davidson
At the start of an initiative I would engage all of my stakeholders to build a preference model based on key success criteria. Then use the preference model to rank the initiative as if it had met all of its desired outcomes. The preference model gives weightings to criteria through the 1000 Minds process, which preserves the importance of soft data. The initiative will gain a percentage score against the preference model. This can then be used as the yardstick of success. Then, at whatever intervals desired, assess the initiative’s progress against the model again, and it will give its current state as a percentage score against the preference model. The difference between the current score and the score the initiative might obtain if it had reached its desired outcomes will give a meaningful comparison of success and progress.

Due to the process’s accessibility and deconstructibility, it becomes possible to adapt the model if need be by re-engaging the stakeholders. Issues should be easy to communicate because of the transparency the model affords.

Charles Cameron
I too think that numbers within context are useful — the problem arises when people *other than those directly involved with the initiative* remove the numbers from their context and consider them as though that’s all there is to consider.

So one of the contexts that’s important is the context of transmission — who it is that gets to hear the story, how clearly they understand the importance of the non-quantifiable elements, and whether they pass that aspect along when they re-tell the story…

Jon Griffith
I think these are the wrong questions, and the wrong answers. Mr Hipbone may be human. “We” (whichever “we” this is) may be human. But so are “they”. Everyone is human, including ‘quants’, bankers, etc. That’s the point of humanity – it’s by definition inclusive.

So, being human isn’t a difference which makes a difference to anything. The real differences between people, in the contexts being discussed here, are of two kinds: one, between people with different goals; and, two, between people with similar goals but different preferences about how to achieve them. The rest is noise, and it’s not going to make any material difference whether people and/or organisations call themselves social entrepreneurs or not, whether they have a warm fuzzy attitude, or embrace a tough guy ideology, whether they see themselves as qualits or quants (which is a silly distinction), and whether or not they believe in being “holistic”.

What matters is, what are you trying to do, how well are you doing it, and how can you do better?

Derek Link
As an evaluator, I know that often the best evaluation design is one that integrates quantitative and qualitative data. Without using both, it is often possible to discern what is happening, but completely miss the why of it.

Imposition of an imbalanced model of evaluation, as has been happening in education; in other words, complete reliance on quantitative data for assessing successful teaching and learning, is warping assessment of public education. A child who comes to school hungry, ill and lacking sleep because their guardians were up all night fighting isn’t going to score well on a test. Assessing this child’s performance based only on quantitative factors would be damning on the teacher, school and curriculum.

Accountability is fine; asking the right questions and designing the right data collection to answer them should not exclude either category of data.

Hildy Gottlieb
As I am reading, though, more questions arise for me than answers, the most important of which is, “What are the ultimate, highest-potential end results we are trying to accomplish as organizations and as social entrepreneurs?” From there, “How can evaluation help us further those ends, and what kind of evaluation will do so?”

In some cases this may mean numbers. In virtually all cases it will mean stories. When we are dealing with changing the lives of people and communities, the most effective answer usually leans more towards “both/and” than “either/or.”

Which raises more questions for me:

  • How can we ensure our work is aimed at the change we want to create? (In my own experience, lack of social change results not from a failure to measure but failure to aim.)
  • How can we engage individuals and communities to develop their own indicators of success (and then engage them in creating that success, rooted in their own strengths and values)?
  • How can we create a spirit of learning and exploring and trying new things, to see what will work (and then measure that, to encourage more learning and exploring…)?
  • How can we ensure that the measurement approaches we use do no harm? (Many of us are indeed convinced that current measurement systems indeed do harm…)

Lastly, I am reminded of a quote by Chogyam Trungpa, who notes, “The basic problem we seem to be facing is that we are too involved with trying to prove something, which is connected with paranoia and a feeling of poverty. When you are trying to prove or get something, you are not open anymore, you have to check everything, and you have to arrange it “correctly.” It is such a paranoid way to live and it really does not prove anything.”

I hate playing catch-up.  Unfortunately, that’s the name of the game this week, as I’ve devoted much of my time to the ol’ job search.  However, I consider myself lucky to be following (however belatedly) two parallel debates on nonprofit transparency. I discovered the first, which is about the Social Innovation Fund’s selection process, at Tactical Philanthropy.  However, the debate has garnered several commentators, as evidenced by Adin Miller’s anthology and the live Twitter confab hosted by Matt Bishop yesterday afternoon.  Others have summarized the controversy better than I, but my take is as follows.  Essentially, serious questions have been raised about the integrity of the SIF’s selection process, as well as the lack of transparency demonstrated when concerns were expressed. After a lot of back and forth, including commentary from Sean Stannard-Stockton, Steve Goldberg, and Adin Miller, and publications such as the New York Times, the Washington Post, and the Nonprofit Quarterly, the SIF has released information about its selection process and disclosed grantee applications.  (It’s important to note that several of those involved in the process, including reviewers and grantees, identified themselves voluntarily, ahead of the SIF’s disclosure.)

Over at AidWatch, an equally compelling dialogue has unfolded over the last few weeks concerning the efforts of Till Bruckner, a PhD candidate at the University of Bristol and former Transparency International Georgia aid monitoring coordinator, to examine how money was being spent by NGOs working in Georgia. He claims that his efforts to access budgetary information were frustrated by the NGOs themselves and/or USAID, although there are different accounts of what actually transpired.

Then I found this over at Ashoka about the role of accountability in education and Race to the Top.

At its very highest level, all of this dialogue points to a need to properly define accountability and transparency, and determine who’s responsible for what under what circumstances. Of course, these debates represent what happens when theory comes to life.  It’s dynamic and full of opportunities to learn at its best, and messy and confusing at its worst.  Regardless, there’s some utility in having a rubric to analyze what’s happening so we can figure out what to do about it.

Ironically, I spent part of last weekend reading Alnoor Ebrahim’s working paper, The Many Faces of Nonprofit Accountability, which dissects the primary elements of accountability: to whom an organization is accountable, for what it is accountable, and how it is accountable. (2)  Although I’m not suggesting that Ebrahim’s paper provides “the answer”, it does offer some insight into how accountability could work effectively in the real world. So let’s take some time to explore the implications for the dilemmas described above.

First off, one would be hard-pressed to find a better description of what is driving the controversy surrounding the Social Innovation Fund and Till Bruckner’s work than this:

At its core, accountability is about trust. By and large, nonprofit leaders tend to pay attention to accountability once a problem of trust arises – a scandal in the sector or in their own organization, questions from citizens or donors who want to know if their money is being well spent, or pressure from regulators to demonstrate that they are serving a public purpose and thus merit tax-exempt status. Amid this clamor for accountability, it is tempting to accept the popular normative view that more accountability is better. But is it feasible, or even desirable, for nonprofit organizations to be accountable to everyone for everything? The challenge for leadership and management is to prioritize among competing accountability demands. This involves deciding both to whom and for what they owe accountability.

Additionally, Ebrahim lists the “four core components of accountability” as described in the literature (Ebrahim and Weisband, 2007), three of which are relevant to the SIF and Bruckner cases:

  1. Transparency, which involves collecting information and making it available and accessible for public scrutiny;
  2. Answerability or Justification, which requires providing clear reasoning for actions and decisions, including those not adopted, so that they may reasonably be questioned;
  3. Compliance, through the monitoring and evaluation of procedures and outcomes, combined with transparency in reporting those findings; and,
  4. Enforcement or Sanctions for shortfalls in compliance, justification, or transparency. (3)

Not surprisingly, the source of accountability, i.e., whether it's driven by an internal sense of responsibility or an external call for answers, has implications for the type and quality of the "organizational response". (3) Equally worth noting is that accountability is influenced by potentially lopsided relationships: "Accountability is also about power, in that asymmetries in resources become important in influencing who is able to hold whom to account." (7) I think we see evidence of both challenges in the reluctance of the "accused parties" to release information, as well as in the disagreements over who had the authority to disclose information, the funder or the grantee.

In describing five mechanisms that can be used to support accountability—reports and disclosure statements, evaluations and performance assessments, industry self-regulation, participation, and adaptive learning—Ebrahim offers insights into the potential limitations of those employed thus far in the SIF and Bruckner scenarios. (11)

About reports and disclosure statements, an example of which is Form 990, he has this to say:

Such reports and legal disclosures are significant tools of accountability in that they make available (either to the public or to oversight bodies) basic data on nonprofit operations. …While no doubt important as deterrents, these external approaches have limited potential for encouraging organizations and individuals to take internal responsibility for shaping their organizational mission, values, and performance or for promoting ethical behavior. (13)

Although the documents in question in the SIF case (grantee applications and ratings) and the Bruckner scenario (non-redacted NGO budgets) are not the same as tax documents, I think the comparison remains valid.

With respect to evaluations and performance assessments, the following statement reflects the concerns raised by Paul Light, a proposal reviewer for the SIF, after New Profit, an applicant he'd rated poorly in the first round of evaluation, was ultimately awarded a grant.

Some scholars have shown that funders can come to somewhat different conclusions about the same set of nonprofits as a result of how they frame their evaluations (Tassie, et al., 1998: 63). (15)

Finally, and perhaps most importantly, Ebrahim discusses adaptive learning as a mechanism for accountability. I highlight this because the way forward should involve creating space, both inside individual organizations and in the nonprofit sector generally, to learn, adapt, and improve. That is, after all, what the SIF was designed to do for its grantees.

Another process mechanism is adaptive learning, in which nonprofits create regular opportunities for critical reflection and analysis in order to make progress towards achieving their missions. Building such learning into an organization requires at least three sets of building blocks: a supportive learning environment, where staff are given time for reflection and the psychological safety to discuss mistakes or express disagreement; concrete learning processes and practices that enable experimentation, analysis, capacity building, and forums for sharing information; and supportive leadership that reinforces learning by encouraging dialogue and debate, and providing resources for reflection (Garvin, et al., 2008). Learning, as such, seeks to "improv[e] actions through better knowledge and understanding" (Fiol and Lyles, 1985: 803). (20-21)

Let’s hope this kerfuffle, to use Steve Goldberg’s colorful terminology, creates opportunities to engage in some adaptive learning, a form of accountability that can do nothing but help the sector progress.

Source of Citations:
Ebrahim, Alnoor. (2010) “The Many Faces of Nonprofit Accountability”, Cambridge: HBS Working Papers Collection.

Posted by: Tayo Akinyemi | August 17, 2010

Stanford Social Innovation Review on Measuring Social Value

Normally when I read an article and comment on it, my approach is relatively straightforward. I summarize, extract, highlight, or extrapolate. However, with Measuring Social Value, an article written by Geoff Mulgan for the Summer 2010 issue of the Stanford Social Innovation Review, I found myself in unfamiliar territory. I was kicking tires, questioning assumptions, and pondering with a capital 'P'. Why? Something about Mr. Mulgan's thesis seemed to land outside my stable of mental models. And like the dutiful mind guardians in Inception, the ones who ruthlessly excise intruders, mine were in overdrive. But when I examined the argument more closely, I realized that Mulgan's ideas weren't as foreign as I'd originally thought. They just raised more questions about underlying assumptions than I'm used to. This is a good thing.

So what did Mulgan have to say that caused so much discombobulation? Essentially, his main hypothesis (although there are several corollaries), is that social value is not an “objective fact”. (40)  Instead, it results from the interplay between what he describes as effective demand and effective supply. “Effective demand means that someone is willing to pay for a service or an outcome. That “someone” may be a public agency, a foundation, or individual citizens.  Effective supply means that the service or outcome works, is affordable, and is implementable.” (42)  He goes on to state that, “…more recently, most economists have accepted that the only meaningful concept of value is that it arises from the interaction of demand and supply in markets. In other words, something is valuable only if someone is willing to pay for it. This blunt approach upsets many people because it implies that there may be no economic value in a beautiful sunset, an endangered species, or a wonderful work of art.” (42)

These deceptively straightforward statements raised several questions for me, which I'll address later. But my primary quibble is that Mulgan seems to assert that social value can only be expressed in economic terms. Presumably, this is done by un-externalizing social value (to the extent that this is necessary) and integrating it into the current economic system. This introduces a conundrum I've raised before, i.e., whether social and economic value are the same. If they are, as Mulgan implies, then his hypothesis about defining social value through market forces makes sense. If not, then the problem becomes muddier.

I think that consensus is growing around the possibility that social and economic value are connected and have a positive relationship.  However, there seems to be much less clarity about whether social value can be expressed purely in economic* terms. Objections about prioritizing “people over profit” come to mind.  Perhaps I’m not entirely convinced that social value can’t be defined in other ways, despite believing that aligning the forces of supply and demand for social capital makes sense.  Despite this reservation, much of what Mulgan argues, which I’ll summarize briefly, fits my understanding of what a functional [social capital] marketplace should look like.

Let’s review the article’s main points:

  • Mulgan gives two primary reasons why metrics aren't used to drive decision-making. The first is one previously mentioned: the non-objective nature of social value. The second, which deserves to be highlighted, is that the three roles metrics can play, "accounting to external stakeholders, managing internal operations, and assessing societal impact," are often conflated. (40)
  • Next, Mulgan addresses the factors that make measuring social value difficult. Perhaps not surprisingly, he points out that social science and practice lack laws that enable one to predict outcomes. To complicate matters further, many fields of intervention, crime prevention for example, lack clarity about desired outcomes. Finally, it's very difficult to predict how much benefit an intervention will generate in the long term as compared to the costs required. (40, 41)
  • As you already know, Mulgan's solution to the second dilemma is social value defined by the interaction of effective demand and effective supply. How will this work? Well, ideally "markets, conversations, and negotiations will link people with needs and resources to people with solutions and services." (42) The degree to which this happens efficiently depends on the maturity of the market. Where supply and demand are well defined, measuring social value should be easier, whereas in markets where one or both is unclear, things get trickier. Because I'm a fan of a 'systems-based approach' to problem solving, I'd like to highlight Mulgan's treatment of holistic approaches as an example here. Because they involve multiple buyers (from NGOs, government, etc.) and sources of supply (different points of intervention for a single problem), teasing out social value is an exercise in trial and error to figure out what works to link supply and demand. (42)

Okay, now on to the remaining quibbles:

  • Mulgan says that “few people use metrics to guide decisions.”  I’m willing to accept this assertion, but a little evidence would help for those of us who aren’t in the know. Just sayin’.
  • I struggled, believe it or not, with the definition of an objective fact. The way it's described suggests that it is static and provable/observable in some way. Or maybe it's just the opposite of subjective opinion, however you choose to define that.
  • Mulgan states that "most economists have accepted that the only meaningful concept of value comes from the interaction of demand and supply." (42) Again, I'm happy to get on board with this statement. But I'd like to know who these economists are and what facts and assumptions underlie the claim.
  • The corollary to the statement above, i.e., that things like sunsets have no economic value, is a bit problematic for me. First off, although we do get sunsets for free, people will pay for real estate that provides better sunset views, vacations that enable them to ponder sunsets unencumbered, or literature that helps them create 'sunset-infused' states of mind. Don't these represent economic value? Also, shouldn't we consider what prevents people from being willing to pay for things? For example, when there's a third-party payer, or when the current system doesn't demand a price (as with the original sunset example), willingness to pay is obscured. This construction also assumes that people are the only economic actors. If so, what are the qualifications? Nature renders services for free, presumably because of our unwillingness to pay for them, or because those services are misclassified as free. Admittedly, this is beginning to change.
  • Finally, Mulgan believes that the role of funders is not to measure value, but to create the [social] capital market, i.e., to bring supply and demand together. (43) This seems reasonable enough, but the suggestion reveals a "chicken and egg" problem. Presumably, measuring social value can also connect supply to demand by naming or qualifying the value via price, thereby guiding potential buyers to an effective solution.

Wow, I’m burdened by the weight of my own verbosity.  I can’t imagine how you all must be feeling.  For those of you who made it to the end of this post, I salute you.  As they say in Nigeria, you are very well done.  Fewer “deep thoughts” in the next post for sure.

*I define economic value here, and throughout this post, in the same way that Mulgan does, i.e. defined in the market by the interaction of supply and demand.

Mulgan, Geoff. "Measuring Social Value." Stanford Social Innovation Review, Summer 2010.

The Multidimensional Poverty Assessment Tool (MPAT) was officially launched by the International Fund for Agricultural Development (IFAD) on March 23rd, approximately four months before the Oxford Poverty and Human Development Initiative (OPHDI) released the Multidimensional Poverty Index (MPI). Sadly, I would not have heard of the MPAT had it not been for Stephanie Jayne at Nuru, who gave me the heads up.  (Nuru is trying to develop its own shared measurement system called the Poverty Index, which Stephanie has blogged about.) Given the number of similarities that seem to exist between the two, I’m a bit surprised that no ‘compare and contrast’ has been done.  Of course, my reasons for wanting to see this are largely selfish.
As a non-practitioner, I’m not entirely clear what the meaningful similarities and differences between the two really are.  Nonetheless, I will probably undertake a rudimentary analysis sometime soon, in an effort to preserve my sanity.  You know how it is. Burning questions and all. (Certainly, I’d prefer someone like Duncan Green to do it, but you can’t always get what you want.) For now I’ll stick to describing the MPAT, which is a pretty big job on its own.

Who Created the MPAT?
The Multidimensional Poverty Assessment (MPA) Project, which birthed the MPAT, was conceptualized in 2007, launched in 2008, and funded through an Initiative for Mainstreaming Innovation (IMI) grant, IFAD-supported projects, and government entities in China and India. IFAD's purpose in leading this effort was to develop a new tool for rural poverty assessment. (22)

Why Was the MPAT Created?
Simply put, if policy makers, program designers, and government officials want to develop effective interventions to reduce poverty, they must understand its underlying causes, or why people are poor, in a holistic, context-specific way.  The creators of the MPAT assert that income-focused measures of poverty do not adequately capture the complexity of poverty.  Consequently, a multidimensional approach is more appropriate. The MPAT was designed to inform an understanding of rural poverty that will lead to effective action. (18)

Sounds Good, But What Does the MPAT Do Exactly?
The MPAT aims to provide an overview of what “human well being” looks like by identifying the constraints that mire people in poverty. In other words, the tool captures the dimensions of rural poverty, and by extension, the elements critical to its eradication. It also strives to reflect the 21st century challenges posed to today’s rural livelihoods, such as climate change and sociopolitical conflict. (26)

According to the creators, "The MPAT measures people's capacity to do by focusing on key aspects and indicators of the domains essential to an enabling environment within which people are sufficiently free from their immediate needs, and therefore in a position to more successfully pursue their higher needs and, ultimately, their wants" [Cohen, in press]. (26)

Okay, And How Does It Do That?
The MPAT uses the MPAT Household Survey and the MPAT Village Survey to collect data on "the ten dimensions of rural livelihoods, highlighting where additional support or interventions are likely to be most needed." (7) Responses from the survey data are given values that aggregate first into subcomponents, and then into ten larger components. (20) It is important to note that the MPAT is described as a thematic indicator, defined as "a grouping of indicators that measures values similar to a common theme or concept," rather than a composite indicator, which refers to "an amalgamation of different indicator values into a single value, or index, which seeks to represent those individual indicators." (20) Why? Because one uses a thematic indicator to "understand a general construct, [without having] the values from each element blended together into one value." (36) However, each of the MPAT's dimensions is its own composite indicator, even though they are not aggregated into a single index. (It goes without saying that I'll have to return to this point when I present my version of MPAT vs. MPI: The Smackdown.)
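The thematic-versus-composite distinction is easier to see in code. Below is a minimal Python sketch of the two aggregation styles; the component names, item values, and weights are all hypothetical, not the MPAT's actual scoring scheme:

```python
# Sketch of the thematic vs. composite distinction.
# All item values, weights, and names are illustrative, not the MPAT's actual scheme.

def weighted_mean(values_weights):
    """Aggregate (value, weight) pairs into a single score."""
    total_weight = sum(w for _, w in values_weights)
    return sum(v * w for v, w in values_weights) / total_weight

# Survey items (scaled, say, 1-10) roll up into subcomponents...
subcomponents = {
    "water_quality": weighted_mean([(7, 0.5), (6, 0.5)]),
    "water_access": weighted_mean([(4, 1.0)]),
}

# ...and subcomponents roll up into a component score.
components = {
    "domestic_water_supply": weighted_mean(
        [(subcomponents["water_quality"], 0.6),
         (subcomponents["water_access"], 0.4)]
    ),
    "education": 8.0,  # computed the same way from its own survey items
}

# Thematic indicator: report each component score separately.
thematic = components

# Composite indicator: blend all components into one index value,
# which is precisely the step the MPAT declines to take.
composite = weighted_mean([(v, 1.0) for v in components.values()])
```

Here `thematic` keeps ten (in this toy case, two) separate scores, while `composite` collapses them into a single number, even though each individual dimension is itself a small composite built from its subcomponents.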

The first six components of the MPAT are meant to represent critical needs, and as such are derived from Basic Needs theory (Streeten and Burki, 1978; Streeten et al., 1981; Maslow, 1943). They are: food & nutrition security; domestic water supply; health & health care; sanitation & hygiene; housing, clothing & energy; and education. (27-28)

The last four "go beyond immediate physical and cultural needs and address fundamental dimensions of rural livelihoods, life and well-being" as well as some of the modern-day challenges described earlier. (29) They are: farm assets, non-farm assets, exposure & resilience to shocks, and gender & social equality. (29-30)

Because the MPAT is meant to provide an overview of rural poverty, it is necessary to examine the underlying data and return to the field with context-specific tools to get the real story. (18) (Sound familiar?) To enable this, the MPAT survey can be customized to support additional data collection.

Why is the MPAT Built The Way It Is?
I’m glad you asked. Essentially, it goes back to the reason why poverty reduction schemes are developed in the first place. It’s probably fair to say (or at least the MPAT creators think so) that most of these efforts are designed to help the poor help themselves out of poverty, which requires creating an “enabling environment” that allows them to do so. (24, 25) However, if people are struggling to meet their essential needs (think Maslow’s hierarchy, y’all), then it’s difficult to address other challenges. The MPAT is meant to represent a “core set” of needs that, once met, can serve as a platform for addressing others. In this sense, the MPAT is complementary to (but not based on) Sen’s Capabilities Approach. (25)

Great, But What’s the Real Value Add Here?
For me, this is the kicker. The folks who created the MPAT emphasized the importance of giving voice to the perspectives of the rural poor about their own poverty and translating this through a “quantitative lens”, i.e. the MPAT Household Survey, in a way that would stimulate action. (10-11, 36, 119) Additionally, the MPAT is broad enough to reflect most rural livelihoods, but specific enough to provide useful information about particular contexts. (18)

Honestly, it’s times like these that I wish I were a development economist, or were really good friends with one. Any development economists out there want to be my friend? Sorry, that was a little desperate-sounding. In any case, if anyone has any additional light to shed on the MPAT, I’m happy to be enlightened (as would Stephanie Jayne, I’m sure). In the meantime, I’ll take a stab at comparing the MPAT and the MPI. Apparently, I like to live dangerously.

Source for Citations:

Cohen, A. (2009). The Multidimensional Poverty Assessment Tool: Design, development and application of a new framework for measuring rural poverty

Posted by: Tayo Akinyemi | August 10, 2010

To Aggregate or Not To Aggregate: The MPI Revisited

I blogged about the Multidimensional Poverty Index (MPI) developed by the Oxford Poverty and Human Development Initiative a few weeks ago.  Since then a robust debate has emerged around the multidimensionality of the index, a quality I believed to be valuable, if viewed from an Amartya Sen-friendly perspective.

However, a great deal of discussion, moderated by Duncan Green at From Poverty to Power,  has emerged around the real utility of aggregating the components of the index.  To paraphrase Martin Ravallion of the World Bank, why not just get the best data available on each dimension instead of smooshing them together?  Even more importantly, what is gained by aggregating these measures?  Hmmm….these sound like excellent questions to me.

Sabina Alkire, one of the MPI’s architects, responded, in far more descriptive terms than I’ll use here, that the MPI reveals how someone is poor. She likens this process to clicking on a drop-down menu linked to a larger category.
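Alkire's "drop-down menu" image maps nicely onto the Alkire-Foster counting method that underlies the MPI: a person counts as poor if deprived in at least a weighted share k of dimensions, the index is incidence times intensity (MPI = H x A), and the same per-person profiles can then be unpacked dimension by dimension. A rough Python sketch, with made-up dimensions, weights, and households (not OPHI's actual specification):

```python
# Sketch of an Alkire-Foster-style counting measure, the method behind the MPI.
# Dimensions, weights, poverty cutoff, and households are made up for illustration.

# Each person's deprivation profile: dimension -> deprived? (True/False)
people = [
    {"education": True,  "health": True,  "living_standard": False},
    {"education": False, "health": False, "living_standard": False},
    {"education": True,  "health": True,  "living_standard": True},
]
weights = {"education": 1 / 3, "health": 1 / 3, "living_standard": 1 / 3}
k = 1 / 3  # poverty cutoff: deprived in at least a third of weighted dimensions

def deprivation_score(person):
    """Weighted share of dimensions in which this person is deprived."""
    return sum(w for dim, w in weights.items() if person[dim])

poor = [p for p in people if deprivation_score(p) >= k]

H = len(poor) / len(people)                              # incidence: share who are poor
A = sum(deprivation_score(p) for p in poor) / len(poor)  # average intensity among the poor
MPI = H * A                                              # the single headline number

# The "drop-down menu": the same data decomposes by dimension,
# e.g. the share of the poor who are deprived in each dimension.
breakdown = {dim: sum(p[dim] for p in poor) / len(poor) for dim in weights}
```

With these toy numbers, two of three people are poor (H = 2/3), the poor are deprived in five-sixths of the weighted dimensions on average (A = 5/6), and `breakdown` shows which dimensions drive that, exactly the "how someone is poor" question that the aggregate alone obscures, and the information Ravallion worries gets smooshed away.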

As commentators much more knowledgeable than I have provided excellent analyses and summaries of this debate, I’ll stop here. However, this high-octane exchange reminds me of something Jim Tanburn said at the Business Fights Poverty event, Harnessing the Power of Business for Development Impact, in London two months ago. To paraphrase again: more often than not, the most difficult part of impact measurement isn’t choosing the indicators; it’s identifying the appropriate process. Indeed.

For those of you who want to retrace the discussion in its entirety (and I highly recommend that you do), please visit Gabriel Demombynes’ excellent round-up.  Also check out AidWatch, which is how I initially caught wind of the whole thing.  Finally, don’t forget to read the comments!  Many of them add a great deal of color to the debate.

Posted by: Tayo Akinyemi | August 5, 2010

But Wait, There’s More!

I have stopped the WordPress to report (terrible joke, I know… couldn’t help it) that Greenbiz and UL Environment have joined forces to create a sustainability standard for manufacturing companies called UL 880 (via Fast Company).

The standard aims to “create a uniform, globally applicable system for rating and certifying companies of all sizes and sectors on a spectrum of environmental and social performance characteristics.”

Check out Ariel Schwartz’s post for the details.  Oh, and I retract my previous statement.  What we need is a standards directory.  That’ll do the trick.
