Category Archives: Philanthropy

Impact, Influence, Leverage and Learning (I2L2)

[In November 2015, ORS Impact and I revised and updated the I2L2 framework document. The new link is I2L2, below.]

Years ago, while working at the Annie E. Casey Foundation, we struggled with how to organize, summarize, and communicate very diverse grantmaking strategies and results (from direct service work to community change, from technical assistance and capacity building to policy and advocacy). We had plenty of numbers and examples but no common framework for communicating them. This triggered both a return to basics around an intentional outcomes focus (with Results-Based Accountability) as well as common definitions for the “types” of outcomes and results we were aiming for. There was a lot of experience with naming and describing outcomes for child and family well-being (e.g., increased employment, improved school attendance), but the struggle was often with summarizing more developmental outcomes like organizational capacity, changes in attitudes and beliefs, and early policy and advocacy investments. Collective memories may be fuzzy, but I credit Miriam Shark at Casey with advancing a set of three result categories:

  • Impact – results intended to achieve direct change and impact on people
  • Influence – results describing intended change in organizations, beliefs and behaviors, contexts, and policies and practices
  • Leverage – changes in resources and funding, especially where the foundation’s investments influenced others to change how they invest in the same or similar strategies

We later discussed including a second “L” for Learning outcomes, especially where there were intentional strategies to acquire knowledge needed to inform other work. This list may seem simple (and even obvious in retrospect), but it provoked some key thinking and behaviors.

First, it helped program officers and grantees organize and report results in all four categories, which was especially helpful for activities and investments whose community impact could not be measured directly or within a short time period. The influence and leverage results could describe early evidence that change was happening on a path to impact outcomes. In addition, it not only allowed results from different strategies within a portfolio or across the foundation to be consolidated, it also helped people look at all four types of outcomes (impact, influence, leverage, and learning) for individual grants or activities.

Second, it prompted everyone to define their intended results (and intentional strategies) for all four categories at the beginning of planning and work. Again, this is certainly obvious within most results- and outcome-based planning, but often the focus is only on the long-term impact, with less upfront attention given to the earlier influence and behavior change outcomes needed to achieve changes in people and places. What often happens is that impact results are defined up front and measured, but if they are not fully achieved, both foundation and grantee fall back on narrative or bullet-form examples of “other changes” that occurred, often influence and leverage, defined and documented in retrospect.

Starting with this initial set of ideas, I asked ORS Impact to prepare a tool to help Making Connections community change sites and grantees understand how to define and measure influence and leverage. This initial guide introduced the concepts and early definitions to a limited audience of grantees and provided examples of indicators and ways to measure both influence and leverage. Later, in 2006, we focused on the policy and advocacy aspects of influence, which took the form of other manuals and guides and also contributed to the growing policy advocacy evaluation work.

I continued to use the initial framework, with the inclusion of learning outcomes, in work with multiple organizations, and ORS Impact also returned to it in work with other clients. It is deceptively simple but helpful as an organizing framework when the investments and strategies in play have different levels of focus and change, operate in different timeframes, and yet are meant to relate and be complementary. Certainly, deeper and more comprehensive theory of change exercises help to define these same elements in different ways, but those can be challenging to summarize and communicate to audiences not immersed in the work (like board members or the general public).

So we decided to go back to the original ideas and publications and spend time documenting good case examples of how the framework has been used and what organizations have gained from it. Jane Reisman, Anne Gienapp, Sarah Stachowiak, Marshall Brumer, Paula Rowland, and the ORS Impact team worked with current and past clients and colleagues to assemble these examples. We also shared the examples at the American Evaluation Association conference and other meetings, which helped shape the version you can read here as I2L2.

We continue to receive positive feedback, especially around the I2L2 framework’s ability to help organize thinking and definitions of expected change and results. Again, this doesn’t replace theory of change and other in-depth planning, but when community change strategies and their intended outcomes are complex and highly interrelated (sometimes without distinct sequencing), I2L2 helps groups organize, define, document, and communicate the results they aim for and achieve.

So where are we now? We have spent a lot of time and effort on defining terms and examples for influence and leverage. (Others have also contributed their work on these categories; see the Jim Casey Youth Opportunity Initiative’s Assessing Leverage guide.) We would now like to help people and organizations define intentional and planned learning results and the strategies to get there. Here we want to define learning not only as the lessons acquired from (usually) failing to achieve impact or successfully reach targets but, more importantly, as the intentional agenda for acquiring needed knowledge, defined at the beginning and evaluated along the way.

Do you have examples of work defining learning results? Learning outcomes? How have you evaluated learning?

We’d love to hear from you.

I don’t like dashboards but am willing to date one

I probably have a few 2014 New Year’s resolutions that I have not fully engaged with yet (like blogging here more regularly), but I am having to tackle my priority one: developing a better organization-wide framework for monitoring performance and communicating the foundation’s impact across grantmaking, donor services, fundraising, and all our work. I am not sure how often other evaluators get asked to help with organization-wide dashboards, but in the world of philanthropy there is a lot of interest and demand. I know my colleagues in other foundations get slightly nauseated looks on their faces when asked about the dashboards they use in their organizations. Not only are we not universally proud of them, but often we just don’t like them.

Now, we are all pro-data, pro-measurement cheerleaders, and more importantly we are all advocates for using data and evaluation to promote learning. And we know how good visual summaries and graphics can increase people’s understanding and analysis of data. So why are we so underwhelmed by the dashboards we have? (And why do we even admit hating to work on them?)

I am a committed outcome proselytizer and I enthusiastically promote Mario Morino’s Leap of Reason: Managing to Outcomes in an Era of Scarcity and Mark Friedman’s Results-Based Accountability (RBA).  I have seen how nonprofits and foundation staff are helped by clear process and outcome definitions, good and reliable measures, and simple summaries of change over time.  But I am still left underwhelmed, disappointed, and most often very worried that most performance dashboards are not getting at the “right” data that will help change behavior and achieve the results we want.

Dashboards, in their parsimony, often lack context, or at least enough context to satisfy a footnoting evaluator. They do focus attention on key efforts and strategies, and I strongly believe (as Morino has advocated) that more nonprofits need to treat measuring their work and outcomes as a primary operating capacity. But I still feel that most are missing more than just additional context. Recently I read Henry Doss’ assessment of how businesses need to apply more focus and measurement to the features they want to see in the ecosystem, not simply the outputs they produce and are incentivized to produce. This reminded me of Pete York’s frequent admonition that nonprofits need to focus more attention on the proximate cause-and-effect relationships they can impact. Most of the performance and outcome measurement I have seen and experienced with foundations and nonprofits is misaligned with incentives, targets goals too distant from the efforts, and ignores the influences in and on the ecosystems around us.

Doss noted that our short-term performance incentives are often misaligned with our long-term vision. And in an increasingly VUCA world (my 2013 word-of-the-year, not “selfie”), keeping the alignment between our mission and our strategies (and, therefore, our performance metrics) is increasingly difficult. Despite their usefulness in driving performance, dashboards can “miss the mark” on overall organizational mission if they address only what we do and what we think the effects are, and not how we should be influencing, and responding to the influences of, the changing world around us while still driving towards mission and impact. Dashboards can’t and shouldn’t do everything, but how can I develop a dashboard that helps staff keep an eye on performance, quality, and effective implementation while also holding everyone together in our collective mission?

I have often wanted to include organizational values and “how we work” in performance measurement: it is not just how much we do but the way we work that is important. Glenda Eoyang’s “Devaluing Values” made me appropriately cautious and skeptical of organizational values that do not name the practiced, observable behaviors we need to see and incentivize, especially behaviors that are adaptive and help the organization thrive in a changing environment, when the bar keeps moving and yesterday’s performance target is no longer relevant. It is these behaviors that I want to make sure get measured, reported, and incentivized.

And if I can get this kind of dashboard completed by the second quarter of 2014, I will attempt another of my resolutions and exercise more regularly.

For additional resources:

Examples of foundation dashboards  http://dashboards.wikispaces.com/Foundation+Examples

FSG’s The Foundation Performance Dashboard http://www.fsg.org/Portals/0/Uploads/Documents/PDF/Foundation_Performance_Dashboard.pdf?cpgn=WP%20DL%20-%20The%20Foundation%20Performance%20Dashboard

“Making Sense of Your Foundation Data with Dashboards and Scorecards”  http://www.gmnetwork.org/annual-conference/2010/sessions/making-sense-your-foundation-data-dashboards-and-scorecards

I hate the word ‘learning’

[Word cloud from http://voulagkatzidou.files.wordpress.com/2011/02/wordle.jpg]

I didn’t say it, but at a meeting of several foundations convened by GEO this week, I did say “Amen” when another foundation “learning” officer confessed it. I also have the word learning in my new title (along with knowledge and evaluation). We are not being hypocrites or non-believers, but it has been very frustrating to see foundations embrace and promote “learning” as the alternative to evaluation. It is partly understandable: in the past, evaluations and evaluators have produced data and reports that did not include clear analysis or actionable knowledge. But I do not see how anyone can learn without data and evaluation, and we should not be measuring and evaluating without clear goals for decisionmaking and action.

Evaluators have struggled to put a friendlier face on evaluation by emphasizing its contributions to learning and sometimes by de-emphasizing the measuring, monitoring, compliance, and judgment aspects of evaluation. However, I feel ever more strongly that evaluation must be about learning and accountability. We must be accountable not only for the results we intend and promise to communities; we must also learn in an accountable way. Learning in and by foundations can be very selfish and self-serving if it results only in mildly more knowledgeable program officers who do not change or adapt their ideas and strategies. Learning that is not based on data and analyzed experience is what? Intuition? Hunch? Fond memories of an interesting grantmaking experience paid for with the public’s money held in trust? And learning that does not contribute to actionable knowledge, decisionmaking, and improvement is a waste of data collection and analysis efforts.

As I have tried to help foundations and foundation staff focus and define intentional learning goals, it has also become clear that individual or even group learning goals can be as self-serving as the learning experience itself. What we are interested in. What we would like to know. What would be interesting to find out. As helpful as these might be for future strategy development, they still do not offer a clear path to making knowledge actionable or, more importantly, to helping the group agree on a shared path forward.

It has been both focusing and freeing to identify and name the decisions that need to be made by foundation staff and to concentrate the data and learning agenda on providing the information necessary to make those decisions. The learning agenda needs to support the decisions that lead to actions: continue this work, adapt what we are doing, change the strategy entirely, or even end funding. Learning can also be focused on what we need to know and the information we need to have to influence the decisions of others. But all of this requires being explicit and intentional early on about the decisions we need to make and the target audiences we intend to influence, before planning and embarking on an evaluation and learning agenda.

Otherwise all our learning efforts will result only in interesting trivia used in cocktail party chatter.

Related Resources

Marilyn Darling and Fourth Quadrant Partners’ Emergent Learning framework is one tool that I have used to help groups get to the action that needs to result from the learning. And at the recent American Evaluation Association meeting it was refreshing to see multiple presenters use the experiential learning trio of questions asked of data and analysis: What? So what? Now what?