
Impact, Influence, Leverage and Learning (I2L2)

[In November 2015, ORS Impact and I revised and updated the I2L2 framework document. The new link is I2L2 and below.]

Years ago, while working at the Annie E. Casey Foundation, we struggled with how to organize, summarize, and communicate very diverse grantmaking strategies and results (from direct service work to community change, from technical assistance and capacity building to policy and advocacy). We had plenty of numbers and examples but no common framework for communicating them. This triggered both a return to basics around an intentional outcomes focus (with Results Based Accountability) and common definitions for the “types” of outcomes and results we were aiming for. There was a lot of experience with naming and describing outcomes for child and family well-being (e.g., increased employment, improved school attendance), but the struggle was often with summarizing more developmental outcomes like organizational capacity, changes in attitudes and beliefs, and early policy and advocacy investments. Collective memories may be fuzzy, but I credit Miriam Shark at Casey with advancing a set of three result categories:

  • Impact – results that were intended to achieve direct change and impact on people
  • Influence – results that described intended change in organizations, beliefs and behaviors, contexts, and policies and practices
  • Leverage – changes in resources and funding, especially where the foundation’s investments influenced others to change how they invest in the same or similar strategies

We later discussed including a second “L” for Learning outcomes, especially where there were intentional strategies to acquire knowledge needed to inform other work. This list may seem simple (and even obvious in retrospect), but it provoked some key thinking and behaviors.

First, it helped program officers and grantees organize and report results in all four categories, which was especially helpful for activities and investments that could not measure community impact directly or within a short time period. The influence and leverage results could describe the early evidence that change was happening on a path to impact outcomes. In addition, it not only allowed results from different strategies within a portfolio or across the foundation to be consolidated, it also helped people look at all four types of outcomes (impact, influence, leverage, and learning) for individual grants or activities.

Second, it prompted everyone to define their intended results (and intentional strategies) for all four categories at the beginning of the planning and work. Again, this is certainly obvious within most results- and outcome-based planning, but often the focus is only on long-term impact, and less upfront attention is given to the earlier influence and behavior-change outcomes needed to achieve changes in people and places. What often happens is that impact results are defined up front and measured, but if they are not fully achieved, both foundation and grantee fall back on narrative or bullet-form examples of “other changes” that occurred (often influence and leverage), defined and documented in retrospect.

Starting with this initial set of ideas, I asked ORS Impact to prepare a tool for Making Connections community change sites and grantees to understand how to define and measure influence and leverage. This initial guide introduced the concepts and early definitions to a limited audience of grantees and provided examples of indicators and ways to measure both influence and leverage. Later in 2006, we focused on the policy and advocacy aspects of influence, which took the form of other manuals and guides and also contributed to the growing body of policy advocacy evaluation work.

I continued to use the initial framework, with the inclusion of learning outcomes, in work with multiple organizations, and ORS Impact also returned to it in work with other clients. It is deceptively simple but helpful as an organizing framework when the work or array of investments and strategies has different levels of focus and change, operates in different timeframes, and yet is meant to relate and be complementary. Certainly, deeper and more comprehensive theory of change exercises help to define these same elements in different ways, but these can be challenging to summarize and communicate to audiences not immersed in the work (like board members or the general public).

So we decided to go back to the original ideas and publications and spend time documenting good case examples of how the framework has been used and what organizations have gained from it. Jane Reisman, Anne Gienapp, Sarah Stachowiak, Marshall Brumer, Paula Rowland, and the ORS Impact team worked with current and past clients and colleagues to assemble these examples. We also shared the examples at the American Evaluation Association conference and other meetings, which helped us develop the version you can read here as I2L2.

We continue to receive positive feedback, especially around the I2L2 framework’s ability to help organize thinking and definitions of expected change and results. Again, this does not replace theory of change and other in-depth planning, but when community change strategies and their intended outcomes are complex and highly interrelated (sometimes without distinct sequencing), I2L2 helps groups organize, define, document, and communicate the results they aim for and achieve.

So where are we now? We have spent a lot of time and effort on defining terms and examples for influence and leverage. (Others have also contributed their work on these categories; see the Jim Casey Youth Opportunity Initiative‘s Assessing Leverage guide.) We would now like to help people and organizations define intentional and planned learning results and the strategies to get there. Here we want to define learning not only as the lessons acquired from (usually) failing to achieve impact or successfully reach targets, but more importantly as the intentional agenda for acquiring needed knowledge, defined at the beginning and evaluated along the way.

Do you have examples of work defining learning results? Learning outcomes? How have you evaluated learning?

We’d love to hear from you.

I hate the word ‘learning’

[Word cloud from http://voulagkatzidou.files.wordpress.com/2011/02/wordle.jpg]

I didn’t say it, but at a meeting of several foundations convened by GEO this week, I did say amen when another foundation “learning” officer confessed it. I also have the word learning in my new title (along with knowledge and evaluation), and we are not being hypocrites or non-believers, but it has been very frustrating to see foundations embrace and promote “learning” as the alternative to evaluation. It is partly understandable when, in the past, evaluations and evaluators have produced data and reports that did not include clear analysis or actionable knowledge. But I do not see how anyone can learn without data and evaluation, and we should not be measuring and evaluating without clear goals for decision-making and action.

Evaluators have struggled to put a friendlier face on evaluation by emphasizing its contributions to learning and sometimes by de-emphasizing the measuring, monitoring, compliance, and judgment aspects of evaluation. However, I continue to feel ever more strongly that evaluation must be about both learning and accountability. We must be accountable not only for the results we intend and promise to communities, but we must also learn in an accountable way. Learning in and by foundations can be very selfish and self-serving if it results only in mildly more knowledgeable program officers who do not change or adapt their ideas and strategies. Learning that is not based on data and analyzed experience is what? Intuition? Hunch? Fond memories of an interesting grantmaking experience paid for with the public’s money held in trust? And learning that does not contribute to actionable knowledge, decision-making, and improvement is a waste of data collection and analysis efforts.

As I have tried to help foundations and foundation staff focus on and define intentional learning goals, it has become clear that individual or even group learning goals can be as self-serving as the learning experience itself. What we are interested in. What we would like to know. What would be interesting to find out. As helpful as these might be for future strategy development, they still do not offer a clear path to making knowledge actionable or, more importantly, to helping the group agree on a shared path forward.

It has been both focusing and freeing to identify and name the decisions that need to be made by foundation staff and to concentrate the data and learning agenda on providing the information necessary to make those decisions. The learning agenda needs to support the decisions that lead to actions: continue this work, adapt what we are doing, change the strategy entirely, or even end funding. And learning can be focused on what we need to know and the information we need to have in order to influence the decisions of others. But all of this requires being explicit and intentional early on about the decisions we need to make and the target audiences we intend to influence, before planning and embarking on an evaluation and learning agenda.

Otherwise, all our learning efforts will result only in interesting trivia used in cocktail party chatter.

Related Resources

Marilyn Darling and Fourth Quadrant Partners’ Emergent Learning framework is one tool that I have used to help groups get to the action that needs to result from the learning. And at the recent American Evaluation Association meeting, it was refreshing to see multiple presenters use the experiential learning trio of questions asked of data and analysis: What? So what? Now what?