
Impact, Influence, Leverage and Learning (I2L2)

[In November 2015, ORS Impact and I revised and updated the I2L2 framework document. The new version is linked here as I2L2 and below.]

Years ago, while working at the Annie E. Casey Foundation, we struggled with how to organize, summarize, and communicate very diverse grantmaking strategies and results (from direct service work to community change, from technical assistance and capacity building to policy and advocacy). We had plenty of numbers and examples but no common framework for communicating them. This triggered both a return to basics around an intentional outcomes focus (with Results Based Accountability) and common definitions for the “types” of outcomes and results we were aiming for. There was a lot of experience with naming and describing outcomes for child and family well-being (e.g., increased employment, improved school attendance), but the struggle was often with summarizing more developmental outcomes like organizational capacity, changes in attitudes and beliefs, and early policy and advocacy investments. Collective memories may be fuzzy, but I credit Miriam Shark at Casey with advancing a set of three result categories:

  • Impact – results that were intended to achieve direct change and impact on people
  • Influence – results that described intended change in organizations, beliefs and behaviors, contexts, and policies and practices
  • Leverage – changes in resources and funding, in this case, especially where the foundation investments influenced others to change how they invest in the same or similar strategies

We later discussed including a second “L” for Learning outcomes, especially where there were intentional strategies to acquire knowledge needed to inform other work. This list may seem simple (and even obvious in retrospect), but it provoked some key thinking and behaviors.

First, it helped program officers and grantees organize and report results in all four categories, which was especially helpful for activities and investments that could not measure community impact directly or within a short time period. The influence and leverage results could describe the early evidence that change was happening on a path to impact outcomes. In addition, it not only allowed results from different strategies within a portfolio or across the foundation to be consolidated, it also helped people look at all four types of outcomes (impact, influence, leverage, and learning) for individual grants or activities.

Second, it prompted everyone to define their intended results (and intentional strategies) for all four categories at the beginning of the planning and the work. Again, this is certainly obvious within most results- and outcome-based planning, but often the focus is only on the long-term impact, and less upfront attention is given to the earlier influence and behavior-change outcomes needed to achieve changes in people and places. What often happens is that impact results are defined up front and measured, but if they are not fully achieved, both foundation and grantee fall back on narrative or bullet-form examples of “other changes” that occurred (often influence and leverage), defined and documented in retrospect.

Starting with this initial set of ideas, I asked ORS Impact to prepare a tool for Making Connections community change sites and grantees to understand how to define and measure influence and leverage. This initial guide introduced the concepts and early definitions to a limited audience of grantees and provided examples of indicators and ways to measure both influence and leverage. Later, in 2006, we focused on the policy and advocacy aspects of influence, which took the form of other manuals and guides and also contributed to the growing field of policy advocacy evaluation.

I continued to use the initial framework, with the inclusion of learning outcomes, in work with multiple organizations, and ORS Impact also returned to it in work with other clients. It is deceptively simple but helpful as an organizing framework when the work or array of investments and strategies has different levels of focus and change and operates in different timeframes, but is meant to relate and be complementary. Certainly, deeper and more comprehensive theory of change exercises help define these same elements in different ways, but these can be challenging to summarize and communicate to audiences not immersed in the work (like board members or the general public).

So we decided to go back to the original ideas and publications and spend time documenting good case examples of how the framework has been used and what organizations have gained from it. Jane Reisman, Anne Gienapp, Sarah Stachowiak, Marshall Brumer, Paula Rowland, and the ORS Impact team worked with current and past clients and colleagues to assemble these examples. We also shared the examples at the American Evaluation Association conference and other meetings, which helped develop the version you can read here as I2L2.

We continue to receive positive feedback, especially around the I2L2 framework’s ability to help organize thinking and definitions of expected change and results. Again, this doesn’t replace theory of change and other in-depth planning, but when community change strategies and their intended outcomes are complex and highly interrelated (sometimes without distinct sequencing), I2L2 helps groups organize, define, document, and communicate the results they aim for and achieve.

So where are we now? We have spent a lot of time and effort defining terms and examples for influence and leverage. (Others have also contributed their work on these categories; see the Jim Casey Youth Opportunity Initiative‘s Assessing Leverage guide.) We would now like to help people and organizations focus on defining intentional, planned learning results and the strategies to get there. Here we want to define learning not only as the lessons acquired from (usually) failing to achieve impact or reach targets but, more importantly, as the intentional agenda for acquiring needed knowledge, defined at the beginning and evaluated along the way.

Do you have examples of work defining learning results? Learning outcomes? How have you evaluated learning?

We’d love to hear from you.

I hate the word ‘learning’

[Word cloud from http://voulagkatzidou.files.wordpress.com/2011/02/wordle.jpg]

I didn’t say it, but at a meeting of several foundations this week convened by GEO, I did say Amen when another foundation “learning” officer confessed it. I also have the word learning in my new title (along with knowledge and evaluation), and we are not hypocrites or non-believers, but it has been very frustrating to see foundations embrace and promote “learning” as the alternative to evaluation. It is partly understandable when, in the past, evaluations and evaluators have produced data and reports that did not include clear analysis or actionable knowledge. But I do not see how anyone can learn without data and evaluation, and we should not be measuring and evaluating without clear goals for decisionmaking and action.

Evaluators have struggled to put a friendlier face on evaluation by emphasizing its contributions to learning and sometimes by de-emphasizing the measuring, monitoring, compliance, and judgment aspects of evaluation. However, I continue to feel ever more strongly that evaluation must be about both learning and accountability. We must be accountable not only for the results we intend and promise to communities, but we must also learn in an accountable way. Learning in and by foundations can be very selfish and self-serving if it results only in mildly more knowledgeable program officers who do not change or adapt their ideas and strategies. Learning that is not based on data and analyzed experience is what? Intuition? Hunch? Fond memories of an interesting grantmaking experience paid for with the public’s money held in trust? And learning that does not contribute to actionable knowledge, decisionmaking, and improvement is a waste of data collection and analysis efforts.

As I have tried to help foundations and foundation staff focus on or define intentional learning goals, it has also become clear that individual or even group learning goals can be as self-serving as the learning experience itself. What we are interested in. What we would like to know. What would be interesting to find out. As helpful as these might be for future strategy development, they still do not offer a clear path to making knowledge actionable and, more importantly, to helping the group agree on a shared path forward.

It has been both focusing and freeing to identify and name the decisions that need to be made by foundation staff and to concentrate the data and learning agenda on providing the information necessary to make those decisions. The learning agenda needs to support the decisions that lead to actions: continue this work, adapt what we are doing, change the strategy entirely, or even end funding. Learning can also be focused on what we need to know and the information we need to have in order to influence the decisions of others. But all of this requires being explicit and intentional early on about the decisions we need to make and the target audiences we intend to influence, before planning and embarking on an evaluation and learning agenda.

Otherwise, all our learning efforts will result only in interesting trivia used in cocktail party chatter.

Related Resources

Marilyn Darling and Fourth Quadrant Partners’ Emergent Learning framework is one tool I have used to help groups get to the action that needs to result from the learning. And at the recent American Evaluation Association meeting, it was refreshing to see multiple presenters use the experiential learning trio of questions asked of data and analysis: What? So what? Now what?

Evaluation Colleagues, Mentors, and Friends

[Photo: Dr. Barbara Sugland, 1960-2010]

As an internal evaluation director at a foundation, I no longer “practice” evaluation in the same way as when I was a consultant in my pre-philanthropy years in Washington, D.C. I often worry that my skills can get a little rusty when my daily role is one of managing, translating, organizing, and planning resources for evaluations and other knowledge and data-related tasks. I rely greatly on consultants and colleagues to keep me current (and honest) in how evaluation should be practiced.

I am grateful for the many consultants and colleagues who have helped me, pushed my knowledge, and strengthened my approach: ORS Impact (Jane, Anne, Sarah, and team), Innovation Network (Johanna, Veena, Ehren, Ann, Kat), Center for Evaluation Innovation (Julia and Tanya), Community Science (David, Kien, Scott), TCC Group (Pete, Jared and Deepti), and Aspen Institute (Anne, Andrea and Pat). And without a doubt all my Casey and HCF colleagues, especially Audrey Jordan.

And as many have noted about the recent AEA conference, our field’s best evaluators are often accessible and generous with their advice and interest in others’ work. Eleanor Chelimsky was my first “teacher,” and the GAO PEMD evaluation reports from the 1980s and 1990s were my first textbooks when I began as an evaluator in 1990, before attending grad school. Beverly Parsons, Bob Williams, David Fetterman, Michael Patton, Hallie Preskill, Ricardo Millet, Patti Patrizi, Anne Kubisch, Prue Brown, and many others have always been considerate and generous with their time, feedback, comments, advice, and even humor.

These colleagues continue to stimulate my work, but I still spend part of every day (not just every workday) thinking about and missing the wisdom, ethics, commitment, and friendship of two brilliant evaluators who are no longer with us. Dr. Mary Achatz and Dr. Barbara Sugland taught me much about evaluation methods but even more about the purpose and mission of evaluation: to contribute to improving the lives of the most vulnerable and to drive towards social justice and equity. The American Evaluation Association’s Guiding Principles for Evaluators assert respect for people, integrity, honesty, and responsibility for the public welfare, and these women were exemplars of these principles. My own knowledge and practice were greatly influenced and informed by these two extremely modest yet enormously skilled professionals, who gave the most attention and respect to the least powerful, resulting in more genuine, appropriate, accurate, and influential evaluations. They both also lived as guides, teachers, mentors, and coaches, and there are many others who can attest to their heart and skills. And they are friends I miss every day.

See pages 20-21 in Leila Fiester’s case study on Plain Talk for an example of Mary’s respectful approach to working in and with community.

Hear Barbara’s words on supporting and empowering our youth, especially youth of color: http://vimeo.com/47253167

And visit the Dr. Barbara Sugland Foundation, dedicated to continuing her legacy (where I am proud to be a director).

I think I blogged before “blog” was even a word

With the helpful advice and prompting from the excellent American Evaluation Association #eval13 blogging panel of Ann, Chris, Susan, and Sheila, I am expanding my Twitter efforts into longer blog postings on evaluation.

The low-pressure ease and comfort of occasional tweets filled what little bandwidth I thought I had left amid a major relocation and job change. But now that I feel more settled and want to connect more with my evaluation colleagues, now thousands of miles away in ANY direction, the gentle yet firm encouragement of our EVAL bloggers convinced me to go ahead. It also reminded me that, starting with the first personal computer I purchased in 1993 (which led to the first time I fell in love with a South Dakotan: an extremely patient Gateway tech support guy who stayed on the phone with me for six straight hours while I repaired my own broken hard drive), I had started writing overly long and somewhat sarcastic (snarky?) movie and restaurant reviews that I distributed to friends on my free DC community email account. History is not clear on whether the term “blog” was invented or first used in 1994, 1997, or 1999, but I feel at least that I have some old skills I might be able to revive.