
How to keep me scrolling

Thanks to The Foundation Center and their GlassPockets Transparency initiative for inviting me onto their blog. (Posted on August 10, 2017)

How To Keep Me Scrolling Through What You Are Sharing
August 10, 2017


Tom Kelly is Vice President of Knowledge, Evaluation & Learning at the Hawai‘i Community Foundation. He has been learning and evaluating in philanthropy since the beginning of the century. @TomEval | TomEval.com

This post is part of the Glasspockets’ #OpenForGood series in partnership with the Fund for Shared Insight. The series explores new research and tools, promising practices, and inspiring examples showing how some foundations are opening up the knowledge that they are learning for the benefit of the larger philanthropic sector. Contribute your comments on each post and share the series using #OpenForGood.

Hello, my name is Tom and I am a Subscriber. And a Tweeter, Follower, Forwarder (FYI!), Google Searcher, and DropBox Hoarder. I subscribe to blogs, feeds, e-newsletters, and email updates. My professional title includes the word “Knowledge,” so I feel compelled to make sure I am keeping track of the high volume of data, information, reports, and ideas flowing throughout the nonprofit and foundation worlds (yes, it is a bit of a compulsion…and I am not even including my favorite travel, shopping and coupon alerts).

It is a lot and I confess I do not read all of it. It is a form of meditation for me to scroll through emails and Twitter feeds while waiting in line at Aloha Salads. I skim, I save, I forward, I retweet – I copy and save for later reading (later when?). In fact, no one can be expected to keep up, so how does anyone make sense of it all, or even find what we need when we need it? Everyone being #OpenForGood and sharing everything is great, but who is reading it all? And how do we make what we are opening for good actually good?

Making Knowledge Usable

We have all at some point experienced “drowning in information while starving for knowledge” (John Naisbitt’s Megatrends…though I prefer E.O. Wilson’s “starving for wisdom” version). The information may be out there, but rarely in a form that is easily found, read, understood, and, most importantly, used. Foundation Center and IssueLab have made it easier for people in the sector to know what is being funded, where new ideas are being tested, and what evidence and lessons are available. But nonprofits and foundations still need to upload and share many more of their documents than they do now. And we need to make sure that the information we share is readable, usable, and ready to be applied.

Hawaii Community Foundation Graphic

DataViz guru Stephanie Evergreen recently taught me a new hashtag: #TLDR – “Too Long, Didn’t Read.”

She now proposes that every published report be available in three formats – a one-page handout with key messages, a three-page executive summary, and a 25-page report (plus appendices). In this way the “scanners,” “skimmers,” and “deep divers” can access the information in the form they prefer and in the time they have. It also requires writing (and formatting) differently for each of these sets of eyes. (By the way, do you know which one you are?)

From Information to Influence

But it is not enough to make your reports accessible, searchable, and readable in both short and long forms; you also need to include the information people need to make decisions and act. That means deciding in advance who you want to inform and influence and what you want people to do with the information. You need to be clear about your purpose for sharing, and you need to give people the right kinds of information if you expect them to read it, learn from it, and apply it.

Too many times I have read a report with promising findings or interesting lessons, then raced through the footnotes and appendices at the back looking for resources that could point me to the underlying evidence and data or to implementation guidance. I usually wind up trying to track down the authors by email or phone to follow up.

A 2005 study of more than 1,000 published human services evaluations found only 22 well-designed and well-documented reports that shared any analysis of implementation factors – the lessons people learned about how best to put a program or service in place. We cannot expect other people and organizations to share knowledge and learn if they cannot access the information that helps them apply that knowledge in their own programs and organizations. YES, I want to hear about your lessons and “a-ha’s,” but I also want to see data and analysis of the common challenges that all nonprofits and foundations face:

  • How to apply and adapt program and practice models in different contexts
  • How to sustain effective practices
  • How to scale successful efforts to more people and communities

This means making sure that your evaluations and reports open up the challenges of implementation – the same challenges others are likely to face. It also means placing your findings in the context of existing learning and using similar definitions so that we can build on each other’s knowledge. For example, in our recent middle school connectedness initiative, our evaluator Learning for Action first reviewed the literature on the components and best practices of youth mentoring so that we could build the evaluation on what had come before, then reported clearly on what we learned about in-school mentoring, opening up useful and comparable knowledge to the field.

So please plan ahead: define your knowledge-sharing and influence agenda up front, and consider the following questions and guidelines:

  • Who needs to read your report?
  • What information does your report need to share to be useful and used?
  • Read and review similar studies and reports and determine in advance what additional knowledge is needed and what you will document and evaluate.
  • Use common definitions and program model frameworks so we are able to continually build on field knowledge and not create anew each time.
  • Pay attention to and evaluate implementation, replication and the management challenges (staffing, training, communication, adaptation) that others will face.
  • And disseminate widely: share at conferences, in journals, in your sector networks, and in IssueLab’s open repository.

And I will be very happy to read through your implementation lessons in your report’s footnotes and appendices next time I am in line for a salad.

Impact, Influence, Leverage and Learning (I2L2)

[In November 2015, ORS Impact and I revised and updated the I2L2 framework document. The new version is linked as I2L2 and again below.]

Years ago, while working at the Annie E. Casey Foundation, we struggled with how to organize, summarize, and communicate very diverse grantmaking strategies and results (from direct service work to community change, from technical assistance and capacity building to policy and advocacy). We had plenty of numbers and examples but no common framework for communicating them. This triggered both a return to basics around an intentional outcomes focus (with Results Based Accountability) and common definitions for the “types” of outcomes and results we were aiming for. There was a lot of experience with naming and describing outcomes for child and family well-being (e.g., increased employment, improved school attendance), but the struggle was often with summarizing more developmental outcomes like organizational capacity, changes in attitudes and beliefs, and early policy and advocacy investments. Collective memories may be fuzzy, but I credit Miriam Shark at Casey with advancing a set of three result categories:

  • Impact – results intended to achieve direct change and impact on people
  • Influence – results describing intended changes in organizations, beliefs and behaviors, contexts, and policies and practices
  • Leverage – changes in resources and funding, in this case especially where the foundation’s investments influenced others to change how they invest in the same or similar strategies

We later discussed adding a second “L” for Learning outcomes, especially where there were intentional strategies to acquire knowledge needed to inform other work. This list may seem simple (and even obvious in retrospect), but it provoked some key thinking and behaviors.

First, it helped program officers and grantees organize and report results in all four categories, which was especially helpful for activities and investments whose community impact could not be measured directly or within a short time period. The influence and leverage results could describe early evidence that change was happening on a path toward impact outcomes. In addition, it not only allowed results from different strategies within a portfolio or across the foundation to be consolidated; it also helped people look at all four types of outcomes (impact, influence, leverage, and learning) for individual grants or activities.

Second, it prompted everyone to define their intended results (and intentional strategies) for all four categories at the beginning of the planning and the work. Again, this is certainly obvious within most results- and outcome-based planning, but often the focus is only on long-term impact, with less upfront attention given to the earlier influence and behavior-change outcomes needed to achieve changes in people and places. What often happens is that impact results are defined up front and measured, but if they are not fully achieved, both foundation and grantee fall back on narrative or bullet-form examples of “other changes” that occurred (often influence and leverage), defined and documented in retrospect.

Starting with this initial set of ideas, I asked ORS Impact to prepare a tool to help Making Connections community change sites and grantees understand how to define and measure influence and leverage. That first guide brought the concepts and early definitions to a limited audience of grantees and provided examples of indicators and ways to measure both influence and leverage. Later, in 2006, we focused on the policy and advocacy aspects of influence, which took the form of other manuals and guides and also contributed to the growing policy advocacy evaluation work.

I continued to use the framework, now including learning outcomes, in work with multiple organizations, and ORS Impact also returned to it with other clients. It is deceptively simple but helpful as an organizing framework when an array of investments and strategies focuses on different levels of change, operates on different timeframes, and yet is meant to relate and be complementary. Certainly, deeper and more comprehensive theory of change exercises help define these same elements in other ways, but those can be challenging to summarize and communicate to audiences not immersed in the work (like board members or the general public).

So we decided to go back to the original ideas and publications and spend time documenting good case examples of how the framework has been used and what organizations have gained from it. Jane Reisman, Anne Gienapp, Sarah Stachowiak, Marshall Brumer, Paula Rowland, and the ORS Impact team worked with current and past clients and colleagues to assemble these examples. We also shared the examples at the American Evaluation Association conference and other meetings, which helped shape the version you can read here as I2L2.

We continue to receive positive feedback, especially about the I2L2 framework’s ability to help organize thinking and definitions of expected change and results. Again, it doesn’t replace theory of change and other in-depth planning, but when community change strategies and their intended outcomes are complex and highly interrelated (sometimes without distinct sequencing), I2L2 helps groups organize, define, document, and communicate the results they are aiming for and achieving.

So where are we now? We have spent a lot of time and effort defining terms and examples for influence and leverage. (Others have also contributed their work on these categories – see the Jim Casey Youth Opportunity Initiative‘s Assessing Leverage guide.) Now we would like to help people and organizations focus on defining intentional, planned learning results and the strategies to get there. Here we want learning to mean not only the lessons acquired from (usually) failing to achieve impact or reach targets, but more importantly an intentional agenda for acquiring needed knowledge. Defined at the beginning and evaluated along the way.

Do you have examples of work defining learning results? Learning outcomes? How have you evaluated learning?

We’d love to hear from you.

I hate the word ‘learning’

[Word cloud from http://voulagkatzidou.files.wordpress.com/2011/02/wordle.jpg]

I didn’t say it, but at a meeting of several foundations convened by GEO this week, I did say “Amen” when another foundation “learning” officer confessed it. I also have the word learning in my new title (along with knowledge and evaluation), and we are not being hypocrites or non-believers, but it has been very frustrating to see foundations embrace and promote “learning” as the alternative to evaluation. That is partly understandable when, in the past, evaluations and evaluators have produced data and reports without clear analysis or actionable knowledge. But I do not see how anyone can learn without data and evaluation, and we should not be measuring and evaluating without clear goals for decisionmaking and action.

Evaluators have struggled to put a friendlier face on evaluation by emphasizing its contributions to learning and sometimes by de-emphasizing the measuring, monitoring, compliance, and judgment aspects of evaluation. However, I feel ever more strongly that evaluation must be about both learning and accountability. We must be accountable not only for the results we intend and promise to communities; we must also learn in an accountable way. Learning in and by foundations can be very selfish and self-serving if it results only in mildly more knowledgeable program officers who do not change or adapt their ideas and strategies. Learning that is not based on data and analyzed experience is what? Intuition? Hunch? Fond memories of an interesting grantmaking experience paid for with the public’s money held in trust? And learning that does not contribute to actionable knowledge, decisionmaking, and improvement is a waste of data collection and analysis effort.

As I have tried to help foundations and foundation staff focus and define intentional learning goals, it has also become clear that individual or even group learning goals can be as self-serving as the learning experience itself. What we are interested in. What we would like to know. What would be interesting to find out. As helpful as these might be for future strategy development, they still do not offer a clear path to making knowledge actionable and, more importantly, to helping the group agree on a shared path forward.

It has been both focusing and freeing to identify and name the decisions that need to be made by foundation staff and concentrate the data and learning agenda to provide the information necessary to make those decisions. The learning agenda needs to support the decisions that lead to actions–continue this work, adapt what we are doing, change the strategy entirely, and even end funding. And learning can be focused on what we need to know and the information we need to have to influence the decisions of others. But all of this requires being explicit and intentional early about the decisions we need to make and the target audiences we intend to influence before planning and embarking on an evaluation and learning agenda.

Otherwise all our learning efforts will result only in interesting trivia used in cocktail party chatter.

Related Resources

Marilyn Darling and Fourth Quadrant Partners’ Emergent Learning framework is one tool I have used to help groups get to the action that needs to result from the learning. And at the recent American Evaluation Association meeting, it was refreshing to see multiple presenters use the experiential learning trio of questions to ask of any data or analysis: What? So what? Now what?