From Human Services Organizations to Organizations and Networks of Humans

My conversation with Audrey Jordan

One of the topics you will see addressed here in the future is the evaluation and measurement of civic and community engagement—a challenge I continue to struggle with in my work. A dear friend and generous colleague, Audrey Jordan, has worked on this and related issues for a while. Although we can no longer walk into each other’s office and chat, we have maintained regular water-cooler conversations via phone, email, and videochat, and we decided to share a few of those conversations here.

As part of this work in community engagement, Audrey has been working with human service organizations who are starting to look at themselves less as “service provider professionals” delivering interventions to poor people and more as partners with families in their own development. This is a big mindset shift with enormous implications for families and the organizations themselves. I asked Audrey about her recent work and experiences with these organizations—how do they (and the rest of us, including evaluators) need to work differently?

TOM: Audrey, you have spent considerable time over the last few years looking for and learning from organizations that provide human services and supports to families and communities but who are trying to “flip” their way of thinking—from providing social services to a needy client to being more of a partner with families in their own development and change. What is different about these organizations?

AUDREY: This is a great question, and I believe I have learned even more about the “why” of these organizations since Ties That Bind, the monograph published by the Annie E. Casey Foundation in 2006. People refer to it as “magic” or “secret sauce”—it is not! There is a set of known and knowable practices that clearly distinguishes organizations who engage people (a.k.a. participants, clients, consumers, customers, members, etc.) as whole human beings and partners. I’m basing my answers on the best of what I’ve learned over the years, giving appropriate shout-outs to those from whom I have learned—those who live and work in and value these principled environments.

There is much to say about the so-called mystery of what these organizations do, but to keep myself focused on an answer that is most useful and usable I will refer to three resources:

1)       Ties That Bind: The Practice of Social Networks

This monograph, funded by the Annie E. Casey Foundation, is a compilation of learning by me and a team of colleagues that describes six organizations around the country who practice “positive social network building.”  These organizations are:

Although manifesting in unique ways, and not necessarily referred to as “positive social network building” (Casey’s term), all six of these entities revealed this common set of practices:

  • Demand-driven – programs and activities exist because they are what consumers of resources/services want and need
  • Form Follows Function – structures are formed and exist as needed
  • The network of relationships is inclusive and constantly expanding
  • Leadership is an expectation for all members, and everyone is accountable
  • The network is non-hierarchical
  • Staff are Facilitators not Prescribers
  • Power in relationships is explicitly acknowledged and addressed (shared whenever possible)

Since 2006 I have come across many other groups that embrace these same principles and engage in peer learning with these and other organizations; indeed, a few of the organizations/programs on this short list of six no longer exist because of funding challenges (see below). Other groups that deserve a shout-out for their great principle-driven work are:

2)       Notes from a Community Network Builders convening hosted by Bill Traynor & Frankie Blackburn in the fall of 2011

Two of my mentors in community-building, Bill Traynor & Frankie Blackburn [Bill's blog & Frankie's blog], hosted a convening of several like-minded organizations from around the country, with some funding from the Knight Foundation, in Miami in November 2011. At this convening the group assembled a set of practice principles very similar to those described above. They are:

  • Knock on doors, invite people in, and always build relationships
  • Humility, truth-telling, transparency, and respect for the community and believing members have something of value—we must listen to, support and care for each other
  • Mutual support and a respect of assets, knowledge and value that comes from different truths and experiences and being intentional about this—everyone has something to contribute
  • Continuous learning and acknowledging what is already there by lifting it up and building upon it—redefining and reframing all that contributes to the whole
  • Creating intentional spaces for innovation and trust-building so people can do this work together – how do you open up the intentional spaces that usually leave many people out? (People gravitate to this simple but profound practice)
  • Being self-aware and present but also stepping back and letting the behaviors become a natural practice
  • Co-creation of change, design, implementation and evaluation

3)       The Full Frame Initiative (FFI) Five Domains of Well-Being

The Full Frame Initiative, a national non-profit at which I now work, supports organizations that share a similar, common “DNA.”  These organizations, with FFI’s learning support, build upon practices based on working with individuals and families “in the full frame of their lives,” not in a snapshot or a cropped shot, using a video frame as a metaphor. Unlike more conventional service organizations that work with individuals or families based upon their piece of the whole (e.g., mental health or criminal justice), FFI organizations operate from the premise that all human beings – not just clients – seek a synergistic experience across five domains in their lives for optimal well-being: safety, social connectedness, meaningful access to relevant resources, mastery, and stability. When individuals and families are supported to maximize the assets and minimize the challenges in each of these domains – without having to make regressive trade-offs between them – they experience well-being. And like a ripple, so do neighborhoods and communities. The Missouri Division of Youth Services is a current example of an FFI partner that has begun to codify the five domains in its practice and policy—for example, rewriting treatment plans for youth and families using the five-domains framework.

These three different sources all point to a short list of operating values:

  • Recognize and build on strengths of people, all people.
  • Understand that compassionate, trusting relationships are “the rails and roads” of transformative change
  • Believe in and practice reciprocity – based on the knowledge that everyone has something to give and to get
  • Emphasize every human being’s desire for, and the value of, self-determination
  • Understand that attention to environment matters – both for the traumas and the gifts – and appropriately ameliorating the traumas and generating more of the gifts is “the work.”

TOM: The application of these principles and values in an organization is so critical that I would think evaluators need to be much more aware and observant of how they are expressed in practice. How do these organizations measure their success?

AUDREY: These organizations all understand that a singular focus on individual measures of client success in programs will not make for transformative change – of communities or of individuals in those communities. And these organizations understand that definitions of success cannot be imposed externally. Furthermore, these organizations know that what is considered a success today may not be considered a success tomorrow; people and environments are too dynamic to believe that success measures are static. So these organizations:

  • Have aligned success measures across individuals, organizations (including their own), and the communities in which the organizations operate—measures that recognize strengths to build from and challenges to mitigate at each level
  • Develop measures of success with clients/consumers/members, staff, and partners, and have a transparent process (with a timeline, result targets and deliverables) for learning and accountability in which stakeholders representing each of these constituencies learn and hold each other accountable together
  • Create touchstone-like roadmaps (i.e., theories or pathways of change) that lay out consensus expectations for change at evolving stages toward meeting achievable goals, and use that roadmap at regular learning and reflective-practice intervals.

Too often all three of these requirements are missing or compromised by the demands of funders, researchers, or just bad habits.  And each of these bullets requires capacity-building and long-term commitment – two investments few foundations and government agencies have shown themselves willing to make.  To add insult to injury, too many organizations who desire to build around the principles described above don’t have the resources to show their good work, and without the ability to show their good work they don’t get funded.  An exhausting and demoralizing vicious cycle.

TOM:  Why don’t we see more organizations like this?

AUDREY: Part of the answer comes in the vicious cycle I just described. The other part is that building confident competence in the practices outlined at the beginning of this entry requires a great investment of time and attention to (and documentation of) ongoing “learning while doing.” That time and attention must come from the same people who are doing the work and living their lives. Those who desire to invest in what works but are focused only on end results could make a tremendous difference in all our learning and success by realizing that they must also invest in the work and the “work behind the work” (the means to the ends) to get more of the success they want to see in the end. And of course, the end is different today than it will be tomorrow.

When the groups who do this work participate together in a learning community – those who are on the ground with everyday community folk who actually experience success (with all its stumbles) – and are conscientious and transparent in sharing their learning in real time, we ALL benefit greatly, especially the communities in the process of transforming. I have had the privilege of participating in more than a few of these learning communities, and they represent the best work I have seen and been a part of (e.g., see Bond, Bridges and Braids by Fulton & Jordan). Doing this well requires resources, and IMHO they are the best resources those who really want to see success can invest.

Thank you, Audrey!  And readers: Do you agree? Have questions? Other examples? Please share and comment. And stay tuned for more of our conversations.

I don’t like dashboards but am willing to date one

I probably have a few 2014 New Year resolutions that I have not fully engaged with yet (like blogging here more regularly), but I am tackling my number-one priority—developing a better organization-wide framework for monitoring performance and communicating the foundation’s impact across grantmaking, donor services, fundraising, and all our work. I am not sure how often other evaluators get asked to help with organization-wide dashboards, but in the world of philanthropy there is a lot of interest and demand. I know my colleagues in other foundations get slightly nauseated looks on their faces when you ask about the dashboards they use in their organizations. Not only are we not universally proud of them, but often we just don’t like them.

Now, we are all pro-data, pro-measurement cheerleaders, and more importantly we are all advocates for using data and evaluation to promote learning. And we know how good visual summaries and graphics can help people understand and analyze data. So why are we so underwhelmed by the dashboards we have? (And why do we even admit hating to work on them?)

I am a committed outcome proselytizer and I enthusiastically promote Mario Morino’s Leap of Reason: Managing to Outcomes in an Era of Scarcity and Mark Friedman’s Results-Based Accountability (RBA).  I have seen how nonprofits and foundation staff are helped by clear process and outcome definitions, good and reliable measures, and simple summaries of change over time.  But I am still left underwhelmed, disappointed, and most often very worried that most performance dashboards are not getting at the “right” data that will help change behavior and achieve the results we want.

Dashboards, in their parsimony, often lack context, or at least enough context to satisfy a footnoting evaluator. They do focus attention on key efforts and strategies, and I strongly believe (as Morino has advocated) that more nonprofits need to treat measuring their work and outcomes as a primary operating capacity. But I still feel that most are missing more than just additional context. Recently I read Henry Doss’ assessment of how businesses need to apply more focus and measurement to the features they want to see in the ecosystem, not simply the outputs they produce and are incentivized to produce. This reminded me of Pete York’s frequent admonition that nonprofits need to focus more attention on the proximate cause-and-effect relationships they can impact. Most of the performance and outcome measurement I have seen and experienced with foundations and nonprofits is misaligned with incentives, targets goals too distant from the efforts, and ignores the influences in and on the ecosystems around us.

Doss noted that our short-term performance incentives are often misaligned with our long-term vision. And in an increasingly VUCA world (my 2013 word-of-the-year, not “selfie”), keeping the alignment between our mission and our strategies (and, therefore, our performance metrics) is increasingly difficult. Despite their usefulness in driving performance, dashboards can “miss the mark” on overall organizational mission if they capture only what we do and what we think those effects are, and not how we should be influencing and responding to the influences of the changing world around us while still driving toward mission and impact. Dashboards can’t and shouldn’t do everything, but how can I develop one that helps staff keep an eye on performance, quality, and effective implementation while also holding everyone together in our collective mission?

I have often wanted to include organizational values and “how we work” in performance measurement—it is not just how much we do but the way we work that is important. Glenda Eoyang’s “Devaluing Values” made me appropriately cautious and skeptical of organizational values that do not name the practiced, observable behaviors we need to see and incentivize. This is especially true when these behaviors are adaptive and help the organization thrive in a changing environment, when the bar keeps moving and yesterday’s performance target is no longer relevant. It is these behaviors that I want to make sure get measured, reported, and incentivized.

And if I can get this kind of dashboard completed by the second quarter of 2014 I will attempt another of my resolutions and exercise more regularly.

For additional resources:

Examples of foundation dashboards  http://dashboards.wikispaces.com/Foundation+Examples

FSG’s The Foundation Performance Dashboard http://www.fsg.org/Portals/0/Uploads/Documents/PDF/Foundation_Performance_Dashboard.pdf?cpgn=WP%20DL%20-%20The%20Foundation%20Performance%20Dashboard

“Making Sense of Your Foundation Data with Dashboards and Scorecards”  http://www.gmnetwork.org/annual-conference/2010/sessions/making-sense-your-foundation-data-dashboards-and-scorecards


I hate the word ‘learning’

[Word cloud from http://voulagkatzidou.files.wordpress.com/2011/02/wordle.jpg]

I didn’t say it, but at a meeting of several foundations this week convened by GEO, I did say “Amen” when another foundation “learning” officer confessed it. I also have the word learning in my new title (along with knowledge and evaluation), and we are not hypocrites or non-believers, but it has been very frustrating to see foundations embrace and promote “learning” as the alternative to evaluation. It is partly understandable: in the past, evaluations and evaluators have produced data and reports without clear analysis or actionable knowledge. But I do not see how anyone can learn without data and evaluation, and we should not be measuring and evaluating without clear goals for decisionmaking and action.

Evaluators have struggled to put a friendlier face on evaluation by emphasizing its contributions to learning and sometimes by de-emphasizing the measuring, monitoring, compliance, and judgment aspects of evaluation. However, I feel ever more strongly that evaluation must be about both learning and accountability. We must be accountable not only for the results we intend and promise to communities; we must also learn in an accountable way. Learning in and by foundations can be very selfish and self-serving if it results only in mildly more knowledgeable program officers who do not change or adapt their ideas and strategies. Learning that is not based on data and analyzed experience is what? Intuition? Hunch? Fond memories of an interesting grantmaking experience paid for with the public’s money held in trust? And learning that does not contribute to actionable knowledge, decisionmaking, and improvement is a waste of data collection and analysis efforts.

As I have tried to help foundations and foundation staff focus and define intentional learning goals, it has also become clear that individual or even group learning goals can be as self-serving as the learning experience itself. What we are interested in. What we would like to know. What would be interesting to find out. As helpful as these might be when considering future strategy development, they still do not offer a clear path to making knowledge actionable and, more importantly, to helping the group agree on a shared path forward.

It has been both focusing and freeing to identify and name the decisions that need to be made by foundation staff and to concentrate the data and learning agenda on providing the information necessary to make those decisions. The learning agenda needs to support the decisions that lead to actions: continue this work, adapt what we are doing, change the strategy entirely, or even end funding. And learning can be focused on what we need to know and the information we need to have to influence the decisions of others. But all of this requires being explicit and intentional early about the decisions we need to make and the target audiences we intend to influence, before planning and embarking on an evaluation and learning agenda.

Otherwise all our learning efforts will result only in interesting trivia used in cocktail party chatter.

Related Resources

Marilyn Darling and Fourth Quadrant Partners’ Emergent Learning framework is one tool I have used to help groups get to the action that needs to result from the learning. And at the recent American Evaluation Association meeting it was refreshing to see multiple presenters use the experiential-learning trio of questions to ask of data and analysis: What? So what? Now what?

Evaluation Colleagues, Mentors, and Friends

(Dr. Barbara Sugland, 1960–2010)  As an internal evaluation director at a foundation, I no longer “practice” evaluation in the same way as when I was a consultant in my pre-philanthropy years in Washington, D.C. I often worry that my skills can get a little rusty when my daily role is one of managing, translating, organizing, and planning resources for evaluations and other knowledge- and data-related tasks. I rely greatly on consultants and colleagues to keep me current (and honest) in how evaluation should be practiced.

I am grateful for the many consultants and colleagues who have helped me, pushed my knowledge, and strengthened my approach: ORS-Impact (Jane, Anne, Sarah, and team), Innovation Network (Johanna, Veena, Ehren, Ann, Kat), Center for Evaluation Innovation (Julia and Tanya), Community Science (David, Kien, Scott), TCC Group (Pete, Jared and Deepti), and the Aspen Institute (Anne, Andrea and Pat). And without a doubt all my Casey and HCF colleagues, especially Audrey Jordan.

And as many have noted about the recent AEA conference, our field’s best evaluators are often accessible and generous with their advice and interest in others’ work.  Eleanor Chelimsky was my first “teacher” and the GAO PEMD evaluation reports from the 1980s and 1990s were my first textbooks when I began as an evaluator in 1990 before attending grad school.  Beverly Parsons, Bob Williams, David Fetterman, Michael Patton, Hallie Preskill, Ricardo Millet, Patti Patrizi, Anne Kubisch, Prue Brown, and many others have always been considerate and generous with their time, feedback, comments, advice and even humor.

These colleagues continue to stimulate my work, but I still spend part of every day (not just every workday) thinking about and missing the wisdom, ethics, commitment, and friendship of two brilliant evaluators who are no longer with us. Dr. Mary Achatz and Dr. Barbara Sugland taught me much about evaluation methods but even more about the purpose and mission of evaluation: to contribute to improving the lives of the most vulnerable and to drive toward social justice and equity. The American Evaluation Association’s Guiding Principles for Evaluators assert respect for people, integrity, honesty, and responsibility for the public welfare—and these women were exemplars of these principles. My own knowledge and practice were greatly influenced and informed by these two extremely modest yet enormously skilled professionals, who gave the most attention and respect to the least powerful, resulting in more genuine, appropriate, accurate, and influential evaluations. They both also lived as guides, teachers, mentors, and coaches, and there are many others who can attest to their heart and skills. And they are friends I miss every day.

See pages 20–21 in Leila Fiester’s case study on Plain Talk for an example of Mary’s respectful approach to working in and with community.

Hear Barbara’s words on supporting and empowering our youth, especially youth of color: http://vimeo.com/47253167

And see the Dr. Barbara Sugland Foundation, dedicated to continuing her legacy (where I am proud to be a director).

I think I blogged before “blog” was even a word

With the helpful advice and prompting of the excellent American Evaluation Association #eval13 blogging panel of Ann, Chris, Susan, and Sheila, I am reinvigorating my Twitter efforts and expanding into longer blog postings on evaluation.

The low-pressure ease and comfort of occasional tweets filled what little bandwidth I thought I had left amid a major relocation and job change. But now that I feel more settled, and wanting to connect more with my evaluation colleagues now thousands of miles away in ANY direction, the gentle yet firm encouragement of our EVAL bloggers convinced me to go ahead. It also reminded me that, starting with the first personal computer I purchased in 1993 (leading to the first time I fell in love with a South Dakotan—an extremely patient Gateway tech-support guy who stayed on the phone with me for six straight hours while I repaired my own broken hard drive), I had been writing overly long and somewhat sarcastic (snarky?) movie and restaurant reviews that I distributed to friends on my free DC community email account. History is not clear on whether the term “blog” was first used in 1994, 1997, or 1999, but I feel at least that I have some old skills I might be able to revive.