COMM-ORG Papers 2005



Strengthening Social Change Through Organizational Learning and Evaluation

Andrew Mott

September 2003


Areas of Broad Agreement
Assessments Designed to Meet the Organizations' Learning Needs
Assessments to Meet External Assessment and Learning Needs
Current Areas of Exploration in Assessing Social Change
Desire for Gatherings and Exchanges with Peers
Possible Future Areas of Exploration and/or Collaboration
Moving Ahead
About the Author



In September 2003, a remarkable cross-section of people came together in Canada for an international exchange on the different approaches they were using to strengthen the forces for positive social change in different parts of the world.  The Gray Rocks conference convened more than three dozen community organizers and activists, evaluators and other “learning partners” from eleven countries to focus on how they were building strong systems to help social change organizations analyze, explain and strengthen their work.  Over three and a half days they discussed how, as organizers or outsiders, they were helping activist groups meet their internal need to keep assessing, reflecting on and strengthening their work so they could increase their impact, while also meeting their external need to help their partners, funders and others outside their organizations understand, evaluate and learn from their work.

The Gray Rocks conference on Strengthening Social Change Through Assessment and Learning was sponsored by four organizations in the United States, Canada and the United Kingdom.[1]  Involving participants from Asia, Africa and Latin America as well as North America and Europe, it was coordinated by the Community Learning Project and supported by a generous grant from the Ford Foundation's Governance and Civil Society Program to the Pratt Institute. 

Conference planners had two primary goals for Gray Rocks.  First, the convening created a unique opportunity for people working in different parts of the world to learn from each other's experience in setting up evaluation and organizational learning systems in the context of social change.  Second, it enabled participants to begin exploring steps they might take individually or collectively to expand support for assessment and learning strategies designed specifically to help organizations undertaking the enormous task of pressing for fundamental economic, social, and/or political reforms in their societies.

Conference planners brought together people from very different worlds.  They invited a mix of activists, organizers, people from support organizations and networks, and evaluators and academics – people with very different roles, training and perspectives.  Approximately 60% of the participants were from the United States, Canada and Europe, and 40% were from the Global South and international “nongovernmental organizations”, or NGOs, which work in developing countries.  The stage thus was set for a rich dialogue among people with very different experiences and views. 

While the participants came from different worlds and played a variety of roles in social change, throughout the discussions it was clear that they were united in their commitment to fundamental social, economic and/or political reform that helps poor people and others who face discrimination, marginalization and exclusion.  They used different terminology – community organizing, social change, rights-based work, advocacy, democracy-building, and development – and there were differences in their approaches which deserve deeper exploration, but there was consensus on –

  • the necessity and great challenge of bringing about fundamental reform in their societies;
  • the central importance of strong movements and effective organizations in pressing for reform; and
  • the necessity of helping those forces develop and refine their systems for assessing their work so they can continually learn how to increase their effectiveness and impact and so funders and others can evaluate and learn from their experience.

The conference proceeded in stages, from the first day which was devoted to establishing common ground, through discussion of how activist groups, their partners, and evaluators assess key aspects of social change work, to exploration of how they might work together to support the growth of assessment and learning systems which meet their needs.  Both plenaries and small group sessions featured case studies, or “stories”, of how particular organizations or evaluators/learning partners were working to assess and learn from the experience of specific campaigns for fundamental reform.  Time was also set aside to enable participants to initiate sessions where they could share their experience and lessons or discuss pressing issues with others.[2] 

The meeting concluded with early exploration of how a cross-section of activists and academics, evaluators, other learning partners, and donors might work together to expand support for –

(1)   assessment and learning systems which meet the groups' internal and external needs and thus help strengthen the organizations which are leading the struggle for social change; and

(2)   evaluation approaches which help funders and others evaluate, communicate about, and learn useful lessons from the organizations' experience and which reinforce rather than overload or undermine the groups' own learning systems.    

This paper[3] pulls together the main threads and conclusions from the wide-ranging discussions at Gray Rocks, highlighting areas of broad agreement, areas around which tensions or disagreements were expressed, and areas for further exploration.  It concludes by summarizing the action priorities which participants discussed as the conference ended.

Areas of Broad Agreement:

There was broad agreement on four key points.  These common views were shared across international lines and among activists, evaluators and donors.    

First, it is particularly difficult to assess and evaluate work in the social change arena, and it requires innovative approaches to evaluation and learning.  Social change is an area of great complexity, with many actors, viewpoints, trends and countertrends.  It is a nonlinear process with many ups, downs and surprises.  It therefore is difficult to develop a shared understanding of what has occurred and why.  Tracing cause and effect is a major challenge. 

Furthermore, by definition “social change” requires overcoming the status quo, making innovations and taking risks, often against great odds.  It therefore involves trial and error, and messy, uncertain processes which are difficult to track and evaluate.  Questions raised during this discussion included –

  • What constitutes “success”? 
  • How do you evaluate progress fairly when negative trends and resistance to change are so great that it may be unreasonable to expect anything more than “holding the fort” or small victories against a landscape of losses?  
  • How do you allow for inevitable “failures” without unfairly damaging a group or jeopardizing a program officer's credibility within a foundation? 
  • How can you judge the relative importance of the contributions different actors make in a campaign involving many organizations?

Second, all the participants, including the funders, agreed that the paramount goal for evaluation and learning in the social change field should be strengthening the organizations which are leading efforts to bring about fundamental reforms.  Social change cannot be achieved without strong social movements or social change organizations, and those groups need access to feedback so they can analyze and improve their performance.  Those assessments should be designed to meet the groups' internal and external needs, helping them reflect and learn how they can enhance their work, while also expanding the outside world's ability to evaluate and learn from their experience.

Third, there was agreement that few funders currently give priority to meeting their grantees' needs when they require evaluations.  Far more often private and public funders develop evaluations with little consultation with their grantees.  Most show little concern with whether the assessments will provide information and analysis which will help their grantees learn and develop, or help them explain and demonstrate how they are having an impact.  There are, however, some funders who are committed to evaluation approaches which are participatory and sensitive to their grantees' needs, and who want to help other funders understand the advantages of developing better approaches to evaluating social change. 

Fourth, many agreed that social change organizations, evaluators and learning partners, and funders committed to fostering organizational learning and evaluation strategies which help strengthen social change organizations can benefit from working together over the long run.  They share a strong interest in producing credible information on the role and value of social change organizations and movements, evidence which will help good groups survive and grow with broader support and public backing.  They are interested in increasing support for assessment and learning approaches which will advance the work of the organizations, movements and individuals that lead reform efforts.  Several spoke of their commitment to fostering learning by the next generation of change agents and the general public.  And each is limited in its current influence with colleagues in the activist, evaluation, and funding communities.  The conference concluded with a strong sense of the benefits of continued exploration of how people like those at Gray Rocks can collaborate to increase support for assessment approaches which serve these purposes – above all, increasing the sophistication, capacity and power of groups pressing for the fundamental reforms our societies so desperately need.

Assessments Designed to Meet the Organizations' Learning Needs:

During the discussion there was considerable emphasis on the fact that some participant organizations have established strong assessment and learning systems by themselves or with evaluators or partner organizations.  Many community organizing groups, for example, have strong internal disciplines of reporting and reflection[4], and many activist organizations are pioneering various approaches to self-assessment, peer assessment, and collaboration with outside evaluators and learning partners.[5]  Furthermore, during the conference organizations described how they were setting up monitoring and assessment systems to create greater transparency and accountability to the grassroots people they serve, thus transforming power relationships and programs.[6] We are “attempting to be accountable in what we do but mainly downwards to the poor.”[7] One group described these systems as helping them function as a “school for the community” with residents feeling strong ownership of the knowledge and the organization.[8]

Several groups have been particularly successful in integrating assessment and learning into their organizational culture.  Assessment has become an integral part of their daily work and strategic planning.  Furthermore, they have ensured that the learning leads to action by assigning someone responsibility and authority for guiding the learning and regularly monitoring implementation of the lessons.

Such organizations want to increase the visibility and credibility of the assessment systems they have found useful so that funders will recognize their value and shape their own evaluation requirements to build on, rather than ignore or undercut, those valuable systems.  Discussion began on strategies for accomplishing this, including having groups publish case studies and other assessment reports on their social change work to provide outsiders with evidence of how well the groups' own systems were serving their need for documentation and analysis.  However, there was insufficient time at Gray Rocks to explore this question fully.

Because of time constraints conferees also did not explore the question of what can be done to persuade and assist other organizers and activist groups to create stronger systems for assessing, reflecting and reporting on their work. While those invited to Gray Rocks were selected because they were committed to assessment and learning, many other activists do not share that commitment.  They devote little time to self-assessment or reflection on their work, feeling pressed by the many demands on their time and/or skeptical that investing time in assessment and reflection will pay off for their organizations. While participants at Gray Rocks also cited the enormous pressures of time and resources, it was clear throughout the dialogue that they felt their organizations' investment in assessment and learning was well-spent. 

While some organizers said they were unconvinced of the value of outside help with evaluation, other participants cautioned about strictly internal assessment processes.  Participants pointed to the limited credibility such internal evaluations may have with outsiders.  Some stressed the danger that internal assessments might not question fundamental assumptions or bring new, independent, perhaps challenging perspectives and insights to an organization.  In discussing internal systems, conferees pointed out that an organization's internal culture will determine whether internal assessments will be sufficiently rigorous and revealing.  Participants cited the central importance of trust and openness to learning as prerequisites to candid internal assessments.  They cited several key questions –

  • Does the organizational culture permit people to admit and embrace “failure”, seeing it as an inevitable and important part of organizational learning in an arena which requires creativity and risk-taking? 
  • Is there safety for people who challenge fundamental assumptions or overall strategies, point out drift from the organization's original mission, or question people at the top? 

Some warned of “overly romanticizing” internal knowledge and missing the insights and unusual ideas which may come from others whose methodologies or knowledge supplement those inside an organization. 

It was evident throughout the discussion that a broad variety of organizations and individuals in addition to professional “evaluators” serve as “learning partners” for social change organizations. In addition to evaluators whose commitment and approach make them natural partners for groups wanting access to evaluation experience, skills and credibility, there are several less widely recognized types of people who help groups assess and learn from their work.  These include various kinds of networks, support organizations, technical assistance and training groups, and consultants that incorporate organizational assessment and learning as integral parts of their organizational development assistance to front-line organizations.  Some are “critical friends”, people close enough to the organization to be trusted, and distant enough to have some degree of independence and perspective.  Others are peers who through peer review and peer learning circles often include their assessments and reflections in the context of broader sharing of experience and lessons.  Time was not devoted to exploring how these learning partnerships, peer exchanges and internal self-assessment processes might be systematically supported and developed as important evaluation resources for the future, nor was time allocated to exploring the pros and cons of these different relationships.

Assessments to Meet External Assessment and Learning Needs:

A growing number of funders are placing greater emphasis on evaluation and insisting more strongly on numbers.  This pressure is especially strong in developing countries, where funding comes primarily from either government agencies or international institutions.[9]  These large bureaucracies have elaborate systems for tracking, reporting and evaluation, and their systems are seldom tailored to the particular groups and projects they support.  In the United States, where most funding for social change comes from private foundations, pressure for evaluation is also increasing.  Several factors are converging to create this situation, including pressures from foundation boards and leaders for –

  • evidence of the impact their grants are having;
  • lessons from their grantees' experience; and/or
  • a better basis for deciding among competing grant requests during an era of cutbacks and tough choices. 

Donors stressed that the strongest philanthropic and public supporters of social change need credible proof of the value of these efforts so they can justify funding them.[10]  As a donor said, “The key to sustainability is to demonstrate effectiveness….  I have a keen interest in impact and what we're learning… But can we create a candid relationship?” An activist echoed that “the accountability, legitimacy, credibility of our work is essential for us.  We are not only supposed to BE results-oriented; we also have to APPEAR results-oriented.”[11] 

Conference participants repeatedly discussed several characteristics of some conventional funder-supported evaluations that cause problems for social change organizations. These problems are likely to increase as evaluation pressures grow.   

They include:

  • the reality that evaluations are often “add-ons”, adding work without adding resources, and imposing approaches which grantees see as flawed or burdensome and which bypass rather than reinforce the organizations' own learning systems;
  • the power imbalance between “those who have the money and those who want it”[12] which makes it difficult for groups to be candid or to press for approaches to evaluation which they feel are more appropriate than a funder's approach;
  • the unrealistically short time-frame which is often used to gauge progress despite the fact that social change requires long time horizons;
  • the contrast between the nonlinear way social change happens, with many ups and downs, and many donors' corporate and bureaucratic frame of reference which is based on a linear view of how progress is made;[13]
  • the competition for funds and recognition among groups which undercuts their ability to work together to marshal sufficient collective power to make progress on common issues;
  • the difficulty of admitting “failures” – which are inevitable in the challenging and unpredictable world of social change – to funding sources which must make and justify decisions about whether the group has been sufficiently “successful” to merit renewed funding;
  • the difficulty of depending upon corporate, government, or conservative funders who are not committed to social change, a dilemma which tempts groups to avoid discussing controversial aspects of their work and instead isolate activities which are “safe” – a situation which can distort communications between funders and grantees, divert an organization from its priorities, and complicate candid comparisons of an organization's goals with its accomplishments;
  • many funders' desire to quantify results – an emphasis which gives little weight to such vital but hard to quantify issues as changes in the group's relative power to bring about long-range change, or increases in its leaders' sophistication, leadership capacity, confidence, vision, ambition and willingness to act as they set goals for organizing and politicizing the community or reforming policy and institutions;
  • the “flavor of the month” approach of some funders which change priorities frequently;
  • the increasing number of funders who “have their own agendas, making donees become subcontractors facing rigid and nonsensical reporting relationships”[14];  
  • the frequency with which funders ignore the results of their evaluations as they make funding decisions, including deciding to drop grantees after giving them high marks on their performance; and
  • the current tough funding climate in which evaluation is often used to cut programs.[15]

The central strategic question is how to achieve a fair and balanced approach which makes evaluation work for both funders and the groups they support.[16]  As one activist said to funders who are committed to social change and evaluation, “What can we do so you won't leave the foundation and we will survive?”[17]  There was agreement that this would require seeking major changes in the relationships and negotiations between funders and grantees regarding evaluation and learning.  Participants stressed the need to recognize the power imbalance between grantmakers and grantees and how that influences grantees' attitudes toward external evaluation.  They spoke of “democratizing the process” and helping nonprofits “negotiate in parity” with funders on the details of an evaluation. They discussed the need to increase the grantee organizations' power, enabling them to negotiate as full partners concerning how their work is assessed and their internal learning needs met.

There was general agreement that grantees can increase their influence on evaluation by taking the initiative and clarifying what they want to assess and how, and then using that plan as the basis for negotiating with funders. It is “useful to ask ourselves what change we would like to see happen in the next year, through our work.  This is an interesting way of starting to ‘detect’ the small changes that we find meaningful on a daily basis.”[18] One donor spoke of seeking ways to consult with grantees as his foundation considers changing funding priorities.  He advised social change groups to get together to strategize about how to increase their influence with donors regarding their funding priorities and their approaches to assessment and learning – to seek ways to take the lead in developing a “knowledge development strategy” which serves everyone's needs.[19]

While there are examples of strong relationships between grassroots groups and professional evaluators, there clearly are tensions and obstacles which often complicate such collaboration.  There are considerable differences in style, skills and ways of thinking between activists and academics and other evaluators.  Pointing out that some groups have had bad experiences with academics and researchers, one participant said, “To some groups, the evaluator is like a dentist, you don't want to see him. Evaluators may be called in when the situation is desperate.”[20] Some evaluators have had equally unfortunate experiences with grantees they have been asked to assess.    

Some organizers and activists asserted their view that their internal systems are sufficient and questioned the need for external help in meeting their evaluation needs. One cited “a myth that lack of familiarity with a group and its work equals objectivity.”[21] However, other activists called for more help from evaluators.  One asked for professional evaluators who would stay on site sufficiently long to set up systems and train people in the organization to document and assess the group's work, thus helping the organization build its capacity while bringing additional skills and experience to bear on knotty evaluation questions.[22] 

Needless to say, some evaluators strongly disagreed with the view that internal evaluations are sufficient.  They cited the advantages of bringing their methodological expertise, evaluation experience, and “distance” and perspective into the assessment process, but doing it in ways which respect the organization and its own learning systems and which are designed to be helpful.

All agreed that evaluators are in an awkward position when they are hired by funders.  Funders often impose rigid standards and data requirements or time limits, leaving little room for negotiating for changes which evaluators and the groups to be evaluated would prefer. As a matter of principle some have decided not to conduct evaluations for funders when this is the situation.[23]

Furthermore, funder-driven evaluations understandably raise groups' concerns about evaluators whose findings may threaten their refunding.  For their part some evaluators complained that groups frequently are uncooperative or, at a minimum, so busy or so uncommitted to an evaluation that they are unprepared to provide the level of reporting, access, candor and responsiveness the evaluators need to fulfill their responsibilities. 

Nevertheless, there appears to be common ground upon which stronger relationships could be built between activists and evaluators.  Evaluators frequently echoed the views of organizers and activists as they spoke of the dilemmas of being caught between funders and grantees who don't agree on evaluation and learning priorities.  As evaluators discussed the principles behind their practice, their views responded to many concerns raised by organizers and social change leaders. These principles include –

  • Social change organizations should be involved in developing, interpreting and communicating the results of the evaluation and receive adequate support to carry out those responsibilities;
  • Evaluation should be designed to be useful in improving the work of grantees, the field, and others;
  • Evaluation should build the group's internal capacity for self-evaluation, and/or build on existing mechanisms for reflection and self-assessment;
  • Evaluation should respect and acknowledge the context in which the organization is operating;
  • All the costs of conducting the evaluation should be fully funded; and
  • Candor should not be punished, inside an organization or by funders.[24]

In retrospect, it is unfortunate that, unintentionally, the allocation of time for presentations at Gray Rocks was somewhat imbalanced.  While there were several presentations by organizers and activists, less time was allocated for evaluators to tell “stories” about their work, including examples of how they have worked with organizations and donors that are committed to strongly participatory approaches which strengthen front-line organizations and speed up social change.  It would have been very useful to discuss more examples of how evaluators and other learning partners have helped groups strengthen their internal capacity and systems for self-assessment and/or supplemented those approaches with complementary analyses and feedback.

While discussion at Gray Rocks surfaced issues concerning the relationship between social change organizations and evaluators, the group did not decide how to explore them further.  One suggestion was to create a task group of activists, evaluators and funders to explore the broad question of how they might work together to expand support for assessment and learning which fosters social change.  An alternative suggestion was for small cross-sector groups to focus on such concrete issues as –

  • thinking through alternative ways evaluators can help community organizing and social change groups develop stronger, more useful learning systems; or
  • analyzing how activists can help evaluators refine their strategies for building strong, mutually helpful relationships with the groups they evaluate.  

Throughout the discussion participants stressed the importance of sufficient flexible “core” and program funding to support the assessment systems groups need.  There are still few funders which provide core funding to cover the central costs of running an organization and developing the basic systems, internal strength, and flexibility it needs to function thoughtfully and effectively.  Furthermore, program grants frequently fail to cover the full costs of the assessment and planning which are essential to the organization's success, and underfunded evaluations can be so inadequate that they are harmful.  Activists, evaluators and funders agreed on the crucial nature of adequate funding to cover these costs and ensure the organizational health of groups leading the movement for social change. 

Current Areas of Exploration in Assessing Social Change:

Participants in the conference are doing pathfinding work on a series of tough assessment and learning challenges.  These include analyzing an organization's progress in influencing public policy and institutional change, as well as questions of organizational development.

There was agreement on the importance of continuing to refine systems for documenting and assessing success in changing public policies, the practices of major institutions, or politics – an area which is central to social change and difficult to evaluate.  Several groups have set up quite elaborate internal reporting and reflection systems to track their progress on policy and keep improving their strategies and tactics.[25]  Others have informal but nevertheless rigorous internal processes for constantly assessing how they can increase their chances of victory on a particular campaign or longer range change strategy.[26]  Several conference participants had worked with an intermediary and researchers to develop a common theory of change which they used in preparing written case studies documenting their impact and how they achieved it.[27]  Others are testing ways of involving the people who are most affected by a policy or institution in defining which issues most concern them, identifying indicators for assessing progress, and then conducting an ongoing assessment themselves.[28]

In this area of power, politics and policy the particularly difficult areas to assess include: 

  • causal issues including the relative contribution made to policy change by insiders and outsiders, and by organizations which play different roles in trying to promote change;[29]
  • the impact of the immediate policy work in educating and politicizing people, giving them hope and encouraging them to become active in policy arenas;[30] and
  • its impact in building the organization's power, expertise, alliances, influence and sustainability or in opening up new political space where it can work to expand its reach and influence.[31]    

Another area of discussion related to organizational capacity-building.  During discussion several organizers and activists detailed how they assess their progress in broadening and deepening their constituency and being accountable to it, or developing an expanding base of grassroots leaders which grows in numbers, knowledge, skills, political consciousness and influence.[32]  Some cited the statistics they keep to measure their success in, for instance, reaching new people, seeing them participate in meetings and actions, and having them begin taking leadership roles. 

Another challenging area relates to assessing changes in attitudes, including people's sense of personal empowerment.  “It's a challenge to talk about the level of empowerment:  we are trying to give a sense that they have power to people who did not think they had any.”[33]

Assessment and learning questions become more complex when several organizations work on the same or closely related issues or projects. If they work together, formally or informally, to achieve changes in public policy, it is extremely difficult for either insiders or outside evaluators to determine the relative impact and value of each party's contribution.  In achieving success, what is the relative importance of each group, the coalition itself, those doing the research or communications work, various back-up organizations, insiders and bridge people?   This is further complicated if a group is part of a network or collaborative which has been constructed to foster peer learning among groups involved in parallel activities or facing similar challenges.

Several other evaluation and learning challenges were mentioned during the discussions.  These included:

  • How do you foster multi-stakeholder learning when there are such power imbalances (between funders and their grantees, for example) and groups are concerned about their reputations and funding?
  • How do you sustain organizational learning when there is constant staff and board turnover?
  • How do you evaluate progress in rapidly changing situations?
  • How do you foster candid reflection in conflict situations when it may be best for people to avoid certain important subjects because they are too divisive or volatile?[34]
  • How do you use power analysis in evaluation and learning?
  • How do you weigh the value of action “to change things that never change – poverty, wealth redistribution, power relationships”?

Desire for Gatherings and Exchanges with Peers:

All of the sectors represented at Gray Rocks – organizers and activists, donors, evaluators and learning partners – expressed their desire to have more opportunities for discussions and collaboration with their peers. 

In particular, during the conference organizers and activists expressed a hunger for continuing exchange with others who are engaged in rights-based work in very different contexts and cultures.  Groups from all corners of the world feel isolated from each other's experience and see increased exchanges as having great potential in helping them reflect on their work, raise their vision and gain new inspiration and ideas.  They cited such exchanges as essential for sharing methodologies and building enduring relationships which would make ongoing collaboration possible. 

During the Gray Rocks conference this desire focused on creating new opportunities for international exchange and site visits on assessment and learning questions.  However, it was clear from the dialogue that groups from all corners of the globe also craved opportunities to broaden this exchange so they could compare different philosophical approaches and practical strategies for bringing about change.  Throughout the conference, key questions about social change kept surfacing, and time was insufficient to discuss them satisfactorily.  This left many wanting to explore these issues more thoroughly in the future.  What do different groups mean by “social change”?  What are the different approaches they are using to achieve it?  And what lessons can be drawn from experience with those different approaches?

Some professional “evaluators” from universities, consulting firms, and nonprofit support organizations also mentioned the advantages of having closer relationships with colleagues in their own countries and worldwide.  They share a commitment to assessment and learning systems which are formative, participatory, and useful to social change organizations and their allies.  Because this commitment is uncommon in the evaluation field, they are isolated from others who share their values and approaches, and they would relish having more time with their colleagues, exploring how they might learn from each other and influence their field. 

Similarly, donors committed to social change expressed their interest in working with other funders to discuss issues of organizing and social change as well as assessment and learning and to explore joint strategies for working together.  Their goals are to help redirect assessment and learning approaches so they are more useful and used, and to increase financial support for groups which are tackling fundamental reform issues in different parts of the world.  One funder posed a central question:  Can we get a critical mass of funders to collaborate on this?[35]

Possible Future Areas of Exploration and/or Collaboration:

The meeting concluded with participants developing a list of four priorities for  possible future action.  Three of these issues were discussed by small groups which reported back to the full conference in the concluding session.  These report-backs overlapped on several points.  The conference ended with agreement on the importance of finding ways to enable people who are concerned with such issues to move forward and explore them further.  

The first discussion group concentrated on the importance of devising ways in which social change organizers and activists, evaluators, learning partners and donors who share the goal of strengthening social change through improving assessment and learning strategies can develop common and separate strategies to advance that goal.  This discussion led to agreement on a number of points:

  • the need to develop more shared, open, interdependent spaces in which people like those at Gray Rocks can develop relationships and strategies which will foster the growth of  “just evaluations”;
  • the need to identify and connect with others around the world who are discussing these issues within their sectors – funders, evaluators, social change groups – in order to enlarge the exploration and help sharpen, affirm, question and develop the many observations raised and recorded during the conference regarding assessment, learning and evaluation; 
  • the need to expand the base of people who are debating these questions and further diversify the voices, including reaching out to people who are excluded; “When you bring in excluded people it expands the thinking about how change happens, helping us understand better how we can assess it.”[36]
  • the need to clarify our different and mutual understandings of what “social change” is, how change happens, and how we see it from where we “sit” so that we can then develop better ways of assessing change;
  • the advantages of spending time in each other's organizations to help build a history of mutual trust and greater understanding of each other's internal politics and processes; and
  • the need to shift the discussions away from differences and focus on concrete situations, perhaps in particular communities and in dialogue with grassroots groups, dealing with any differences in those practical contexts.

A second group began discussion by focusing on the conferees' interest in having new ways of sharing tools, techniques, strategies and lessons concerning assessment and learning in a social change context.  Stimulated by the stories and strategies which they had heard at Gray Rocks, participants stressed the importance of finding new ways to share lessons and practical advice with each other.  They emphasized that learning often works best when it is based on “stories” which allow people to understand the context in which others are working, how they are approaching issues, what obstacles and opportunities they face, how they are learning as they go ahead, and how their approach can be adapted to other circumstances.  They talked about the value of having a “virtual library of materials”, including case studies and “stories”. 

However, they quickly broadened the discussion to stress the value of having a virtual network which would provide space for continuing dialogue and learning on what it takes to be an effective social change organization and how learning and assessment can strengthen, validate and legitimize such organizations.  Such a network should facilitate exchange at both the practical and theoretical levels. It should be much more than just a passive repository for materials.  Instead it should be a hub for actively supporting exchange visits and facilitating dialogue which fosters critical thinking and learning.  It should be a place where funders, groups and learning partners can go to find innovative assessment and learning approaches and people.

This virtual network should also have an advocacy dimension.  It should provide a forum for developing and pursuing joint strategies to influence donors, the evaluation community, social change activists and others in ways which expand the use of assessment and learning approaches which meet the needs of social change groups and their allies.  It should raise the visibility and credibility of the systems which social change groups and their supporters are finding most useful.

This small group pointed out that these ideas for sharing would require shared leadership through committees as well as dedicated time and resources.  The group stressed the importance of avoiding premature decisions about how it might be housed and supported:  those decisions should be deferred until (1) there has been clarification of the common values, goals, and ways of operating for this joint effort, and (2) there has been joint development of criteria for selecting one or more appropriate “institutional hub(s)”.  The group recommended that each participant contribute “stories” which would enable others to learn from some aspect of their experience, and that there be a follow-up workshop among interested parties to further develop the values, goals and criteria for deciding where to house these activities. 

Another group concentrated on people's strong interest in devising international exchange programs which could help different actors in the social change process learn from experience in other countries.  This small group stressed several key ingredients to worthwhile exchanges.  These included: 

  • exchanges which are structured so that everyone learns, the visitors and the visited;
  • the importance of facilitation, especially in cross-cultural exchanges so that people can understand different contexts as they talk about issues of power, politics and social change and how lessons can be applied in their own contexts; and
  • the importance of careful preparation for each exchange, and serious reflection afterward.

They also discussed the advantages of alliances between grassroots groups and universities in fostering learning and documentation from the exchanges, and the need for donors to pool funds to support them. 

Although this small group focused on international exchanges, this interest may relate to the issue of peer learning and peer support which was stressed repeatedly throughout the conference.  Activists, evaluators, and donors frequently emphasized how much more they learn, reflect on their own work, and consider changing their approaches when they are in dialogue with their peers, locally or internationally.  As dialogue continues, those involved in the discussions should consider whether they want to expand it to encompass this broader set of issues. 

Fourth, there was agreement on the need to increase the number of people who have the values, skills and experience needed to evaluate social change efforts and to help social change organizations reflect on, assess and evaluate their work and put what they learn into action.  This shortage exists within social change organizations, in the evaluation profession, and among learning partners and donors.  There is a need to devise remedies for this problem and increase the number of people who can help as external evaluators or in building strong systems of self-assessment and peer assessment and learning.  However, because discussants on the concluding day gave higher priority to the topics of the other small groups, these issues were not explored in depth at Gray Rocks. 

These four topics rose to the top as participants made relatively quick decisions on their priorities for possible future action.  Other possible follow-up ideas surfaced earlier in the conference and are reflected in these notes, and further exploration may well reveal additional thoughts regarding future priorities.  As the process for continuing exploration of these questions unfolds, it therefore is vital that it be open, inclusionary, responsive to different viewpoints, and centered around the participants' commitment to –

  • advancing fundamental social change in countries throughout the world, and
  • finding ways to grow practices of assessment, critical reflection, evaluation, learning and action which maximize the chances that the social change will be fundamental and will benefit those who most need a stronger voice and greater opportunities.

Moving Ahead:

While the dialogue was rich and strong relationships were built, conference participants shared a frustration that, having reached an initial understanding about the key issues that unified this diverse group, they had just begun to share experiences on  many issues and there was little clarity about how dialogue might continue.   There was, however, a strong sense among many participants that those who were most interested in these questions should seek ways of working with others to continue the exploration and seek major changes in assessment and learning as it relates to organizing and social change.   

The group agreed on several key principles concerning possible next steps.

First, the next stage of exploration should start with far deeper discussion of values and the extent to which they are shared.  It should deepen discussion of what constitutes social change and how it is achieved.  It should clarify the extent to which there is agreement on these issues, and provide the basis for exploring what should be the most essential goals regarding future assessment and learning in a social change context.  There was consensus that these issues must be thoroughly discussed before people can decide how interested they are in proceeding further with this exploration, with whom, and what kind of vehicle or vehicles would be most appropriate for moving ahead. 

Second, as participants grappled for terms to describe their vision of how they might best structure or support further exploration and possible collaboration, they spoke of “virtual” networks and institutional “hubs”. They spoke also of openness, inclusiveness, and transparency, of the necessity of shared values, of an approach which depends on individual and collective initiatives rather than being overly centralized, and which helps the separate sectors work on their own and together, as appropriate.  They also spoke of the need to devise some way for loosely managing these different activities toward the shared vision and goals.  

Third, they stressed the need to build trust and understanding over time. Without trust, people's capacity to reflect and learn will be limited.  They see a need for  sufficient trust and safety that people feel comfortable challenging each other and stimulating critical thinking, and being “critical friends”.  This very likely requires investing time in visiting and getting to know each other, working on relatively small practical projects together, and building relationships which will provide a firm foundation for more ambitious agendas in the future. 

The conference concluded with agreement that a summary report on the conference would be developed to capture thinking at Gray Rocks and furnish the base for further exploration.  There was also agreement that opportunities should be created for people to step forward to volunteer to participate in thinking through possible future action on the issues which most concern them. 


[1] The Pratt Institute for Community and Environmental Development in New York City, the Participation Group at the Institute of Development Studies at the University of Sussex in Brighton, England, the Institute in Management and Community Development at Concordia University in Montreal, and the Center for Community Change in Washington, DC.

[2] The conference planning and facilitation team consisted of Andrew Mott of the Community Learning Project, John Gaventa of the Participation Group at the Institute of Development Studies, Irene Guijt of Learning by Design in the Netherlands, Wesley Woo of the Center for Community Change, Victoria Creed of Learning Partners in Knoxville, Tennessee, Samuel Musyoki of the Participation Group, and Lisa VeneKlasen of Just Associates in Washington.

[3] This report was prepared by Andy Mott with input from several members of the planning and facilitation team.  Irene Guijt's extensive documentation of the conference was of immense help.

[4] Participants from ACORN, CLOUT and PICO, DART, and the Northwest Federation of Community Organizations in the US and ACFODE in Uganda spoke of such systems.

[5] Representatives of ActionAid Uganda, Community Development Resource Association in South Africa, the Virginia Organizing Project in the US, the Association of Indigenous Councils of Northern Cauca in Colombia, and the Center for Youth and Social Development in India cited their approaches.

[6] Uganda Land Alliance involving communities in monitoring the work of their paralegals, as cited by Sarah Okwaare Otto of ActionAid Uganda.

[7] Jenny Chapman, ActionAid UK, discussing their ALPS (Accountability, Learning and Planning System) process of self-assessment as well as assessment by their grantee grassroots groups.

[8] Ruben Daria Espinosa Alzate, Association of Indigenous Councils of Northern Cauca.

[9] Several groups in developing countries and the US mentioned that American government agencies are increasing the extent to which they assess groups on the basis of their stands on such policy issues as abortion and family planning and whether they are “faith based”.

[10] Tom David, Director of Organizational Learning and Evaluation, Marguerite Casey Foundation.

[11] Jagadananda, Member-Secretary of the Center for Youth and Social Development in Orissa, India.

[12] Andy Mott, Community Learning Project in Washington.

[13] Ron Shiffman, Professor of Planning, Pratt Institute, and former Executive Director of the Pratt Institute Center for Community and Environmental Design.

[14] Susan Soal, Community Development Resource Center in South Africa.

[15] Heather Weiss, Executive Director, Harvard Family Research Center.

[16] Susan Soal.

[17] Joe Szakos, Executive Director, Virginia Organizing Project in the US.

[18] Mireille Landry, Institute in Management and Community Development, Concordia University in Montreal.

[19] Tom David.

[20] Lisa VeneKlasen, Just Associates in Washington.

[21] Francois Pierre-Louis, of Queens College and the PICO Network in the U.S.

[22] Diana Bustamante, Executive Director, Colonias Development Council in New Mexico.

[23] Edna Co, National College of Public Administration, University of the Philippines.

[24] Heather Weiss and Tom David.

[25] SPARC in India, ACORN, Northwest Federation of Community Organizations, and others.

[26] Center for Community Change in the US, Future Ways in Northern Ireland, and others.

[27] Evaluation of the reciprocal relationship between community organizing and public school reform efforts (Anne Hallett, Cross City Campaign for Urban School Reform, partnered with Research for Action in the US).

[28] Involvement of newly elected officials and villagers in assessing the impact of elections on land issues (Sarah Okwaare Otto, ActionAid Uganda); the involvement of young people in evaluating youth organizing (Barry Checkoway, School of Social Work, University of Michigan).

[29] Evaluation of coalitions and technical assistance groups (Imoyase Group in Los Angeles).

[30] Highlander Center.

[31] Participation Group, Institute of Development Studies, University of Sussex; Center for Community Change.

[32] ACFODE in Uganda, Center for Youth and Social Development in India, CLOUT and PICO in the US, Just Associates in international consulting work.

[33] Lance Evoy, Institute in Management and Community Development, Concordia University.

[34] Karin Eyben, Future Ways, Northern Ireland.

[35] Michael Edwards, Ford Foundation.

[36] Lance Evoy.

About the Author

Andrew Mott directs the Community Learning Project and is a Senior Fellow at the Wagner School of Public Service at New York University. He formerly was with the Center for Community Change for 35 years, providing assistance and policy support to grassroots community groups and coalitions throughout the country. Concluding his career at the Center as Executive Director, he also chaired several national coalitions on low-income housing, community development, and domestic social programs.