COMM-ORG Papers 2004

http://comm-org.wisc.edu/papers.htm

Bringing Evaluation to the Grassroots: Insights Gleaned from Studying the Mobilization for Global Justice

By

Margo Menconi

malyme@hotmail.com

8902 60th Avenue

Berwyn Heights, MD 20740

August, 2003 

 


Contents

Introduction
Issues
The Mobilization for Global Justice
Relevant Evaluation Approaches
          1. Feminist Evaluation
          2. Empowerment Evaluation
          3. Coevaluation
          4. Evaluation of Popular Education
Evaluation Activities
          1. Planning and Management
          2. Record Keeping
          3. Data Collection
          4. Reporting
Summary
Bibliography


Introduction

There are many approaches to evaluation, as well as many contexts in which evaluation has been carried out.  This paper describes issues that may arise in carrying out evaluations in activist settings; describes the evaluation efforts of one activist group; and then suggests evaluation approaches that might be helpful to activist groups.

My background is in adult education, and this paper is an offshoot of my grounded theory research project on social movement learning, which draws on extensive data from the April 2000 anti-IMF/World Bank protests in Washington, DC.  The coalition umbrella group that organized much of what happened that week called itself the Mobilization for Global Justice.  The week of events was called “a16”.

Issues

In writing about evaluation in the social work setting, Gabor, Unrau, & Grinnell (1998) identify two myths about evaluation that keep groups from carrying out evaluations.  The first myth is philosophical:

Some of us maintain that the evaluation of social work services – or the evaluation of anything for that matter – is impossible, never really “objective,” politically incorrect, meaningless, and culture-biased. (p. 3)

This statement seems applicable to the activist setting. Activists tend to take ideological and philosophical issues (especially those of political philosophy) seriously.  Thus, this is an issue that would need to be addressed in the activist context.

One possible way to address this issue is by ensuring that the evaluation approach fits the group’s philosophy.  Doing so should enhance the likelihood of the evaluation effort being accepted, carried out, and having an impact on the group.  This is similar to Vella, Berardinelli, and Burrow’s third characteristic of an effective evaluation (1998, p. 12), which says that evaluation should match the organizational philosophy.

Perhaps related to the philosophy issue, as a matter of organizational culture, is the issue of formalization and professionalization. Many activist groups tend to be informal and less professional in structure and operations.  This is not necessarily bad; if evaluation is introduced in such a setting, one should consider the pros and cons of its possible impacts on the group in this regard.  Systematic data collection and record keeping, for example, might tend to have a formalizing influence.

The second myth that these authors mention is that to many “the quality improvement process via evaluations is a horrific event whose consequences should be feared” (Gabor, Unrau, & Grinnell, 1998, p. 6). While the authors acknowledge that some of the fears associated with evaluations are not completely unfounded, others have less basis in reality. Those that do have some basis often stem from the misuse of evaluation.  One example is the use of program evaluation for individual performance evaluation.  In the activist setting this might not necessarily mean a formal personnel review, but it might mean putting an individual in a bad light.  Many evaluators believe that this is an inappropriate use of evaluation, unless, of course, the evaluation is specifically a performance review; to use a program evaluation for performance review purposes is inappropriate.  This fear should be alleviated by taking precautions in advance against the use of evaluation findings for personal review purposes.

The issue of inclusion is also a potential problem area for carrying out evaluation in activist settings.  Activists might want to exclude certain relevant stakeholders out of lack of trust or because of internal power issues.  The adversarial stance taken by many activist groups might seem to preclude inclusiveness in evaluation efforts.  While this is understandable, as many stakeholders as reasonably possible should be included in evaluation efforts.  Similarly, in-group power wielding is often informal in nature, which can be more slippery and political than formal power.  This informal nature of power in activist groups might make inclusion even within activist ranks difficult as well.  A lack of inclusiveness in the evaluation effort could result in a weak effort and relatively unhelpful results, including biased findings and missed information and issues.  If tight control is maintained over the range of participants in an evaluation, the results are likely to be skewed.

Another issue is the fact that many activist groups are staffed only or mostly by volunteers.  While many such activists are quite skilled (some are professionals in other contexts), they frequently work within significant time constraints and are often not trained specifically in evaluation.  As volunteers, they may also be threatened by the thought of an evaluation, or it might simply not fit their motivation for joining the group, and they might consequently be inclined to exit the group. Usually, one thinks of performing an evaluation in a setting with at least a core paid staff, where exit might be somewhat less likely.  On the other hand, one possible advantage in the activist setting is that many activists are highly committed to the cause and as such might be able to see evaluation as a way to improve the group’s effectiveness in reaching its activist goals.  Some work would be needed for the volunteer activists to see the benefit of the evaluation effort and want to participate in it.

While not unique to activist settings, the lack of financial resources, time, and skills to carry out an evaluation might also be a barrier to implementing an evaluation.  In most cases, activist groups will not be able to afford to hire an external evaluator, although there may be some funding available for this as well.  Therefore, training will probably be needed for the activists to carry out much of their own evaluation.  How much time to put into the evaluation effort will also need to be carefully considered, especially in balance with other activities by the group.

Another issue is the goal of the evaluation.  Evaluations can perform many functions, some of which are more appropriate than others. Some of the potentially relevant functions of evaluation in the activist setting might include:

  • For making program, organizational and other relevant improvements
  • As a decision-making tool
  • For reporting to funders and other stakeholders
  • For external and internal accountability
  • For organizational learning
  • For organizational restructuring and mission and vision clarification

Activists might have a limited understanding of what evaluation is all about and should consider the range of potential uses of evaluation.  This might mean prioritizing the purpose(s) most relevant to the group and also considering how the group can act consistently with the demands it places on government and business, such as accountability and transparency.

In evaluation, there is also generally considered to be some standard or set of criteria for judging the acceptability of the evaluation process and findings.  There are many such criteria, and different approaches to evaluation often come with their own.  These standards should be decided on by the group.  There are many reasons for being concerned about this issue.  For example, if outside “stakeholders” are expected to accept the findings, the results should be credible to them.  Also, power politics internal to the group might be such that different factions and individuals need to be satisfied that the processes and results are fair and balanced.  Four professional standards suggested by Gabor, Unrau, and Grinnell (1998) would be appropriate for activist groups: they propose that a solid evaluation should be characterized by utility, feasibility, fairness, and accuracy (pp. 329-331).

This being said, the Mobilization for Global Justice did carry out evaluation functions in the spring and summer of 2000.  We now turn to what they in fact did in this regard.

The Mobilization for Global Justice[1]

According to my data, the Mobilization for Global Justice used four primary self-conscious methods for program evaluation. 

  1. After workshops, they sometimes held informal evaluations.
  2. After the week was over, working groups met individually and then collectively to evaluate their efforts.
  3. After the week was over, individual activist-writers produced evaluations of the week.
  4. Much of the ongoing research and writing by activists and activist groups might be considered a kind of evaluative policy analysis.

Following individual training workshops and general meetings, facilitator-trainers sometimes ended with a request for some kind of verbal or gestural evaluation from the group.  The proposed agenda for the first general planning meeting included an evaluation of the meeting at the end. However, that was the only case in my data where the agenda included such an item, and end-of-meeting evaluation results were never included in any of the minutes I have copies of (cf. doc. 47).  Similarly, at the end of the first nonviolence training sponsored by the Mobilization for Global Justice in Washington, DC, there was an evaluation, which included “good stuff” and “stuff to work on”.  These were listed in the notes from the training (doc. 319).

Judging by this, it seems that there was only very limited use of evaluation in meetings and trainings, and even when it was included, it seldom appeared in the notes or minutes from the meeting.  However, there was one instance where the notetaker included her own evaluative comments at the end of the minutes on how the meeting went (doc. 314).  Data collection in these efforts was simple and informal, and apparently little effort was made to record the evaluations or consider their use.  Since the evaluation approaches tended to be mostly self-affirming anyway, it would probably have taken a major, glaring issue to result in organizational learning or utilization of the results.

After the week was over, each working group was asked to meet in advance of a general meeting to evaluate its activities.  Each group was to use a simple pre-set discussion guideline to assist in its discussions.  These guiding questions, which could easily be captured in a simple record template (see the sketch after the list), were as follows:

  1. Specific things that worked well
  2. Specific things that needed work/alteration
  3. Future work your group has identified to do/will do (doc. 862)
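
By way of illustration only (this is my own sketch, not a tool the Mobilization used), such a debrief could be captured in a simple record structure so that reports from different working groups are comparable.  The group name and example entries below are hypothetical:

  from dataclasses import dataclass, field
  from typing import List

  @dataclass
  class WorkingGroupDebrief:
      """One working group's post-action debrief, mirroring the three guiding questions."""
      group: str
      worked_well: List[str] = field(default_factory=list)   # 1. specific things that worked well
      needs_work: List[str] = field(default_factory=list)    # 2. things that needed work/alteration
      future_work: List[str] = field(default_factory=list)   # 3. future work the group will do

  # Hypothetical example entry
  debrief = WorkingGroupDebrief(
      group="Media Working Group",
      worked_well=["press briefings ran on schedule"],
      needs_work=["tracking media contacts made by other working groups"],
      future_work=["compile a shared media contact list for future actions"],
  )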

Some groups or individuals provided written reports of their discussions or evaluations (docs. 863, 893 & 941).  These varied from being essay-like to being simple lists of evaluative comments made at the working group meeting.  At least two of these reports in my data contain suggestions for similar future efforts, based on what was learned from a16. 

Since not all groups were able to meet before the general meeting, time was provided at the beginning of the general meeting for working groups to meet and evaluate their efforts, after which they shared their “findings” with the larger group.  This was followed by a discussion at the meeting evaluating the Mobilization for Global Justice as a whole.  These results were posted in the meeting minutes on the listserv (doc. 963), but they were much briefer than the individual working group reports posted to the list.

One thing about these working group and post-a16 evaluation efforts that might be worth noting is that while the word “evaluation” is used, “debrief” is used much more often in connection with these efforts.  The conference call for those who had participated in lobbying during the week was specifically referred to as a debrief meeting (docs. 816, 857), but the term was also used frequently in reference to the working group evaluation efforts.

In evaluation circles, terminology (the distinctions among evaluation, auditing, and research, for example) is discussed fairly frequently.  It might be helpful to also consider the implications of using “debrief” instead of “evaluation.”  Perhaps “debrief” is actually the more accurate term for what the activists did in this case, but understanding the difference between, and the implications of, these terms and approaches would be helpful.

As to the implications of these post-a16 evaluation efforts, it is unclear how much impact there was on the Mobilization for Global Justice, which still meets to this day.  For example, two issues of great concern were diversity in the group and outreach to the local Washington, D.C. community.  By the last meeting of the Mobilization for Global Justice that I attended, in the fall of 2002, some two years later, little progress had been made on these issues.  Since then, however, there has been more effort put into reaching out to the local community.

Accountability does not seem to have been an impetus for these evaluative/debriefing efforts; it is not mentioned in my data, nor do I remember it from any of the meetings I attended.

Immediately following a16, several articles evaluating the event were published in activist and alternative media.  Since the events of a16 were more than just local happenings, this form of evaluation tended to reach national and international audiences, with a tone reflecting concern for the larger, so-called anti-globalization movement, for learning from what happened in Washington, DC, and for improving on it next time.  These evaluation articles appeared in activist media such as ZMag (docs. 725, 726, 763), IndyMedia (doc. 802), Corporate Watch (doc. 809), and the weekly column “Focus on the Corporation” (doc. 821).  These were a sort of expert evaluation, written by leaders in the movement.  Since events after a16 occurred in rather diverse settings, from Prague (the next IMF/World Bank meetings) to Philadelphia and Los Angeles (the Republican and Democratic conventions, respectively), and involved somewhat different issues and thus different coalitions, it is unclear how much these evaluation articles influenced later activist efforts.  It is also unclear how effective a tool this currently is for organizational/movement learning, whether at the local, national, or international level.

Another form of evaluation, used quite extensively and in a more developed form than the previously mentioned efforts, is activist use of policy analysis.  The material of this nature used by the activists of the Mobilization for Global Justice was produced by people in think tank and academic settings.  Many of the Foreign Policy in Focus papers, for example, would fit this genre (docs. 151, 152, 153, 154).  From the academic side, the activists included a couple of papers by Michel Chossudovsky, an economics professor at the University of Ottawa, on Brazil and on a Marshall Plan for speculators and investors (docs. 164, 166).  Most of the papers used by the activists would not be considered evaluative or policy analysis; most focused on educational functions or on mobilizing activists.  Nonetheless, this policy analysis function is probably the most developed form of evaluation used by the Mobilization for Global Justice, although the focus here is not on evaluating the movement’s own efforts but on evaluating the movement’s opponents or targets.

Looking back over this overview, it would appear that much of the evaluation effort of the Mobilization for Global Justice was summative in nature, happening at the end of a program or event.  It was also carried out entirely internally, rather than with the aid of an external evaluator.  These efforts would also be what we might call “informal evaluation”: while they were intentionally evaluative, they mostly were not well thought out in advance.  The one exception is the policy analysis element, which might be described as activists carrying out, in a rather professional manner, an external evaluation of governmental or para-governmental agencies, such as the World Bank.

Since the Mobilization for Global Justice brought together so many leading activists from different organizations, movements, and cities around the United States (leaders came from Minnesota, California, and Washington state, for example), as well as from abroad, it may well be that the evaluation practices seen in my data are fairly representative of activist circles, at least in the United States, if not in other countries.

Relevant Evaluation Approaches

So where do we go from here?  While each activist group must decide for itself how to approach evaluation in its own situation, there are a few models and approaches in the evaluation literature that seem like reasonable fits, or that could at least inform decisions on how to approach evaluation in activist settings.

1. Feminist Evaluation

According to Hood and Cassaro (2002, p. 28), “Feminism as a paradigm for social inquiry falls under the genre of critical theory.  It utilizes poststructuralist notions that challenge assumptions of universal concepts and essential categories and acknowledges that “reality” is socially constructed.” While the authors of this special volume of New Directions for Evaluation on feminist evaluation underscore the fact that feminist evaluators use a variety of methods, there are several underlying concepts that unite them.  Sielbeck-Bowen et al. (2002) identify six such unifying principles or ideas:

  1. Feminist evaluation has as a central focus the gender inequities that lead to social injustice.
  2. Discrimination or inequality based on gender is systemic and structural.
  3. Evaluation is a political activity…
  4. Knowledge is a powerful resource that serves an explicit or implicit purpose…
  5. Knowledge and values are culturally, socially, and temporally contingent…
  6. There are multiple ways of knowing; some ways are privileged over others. (pp. 3-4)

Feminist evaluation, with its roots in postmodernism, critical theory and respect for diversity, seems like it might be a reasonable fit for many activist situations.

2. Empowerment Evaluation

Empowerment evaluation focuses on building the capacity of program participants so that they can carry out their own program evaluation. According to David Fetterman, the originator of this approach (1997, p. 382), “Empowerment evaluation has an unambiguous value orientation – it is designed to help people help themselves and improve their programs using a form of self-evaluation and reflection.”  It is also a group, rather than individual, pursuit.  Like feminist evaluation above, this approach recognizes multiple worldviews while at the same time being committed to truth and honesty, which leads to the use of checks and balances to ensure that these principles are adhered to in the evaluation.  Empowerment evaluation has its roots in community psychology and action anthropology (Fetterman, 2000).

There are four steps to empowerment evaluation (Fetterman, 2000, p. 396):

  1. “taking stock or determining where you stand as a program, including strengths and weaknesses”
  2. “focusing on establishing goals, determining where you want to go in the future with an explicit emphasis on program improvement”
  3. “developing strategies and helping participants determine their own strategies to accomplish program goals and objectives”
  4. “helping program participants determine the type of evidence required to document progress credibly toward their goals.”

While empowerment evaluation does allow for a consultative role for the external evaluator, the focus on self-evaluation and the development of a “dynamic community of transformative learning” (Fetterman, 1997, p. 385) might be well received in activist circles.

3. Coevaluation

Sandra Gray and associates developed coevaluation especially for use in the nonprofit world.  In this perspective, evaluation is not a one-time event but an ongoing process.  According to Gray, “Coevaluation is the means by which an organization continuously learns how to be more effective.  It provides a means of organizational learning, a way for the organization to assess its progress and change in ways that lead to greater achievement of its mission in the context of its vision” (1998, p. 4).

Coevaluation consists of three steps:

  1. “asking good questions”
  2. “gathering and reviewing information”
  3. “sharing the information to foster good decision making”

In this view, coevaluation “is the responsibility of everyone in the organization”, “addresses the total system of the organization”, and “invites collaborative relationships” (Gray, 1998, p. 5). “Total system” and “relationships” include relevant external-to-the-organization elements, people and groups as well as internal ones.  Coevaluation is a process that is incorporated into the daily routine of the organization.

Coevaluation, being continuous rather than one-time, would probably be well accepted in activist circles.  However, there may be resistance to including external groups and certain systems aspects in the evaluation.

4. Evaluation of Popular Education

Jane Vella et al.’s approach to evaluation of popular education programs is specific to educational programs, but it could easily be adapted to other programs, especially ones with philosophies compatible with popular education.  Many activist groups use popular education methods in at least some of their educational efforts.

While the authors do not explicitly say so, their approach seems to build on the logic model.  It is based on the beliefs that effective evaluation…

  1. “must be objective”
  2. “should identify the important elements of an educational program”
  3. “should match the organizational philosophy”
  4. “measures should be identifiable and accessible”
  5. “should focus on both the outcomes and the process” (Vella et al., 1998, p. 12).

In this view, evaluation begins with the initial program planning stage, and not just in the middle of a program or at the end of it.  However, while evaluation in this perspective is all-pervasive, it appears to be mostly focused on the program objectives. 

Similar to the logic model, the chart used for this evaluation approach consists of six columns, which refer specifically to the adult education process and are as follows (a sketch of how such a chart might be represented as a simple data structure appears after the list):

  1. “Skills, Knowledge, Attitudes (SKAs), Content, and Achievement-Based Objectives”
  2. “Educational Process Elements: Learning Tasks and Materials”
  3. “Anticipated Changes *Learning * Transfer * Impact”
  4. “Evidence of Change * Content * Process * Qualitative * Quantitative”
  5. “Documentation of Evidence”
  6. “Analysis of Evidence” (Vella et al., 1998, p. 60)
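
As a minimal sketch of my own (not from Vella et al.), one row of such a chart might be represented digitally like this, with each field corresponding to one of the six columns; the workshop example is hypothetical:

  from dataclasses import dataclass, field
  from typing import List

  @dataclass
  class EvaluationChartRow:
      """One row of a Vella-style evaluation planning chart (illustrative only)."""
      objectives: str                  # column 1: SKAs, content, achievement-based objectives
      process_elements: List[str]      # column 2: learning tasks and materials
      anticipated_changes: List[str]   # column 3: learning, transfer, impact
      evidence_of_change: List[str]    # column 4: content/process, qualitative/quantitative
      documentation: List[str] = field(default_factory=list)  # column 5: documentation of evidence
      analysis: str = ""               # column 6: analysis of evidence

  # Hypothetical row for a nonviolence training workshop
  row = EvaluationChartRow(
      objectives="Participants can name and practice basic de-escalation tactics",
      process_elements=["role-play scenarios", "group discussion"],
      anticipated_changes=["learning: tactics practiced in session",
                           "transfer: tactics used at the action itself"],
      evidence_of_change=["facilitator observations", "end-of-training comments"],
  )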

These columns would probably have to be adapted to fit the non-educative elements of activist efforts.  When it comes down to it, however, much of what activists do is educative, even if not always popular education in form.

Each of these approaches to evaluation potentially has something to offer in the activist context.  But no matter which is chosen, or another approach, or some combination, evaluation should be considered seriously as a means of program and organizational improvement and as a tool for accountability.  Since activists generally have great zeal for the issues they get involved in, strengthening the effectiveness of their efforts seems like a worthwhile endeavor.  Likewise, as activists so often demand accountability and transparency of business and government, it would be consistent with their values to use evaluation as a tool for accountability for their own activities as well.

Evaluation Activities

At this point, it might be helpful to briefly discuss specific evaluation activities in the activist context.  Reference will be made to the Mobilization for Global Justice case by way of example.

1. Planning and Management

The Mobilization for Global Justice was just starting up during the time I collected the bulk of my data on it.  As such, it was going through early stages of organizational development.  While key leadership in the organization consisted mainly of professional activists from various interested organizations (both local and otherwise), the fact that the group did not have a ready-made infrastructure affected its planning and management.  The fact that they took much of their format and procedures from the Seattle anti-WTO protest efforts, however, expedited organizational development considerably.  Nonetheless, another drawback of following on Seattle’s heels was that the activists planning a16 had a tough act to follow: Seattle was considered a smashing success.  Largely because of this, there was some fear of deviating from the program or organizational format established in Seattle.  Tactical planning is an example of this hesitancy: some people argued that an earlier event could never simply be replicated, but in the end the same strategies were used as in Seattle.

In this case, evaluation would have had to be sensitive to the organization’s stage of development.  Perhaps someone in the temporary office that was eventually set up, or a separate working group, could have overseen evaluation efforts.  As mentioned earlier, however, finding people with sufficient expertise and the time to take this on would have been a challenge.  Also, if this were something not normally taken up in activist circles, considerable people and communication skills might have been needed to gain acceptance, especially from the leadership.

2. Record Keeping

Once the Mobilization for Global Justice office was set up, there was an effort to centralize record keeping.  A request was made, for example, for all working groups to submit relevant documents and information to the new office.  Similarly, certain working groups asked for information from other working groups, such as the Media Working Group asking for information regarding media contacts made by other working groups.  Thus, it seems that there was already an understanding that record keeping, and some level of centralization of it, was important, and this understanding was acted on.  This function of evaluation would probably have met with little resistance, unless perhaps it tested the capacities of the time and people available.

3. Data Collection

Judging by the data I have available from the early functioning of the Mobilization for Global Justice, data collection in their case would probably have been spotty, inconsistent, and irregular without careful oversight and/or training of key individuals.  For example, meeting minutes, which involved collecting data by way of notetaking at planning meetings, were not always prepared; when they were, they were inconsistent in quality and format, and their availability also varied. This is not to say that uniformity is always best in activist settings, but for the purposes of evaluation, data collection would need to be consistent enough to be useful in reaching evaluative conclusions.
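
One possible remedy, sketched here as my own illustration rather than anything the group did, would be a lightweight shared minutes template; with even a few standard fields, minutes from different meetings become comparable enough to support later evaluation.  The group name, date, and entries below are hypothetical (the “good stuff”/“stuff to work on” categories echo the nonviolence training notes mentioned earlier):

  from dataclasses import dataclass, field
  from datetime import date
  from typing import List

  @dataclass
  class MeetingMinutes:
      """Minimal standardized minutes record (hypothetical template)."""
      group: str
      meeting_date: date
      decisions: List[str] = field(default_factory=list)
      action_items: List[str] = field(default_factory=list)
      evaluation_notes: List[str] = field(default_factory=list)  # e.g., "good stuff" / "stuff to work on"

  # Hypothetical example
  minutes = MeetingMinutes(
      group="Outreach Working Group",
      meeting_date=date(2000, 3, 12),
      decisions=["hold the next teach-in at a local community venue"],
      evaluation_notes=["good stuff: agenda kept us on time",
                        "stuff to work on: orienting new attendees"],
  )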

4. Reporting

Activist settings, it seems to me, allow for a great variety of reporting methods and styles.  For example, panel discussions, role plays and drama, drawings and other forms of visual art, poetry and song, web sites and blogs, and museum-like displays, as well as the traditional written report, are all distinct possibilities that might be effective.  Reporting has the potential to be a very exciting part of the evaluation process in activist settings.

Summary

The possibilities for program evaluation administration and implementation are seemingly as diverse as activist groups themselves. Hopefully, with a little foresight and the desire to carry out evaluative functions for appropriate ends, evaluation can serve as a useful tool to help activist groups better reach their goals and to assist them in being and acting consistent with their values and interests.

Bibliography

Fetterman, D. M. (1997). Empowerment evaluation and accreditation in higher education.  In E. Chelimsky & W. R. Shadish (Eds.), Evaluation for the 21st Century: A Handbook, pp. 381-395.  Thousand Oaks, CA: Sage.

Fetterman, D. M. (2000).  Steps of empowerment evaluation: from California to Cape Town.  In D. L. Stufflebeam, G. E. Madaus, & T. Kellaghan (Eds.), Evaluation Models: Viewpoints on Educational and Human Services Evaluation, pp. 395-408.  Boston: Kluwer Academic Publishers.

Gabor, P. A., Unrau, Y. A., & Grinnell, R. M., Jr. (1998).  Evaluation for Social Workers: A Quality Improvement Approach for the Social Services, rev. ed.  Boston: Allyn & Bacon.

Gray, S. T. (1998).  Evaluation with Power.  San Francisco: Jossey-Bass.

Hood, D. W., & Cassaro, D. A. (2002).  Feminist evaluation and the inclusion of difference.  In D. Seigart & S. Brisolara (Eds.), Feminist Evaluation: Explorations and Experiences, pp. 27-40.  San Francisco: Jossey-Bass.

Seigart, D., & Brisolara, S. (Eds.) (2002). Feminist Evaluation: Explorations and Experiences (New Directions for Evaluation, No. 96).  San Francisco: Jossey-Bass.

Sielbeck-Bowen, K. A., Brisolara, S., Seigart, D., Tischler, C., & Whitmore, E. (2002).  Exploring feminist evaluation: the ground from which we rise.  In D. Seigart & S. Brisolara (Eds.), Feminist Evaluation: Explorations and Experiences, pp. 3-8.  San Francisco: Jossey-Bass.

Vella, J., Berardinelli, P., & Burrow, J. (1998).  How Do They Know They Know? Evaluating Adult Learning.  San Francisco: Jossey-Bass.



[1] Documents referred to in this section are from my research project and are available upon request.