COMM-ORG Papers

Volume 14, 2008

http://comm-org.wisc.edu/papers.htm
   

Advocacy Evaluation: Review and Opportunities

Justin Whelan

 

justin_whelan@hotmail.com

 


Contents

Introduction
Advocacy Evaluation: a Review of the Literature
     Evaluation Frameworks
     Theory of Change
Opportunities for Further Research
Conclusion
References
Appendix
Notes
About the Author
Acknowledgments


Introduction

Despite the generally well-recognised importance of regular evaluation across all spheres of organisational life, interest groups and activists have tended to give short shrift to evaluating their advocacy efforts. There are a number of reasons for this, most notably a sense that there is a lack of available time. There are also a number of constraints on meaningful evaluation of advocacy due to the highly ambiguous and often irrational nature of the policymaking process. Nevertheless, in recent times a small but growing number of organisations, especially philanthropic bodies that fund advocacy work, have begun to explore frameworks for evaluating advocacy that take account of the hurdles but are not defeated by them.

In this paper I review the emerging literature in this field, noting the points of convergence and divergence. I then suggest some limitations of the frameworks and opportunities for effective evaluation that meets the needs of interest groups. In particular, evaluators would benefit from taking greater account of policy change theory and paying more attention to (even tentative) strategic evaluations of campaigns as a way of adding value.

Advocacy Evaluation: A Review of the Literature

Literature on advocacy evaluation [1] is so new that a review paper published in late 2005 could refer to the “fact [that] there is not yet a real ‘field’ or ‘community of practice’ in evaluation of policy advocacy” (Guthrie, Louie, David, & Foster, 2005, p. 11). However, the field is moving quickly, and the Advocacy Evaluation Project’s online bibliography now contains over 100 articles, tools and reports of relevance to the field (Innovation Network, 2007).

Advocacy evaluation as a field of inquiry arose out of an awareness that social science research into public policy and human services delivery was not being extended into evaluating advocacy campaigns. It was noted early on that one reason for this is that the dependent/independent variable model of much social science research is unable to probe the complexity of the policymaking process and the role of interest groups within it (Reisman, Gienapp, & Stachowiak, 2007a, p. 8; Guthrie, Louie, David, & Foster, 2005, p. 7).

Recent literature has highlighted and discussed six key methodological challenges facing evaluation efforts (Reisman, Gienapp, & Stachowiak, 2007a, p. 7; Guthrie, Louie, David, & Foster, 2005, pp. 7-10):

  1. The complexity of public policymaking
  2. The role of external forces and conditions
  3. Problems of attribution
  4. The long time frame needed for changes to occur
  5. Shifting strategies and milestones
  6. Low capacity and interest in evaluation from advocacy organisations

Furthermore, there are differing perspectives on the purposes and required extent of evaluations. One report noted that many advocacy organisations believe their work cannot be measured, and that any attempt to do so diminishes the power of their efforts. In addition, some advocates are concerned about committing scarce resources to a ‘side issue’. On the other hand, philanthropic bodies that fund advocacy efforts are increasingly interested in documented evidence of results and a sense of accountability. Finally, expectations about what constitutes a ‘reasonable’ level of scientific accuracy differ, ranging from those who accept only ‘gold standard’ research to those who consider even brief and anecdotal findings worthwhile (Reisman, Gienapp, & Stachowiak, 2007a, p. 7).

In regard to these tensions, it is interesting to note that the emerging literature is largely driven by U.S.-based philanthropic organisations, not advocacy NGOs themselves. There is clearly a need for greater involvement in this debate by advocacy organisations to ensure their needs are met by any evaluation processes.

Despite these challenges, both funders and NGOs have found there are many good reasons to evaluate their efforts. Both are keen to know if their investment of time and money is making any difference, for example. Evaluation research techniques can potentially offer reliable tools and processes to answer this question, and provide the basis for re-thinking strategies and tactics if things are not going as well as hoped (Reisman, Gienapp, & Stachowiak, 2007a, p. 9). And despite the differences in perspective noted above, it is notable that the funder-generated literature maintains that organisational learning is the key purpose of evaluation – that is, that evaluation should build capacity among advocates, not distract them from their task (Guthrie, Louie, David, & Foster, 2005, p. 12). As one evaluator put it, “the key is to remember that the primary user is the advocate and that the foundation is second” (Patrizi, 2006). There is even an awareness that too much focus on achieving short-term outcomes to impress in evaluations attached to brief funding cycles may undermine the capacity to achieve more meaningful, longer-term change (Coates & David, 2002, p. 535).

Although as yet there is no consensus about the ‘right’ approach to advocacy evaluation, a study by the California Endowment found a set of common principles among those working in the field to deal with the challenges identified above (Guthrie, Louie, David, & Foster, 2005, p. 12):

  1. Expand the perception of policy work (and thus outcome categories) beyond legislative arenas and remember that advocacy involves both ‘offence’ and ‘defence’
  2. Build an evaluation framework around a theory about how a group’s activities are expected to lead to its long-term outcomes
  3. Focus on the steps that lay the groundwork and contribute to the policy change being sought
  4. Include outcomes that involve building capacity to become more effective advocates
  5. Focus on contribution, not attribution
  6. Emphasize organisational learning as the overarching goal of evaluation

A more recent study by Organizational Research Services has added and articulated an important seventh principle: select a practical and strategic approach to measurement that is relevant to the context (Reisman, Gienapp, & Stachowiak, 2007a, pp. 23-27). This is important because different campaigns require different levels of rigour in data collection, different time scales and different questions to be answered. For example, the type and structure of the campaign can influence the best method of gleaning useful information – a loose coalition may have more disagreement over strategy and thus benefit from individual interviews, while a focus group debrief may suffice for a unified organisation (Stuart, 2007).

The California Endowment paper also proposed a ‘prospective evaluation approach’ in which the intention to evaluate is built into the design of a campaign from the outset, allowing benchmarks and indicators to be agreed on in advance, and data collection to happen in situ rather than after the event (Guthrie, Louie, David, & Foster, 2005).

In their follow-up paper, the same authors discuss the value of multiple interim reports that provide critical information to help shape unfolding campaign strategies, rather than waiting to ‘tell the story’ after a campaign ends (when the only value is to future campaigns). They suggest brief, informal ‘check-ins’ at times when campaigners identify that data will be most useful to them, allowing the final reports to focus on reflection rather than summation (Guthrie, Louie, & Foster, 2006, p. 23). This requires significant flexibility on the part of evaluators, as interim reports will not necessarily follow a neat schedule (e.g. every six or twelve months), but rather occur after significant events, actions or milestones (Coffman, 2007a, pp. 2-3).

Some recent articles have also begun to focus attention on the need to consider the policy context (e.g. who is in government, how powerful the opponents of change are) and theories of policy change when conducting evaluations. To some extent these may be implicit in the ‘theory of change’ or ‘critical path’ models discussed in detail below, but a good evaluation will make explicit reference to them as part of the explanatory process of making sense of the outcomes. This is an under-theorised element of advocacy evaluation [2] to which I will return later. One promising article, however, has discussed the way one organisation links its strategising and evaluation to John Kingdon’s influential model of ‘policy windows’ as an explanation of the agenda-setting stage of policymaking (Coffman, 2007b). This has led to an approach that asks how the campaign is contributing to the advancement of the three streams Kingdon discusses (problems, proposals, politics) in order to create a ‘policy window’, while also providing scope for the evaluation to discuss external factors that either support or undermine the campaign’s efforts.

Lastly, an undercurrent in the discussion is the importance of ensuring that, wherever possible, the intended beneficiaries of advocacy work are involved in evaluations. This thread runs strongest in the international development literature, such as Mayoux (2003), Coates and David (2002), and Chapman and Wameyo (2001). Beneficiary involvement is seen as an important accountability mechanism to guard against the temptation of professional lobbyists to see a ‘successful advocacy effort’ when the intended beneficiaries see nothing changing; it becomes more important (but more difficult) the greater the distance between advocate and intended beneficiary. It also functions as a form of empowerment for the latter by taking their views seriously.

Evaluation Frameworks

With these principles in mind, a number of frameworks have been developed to assist advocacy organisations with evaluating their work. These vary according to the type of work being done by the developers of the frameworks, and thus cover slightly different terrain regarding what dimensions of change to consider. In their 2006 review, The California Endowment (Guthrie, Louie, & Foster, 2006) summarised eight frameworks in a helpful table; two of these are described below, along with two more recent alternatives (a summary table also appears as an appendix).

One of the early popular frameworks was developed by the Institute for Development Research (IDR) in the late 1990s for work in international aid and development. The IDR Framework asks evaluators to consider change outcomes in up to five dimensions (depending on the work done): policy change; private sector change; increased organisational capacity and stronger alliances; increased democratic space; and impact on the target group. This framework has the advantage of clearly identifying a range of outcomes (any of which could be negative rather than positive) and provides a simple table for summarising findings, although some people have found it pays insufficient attention to stages of the policymaking process (Chapman & Wameyo, 2001, pp. 12-14).

As their work is more geared to changing social norms and behaviour, the Women’s Funding Network (WFN) measures a different set of dimensions of change: a shift in definitions; a shift in behaviour; a shift in engagement; a shift in policy; and maintaining past gains. This model is built on an explicit ‘theory of change’ accepted by the WFN and has been converted into a very simple online reporting tool used by all their grantees. On the downside, capacity building is neglected in this framework (Women's Funding Network, 2005; Guthrie, Louie, & Foster, 2006, p. 29).

More recently, Organizational Research Services has attempted to combine the common elements of the diverse frameworks into a six-part ‘menu of outcomes for advocacy and policy work’ that includes: shifts in social norms; strengthened organisational capacity; strengthened alliances; strengthened base of support; improved policies; and changes in impact. The ‘menu’ comes with examples of outcomes, strategies and units of analysis for each outcome category and is supplemented by a handbook on data collection techniques (Reisman, Gienapp, & Stachowiak, 2007a; Reisman, Gienapp, & Stachowiak, 2007b).

Most recently, the Harvard Family Research Project has produced a ‘composite logic model’ that draws the insights of the previous literature together into a complete framework covering both strategic planning and evaluation. The composite logic model represents a full range of inputs, activities, outcomes and impacts that may be connected to an advocacy and policy change strategy, from which the strategist or evaluator selects those most relevant to their work. The model takes a bird’s-eye view, with each box able to be defined further and more detail added as needed. It also covers contextual factors (such as changes in the political, economic and social climate) that are especially pertinent for advocacy organisations (Coffman, 2007c). The logic model has already been used to evaluate an Australian advocacy campaign with great success (The Change Agency, 2007a).

Theory of Change

Due to its prominence in more recent literature, it seems worthwhile to briefly discuss the second principle noted above, namely that any evaluation be based on a theory of how policy or social change occurs. In the evaluation literature, this is referred to variously as a ‘logic model’, ‘theory of change’ or ‘pathways of change’. Those more familiar with advocacy/activist literature might know it better as a ‘critical path analysis’ (The Change Agency, 2007). Regardless of the name, the idea is that organisations should, from the outset, articulate in simplified terms how they perceive change being won, and what steps are required along the way. This is often done in diagrammatic fashion, somewhat like a flow chart, showing strategies and intermediate outcomes leading to final ‘victory’ in the form of the campaign objective being realised (Reisman, Gienapp, & Stachowiak, 2007a, pp. 11-16; Guthrie, Louie, David, & Foster, 2005, pp. 16-25; Organizational Research Services, 2004).

Theory of change diagrams are extremely useful as strategic planning aids for advocacy organisations and are increasingly used in Australia thanks to the renewed interest in strategic activism. They are also helpful for evaluators on two levels: they provide a list of intermediate outcomes to be measured for success (showing how far along the pathways the organisation has managed to travel), and they make visible many of the strategies being adopted and the assumptions behind them. For example, some organisations have been known to factor in a change to a more favourable government as an intermediate objective, while others in the same field have not – revealing divergent assumptions about the necessary conditions for success and contrasting placements of energy at the relevant times.

The theory of change diagram also assists an evaluator in determining which outcomes to investigate. Thus, while it is ideal for advocacy organisations to map out their theory of change at the outset, even where this has not occurred it can be helpful for an evaluator to create one based on the information available. At the very least it provides a simple analysis of the available information that can be cross-checked with the organisation for accuracy before the task of measuring outcomes begins.

Opportunities for Further Research

The process of engaging in advocacy evaluations using the suggested principles and frameworks has demonstrated the value of the exercise, even in the absence of ‘gold-standard’ data collection techniques (Reisman, Gienapp, & Stachowiak, 2007a, p. 10). However, there are significant opportunities for linking with other relevant research in order to improve the reliability of findings.

Bringing in theory to analyse the reasons for success and/or failure represents an important way to add value for the advocacy organisation being studied. This remains significantly under-theorised in the literature on evaluation of advocacy.

One possibility is to link campaigns to the extensive theoretical literature about the policymaking process in democratic states. The literature review uncovered a brief discussion of one organisation’s use of John Kingdon’s ‘policy windows’ framework (Kingdon, 1984) for understanding the agenda-setting stage of the policy process. The field of policy studies is full of literature that could enlarge the perspective of advocacy organisations seeking to make sense of what happened.

For example, some organisations have found themselves granted a ‘seat at the table’ in exchange for certain trade-offs that may undermine their own members’ interests. In the literature this is referred to as the process of ‘neo-corporatism’ (Downes, 1996). Organisations looking for wider reference points about how this process works in general, and its pitfalls and advantages, would benefit from this theoretical base. The ‘advocacy coalition framework’ developed by Sabatier and Jenkins-Smith (summarised in Sabatier, 1991) could also help advocacy organisations make sense of the relevance of one organisation to the broader push for policy change, and of the interaction of interest group advocacy and external factors in driving policy change.

This literature can also provide common language and reference points for evaluators, aiding cross-fertilisation and meta-evaluations of common campaigns. For example, the ‘policy cycle’ model (Bridgman & Davis, 2004) could be used in detailing the status of a proposed policy change (or the location of its breakdown). Given that the ‘policy cycle’ model is often used to describe how the public service works, it seems a particularly helpful frame in which to situate evaluations of advocacy campaigns (although see Colebatch (2006) for a detailed criticism of the model). Further, when describing the proposed (or successfully implemented) changes, reference to an agreed suite of ‘policy instruments’ (Howlett & Ramesh, 1995, pp. 80-101) may give insight into the strength of the implementation approach being proposed or adopted.

Alternatively, for social movement organisations, an understanding of some of the insights of ‘political process theory’ may illuminate strategic reflection and the reasons behind certain outcome successes and failures. In particular, research into expanding (or the threat of rapidly contracting) ‘political opportunity structures’ could help explain rapid success in mobilising grassroots activism; the theory on mobilising structures could inform reflection on the strengths and weaknesses of certain organisational forms (including coalition structures); and the concept of ‘frame alignment’ could provide theoretical grounding for analysing the success or failure of certain ‘collective action frames’ in driving movement mobilisation (McAdam, McCarthy, & Zald, 1996). In short, the extensive study of social movements means that evaluations of social movement organisations need not happen in a theoretical vacuum (although see Goodwin & Jasper (1999) and Morris (2000) for critiques of the overly structural bias and neglect of strategic analysis in much political process theory).

Another possibility is to link campaigns to ‘activist theory’ (that is, strategic theory developed by activist researchers to explain how to make campaigns more successful) and to public interest communications research, in order to explain where campaigns have made wise or poor strategic choices. By bringing in this theory, an evaluator would be able to provide some analysis of the internal and external factors that help explain relative rates of success and failure.

Examples of such theory include the strategic principles of Bill Moyer’s influential Movement Action Plan (Moyer, 2001), Saul Alinsky’s seminal Rules for Radicals (Alinsky, 1969), and Chris Rose’s How to Win Campaigns (Rose, 2005). All three texts have been used repeatedly by social movement actors, having been found to provide essential strategic insights into the reality of the political process and how it can be manipulated.

Organisations may find this information very helpful as a way of connecting their intuitions to wider strategic theory about activism and advocacy. Making these connections would take the evaluation beyond a simple catalogue of outcome measures (useful as that is, when covering a wide range of outcome categories) into a strategic review that can drive future planning. Since the purpose of evaluations is, as The California Endowment says, to promote organisational learning and improve the effectiveness of advocacy (Guthrie, Louie, David, & Foster, 2005, pp. 7,12), making such connections seems highly beneficial.

Conclusion

Advocacy evaluation is a rapidly emerging field of inquiry. It is being driven by a desire to ensure accountability to funding bodies, but also to promote organisational learning and to share stories of success and the lessons to be learned from them. In this paper I have reviewed the literature in the field and found common agreement that a number of methodological challenges face any attempt at rigorous social-scientific evaluation of advocacy for policy change. However, there is also agreement on the benefits of such work, even if results are necessarily tentative.

Some common principles have emerged that emphasise broadening outcome categories beyond simple legislative/policy success or failure to include such things as shifts in social norms, strengthened organisational capacity and alliances, and impact on the target group; linking outcomes to a ‘theory of change’ explaining the logic behind certain strategic choices; and focussing on the achievement of intermediate steps required for long-term social and policy change. A number of evaluation frameworks have also been identified and (briefly) explained.

Efforts to link the perceptions and knowledge of participants to wider strategic and policy process theory have the potential to add significant value for the advocacy organisation. This is an area worth further consideration by those with an interest in developing the field of evaluation of advocacy and policy change.

 

References

Alinsky, S. (1969). Rules for Radicals. New York: Vintage Books.

Bridgman, P., & Davis, G. (2004). The Australian policy handbook (3rd ed.). Sydney: Allen & Unwin.

Chapman, J., & Wameyo, A. (2001). Monitoring and Evaluating Advocacy: A Scoping Study. London: ActionAid.

Coates, B., & David, R. (2002). Learning for change: the art of assessing the impact of advocacy work. Development in Practice, 12(3-4), 530-541.

Coffman, J. (2002). Public communication campaign evaluation: An environmental scan of challenges, criticisms, practice, and opportunities. Cambridge, MA: Harvard Family Research Project.

Coffman, J. (2007a). What's Different About Evaluating Advocacy and Policy Change? Evaluation Exchange, XIII(1), 2-4.

Coffman, J. (2007b). Evaluation Based on Theories of the Policy Process. Evaluation Exchange, XIII(1), 6-7.

Coffman, J. (2007c). Using the Advocacy and Policy Change Composite Logic Model to Guide Evaluation Decisions. Harvard Family Research Project. Retrieved January 28, 2009, from http://www.innonet.org/index.php?section_id=101&content_id=633

Colebatch, H. (Ed.). (2006). Beyond the Policy Cycle: The Policy Process in Australia. Sydney: Allen & Unwin.

Goodwin, J., & Jasper, J. (1999). Caught in a Winding, Snarling Vine: The Structural Bias of Political Process Theory. Sociological Forum, 14(1), 27-54.

Guthrie, K., Louie, J., & Foster, C. C. (2006). The Challenge of Assessing Policy and Advocacy Activities: Part II - Moving from Theory to Practice. Los Angeles: The California Endowment.

Guthrie, K., Louie, J., David, T., & Foster, C. C. (2005). The Challenge of Assessing Policy and Advocacy Activities: Strategies for a Prospective Evaluation Approach. Los Angeles: The California Endowment.

Howlett, M., & Ramesh, M. (1995). Studying Public Policy. Oxford: Oxford University Press.

Innovation Network. (2007). Advocacy Evaluation Project. Retrieved June 5, 2007, from http://www.innonet.org/advocacy

Kingdon, J. (1984). Wrapping things up. In Agendas, Alternatives and Public Policies (pp. 205-218). Boston: Little, Brown.

Mayoux, L. (2003). Advocacy for Poverty Eradication and Empowerment: Ways Forward for Advocacy Impact Assessment. Retrieved June 5, 2007, from http://www.enterprise-impact.org.uk/word-files/Advocacy.doc

McAdam, D., McCarthy, J., & Zald, M. (1996). Comparative Perspectives on Social Movements: Political Opportunities, Mobilizing Structures, and Cultural Framings. Cambridge: Cambridge University Press.

Morris, A. (2000). Reflections on Social Movement Theory: Criticisms and Proposals. Contemporary Sociology, 29(3), 445-454.

Moyer, B. (2001). Doing Democracy: The MAP Model for Organizing Social Movements. Gabriola Island, BC: New Society Publishers.

Organizational Research Services. (2004). Theory of Change: A Practical Tool for Action, Results and Learning. Seattle: Annie E. Casey Foundation.

Patrizi, P. (2006). Using Information for Policy Change: The Only Reason to Do Evaluation (In This Context). Retrieved June 5, 2007, from http://www.innonet.org/client_docs/File/advocacy/patrizi_using_information.doc

Reisman, J., Gienapp, A., & Stachowiak, S. (2007a). A Guide to Measuring Advocacy and Policy. Baltimore: Organizational Research Services.

Reisman, J., Gienapp, A., & Stachowiak, S. (2007b). A Handbook of Data Collection Tools: Companion to "A Guide to Measuring Advocacy and Policy". Baltimore: Organizational Research Services.

Rose, C. (2005). How to Win Campaigns: 100 Steps to Success. London: Earthscan.

Sabatier, P. (1991). Toward Better Theories of the Policy Process. PS: Political Science and Politics, 24(2), 147-156.

Sonnichsen, R. (1989). Advocacy Evaluation: A Strategy for Organizational Improvement. Science Communication, 10(4).

Stuart, J. (2007). Necessity Leads to Innovative Evaluation Approach and Practice. Evaluation Exchange, XIII(1), 10-11.

The Change Agency. (2007). Critical Path Analysis. Retrieved June 5, 2007, from The Change Agency: http://thechangeagency.org/_dbase_upl/critical_path.pdf

Women's Funding Network. (2005). Make Your Case: Market Your Organisation as a Powerful Agent of Social Change. Retrieved June 5, 2007, from http://www.innonet.org/client_docs/File/advocacy/wfn_mtc.ppt

Appendix

Table 1: Comparison of Benchmark Frameworks

Institute for Development Research

Outcome categories:

  • Policy change
  • Private sector change
  • Strengthened capacity & alliances
  • Increased democratic space
  • Impact

Focus: Community

Strengths: Widely applicable; emphasis on capacity building; does not neglect possible negative outcomes.

Drawbacks: No examples, benchmarks or indicators provided; not connected to a theory of change.

Women’s Funding Network

Outcome categories (changes in):

  • Definitions
  • Behaviour
  • Engagement
  • Policy
  • Maintenance of current position

Focus: Campaign

Strengths: Based on a theory of change; online reporting tools available; sees policy change as a step towards social change rather than the final outcome.

Drawbacks: No capacity-building outcomes; online tool only available to WFN grantees.

Organizational Research Services

Outcome categories:

  • Shifts in social norms
  • Strengthened organisational capacity
  • Strengthened alliances
  • Strengthened base of support
  • Improved policies
  • Impact

Focus: Campaign or community

Strengths: Order of categories based on an implicit theory of change; widely applicable; built on the best of previous frameworks; includes units of analysis and has a companion data collection handbook.

Drawbacks: Daunting list of outcomes and units of analysis.

Source: Based on Guthrie, Louie, & Foster (2006, pp. 28-29), which includes six other frameworks. The ORS framework has been added and some changes made to the original.

Notes

[1] The term ‘advocacy evaluation’ is sometimes used to describe an activist approach to evaluation that conducts internal advocacy as part of its reporting framework (see for example Sonnichsen (1989)). This is not the meaning intended here, which refers instead to evaluation of advocacy and policy change efforts.

[2] There has been greater effort to build theory into evaluations of public interest communications campaigns aimed at changing individual behaviour and/or public will, such as ‘don’t drink and drive’ campaigns. See Coffman (2002) for a good summary.


About the Author

Justin works in social policy. His research interests include social movements, strategic activism and nonviolence. He lives in Sydney, Australia. Copyright Justin Whelan, 2008. 

Acknowledgments

This paper is presented with permission from JUST POLICY, March 2009.