Evaluating prevention

Assessing or evaluating your intervention is crucial to demonstrate and understand its level of success. Although very few existing programmes have undergone an adequate and robust evaluation, making a reliable assessment of your intervention is considered good practice, and helps further our understanding of how best to prevent child sexual abuse.

This chapter of our toolkit aims to illustrate how you can evaluate your own intervention and the different challenges you may face whilst doing so.

What is evaluation?

‘Evaluation’ refers to the systematic and objective assessment of the design, implementation, outcomes and results of an ongoing or completed project, programme or policy against specified criteria.

For many interventions, such criteria will relate to the short, medium and long-term goals outlined in the theory of change model produced at the start of your project, with a focus on gender-based violence and gender inequality. Ideally, these criteria will help you understand how your intervention has achieved a change in attitude, knowledge or behaviour amongst your target group.

For all preventive interventions, the main outcomes you may wish to assess are:

  • Any change in the level of knowledge, resources and motivation to combat abuse
  • Any reduced level of risk and/or increases in protective factors (e.g. level of social support, coping skills, self-esteem, family response to disclosure)
  • Any change in the level of knowledge amongst children about recognising sexual abuse and exploitation, the sources of help and information used, rates of reporting and the use of relevant available services
  • The number and percentage of sexually abused and exploited children identified, protected and provided with help to allow recovery and reintegration
  • The trend in those accessing help earlier and any decline in those with mental health or behavioural consequences following abuse (e.g. sexually harmful behaviour as adults)
  • The effective identification of perpetrators and actions being taken to prevent further offending

With a view to maintaining accountability, transparency and longevity in the delivery of your intervention, you should also consider:

  • Why was your intervention needed, and why was it relevant in the context of your chosen area and target group?
  • To what extent did your intervention achieve the goals previously set in your theory of change model?
  • How successful was your intervention in targeting those who are at highest risk of repeated abuse, or who are ‘difficult to reach’?
  • How suitable were the structure and content of your intervention for your target group?
  • How efficient was your intervention? Would it be considered good value for money?
  • How sustainable is your intervention? Could it be continued?

Regular and systematic monitoring and evaluation are required both for the good governance of your intervention and to allow changes to be implemented as needed to improve your success. Ideally, baseline data will be collected at the start of your programme to allow comparisons and trends to be visualised throughout (e.g. to assess participation or engagement levels over time).

What are the forms of evaluation?

There are various ways to approach evaluation. The most rigorous method remains the randomised controlled trial (RCT), in which the outcomes of a target group participating in an intervention are compared with those of a ‘control’ group who have not. However, such studies are costly, lengthy and challenging, so RCTs are not usually feasible, particularly in low-income settings.

The approach you take to evaluation will depend on the form of your intervention and, viewed as a public health measure, on whether it is primary (preventing abuse before it occurs), secondary (reducing the impact of abuse that has already occurred) or tertiary (minimising the ongoing effects of abuse).

Generally, there are four main types of evaluation:

  • Formative evaluations, such as needs assessments, which are carried out during the development of a new intervention or following the alteration of an existing intervention to suit a new context. These assessments build an understanding of how an intervention should work and allow modifications to be made to maximise the likelihood that a programme will succeed.
  • Process evaluations, which are carried out during the implementation of an intervention and are used to determine whether this has occurred as intended. These assessments give an indication as to how well an intervention is working, and whether it is accessible and acceptable to its target group. These evaluations are important to allow contingency planning for any issues that may arise.
  • Outcome-based or impact evaluations, which are carried out once the intervention has been delivered to at least one person within a target group, and often at the end of a programme. These measure the impact and success of an intervention by assessing the progress made towards stated outcomes (e.g. counselling therapy to improve the mental health of abused children).
  • Efficiency or economic evaluations, such as cost-benefit analyses, which consider the cost, required resources and value for money of an intervention. These can be carried out throughout a programme and allow adequate planning to ensure an intervention is as effective as possible.

Most interventions will require some form of outcome-based or impact evaluation. As this is most common, this type will be the focus of this guide.

How should I approach my evaluation?

The exact form, structure and approach to evaluation will vary depending on the nature and context of your intervention. However, if you’re not sure where to start, consider the following five-step approach:

Stage 1 – Planning
Making a realistic and achievable plan for how you will carry out your evaluation is a crucial first step. You may have already considered this whilst forming your theory of change model.

Whilst planning, issues to consider include:

  • Which sources will you collect your data from? For example, do you intend to use a survey or conduct a focus group with your target group; or will you use anonymised service-use data (e.g. the number of individuals calling a helpline)?
  • How will this data be collected? For example, if using surveys, has your team produced and approved a survey that will allow an adequate understanding of the impact of the intervention?
  • Are there any ethical issues surrounding the data you plan to collect? This is particularly relevant when using surveys, questionnaires or any form of interview.
  • How will you store the data you collect securely, in line with local, national and international data protection policies such as the EU General Data Protection Regulation (GDPR)? (See the pseudonymisation sketch after this list.)
  • What indicators will you use to show change across the short, medium and long-term? Are these included in your theory of change model? Ensure these indicators are SMART (Specific, Measurable, Achievable, Relevant and Time-bound).
  • What is the timeframe for your evaluation? Will your data cover the whole length of the programme, or will it focus on a particular element?
  • How will you allocate tasks and resources to members in your team?
  • Is your plan realistic? Is the evaluation likely to be completed in good time? What issues might arise? Can you plan ahead for these?
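
As a minimal sketch of the secure-storage point above: one common safeguard is to pseudonymise participant identifiers before they enter your dataset, keeping the key separate from the data. The example below assumes Python; the identifiers, field names and key are hypothetical, and this illustrates one technique rather than a complete data protection solution.

    import hashlib
    import hmac

    # The key must be held separately from the dataset (e.g. by a named data
    # custodian); anyone holding both could potentially re-identify people.
    SECRET_KEY = b"replace-with-a-key-held-outside-the-dataset"

    def pseudonymise(participant_id: str) -> str:
        """Return a stable, non-reversible code for a participant identifier."""
        digest = hmac.new(SECRET_KEY, participant_id.encode("utf-8"), hashlib.sha256)
        return digest.hexdigest()[:12]

    # The raw identifier never enters the stored record.
    record = {"participant": pseudonymise("example-participant-01"),
              "pre_score": 4, "post_score": 8}
    print(record)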

Although planning forms a crucial first step in approaching your evaluation, this should be a circular process. Make sure to revisit your plan regularly to make sure you stay on track.

Stage 2 – Data gathering
After identifying the performance indicators you require during planning, you should move on to collecting this data. The necessary approach will vary between interventions, but the best evaluations aim to collect a mix of qualitative and quantitative data from multiple sources in order to draw the most meaningful conclusions.

Quantitative data is numerical and can be used to produce reproducible statistics or figures, often presented in graphs or tables. These ‘hard facts’ can then be used to support or refute a theory. For example, if a support helpline shows a spike in calls following an awareness campaign, the percentage increase supports the conclusion that the campaign was successful. However, it is important to be cautious, as correlation does not necessarily imply causation: other factors could also have contributed to the spike.
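
The percentage-increase calculation itself is simple. The sketch below works through it in Python with invented call counts, purely for illustration:

    # Hypothetical monthly helpline call counts before and after a campaign.
    calls_before = 120
    calls_after = 168

    percentage_increase = (calls_after - calls_before) / calls_before * 100
    print(f"Calls rose by {percentage_increase:.0f}%")  # Calls rose by 40%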

Examples of quantitative data and its collection methods include:

  • Number of cases of sexual abuse or exploitation reported by participants, as determined by self-reported surveys; or within the general population, as determined from records held by other services (e.g. police, health service, NGOs)
  • Number of children able to recognise signs of abuse before and after an intervention, as determined by survey or questionnaires
  • Number of cases of rape, forced or coerced sex, early sexual debut etc., as determined by self-reported surveys or demographic data
  • Number of women treated following rape, including those counselled or receiving post-exposure prophylaxis (PEP) medication, as determined by records held by health services
  • Number of cases of child marriage as determined from statutory marriage records
  • Number of prosecutions against perpetrators of abuse, and rates of re-offending, as determined by records held by other services (e.g. police, justice services).
  • Number of calls to an abuse helpline, as determined from helpline tracking data
  • Number of clicks on ‘report abuse’ buttons online, as determined from website tracking data

Qualitative data is expressive and aims to gather opinion in order to understand the subjective impact an intervention makes on the individuals involved. Essentially, this helps you understand the ‘living reality’ of those involved in your programme by considering how and why the intervention has (or hasn’t) made a unique difference to them.

A particular strength of qualitative data is that it recognises the context in which an intervention takes place. Because individuals in your target group describe their experience in their own words, there is less opportunity for assumptions or bias to distort the findings of your evaluation. However, it requires the active participation of a researcher to stimulate discussion, capture opinion, and ensure that the sample of individuals involved is representative of the group’s perspective as a whole.

Such data can be collected in a number of ways including:

  • Open-ended questionnaires or surveys, which might focus on experiences with the intervention itself and/or collect data regarding the frequency and prevalence of gender-based violence
  • Focus groups, which allow group discussion of the pertinent issues relevant to your intervention
  • Structured or unstructured interviews, which use open questions to allow an individual to describe their experience in their own words on a one-to-one basis
  • Direct observation, e.g. to offer feedback on the performance of a facilitator of an intervention

Consider the best means of collecting your data. This may include both quantitative and qualitative elements, and data should ideally be collected at set timepoints throughout your intervention to allow trends to be visualised. For example, if you wish to use a survey, consider administering it before and after your intervention to give baseline data for comparison.

Whilst drawing up the questions you intend to ask in a survey, interview or focus group, consider a mix of binary ‘yes/no’ or ‘true/false’ questions for quantitative evaluation, plus free-text or comment questions to allow qualitative depth. Take care to ask open-ended questions that allow discussion, while ensuring participants have the freedom to decide for themselves when and how to reply. Be aware that certain questions may prompt a disclosure of abuse or cause a degree of distress; have a plan in place so that facilitators know how to handle these situations should they arise.
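
As an illustration of this mix, the sketch below holds a small question set as a simple data structure. The wording and question types are invented examples, not a validated instrument:

    # A hypothetical mixed question set: closed questions feed the quantitative
    # analysis, while free-text answers feed the qualitative coding in Stage 3.
    questions = [
        {"id": "q1", "type": "yes_no",
         "text": "Do you know where to get help if you are worried about abuse?"},
        {"id": "q2", "type": "true_false",
         "text": "Abuse is never the fault of the child."},
        {"id": "q3", "type": "free_text",
         "text": "In your own words, what difference has the programme made?"},
    ]

    closed = [q for q in questions if q["type"] in ("yes_no", "true_false")]
    print(f"{len(closed)} closed question(s), {len(questions) - len(closed)} open")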

Stage 3 – Data analysis
Analysing the data you have collected is likely to be the most time-consuming stage of evaluation. You should start by collating the data you have collected to form an ordered dataset.

For quantitative data, this will likely involve transferring data into a spreadsheet to allow quick and easy filtering and manipulation. Analysing such data produces objective statistics that allow your findings to be summarised, ideally without bias. Converting your data into a visual format, such as a graph or chart, can also reveal trends and patterns, allowing the impact of your intervention to be more easily understood. Such measures are known as ‘descriptive’ statistics.
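
If you prefer a script to a spreadsheet, descriptive statistics are straightforward to compute. The sketch below uses Python’s standard statistics module on hypothetical pre- and post-intervention knowledge scores (out of 10):

    import statistics

    pre_scores = [3, 4, 2, 5, 4, 3, 6, 4]   # hypothetical baseline scores
    post_scores = [6, 7, 5, 8, 7, 6, 9, 7]  # hypothetical follow-up scores

    for label, scores in (("Before", pre_scores), ("After", post_scores)):
        print(f"{label}: n={len(scores)}, "
              f"mean={statistics.mean(scores):.1f}, "
              f"sd={statistics.stdev(scores):.1f}")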

However, some researchers may also wish to use quantitative data to show a statistically significant difference between two groups. For example, this could be used to show that victims of abuse within a therapy programme have significantly improved indicators of mental wellbeing compared to a control group of individuals who have not undergone therapy. This approach produces ‘inferential’ statistics, where the data is used to infer conclusions based on probabilities (e.g. that the positive change is more likely attributable to the therapy programme than to chance). If your evaluation aims to take this approach, consider consulting an experienced researcher or statistician who can advise on which analytical methods are most appropriate to your needs.
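
For orientation only, the sketch below shows what such a comparison can look like in code, using an independent-samples t-test from the SciPy library on hypothetical wellbeing scores. A statistician should confirm whether this particular test suits your actual data (e.g. its sample size and distribution):

    from scipy import stats  # assumes the SciPy library is installed

    therapy_group = [14, 16, 15, 18, 17, 15, 19, 16]  # hypothetical scores
    control_group = [12, 13, 11, 14, 12, 13, 12, 11]

    t_stat, p_value = stats.ttest_ind(therapy_group, control_group)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    # A small p-value (conventionally < 0.05) suggests the difference between
    # the groups is unlikely to be due to chance alone.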

In contrast to quantitative data, qualitative data is varied and requires interpretation, using different techniques to construct an understanding of the data. During analysis, it is most common to take a deductive approach: first converting comments, recordings and so on into a written form (transcription), and then categorising the data into different concepts, properties or patterns (coding), based on theories relevant to your intervention or on your evaluative objectives (e.g. the aims in your theory of change).

After coding your data, you will have a deeper insight into the themes or patterns most relevant to your evaluation. Conducting a thematic analysis, with the identified themes as a central focus, then allows you to develop a deeper understanding of the impact and meaning of your intervention on an individual basis. Note that whilst the thematic approach is the most common in qualitative analysis, other frameworks exist and may be more appropriate (e.g. content analysis, grounded theory); however, these are not a focus of this document.
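
Software can help with the bookkeeping side of coding, even though the interpretation itself remains a human task. The sketch below tallies hand-coded interview segments by theme; the participant codes and themes are hypothetical:

    from collections import Counter

    # (participant, theme) pairs produced by hand-coding the transcripts.
    coded_segments = [
        ("P01", "feeling believed"),
        ("P01", "access to services"),
        ("P02", "feeling believed"),
        ("P03", "fear of stigma"),
        ("P03", "feeling believed"),
    ]

    theme_counts = Counter(theme for _, theme in coded_segments)
    for theme, count in theme_counts.most_common():
        print(f"{theme}: {count} segment(s)")

Counts like these only indicate where deeper reading is warranted; the qualitative meaning still comes from the text itself.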

Finally, whilst conducting a thorough analysis of both your quantitative and qualitative datasets, you should also continually validate your data. Validation aims to ensure that there are no flaws in the data you have collected. Throughout your analysis, check that your data is both valid (i.e. the dataset is as complete and consistent as possible, and your approach to data analysis is accurate and appropriate) and reliable (i.e. your approach will produce consistent results that can be trusted).
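
Simple automated checks can support validation. The sketch below flags incomplete, duplicated and out-of-range records in a small hypothetical dataset; the field names and the 0–10 score range are assumptions made for illustration:

    records = [
        {"participant": "a1b2c3", "pre_score": 4, "post_score": 8},
        {"participant": "d4e5f6", "pre_score": None, "post_score": 7},
        {"participant": "a1b2c3", "pre_score": 4, "post_score": 8},  # duplicate
    ]

    # Completeness: records with missing values.
    incomplete = [r for r in records if any(v is None for v in r.values())]

    # Consistency: duplicated participant codes.
    seen = set()
    duplicates = []
    for r in records:
        if r["participant"] in seen:
            duplicates.append(r["participant"])
        seen.add(r["participant"])

    # Plausibility: scores outside the instrument's assumed 0-10 range.
    out_of_range = [r for r in records
                    if any(isinstance(v, int) and not 0 <= v <= 10
                           for k, v in r.items() if k != "participant")]

    print(f"{len(incomplete)} incomplete, {len(duplicates)} duplicated, "
          f"{len(out_of_range)} out-of-range record(s)")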

Stage 4 – Reporting
After analysing the data you have collected you are ready to write your evaluation report. Before starting, consider the aims of the report, and its overall purpose. Generally, evaluations are completed to share good practice, and to identify the strengths and limitations of a particular intervention. This will allow recommendations to be made regarding ways that its outcomes could be further improved.

However, you should consider who this report is aimed at. A report aiming to share good practice with fellow practitioners will take a different slant from a report aimed at prospective funding bodies. Consider what information this audience is looking for: for example, will the report focus on the impact of your intervention and the difference it has made to its participants, or on the way you have delivered it? Consider also the accessibility of the report. Do you need an ‘easy read’ version, or will it need to be translated into another language?

Prior to writing, you might also wish to consider the structure of the report, and how you will present your data. Most reports will contain the following sections:

  • An executive summary, summarising your key findings and recommendations
  • An introduction, describing the purpose of the evaluation and your methodology
  • A results and discussion section, using the performance indicators of your theory of change model to discuss what you have found and how you have met your aims
  • A recommendations section, outlining actions that should be taken to improve your intervention on the basis of your findings

It is important to structure your report further under these headings, perhaps using your theory of change model to devise a subheading relevant to each goal. Try to keep data from all sources under each relevant heading rather than describing datasets separately.

Whilst writing, you should aim to both describe and interpret your data. This involves presenting your data in an easily understood manner, for example as a graph, table or prominent quotation, and then interpreting what the data actually means in terms of identified trends and patterns. For example, if your analysis of a counselling initiative showed improved indicators of mental wellbeing in participants versus non-participants, why might this be? You should also consider the significance of your findings: a 70% participation rate among abuse victims in one intervention may be excellent, while a 70% participation rate among perpetrators in another might be considered poor.

Importantly: be selective, as too much information can overload the reader. Try to discuss only the most relevant areas, ensuring that your writing remains honest, accurate and clear. Beware of ‘easy’ pitfalls, such as presenting graphs or charts that could be considered misleading (correlation ≠ causation!). Remaining open and transparent in your reporting is just as important as your attempts to minimise bias during your data collection and analysis. Always ensure you follow good practice, for example by exploring alternative interpretations, avoiding ambiguity, reporting negative findings (i.e. what didn’t work?) and clearly stating your sample size and the limitations of your data. This will ensure that your recommendations are truly useful in improving and driving prevention efforts forward.

Finally, always acknowledge the contributions of others in the delivery of your intervention. Remember that the work of other partner organisations could be jointly responsible for any positive changes you find.

Stage 5 – Reflection
After completing your evaluation, take time to reflect on its findings as a team. Whilst it is important to think about how you could improve, and how you could implement the recommendations made in future interventions, it is equally important to recognise your achievements. Remember that your intervention helps take the next step towards protecting the next child. Well done!

What challenges might complicate evaluation?

First and foremost, it is important to understand that showing any direct impact of an intervention on the overall prevalence of child sexual abuse is usually not possible. Although this remains the ultimate goal of all prevention work, implementing change in society is difficult and takes a great deal of time. Demonstrating global prevention is a task that requires long-term study. Indeed, this is hindered by the global lack of epidemiological data on child sexual abuse. If your intervention does not show dramatic short- or medium-term results, do not feel disheartened: this is to be expected.

Paradoxically, some primary prevention interventions promoting safety, healthy boundaries and positive relationships with children lead to more children disclosing experiences of maltreatment (as well as increases in parents and other adults reporting experiences of non-recent abuse in childhood). This is a very positive outcome, but without clear analysis and contextualisation an effective preventative intervention can appear to be making things worse by contributing to increased rates of reported sexual crime.

Additionally, throughout your evaluation, you should consider how you can conduct your research ethically, particularly if using qualitative data from surveys, interviews or focus groups etc. Most importantly, you should consider the trust placed in researchers in this field, particularly by individuals who have been victims of abuse themselves. In your role as a practitioner, always consider the following challenges:

  • Informed consent – ensure you have documented consent from those taking part in your intervention, and that this is taken after a full discussion of the benefits and risks of participation. If your project involves working with children, always attempt to gain additional consent from a parent or guardian in a way that is appropriate to the age and culture of those involved. Remember that children often agree solely to please adults!
  • Anonymity – ensure that the anonymity of participants is maintained at all times, particularly in the reporting of your findings. For example, always check and respect the wish for anonymity of individuals you will quote, or who will form the focus of a case study. Remember that anonymity may not always be achieved solely by using pseudonyms but may require you to change or withhold other identifying details (e.g. age, nationality, job title).
  • Confidentiality – whilst delivering your intervention, you have a duty to respect the wishes of your participants, including maintaining confidentiality, privacy and safety. However, recognise that this may become complicated in areas where mandatory reporting procedures are in place, or where you consider others to be at imminent risk of harm.
  • Risk of harm and child protection – undertaking research in this area will always carry a possible risk to the physical or emotional wellbeing of participants, inflicted by either themselves or others (e.g. due to surrounding social stigma etc). Always ensure that steps are taken to minimise harm, and that you work with other relevant agencies if you feel further action needs to be taken.

Where can I get help?

We understand that conducting a good evaluation can be tricky. However, there are a number of resources that may help. Examples include:

  • Beer, Tanya and Kimberley Davis, Break the Silence: End child sexual abuse – M&E toolkit, United Nations Children’s Fund, New York, 2012.
  • Bloom, Shelah S., Violence against Women and Girls: A compendium of monitoring and evaluation indicators, MEASURE Evaluation, Chapel Hill, NC, 2008.
  • ChildONE Europe, Guidelines on Data Collection and Monitoring Systems on Child Abuse, ChildONE Europe, Florence, 2009.
  • Department for International Development (DFID), Guidance on Monitoring and Evaluation for Programming on Violence against Women and Girls: Guidance note 3, DFID, London, 2012.
  • Fluke, John D. and Fred Wulczyn, ‘A Concept Note on Child Protection Systems Monitoring and Evaluation’, Discussion Paper, United Nations Children’s Fund, New York, 2010.
  • International Initiative for Impact Evaluation (3ie). Available at: www.3ieimpact.org.
  • Pulerwitz, Julie and Gary Barker, ‘Measuring Attitudes towards Gender Norms among Young Men in Brazil: Development and psychometric evaluation of the GEM Scale’, Men and Masculinities, vol. 10, no. 3, 2008, pp. 322–338.
  • Reproductive Health Response in Conflict (RHRC) Consortium, Gender-Based Violence Tools Manual for Assessment and Program Design, Monitoring and Evaluation in Conflict-Affected Settings, RHRC Consortium, Women’s Commission for Refugee Women and Children, New York, 2004.
  • UNICEF Evaluation Database. Available at: www.unicef.org/evaldatabase.

If you feel you would benefit from some additional advice or support in assessing your intervention, please contact the ECSA project directly. Full details are available on our ‘Contact Us’ page.