Evaluate, Evaluate, Evaluate…

Our Head of Research explains the importance of evaluation
More and more of my work at RSA involves designing, managing and delivering evaluations of road safety interventions, and the more of these projects we do, the more I see evaluation as an integral part of any road safety scheme. It makes no sense to me not to question whether a scheme works – if we don’t test and analyse our interventions, how do we know that we are not doing damage?

Thankfully, there are fewer and fewer instances these days of road safety practitioners declaring “I don’t need to evaluate as I just ‘know’ it works…”. I can appreciate that some of those making this declaration have years of experience in delivering road safety education, and that they have seen first-hand the immediate reactions their intervention has produced (tears and fainting showing what an ‘impact’ it has had). But what this doesn’t take into account are the unintended consequences and alternative behaviour changes which can occur.

Here at RSA, we have recently been through a programme of re-assessing our pre-driver interventions, and it wouldn’t have been possible without continual evaluation. Our original evaluation of a large-scale theatre-style presentation was based on a pre-and-post design, measuring intended behaviour. This is a perfectly legitimate approach for many road safety schemes, and the Theory of Planned Behaviour (TPB) – in which attitudes towards the behaviour, subjective norms and perceived behavioural control all influence behavioural intentions and thus actual behaviour – is central to many interventions. However, what we were finding, time and again, with our large samples of pre-driver audiences, was that they had positive intentions BEFORE the intervention. This meant not only that we could not demonstrate a significant positive effect of the intervention, but also that we needed to re-examine WHAT we were trying to change (a sketch of this ceiling-effect problem follows the two questions below):

• Was there no road safety need to change the behaviour of these young people, because the TPB was suggesting that they were not going to engage in the behaviour anyway?

• Or were we measuring the wrong thing, and did we need to reassess our evaluation methodology before terminating the intervention?
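To make that ceiling-effect problem concrete, here is a minimal simulated sketch of a pre-and-post comparison of intention scores. The 1–5 Likert scoring, the sample size and all the numbers are invented for illustration; nothing here reproduces our actual questionnaire.

```python
# Minimal sketch of the ceiling effect in a pre/post intention measure.
# All data are simulated; the 1-5 scale and n=500 are assumptions for
# illustration, not values from the real evaluation.
import numpy as np

rng = np.random.default_rng(42)
n = 500  # a large pre-driver audience

# Intentions already sit near the top of the 1-5 scale before the event.
pre = np.clip(rng.normal(4.6, 0.4, n), 1, 5)

# Suppose the intervention genuinely improves intentions by 0.3 points:
# the scale's ceiling swallows most of that shift.
post = np.clip(pre + 0.3 + rng.normal(0, 0.3, n), 1, 5)

print(f"baseline mean = {pre.mean():.2f} "
      f"({(pre >= 4.5).mean():.0%} of respondents at/near ceiling)")
print(f"true shift = 0.30, observed gain = {post.mean() - pre.mean():.2f}")
```

With most of the audience already at, or near, the top of the scale before anyone walks in, a flat pre/post comparison of intentions tells you very little, whichever way it comes out.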

We plumped for the second option, especially when we were directed to the Prototype Willingness Model (PWM) as an alternative approach to explaining adolescents’ behaviour. This model suggests that there are two types of decision making involved in health behaviour: a reasoned path, such as that described by the TPB, and a more social, reactive path often adopted by adolescents. It suggests that adolescent risk behaviour is frequently not planned or even intentional; instead, predictors of behaviour can include behavioural willingness and the risk images adolescents hold of the types of people who engage in the behaviour. (More information on the model can be found in this presentation Richard and I gave at the RoSPA Road Safety Conference in 2015.) Using this model as a base, we designed a new evaluation questionnaire to test the components of the PWM. By re-testing, we found positive effects on the respondents’ social norms and behavioural willingness – measurements we would never have observed if we had not critically assessed the evaluation process.
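For a feel for what testing PWM components can look like in practice, here is a hedged sketch of scoring willingness and prototype (risk-image) items from a questionnaire and comparing them pre and post. The item names, the 1–7 scales and the simulated responses are all hypothetical; they illustrate the general shape of the analysis, not our actual instrument.

```python
# Hypothetical sketch of scoring two PWM components from survey data.
# Item names, 1-7 scales and simulated responses are invented for
# illustration; this is not RSA's actual questionnaire.
import numpy as np
import pandas as pd
from scipy import stats

WILLINGNESS = ["will_speed", "will_phone", "will_no_belt"]
PROTOTYPE = ["proto_cool", "proto_smart", "proto_popular"]

def component_scores(responses: pd.DataFrame) -> pd.DataFrame:
    """Average each group of items into one score per PWM component."""
    return pd.DataFrame({
        "willingness": responses[WILLINGNESS].mean(axis=1),
        "prototype_favourability": responses[PROTOTYPE].mean(axis=1),
    })

# Simulated paired pre/post answers for the same 300 respondents.
rng = np.random.default_rng(7)
items = WILLINGNESS + PROTOTYPE
pre = pd.DataFrame(rng.integers(1, 8, (300, len(items))), columns=items)
post = pre.copy()
# Assume the intervention lowers willingness (safer), while prototype
# ratings only wobble randomly with no real shift.
post[WILLINGNESS] = (post[WILLINGNESS] - rng.integers(0, 2, (300, 3))).clip(1, 7)
post[PROTOTYPE] = (post[PROTOTYPE] + rng.integers(-1, 2, (300, 3))).clip(1, 7)

pre_s, post_s = component_scores(pre), component_scores(post)
for comp in pre_s.columns:
    t, p = stats.ttest_rel(pre_s[comp], post_s[comp])
    print(f"{comp}: pre = {pre_s[comp].mean():.2f}, "
          f"post = {post_s[comp].mean():.2f}, p = {p:.3f}")
```

The point of the paired, component-level comparison is that willingness scales rarely start at ceiling the way our intention items did, so the intervention has room to show movement.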

But the story doesn’t end there. We used the PWM as the basis of an evaluation of a pre-driver event we have designed and delivered in partnership with local stakeholders. It is much smaller-scale and more hands-on than the theatre presentation, but the overall objectives are similar and the behavioural model still applies. Being smaller and more hands-on, it afforded us the opportunity to take a more qualitative approach to the evaluation. We undertook exercises to explore the risk images participants hold of the types of people who engage in unsafe driving behaviours. What we found, much like the behavioural intentions, was that, in general, this particular cohort (who were self-selecting, so there is likely a safety bias) believed that those who abstained from risky behaviours had positive characteristics, whereas those who did drink and drive, use their mobile at the wheel, or not wear their seatbelt were not individuals they held favourable opinions of. According to the PWM, if these young people thought that speeders were cool and popular, then they themselves would expect to be seen as cool and popular if they also sped when driving. But they were not saying that – they were clearly stating that these people were stupid, dumb and dangerous: attributes they themselves would not want to have.

From an intervention designer’s point of view, this was great news. The risk-images part of the model is perhaps the hardest to test quantitatively, but maybe this finding suggested we didn’t need to test it, and could instead continue to concentrate on measuring willingness and social norms. All great. And the small-scale intervention gave us an opportunity to explore these other components in more detail in the form of focus groups. This was when we stumbled across the finding that could change everything – one we would never have found without the rigorous evaluation journey we’ve been on. The young people who, in the morning before the intervention, had told us that they saw speeders and drink-drivers in a negative light were now starting to excuse these behaviours. They explained how these things could be done by ‘mistake’, for example: “I see myself as a potential. I mean before I would never drink drive, but having gone through that workshop, I can see how it could happen.” Some of the focus group participants went as far as saying that the intervention concentrated too much on negative behaviours and suggested that all young drivers act like that: “I think it reinforced the stereotype of young drivers, if I’m honest. By saying that young drivers like this are going to crash.”

They seemed to be saying that, by hammering home the message that young drivers are at risk of crashing because of bad behaviours, those bad behaviours (which they had not previously intended, or been willing, to engage in) were now at the front of their minds. The takeaway became: ‘Young drivers crash because they do these risky things. You’ve told me that I am going to crash, so I guess I will be doing these risky things.’ We were seemingly normalising bad behaviour, and when social norms are so important to this particular group, suggesting to those who were not going to engage in the behaviour that everyone else their age is can only be a negative move.

So, now our minds were slightly blown – we’d used a behavioural model aimed at adolescents to understand our evaluation findings and then refined our evaluation process. And then, because we used a different research method, we identified some negative unintended consequences. What do we do now? Well, we need to examine how we can improve, and continue to measure. With no evaluation, or had we continued to repeat the intention questions (demonstrating little movement because of high baselines), we would never have been able to go on this journey. These findings have enabled us to examine the intervention and re-design it (as per Steve’s blog post). We won’t deny that these results were scary, but it was a necessary process for us to understand the previous findings and to ensure that our interventions are as effective as possible. So we are piloting a new approach and evaluating, evaluating, evaluating each stage, both quantitatively and qualitatively, to measure effects.

Which brings me on to the major reason for evaluating. In an evaluation training course we are currently piloting, we begin the day by asking delegates to list all the reasons for undertaking an evaluation. We discuss the need to demonstrate success (or not), how evaluation results can help inform policy decisions, and how they can help to improve the delivery of an intervention. In the current climate, with reductions in public sector spending, evaluation can also help practitioners share best practice and demonstrate value for money. I am not bothered if it transpires that a scheme does not have a demonstrable positive effect, or that it needs to be revised in order to reach the right target audience or to be more cost-effective. Because the over-riding motivation for evaluation, for me, is quite clear – we need to be sure that we are doing no harm…