
Frequently Asked Questions

A selection of questions and answers about evaluating road safety education, training and publicity (ETP) interventions, and about evaluation more generally.


 

Q: Who is responsible for E-valu-it?  

A: E-valu-it and www.roadsafetyevaluation.com were created in 2010 by a project team comprising the Department for Transport (DfT), RoSPA and Local Authority road safety practitioners. It is run by RoSPA, with funding from the Department for Transport.


 

Q: What are the aims of E-valu-it?  

A: The aims of E-valu-it are:

  1. To increase and improve the measurement of the effectiveness of road safety education, training and publicity (ETP) projects by 2018. These projects may be delivered by a number of bodies including (but not limited to): local authorities, emergency services, road safety partnerships, schools, and companies delivering Managing Occupational Road Risk (MORR) projects or driver training.
  2. To increase and improve the measurement of the effectiveness of LASER projects by 2018.
  3. To improve the efficient use of LASER, MORR and road safety ETP resources.

 

Q: What does E-valu-it do?  

A: E-valu-it is a tool to help users plan and write up an evaluation of their road safety intervention. Users are asked a series of questions about their intervention, the issue(s) it's meant to address, its aims and objectives and a few other details. E-valu-it then provides recommendations for carrying out an evaluation of the intervention, plus a report template for recording the results of the evaluation.

Once an Evaluation Report has been completed, users have the option of publishing it on the website, and/or on their own website, or keeping it private (for example, if it has to be sent to a committee or funder for approval first). We encourage authors to publish their reports, so others can learn from their experiences.


 

Q: Who can access my project?  

A: Within the E-valu-it Toolkit you can give other people permission to access your project if you wish. You simply add the email address of the person you wish to invite. You can give them 'read-only' access, meaning they can view the project but not change it, or 'read and write' access, meaning they can also make changes. If you wish, you can give someone 'read-only' access to one project but 'read and write' access to another. If you change your mind about the level of access you have given someone, you can switch it between 'read-only' and 'read and write', or cancel their access completely, at any time.

To give someone access to a project, to change their level of access, or to cancel their access, log in to your E-valu-it account, go to Project Settings for the project concerned, and click on 'Edit Project Access'.


 

Q: Why does E-valu-it ask for 'permission to contact' me?  

A: E-valu-it asks if you wish to allow RoSPA Road Safety staff to contact you about your project for two main reasons. Firstly, to help us monitor and evaluate how E-valu-it is being used and whether any improvements are needed. Secondly, if a project has been inactive for a long time, we might contact you to ask if there is anything we can do to help with that particular project. If you give permission for us to contact you, you will not be contacted for any other reason, such as marketing.

You do not need to give permission to be contacted if you prefer not to. You can also choose to give 'permission to contact' for some projects but not for others, and you can change your mind at any time.


 

Q: Can E-valu-it analyse data for me?  

A: No. E-valu-it only makes recommendations on suitable evaluation designs and methods you could use. These recommendations are based on your answers to the E-valu-it questions about your intervention. It is then up to you or your team to do the evaluation, collect the data, and analyse the results. As you conduct your evaluation you can enter your findings into the Evaluation Report Template. Once your report is complete you can then share it with others through the website.


 

Q: How can we avoid asking leading or misleading questions in surveys/questionnaires?  

A: Leading questions bias your audience to answer in a particular way. You might make an assumption in the question or imply that there is a right or wrong answer. People generally want to agree and not be different from everyone else, so by asking these questions you bias the results.

Here are some examples of leading questions (the suggestive words are 'agree' in the first and 'increased' in the second):

  • Do you agree this workshop was enjoyable?
  • Do you think the legal penalty for using a mobile phone while driving should be increased?

The wording of these questions can be adapted to reduce their bias.

  • How enjoyable or unenjoyable was the workshop? (Very enjoyable, Quite enjoyable, Neither enjoyable nor unenjoyable, Quite unenjoyable, Very unenjoyable)
  • What do you think about the legal penalty for using a mobile phone while driving? (a. It should be increased, b. It should stay the same, c. It should be decreased)

If it is a closed question, there should also be an equal number of positive and negative options. For example, asking 'Do you think the training was... Excellent, Very good, Good, Fair, Poor?' offers three positive options but only one clearly negative one, so it is unbalanced. A balanced alternative would be:

Very good, Good, Neutral, Poor, Very poor
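
To make the idea of a balanced scale concrete, here is a minimal Python sketch; the scale coding and the sample responses are purely illustrative and are not part of E-valu-it.

```python
from collections import Counter

# A balanced five-point scale: equal numbers of positive and negative
# options, coded symmetrically around a neutral midpoint of 0.
SCALE = {
    "Very good": 2,
    "Good": 1,
    "Neutral": 0,
    "Poor": -1,
    "Very poor": -2,
}

def summarise(responses):
    """Tally the responses and return the counts and the mean score."""
    counts = Counter(responses)
    mean = sum(SCALE[r] for r in responses) / len(responses)
    return counts, mean

# Illustrative responses only, not real data.
responses = ["Good", "Very good", "Neutral", "Good", "Poor"]
counts, mean = summarise(responses)
print(counts)
print(f"Mean score: {mean:+.2f}")  # positive = favourable overall
```

Because the codes are symmetric, the mean score is not pulled upwards by an excess of positive options, which is exactly the distortion an unbalanced scale introduces.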


 

Q: Could a common survey questionnaire be developed for all road safety interventions?  

A: Survey questions depend on your intervention's specific objectives. If interventions across the country had the same key objectives then yes, a standard or pro-forma questionnaire could be developed for anyone delivering that intervention to use. This assumes that the intervention does not vary with local circumstances. For example, one young driver workshop may focus on speeding whilst another may focus on drink driving, in which case the questionnaire should reflect this.

If there are interventions that are common across authorities and have the same objectives then this is something that could be developed. However, the standard questionnaire would probably always need tweaking to suit the particular version of the intervention actually delivered. Also, the questionnaire may be slightly different depending on whether it's completed before and then again after the intervention, or if it is an after-only form.

The Society for the Advancement of Violence and Injury Research (SAVIR) is building a library of academic safety questionnaires, which you can search on its website.


 

Q: For an expensive ad campaign, the cost of an evaluation would limit the budget available for the campaign itself, which may result in a less effective campaign. What would your advice be?  

A: If a campaign is expensive then we would have to question running it without collecting scientific evidence of its effectiveness. At the very least, a prospective cost-benefit analysis (estimating the likely costs and benefits in advance) should be conducted before the campaign is agreed. Without evaluation there can be little accountability for the money spent on the campaign. We would advise running a pilot campaign with an evaluation before deciding whether or not to roll it out more widely.
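
As a rough illustration of what a prospective cost-benefit calculation involves, here is a minimal sketch. Every figure in it is a hypothetical placeholder; in practice you would use your own cost estimates and an official valuation of prevented casualties.

```python
# Prospective cost-benefit sketch. All figures are hypothetical
# placeholders, not official valuations.

campaign_cost = 150_000      # estimated campaign cost (GBP)
evaluation_cost = 15_000     # evaluation budget set aside (~10%)

casualties_prevented = 4     # expected reduction over the campaign period
value_per_casualty = 75_000  # hypothetical value of one prevented casualty

total_cost = campaign_cost + evaluation_cost
total_benefit = casualties_prevented * value_per_casualty

bcr = total_benefit / total_cost  # benefit-cost ratio
print(f"Benefit-cost ratio: {bcr:.2f}")  # above 1 suggests benefits exceed costs
```

If the estimated ratio is well below 1 even under optimistic assumptions, that is a signal to rethink the campaign before it is funded, not after.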


 

Q: If course attendees fill in evaluation at the end of a training session it’s rushed but asking them to return one at a later date results in a lower response rate. Do you have any suggestions?  

A: This is the catch-22 situation we all face. If the day allows it, build in extra time at the end of the training for the evaluation/feedback so that it is part of the training day, rather than squeezing it in at the end. That way the day still finishes at a reasonable time and the evaluation shouldn't be rushed by those wanting to go home. For those who do take the form home to return later, include an email address so they can scan it and email it back, make sure the return postal address is on the form, and perhaps provide a stamped addressed envelope. You could also offer an online version of the survey for those who want to fill it in later, as this will be easier for them to submit. Additionally, stress how helpful it is to have the forms returned and explain how the feedback will be used.


 

Q: Should we use the same questionnaire to go back after say 3 or 6 months to assess knowledge or attitude change or should we change the questions?  

A: Yes, use the same questionnaire so you can track changes more accurately. You may need to add a question (or questions) to reflect the longer-term changes you are following up on. For instance, if you ran an event aimed at encouraging people to take up post-test rider or driver training, you would need to add a question asking whether or not they have done so since the event.
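
Using the same questionnaire also makes the before-and-after comparison simple to compute, provided you can match each respondent's two forms. A minimal illustrative sketch (the paired scores below are made up):

```python
# Compare the same questionnaire item before and after an intervention.
# Each pair is (pre_score, post_score) for one respondent on a 1-5 scale.
# Illustrative data only, not real responses.
paired_scores = [(2, 4), (3, 3), (1, 4), (3, 5), (2, 3)]

changes = [post - pre for pre, post in paired_scores]
mean_change = sum(changes) / len(changes)
improved = sum(1 for change in changes if change > 0)

print(f"Mean change: {mean_change:+.2f} scale points")
print(f"{improved} of {len(changes)} respondents scored higher afterwards")
```

Matching forms requires a way to pair each person's responses, such as a respondent code, which should be planned before the first questionnaire is handed out.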


 

Q: I have a small budget for my intervention(s). How can I do an evaluation with limited resources?  

A: A common rule of thumb is that 5-10% of the cost of an intervention should be set aside for evaluation (so a £10,000 intervention would set aside £500-£1,000). When the original intervention budget is small, however, it can be difficult to plan a cost-effective evaluation. Here are some tips that you may find useful:

  • Telephone interviews can be a cheaper alternative to focus groups – there are no travel or room-hire costs, which can become quite expensive.
  • Online surveys can be cheaper than postal surveys – there are no printing or postage costs. If you do not have an email list of the people you would like to complete the survey, however, it can become expensive to pay a company to send the survey to their list of contacts.
  • If you think you will need an incentive then prize draws can be a cheaper alternative to paying each respondent who takes part.

 

Q: Should incentives be used to encourage participation in the evaluation?  

A: Whether to use incentives depends on a number of factors. They can help to increase the number of people taking part in the evaluation, but their use also depends on whether the budget is available to provide them.

Incentives are usually used when it is difficult to get people to take part in the evaluation. Recruiting attendees to a focus group, for example, may be more difficult than having them fill out a survey. Another reason to use incentives is if those in the target group are difficult to access. For example, getting the general public to reply to an evaluation of a flyer would be more difficult than getting replies from delegates on a training day.

If you don't have the budget to incentivise each respondent then a prize draw could be a more cost-effective alternative. Prize draws tend to work best with surveys rather than focus groups, because focus group participants must cover their own costs of travelling to the meeting location and most of them will not win the draw.

Ideally incentives should be independent of the intervention or organisation, rather than a 'benefit in kind' such as a free driving lesson from your own organisation. Otherwise the evaluation could be seen as a way to promote your services, rather than a way of determining the effectiveness of the intervention. Vouchers work well, especially if respondents have a choice of where to use them (Love to Shop high street vouchers or Amazon online vouchers are examples).