National Citizen Service (NCS) 2013 – Evaluation – Main Report – Part 1 of 2

In a blog post on 14th August 2014 I read about an evaluation of the NCS 2013, which concluded the service had provided good value for money. It was the comment about good value for money that caught my attention. It’s abstract, meaningless and generally used when people can’t or won’t be specific.

These extracts explain what the NCS 2013 is, give some details about it and provide an overview of the evaluation report.

National Citizen Service (NCS) is a Government-backed initiative that brings together young people aged 15 to 17 from different backgrounds to help them develop greater confidence, self-awareness and responsibility, with a view to creating a more cohesive, responsible and engaged society.

In total, 31,738 young people took part in NCS programmes in summer 2013, and 7,828 in autumn 2013. The Cabinet Office appointed Ipsos MORI to evaluate the impact and value for money of these programmes.

This report summarises the first stage of the 2013 evaluation. Baseline surveys, and follow-up surveys conducted around three months after NCS took place, were undertaken with NCS participants and control groups. Further follow-ups are planned to identify any longer-term impacts there may be.

The report under the heading “Aims of the evaluation” explains that,

The key objectives of this evaluation were:

    • To assess the impact of the summer and autumn programmes on four outcome areas: social mixing; transition to adulthood; teamwork, communication and leadership; and community involvement
    • To understand whether NCS represents good value for money and improves on the value for money of the 2012 pilots

Here I’m concentrating on the first of these objectives that is to assess the impact of the NCS programmes on four outcome areas. I’ve written about the second objective in a separate blog post.

Before I start, and slightly at a tangent, do you think aims and objectives are the same thing? Obviously the writer does, because the heading is “Aims of the evaluation” and the first line under it is, “The key objectives …” It would be more helpful to stick with one or the other; mixing the two is unnecessary and confusing.


Desired outcomes, not likely!

Did you notice that the objective of the evaluation is only to “assess the impact”? There’s nothing to say what the impact will be assessed against. I’d normally expect an evaluation of this type to determine if the impact of the service achieved its desired outcomes. But for the NCS 2013 the desired outcomes (not outcome areas or measures, they’re different) appear to be that each participant finishing the programmes should be:

  1. More confident;
  2. More self-aware; and
  3. More responsible

…. than they were before they took part in the programmes. A further outcome is that the collective impact of the participants should result in:

  1. A more cohesive, responsible and engaged society

What ‘more’ means for each of the four outcomes isn’t defined or quantified. The best interpretation I can come up with is ‘more than the NCS 2012’, ‘more than before the NCS 2013’ and ‘more than participants in the control groups’. If this interpretation is correct, then it’s so easy for it to be a success that you’d have to put in an awful lot of effort to derail it. However, could you, in all conscience, term the NCS 2013 a success if each outcome, after the NCS 2013, has a positive impact of just 1%, 10% or even 20% more than before?

There is also the difficulty of when the impact of the benefit has to be achieved by. The available clues are that Ipsos MORI says there will be further evaluations over the coming years, and that the monetised benefits rely on increases in life expectancy. The next 60 years, then? As for the collective outcome, I can’t for the life of me understand how you evaluate whether or not the NCS 2013 created a more cohesive, responsible and engaged society.

I’m far from convinced by these outcomes, and whilst they appear eminently worthy, they are trivialised and overshadowed by HM Treasury’s need to convert the subjective impact of benefits into an equivalent in pounds and pence.



Before delving deeper I find it helpful to have an overview of the numbers of participants, and others involved, at different stages of the NCS 2013.

The numbers in the table are largely self-explanatory, but I’d like to emphasise the main points. 39,566 participants started the programmes but 2,300 dropped out, leaving 37,266 to finish the whole programme. 31,696 of the 39,566 who started completed a baseline survey before the NCS 2013, but only 4,401 of those 31,696 completed a follow-up survey around three months after finishing the programme.

1,072 of the 4,401 participants put forward their parents for an online survey and 611 of these parents completed it. In addition 20 teachers of the participants were interviewed.
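As a quick sanity check, the participation and response-rate figures above can be reproduced with a few lines of arithmetic. This is a minimal Python sketch; all of the raw figures come from the report as quoted here.

```python
# Participant numbers quoted in the NCS 2013 evaluation report.
started = 39_566
dropped_out = 2_300
finished = started - dropped_out                     # 37,266 finishers

follow_up_surveys = 4_401
parents_put_forward = 1_072
parents_completed = 611

# The response rates the critique leans on.
follow_up_rate = follow_up_surveys / finished                     # ~11.8%
nomination_rate = parents_put_forward / follow_up_surveys         # ~24.4%
parent_completion_rate = parents_completed / parents_put_forward  # ~57.0%

print(f"finished: {finished:,}")
print(f"follow-up survey rate: {follow_up_rate:.1%}")
print(f"parent nomination rate: {nomination_rate:.1%}")
print(f"parent completion rate: {parent_completion_rate:.1%}")
```

Nothing clever is happening here; the point is simply that every headline percentage in the report sits on top of a much smaller base than the 39,566 who started.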

Some questions

I’m naturally curious, so I have some questions whose answers could help future NCS programmes.

  • Why did only 39,566 start the NCS 2013 when there was a previously stated capacity of 50,000?
  • Why did 2,300 participants start but not finish the NCS 2013?
  • If the NCS 2013 was as good as the evaluation report makes it out to be, why didn’t more than 4,401 (11.8%) participants complete the follow-up survey?
  • Why did only 1,072 participants (24.4%) put forward their parents to complete a survey, given participants’ “highly positive personal impressions”?
  • Why did only 611 (57%) of the 1,072 parents put forward complete an online survey, given parents’ “highly positive personal impressions”?

Two broader questions spring to mind from this and a general read through of the summary: ‘Is this evaluation independent, objective and impartial?’ and ‘Does it give equal prominence to negative or zero impacts as well as positive ones?’ In my view it isn’t and it doesn’t, but let’s continue.


Experiences of participants

I’ve concentrated on the summary of the evaluation, as the rest appears devoted to statistical mumbo jumbo. This extract sums up participants’ overall experience and also sets the tone for the rest of the summary and the evaluation report.

Nearly all summer and autumn participants (97%) said they enjoyed their NCS experience. Similarly high proportions for both summer (95%) and autumn (96%) found NCS to be worthwhile overall. Reflecting their positive experiences, nine-in-ten (90% and 87% respectively) said they would definitely recommend it to others.

I’ve put the impact described in this paragraph into the table below. Showing actual numbers of participants rather than just percentages provides context and qualifies the impact that 97% of all participants are implied to have experienced.

If we review one aspect of the above extract in more detail we find that 4,269 participants out of the 4,401 who completed the follow-up survey, from a total of 37,266 who finished the NCS 2013, said they “enjoyed their NCS experience”. It would have been infinitely more transparent, easier to understand and more independent if the report had concluded that …

4,269 participants out of the 4,401 surveyed said they enjoyed their NCS experience. We think 97% of the 32,865 participants that didn’t complete a follow-up survey also enjoyed their experience.

This is the truth. You could replace ‘we think 97%’ with something about the statistical probability that the rest would have enjoyed their experience. It doesn’t even hint at misleading or misrepresenting the findings from the evaluation. The rest of the paragraph follows the same pattern, as the figures in the above table show. I have, and I suspect many others will have, great difficulty in accepting this as evidence that proves the NCS 2013 was a success, especially if they read the report carefully and then think about what they’ve read.


Outcome areas

My straightforward and simple view of how to determine if a service such as the NCS 2013 is a success is to formulate desired outcomes and then see if the service achieves them. I’ve described and commented below on the best I could find in the evaluation report that most closely resembles this. We’ll start with this extract.

Both the summer and autumn programmes were found to have statistically significant positive impacts in all four of the outcome areas explored in the evaluation.

What do you make of this sentence? Do you think it means the NCS 2013 achieved a significant impact on each outcome area? By the way, here’s a quick reminder of the four outcome areas:

  1. Social mixing
  2. Transition to adulthood
  3. Teamwork, communication and leadership
  4. Community involvement

You’d be wrong if you did. What it actually means is that there were statistically significant (different from just plain significant) impacts for some of the 115 outcome measures, with at least one in each outcome area. Just out of curiosity, does the word ‘explored’ annoy you as much as it annoys me? They didn’t explore the outcome areas, they assessed the impact of the NCS 2013 on them, or at least that was what they were meant to be doing according to the first objective of the evaluation.

The extract below follows the previous extract above to complete a paragraph. What does it mean? What are highly positive personal impressions? Wouldn’t we normally refer to them, in evidence terms, as anecdotal comments?

These matched participants’, parents’ and teachers’ highly positive personal impressions of what participants had achieved from NCS.

This extract implies that all participants, parents and teachers had “highly positive personal impressions”. We know this isn’t true because only 4,401 participants (out of the 37,266 that finished) completed a follow-up survey, only 611 parents completed a survey and only 20 teachers were interviewed.

Whilst reviewing outcome areas I decided to try to link them to the desired outcomes. I say try because there isn’t an explicit link; the only clue was the mention of some desired outcomes in the narrative of the summary on each outcome area.

Outcome measures – impact of benefits

To assess the impact of the NCS 2013 on outcome areas there are 115 outcome measures. This table shows the number of measures in each of two impact classifications: measures showing a positive impact (impact > 0pp) and measures showing no impact or a negative one (impact ≤ 0pp).

The NCS 2013 appears to have had an unequivocal positive impact on only one outcome area, ‘Teamwork, communication and leadership’. It appears to have had little impact on ‘Community involvement’, because 25 of the 34 measures (73.5% – summer) and 23 of the 34 (67.6% – autumn) showed no impact. Whilst I’m aware that one outcome measure could represent more of an impact than another on an outcome area, this is still rather overwhelming.

There’s another influential point to keep in mind: only 4,401 participants completed the follow-up survey, out of the 37,266 that finished the programmes. Extrapolating results from 4,401 participants across the remaining 32,865 looks to be a fairly big leap.

Ipsos MORI were only commissioned to assess the impact of the NCS 2013 on the outcome areas. They were not commissioned to determine if the NCS 2013 was a success or not. However, they have written the evaluation report emphasising the positive impacts and implying that they are the views of all participants, all parents and all teachers, and that the NCS 2013 has been a success.

This raises the question:

Did the NCS 2013 achieve its outcomes?

To try to find out we first need to know how impacts were calculated for each outcome measure. This is the explanation from the evaluation report.

The impact on each outcome was then calculated as the change from baseline to follow-up among NCS participants minus the equivalent change among the respective control group (either in percentage points, or in mean scores).

However, it misses out the fact that the evaluation report records the average impact. For example, the evaluation results suggest an increase in the proportion of participants (out of the 4,401 who completed the follow-up survey) intending to study for further education qualifications. This is borne out by a minimum impact of 2.5pp (summer) and 2.4pp (autumn) and a maximum of 8.2pp (summer) and 10.0pp (autumn). The recorded impacts were 5pp (summer) and 6pp (autumn); these are straight averages.

Do straight averages accurately represent the impact of the NCS 2013? It’s difficult to say without knowing if the majority of participants were closer to the minimum or maximum.
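To make the calculation concrete, here is a minimal Python sketch of the difference-in-differences formula the report describes, together with the straight averaging identified above. The baseline and follow-up percentages in the first example are invented for illustration; only the min/max impacts and the recorded 5pp/6pp figures come from the report.

```python
def impact_pp(ncs_base, ncs_follow, ctrl_base, ctrl_follow):
    """Change among NCS participants minus the equivalent change
    among the control group, in percentage points."""
    return (ncs_follow - ncs_base) - (ctrl_follow - ctrl_base)

# Hypothetical illustration: participants move from 60% to 70%,
# the control group from 58% to 63%, giving a 5pp impact.
print(impact_pp(60, 70, 58, 63))  # 5

# The recorded impacts for the further-education measure look like
# straight averages of the quoted minimum and maximum impacts:
summer = (2.5 + 8.2) / 2   # 5.35 -> recorded as 5pp
autumn = (2.4 + 10.0) / 2  # 6.2  -> recorded as 6pp
print(round(summer), round(autumn))  # 5 6
```

A straight average hides the shape of the distribution, which is exactly the objection: the same 5pp could come from most participants near 2.5pp or most near 8.2pp.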

Second, we need to know what level of impact the NCS 2013 was expected to achieve such that it could be said to have achieved its desired outcomes. This isn’t in the report and I suspect was never, and will never be, made public, if it was considered at all. I’d like to hope they (the Cabinet Office?) would have formulated expectations as part of the case to spend £62m. However, how could you provide a service and not have an idea, before you start, of what it has to achieve for it to be considered successful?

This led me to try to find a way of deciding whether or not I think the NCS 2013 was successful, using the data and information in the report. After some thinking I settled on classifying all the recorded impacts (from Appendix B) by their size. This table shows the impacts from the summer programme; it really is quite interesting.

The figures in the table above hardly give the NCS 2013 a resounding slap on the back. 58 (50.4%) of the 115 outcome measures showed a positive impact of more than 0 percentage points (pp); unfortunately, 48 of those had an impact of less than 10pp. What’s more disturbing is that 57 (49.6%, just under half) outcome measures had an impact of zero or less, 3 of them negative.
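The classification described above can be sketched as a simple banding exercise. The impact values below are made up for illustration; the real 115 per-measure impacts come from Appendix B of the report.

```python
def classify(impacts_pp):
    """Count outcome measures per impact band (percentage points)."""
    bands = {"negative": 0, "zero": 0, "0-10pp": 0, "10-20pp": 0, "20pp+": 0}
    for x in impacts_pp:
        if x < 0:
            bands["negative"] += 1
        elif x == 0:
            bands["zero"] += 1
        elif x < 10:
            bands["0-10pp"] += 1
        elif x < 20:
            bands["10-20pp"] += 1
        else:
            bands["20pp+"] += 1
    return bands

# Made-up example with seven measures:
print(classify([-2, 0, 0, 3, 5, 12, 21]))
```

Run over all 115 summer impacts, this is what produces the 58 positive / 57 zero-or-negative split quoted above.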

The results of the autumn programme are slightly worse than those from the summer. Even if we keep in mind that this shows the impact on the 4,401 participants that completed the follow-up survey, it’s not a flattering outcome. With this evidence, at this stage you’d be hard pushed to call the NCS 2013 a success.

Thirdly, how likely is it that these impacts would be the same or similar if all 37,266 participants had completed the follow-up survey? I don’t know, although there’s probably some statistical method that says they would, in a way that appears suitably certain and therefore convincing. Let’s say the impacts are the same for 37,266 participants as they were for 4,401, which appears to be the assumption throughout the report.
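The ‘statistical method’ in question would be a confidence interval. As a hedged sketch, the margin of error for a 97% proportion from 4,401 respondents out of 37,266 finishers would indeed be tiny, but only under a random-sampling assumption that self-selected survey completion does not satisfy, which is precisely the problem:

```python
import math

def margin_of_error(p, n, N, z=1.96):
    """95% margin of error for a proportion, with finite-population
    correction. Valid only if the n respondents are a random sample
    of the N finishers -- self-selection breaks this assumption."""
    fpc = math.sqrt((N - n) / (N - 1))
    return z * math.sqrt(p * (1 - p) / n) * fpc

moe = margin_of_error(0.97, 4_401, 37_266)
print(f"±{moe:.2%}")  # roughly ±0.5% under random sampling
```

In other words, the arithmetic can make the extrapolation look precise, but it says nothing about whether the 11.8% who chose to respond resemble the 88.2% who didn’t.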

With the information we have above and that which you’ll see below we should be well placed to answer:

  • Does this constitute a success?
  • Could we achieve more of an impact?
  • Would £62m have been better spent on something else?

What follows is a brief look at the meaning behind each outcome area summary (in italics).


Social mixing

NCS increased participants’ trust in others. It also improved their attitudes and behaviours towards people from different backgrounds, including how comfortable participants were mixing with different groups, how often they had met those from different backgrounds socially, and how willing they would be to ask those from different backgrounds for help.

The NCS 2013 increased participants’ (4,401 at most) trust in others, but by less than 10 percentage points, and improved their attitudes and behaviours towards people from different backgrounds, again by less than 10 percentage points.

Reflecting these impacts, eight-in-ten participants (84% in summer and 81% in autumn) said they felt more positive towards people from different backgrounds after NCS. Both parents and teachers – particularly those from schools with relatively low levels of diversity – also valued the social mixing aspect of NCS, and thought participants had benefited from it.

What this paragraph really means is that 3,658 participants out of the 4,401 that completed follow-up surveys felt more positive (by less than 10 percentage points) towards people from different backgrounds, and that the 611 parents who completed a survey and the 20 teachers interviewed valued this aspect of the NCS 2013 and thought it benefited participants.

However, what it implies is that eight-in-ten of the 39,566 participants that took part in the NCS 2013 felt more positive towards people from different backgrounds, and that the parents and teachers of all 39,566 participants valued the social mixing and thought it benefited participants.

The implication that the percentage quoted in the summary of each outcome area applies to all the participants who took part in the NCS 2013 recurs throughout. Deliberately misleading? It’s difficult to tell, but what is certain is that each summary could have been much clearer.


Transition to adulthood

NCS improved participants’ short-term and long-term educational and career aspirations, as well as the level of control that participants felt they had over their future success. It also increased participants’ confidence in practical life skills, willingness to try new things, resilience when things go wrong, and sense of wellbeing. Short-term reductions in alcohol intake and smoking were also observed (to be followed up in the long term). 

These impacts were apparent to participants, with eight-in-ten (83% in summer and autumn) feeling capable of more than they had realised post- NCS, and three-quarters (76% and 72% respectively) feeling more confident about getting a job in the future. Most parents also thought NCS impacted positively on their son’s or daughter’s life skills and aspirations. In addition, teachers felt NCS gave participants a greater sense of independence.

After reading the above extract you’d think great things had been done. However, the data and information in the report paint a different picture: more than half of the outcome measures (51.9% summer and 57.4% autumn) show no impact at all for the ‘Transition to adulthood’ outcome area for the 4,401 participants that completed the follow-up survey. If we assume this applies to all the participants that finished the NCS 2013, then it had no impact on 37,266 participants for over half the outcome measures.


Teamwork, communication and leadership

Some of the most substantial and consistent impacts of NCS were in this area. NCS improved participants’ confidence in leading and working in a team, and in putting forward and explaining new ideas to others. It improved their confidence in meeting new people, plus how well they felt they got along with others and treated them with respect.

Again, participants recognised these benefits, with nine-in-ten (92% in summer and 91% in autumn) saying NCS had helped them develop useful skills for the future. Parents and teachers both saw improved teamwork, communication skills and overall confidence levels as some of the most important and tangible benefits offered by NCS.

There were seven outcome measures for this outcome area. In summer, five showed an impact of between 10 and 20 percentage points and the remaining two had an impact of less than 10pp. In autumn, four were between 10 and 20 percentage points and the remaining three were less than 10pp. If these are some of the most substantial results, then …?


Community involvement

NCS had several positive impacts on attitudes and behaviour in this area. It improved participants’ knowledge and understanding of local communities and tackling local problems, and their belief in their own influence and capabilities when getting involved. Community engagement also improved, with participants doing more hours of formal and informal volunteering on average, and becoming more certain to vote at the next general election.

Three-quarters of participants (72% in summer and 76% in autumn) agreed they were now more likely to help out locally, and around six-in-ten (61% and 64% respectively) reported feeling a greater responsibility to their local community after NCS. Parents and teachers both agreed that NCS had positively affected participants’ attitudes towards community involvement, and some were able to give examples of where involvement had increased.

If the ‘Teamwork, communication and leadership’ outcome area gave some of the most substantial results then this outcome area ‘Community involvement’ gave some of the weakest. The summer programme had no impact on 25 (73.5%) out of the 34 outcome measures and it was similar for the autumn programme, 23 (67.6%) out of the 34. And yet, despite this, the summary for this outcome area is wholly positive. There’s no mention of the lack of impact, never mind the reasons for this. No impact is an important finding and the evaluation should take account of it.


Final comments

I’ll leave you with:

  • The evaluation assumes that the NCS 2013 had the same impact on the 32,865 who finished the NCS 2013 but did not complete a follow-up survey as it did on the 4,401 that did complete one
  • There is an unhealthy, almost exclusive, focus on positive impacts when there’s a lot, probably more, to be learnt from the 57 (49.6% – summer) and 61 (53.0% – autumn) of the 115 outcome measures that had no impact or a negative impact
  • The predominant impact from the 58 (50.4% – summer) and 54 (47.0% – autumn) positive outcome measures (from a total of 115) was less than 10 percentage points; is this a success?
  • You could argue that any positive impact achieves the desired outcomes of participants simply being more confident, more self-aware and more responsible – but more than before, or more than non-participants?
  • There’s little in the evaluation to show that the NCS 2013 has helped to create a more cohesive, responsible and engaged society; again, more than before, or more than what?
  • The NCS 2013 may have had an impact on the 37,266 that finished it, but the benefit of this impact won’t be known for perhaps 60 years

