Michael Gove and David Laws have changed the examination system for post-16 students at English schools and colleges, and in doing so have altered the basis on which universities can make their conditional offers to applicants. To justify that change, they commissioned in-house research – published in May 2013. That research appears to be statistically flawed, however; Ron Johnston, Kelvyn Jones, David Manley, Tony Hoare and Richard Harris have been trying to check its validity. For a full examination of the flawed findings, see a supplementary article at the LSE Impact Blog.
Students in the academic streams at English schools and colleges typically take three sets of examinations in successive years: GCSE, usually in eight or more subjects at the end of their eleventh year of schooling (typically at age 16); AS-levels a year later, usually in four subjects; and A2-levels in the third year, usually in three or four subjects. The AS and A2 results are combined to give a single, final A-level grade in each subject taken over the two years.
Most students apply to universities during their final year of schooling; their AS-level results are usually known (a minority of schools choose to ‘cash in’ their students’ AS grades only at the end of their A-level courses) but not their A2 and overall A-level grades. Because of this, university admissions officers have to decide which applicants to offer places to on the basis of students’ AS results and their earlier performance at GCSE, plus teachers’ estimates of applicants’ likely A-level grades; most of those offers are conditional on the student getting prescribed grades at A-level. This system has long been considered undesirable and many have argued either that the A2 exams should be brought forward or that university admissions should be delayed so that decisions can be made on known A-level performance.
Those arguments have not succeeded, however, because of reluctance by the examination boards to alter their schedules. Michael Gove and David Laws (the former Secretary of State and the Minister of State at the UK government’s Department for Education – DfE) have moved in a different direction. They believe that splitting the A-level into AS and A2 modules is insufficiently rigorous intellectually and want to return to the pre-1980s situation when A-levels were two-year courses examined at the end only, with no intermediate, externally-set and/or -assessed testing. AS-level exams after one year of post-GCSE study would only be retained if the various Exam Boards decided to offer them; they could almost certainly not be used as a common standard on which universities could assess applicants’ potential, therefore.
University admissions officers will thus, from 2016 on, have to rely on GCSE performance – achieved two years before application in most cases – plus teachers’ estimates of final A-level grades, when making offers to students considered likely to succeed on their degree programmes. Some ‘elite’ universities have protested vigorously against dropping AS-levels, arguing that they are valuable in their own right as either or both of: a stimulus to university application for students from widening-participation backgrounds, whose AS performances can raise their confidence sufficiently to encourage them to think of applying (the ‘Oxford argument’); and a source of statistical evidence from which to form valid predictions of eventual degree success (the ‘Cambridge argument’).
The research and its results
David Laws commissioned research to form ‘part of the evidence base as to whether AS level results are necessary to predict University results, or whether GCSE results alone would suffice’. This involved an in-house statistical analysis of some 88,022 students who graduated in 2011, having finished their A-levels in 2008 and GCSEs in 2006. It sought to predict degree class – a binary outcome (whether or not students got a 2:1 or first-class degree, often termed a ‘good degree’ and required for entry to many postgraduate courses) – from the GCSE and AS-level points scores, both separately and together (using the standard procedures for converting the grades at each examination into an interval-scale number). The results were published in May 2013. The research came to four conclusions, reproduced here in full:
- Neither GCSE or [sic] AS results predict whether a student will get a 2:1 or better with great accuracy (approximately 70% accurate)
- GCSE results are marginally better at predicting whether a student will go on to get a 2:1 or above than AS level results (69.5% accuracy compared to 68.4%)
- The effect of combining GCSE and AS level results adds a negligible degree of accuracy to the predictions
- Without AS level results, we can still predict degree performance to a similar level of accuracy based on GSCE [sic] grades alone.
The final conclusion is presumably the evidence base on which the ministers decided to act. These findings are not very surprising: indeed, more surprising is the researchers’ disappointment that they could only predict correctly the degree performance of 70 per cent of the 88,022 students. (They call it a ‘weak level of accuracy’ – although such a level of predictive accuracy would normally be considered high in the social sciences.)
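For readers who want to see the mechanics: predicting a binary ‘good degree’ outcome from a points score is typically done with a logistic regression, and the sketch below shows the kind of calculation involved. The scores and outcomes are invented for illustration – this is not the DfE dataset, and the report excerpted here does not fully specify the DfE’s exact modelling procedure.

```python
import math

# Hypothetical mini-dataset: (points score, 1 = 'good degree'). NOT the DfE data.
data = [(40, 0), (42, 0), (44, 0), (46, 1), (48, 0),
        (50, 1), (52, 1), (54, 1), (56, 1), (58, 1)]

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

# Standardise the score so gradient descent behaves well.
mean = sum(x for x, _ in data) / len(data)
scaled = [((x - mean) / 10.0, y) for x, y in data]

w, b = 0.0, 0.0
lr = 0.5
for _ in range(5000):                 # plain gradient descent on the log-loss
    gw = gb = 0.0
    for z, y in scaled:
        err = sigmoid(w * z + b) - y  # prediction error for this student
        gw += err * z
        gb += err
    w -= lr * gw / len(scaled)
    b -= lr * gb / len(scaled)

# Classify each student as 'good degree' if the fitted probability is >= 0.5,
# then measure the share of correct predictions - the 'accuracy' reported above.
predictions = [1 if sigmoid(w * z + b) >= 0.5 else 0 for z, _ in scaled]
accuracy = sum(p == y for p, (_, y) in zip(predictions, scaled)) / len(scaled)
print(f"accuracy = {accuracy:.0%}")
```

Because the invented data contain one overlapping pair of students (a ‘good degree’ at 46 points, a weaker one at 48), even a well-fitted model cannot score 100 per cent – which is the general point: individual outcomes are not fully determined by a prior points score.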
In the 4- (post-AS-level) or 5- (post-GCSE) year gap between those examinations and final degree assessments, much can happen (it is about one-quarter of the average graduate’s life to date!). Some students, having specialised in subjects they enjoy and/or are good at, may perform better than expected in their degree examinations; others may have made a wrong choice, or been demotivated by their course, and done less well. Some may have focused their interests and time at university on other things and performed relatively badly as a result; some may have been ‘late developers’; some may have been ill – and so on. There are many ways in which a student’s trajectory may have changed between the examinations, and that some 70 per cent of the outcomes can be correctly predicted suggests a great deal of continuity in student performance. Those who perform well at school are most likely to get ‘good degrees’ – although note that because some 64.6 per cent of all the students analysed obtained a ‘good degree’, simply predicting a ‘good degree’ for every student would have been almost as successful!
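The base-rate point can be made concrete: a ‘model’ that ignores all examination scores and simply predicts a good degree for everyone is right for exactly the 64.6 per cent of students who obtained one, so the reported 70 per cent accuracy is only a modest improvement on knowing nothing at all.

```python
base_rate = 0.646          # share of the sample awarded a 'good degree'
reported_accuracy = 0.70   # the DfE's reported predictive accuracy

# Always predicting the majority outcome is correct for the base-rate share.
majority_accuracy = max(base_rate, 1 - base_rate)

improvement = reported_accuracy - majority_accuracy
print(f"majority baseline: {majority_accuracy:.1%}, "
      f"model improvement: {improvement:.1%}")
```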
The evidence base appears unproblematic and strong, justifying the proposed change. Or is it? To explore that, we set out to replicate the study. Recognising the government’s commitments to both evidence-based policy and open data, in June 2013 we submitted a Freedom of Information request for the data, on the assumption that it could readily be redacted so that confidentiality would not be threatened: all we needed for each of the 88,022 students was their GCSE score, their AS score, the university they attended, and their degree classification.
Finally, after a lengthy struggle, in May 2014 the DfE agreed to send us the dataset it had prepared and analysed, and it arrived in June – one year after our original request. We were able to replicate their analyses exactly, and then undertook our own, exploring the data further and applying better modelling strategies. We encountered a number of issues that led us to query the nature of the analyses undertaken, and so to wonder about the veracity of the reported findings.
The research undertaken in the DfE to provide evidence as a foundation for the Gove/Laws abolition of AS-levels as the first stage of the A-level exam was, to say the least, unimpressive. The students included in the analyses were not representative of all who graduated in 2011 – many of those doing science and engineering were excluded, as were all of those doing medicine and the great majority of those who attended Scottish schools and universities. The analyses included students who should have been excluded because of incomplete data; the modelling strategy was inappropriate; and some of the findings were nonsensical.
Nevertheless, the reported finding that degree performance can be predicted just as well from GCSE as AS-level results is probably correct. But because there is a relatively small correlation between students’ grades at those two examinations, further exploration of the data – not undertaken by the DfE analysts – shows that, while the two sets of predictions may be equally accurate, they differ substantially in who is predicted as likely to get a ‘good degree’ and so is more likely to be offered a university place (especially a place at an ‘elite university’ with high entry standards).
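A constructed example makes this point plain: two predictors can each be 70 per cent accurate overall and yet disagree substantially about which individual students are predicted to get a ‘good degree’. The numbers below are invented for illustration, not drawn from the DfE data.

```python
# Constructed example (1 = 'good degree'): ten students, two predictors.
truth     = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
pred_gcse = [1, 1, 1, 0, 1, 0, 0, 0, 0, 1]  # hypothetical GCSE-based predictions
pred_as   = [0, 1, 1, 1, 0, 1, 1, 0, 0, 0]  # hypothetical AS-based predictions

def accuracy(pred, actual):
    return sum(p == t for p, t in zip(pred, actual)) / len(actual)

acc_gcse = accuracy(pred_gcse, truth)  # both predictors are 70% accurate...
acc_as = accuracy(pred_as, truth)

# ...yet they disagree about a large share of individual students.
disagree = sum(a != b for a, b in zip(pred_gcse, pred_as)) / len(truth)
print(acc_gcse, acc_as, disagree)
```

Here both predictors score 0.7, but they disagree on six of the ten students – so which applicants would receive offers depends heavily on which examination the predictions are based upon, even though the headline accuracy is identical.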
If offers are to be made on quantitative evidence of GCSE performance only, then our re-interpretation of the DfE’s data suggests that as many as one in five students may be disappointed, because their improvement in the following year cannot be taken into account. A policy based on such evidence seems backward- rather than forward-looking, whatever the ‘intellectual’ arguments for abolishing AS-levels and returning to the pre-1990s A-level model. But our re-analysis is in that sense redundant, because the one-year delay in releasing the data (no conspiracy of silence assumed, of course) meant that the policy was already in place.
Note: For further details on the DfE’s report and the re-analysis undertaken by the authors, see a supplementary post at the LSE Impact Blog. This article gives the views of the authors, and not the position of the British Politics and Policy blog, nor of the London School of Economics.
Ron Johnston is a Professor in the School of Geographical Sciences at the University of Bristol.
Kelvyn Jones is Professor of Quantitative Human Geography at the University of Bristol (since 2001), and was Head of the School of Geographical Sciences from 2005 to 2009.
Dr David Manley is a Lecturer at the University of Bristol.
Dr Tony Hoare has been on the academic staff at the University of Bristol since 1976.
Dr Richard Harris is a Reader in Quantitative Geography at the University of Bristol.
Surely a serious question mark is the choice of a single criterion to test the hypothesis. What about student drop-out rates? Average degree obtained? etc. Who decided that ‘getting a good degree’ was the sole objective of our system, and why!
This is poor. Firstly, is the old end-of-two-year assessment pre-1980s or pre-1990s? Both are stated above. In fact, I took A levels in 1991 and they were assessed at the end of the two years. The step-change came with Curriculum 2000.
Secondly, some of the nits picked here are just silly: of course most students at Scottish universities weren’t included in the research – most students at Scottish universities are from Scotland and have taken Scottish qualifications, not A levels!
Thirdly, the begrudging admission, way down the page, that the original research conclusions were actually correct, undermines the rest of the piece. The authors were clearly desperate to disprove it but couldn’t!
The idea that a fifth of students may be disappointed because the improvement they would otherwise have shown cannot be counted is also flawed: what about students in the opposite situation, who have a rough Year 12 but then pull it back in Year 13? And what about the demographic factors which mean that students are now much more likely to get offers anyway than they were in the past – the ball is in their court in many cases, and universities generally err on the side of generosity in their offer-making.
All in all an unsatisfactory piece.