'Ofsted cannot disentangle the contribution of a school from the background of its pupils, so it's dangerous to draw conclusions from its reports'
Some of the statistically driven stories we read about in education come dangerously close to tautology, argues one education journalist.
A few years ago, I finished a talk at an assessment conference with an anecdote.
I recounted phoning a former senior figure at one of the government’s education regulators, a body that relies heavily on statistics, who had recently spoken out about the number of pupils across England failing to do well at GCSE.
I had asked my interviewee whether part of the cause of that might be the normal distribution. This, as the statisticians among you will know, is a bell-shaped curve on a graph.
Traditionally, examiners have tended to design tests so that pupil results follow a normal distribution. This means that most students will score somewhere in the middle, but there will be a guaranteed, if smaller, number of test-takers given results at either extreme of the curve.
I didn’t put it that starkly to my interviewee. But the implied question was whether the existence of a certain number of lower performers was simply a reflection of how statisticians had arranged for the results to come out, given the shape of this curve.
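The mechanism behind that question can be sketched in a few lines of code. This is a hypothetical illustration, not a description of any actual exam board's procedure: if grade boundaries are set by percentile within each cohort (one simple form of norm-referencing), then the proportion of "low performers" is fixed by construction, and it stays the same even if the whole cohort's underlying ability improves.

```python
import random

random.seed(0)

def percentile_cutoff(scores, pct):
    """Return the score below which `pct` percent of the cohort falls
    (a hypothetical percentile-based grade boundary)."""
    ordered = sorted(scores)
    return ordered[int(len(ordered) * pct / 100)]

# Two invented cohorts: the second is uniformly 15 marks stronger.
weaker = [random.gauss(50, 10) for _ in range(10_000)]
stronger = [score + 15 for score in weaker]

for cohort in (weaker, stronger):
    # Suppose the bottom 30% are deemed to have "failed".
    cutoff = percentile_cutoff(cohort, 30)
    fail_rate = sum(score < cutoff for score in cohort) / len(cohort)
    print(round(fail_rate, 2))  # 0.3 for both cohorts
```

Both cohorts show an identical failure rate, despite one being substantially stronger, because the boundary moves with the distribution. That is the tautology worry in miniature: a statistic produced this way cannot, on its own, tell you whether pupils are doing badly.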