They can learn the lyrics...

“They can learn the lyrics to loads of songs, but they can’t remember…”

I have heard a phrase like this many times in my career in schools, usually from one of two sources:

  1. People who are advocating for more “fun” in lessons – if we make lessons as engaging as the popular songs of the day then kids will remember the content as well as they remember the lyrics to songs.
  2. People bemoaning the lack of students’ retention of the subject matter they are teaching.

I am by no means a musical person. I never learned to read music, or to play any instrument beyond very basic keyboard skills. I typically have no clue whether something is in tune, nor any ability to harmonise. I do, however, very much enjoy singing along to songs on the radio (much to my wife’s chagrin). It was whilst doing this recently on a drive that I reflected on the fallacy of the idea that learning song lyrics is akin to learning the content of a lesson or curriculum at school.

In my reflections I identified four categories of song, based on my ability to reproduce their lyrics:

  1. Songs that I know the lyrics of completely – that is to say I can write out the complete and correct lyrics even in the absence of the tune or words.
  2. Songs that I know all (or nearly all) of the lyrics in context – these are the songs that I can comfortably sing along to, matching the words as they are sung by the artist (or get the timing right on the rare occasions I sing karaoke), but I require at least the tune, and probably the words, to be present in some form.
  3. Songs that I know some of the lyrics in context (and maybe some without the context) – these are the songs where I can sing along to bits of them (usually the chorus or an early verse), and may even be able to reproduce those words without the tune if asked to say or write them, but there will be other parts of the song that I don’t know, during which I have to stop singing.
  4. Songs that I don’t know the lyrics of at all – speaks for itself really!

For me, and I suspect for the vast majority of other people, this is a list which grows as you go through it. In fact, there is only one song that I can confidently place in category 1 (“Another Day in Paradise” by Phil Collins, if you are interested). There are a few songs I can place in category 2, more than that in category 3, and plenty in category 4. Most of these last ones will be songs that I have never heard, or have not heard often, whilst those in categories 2 and 3 will be songs that I hear regularly.

Now, I have no empirical evidence for this statement, but I suspect that the vast majority of people will recognise that their own experience of this is quite similar. There will be some (professional musicians, those with eidetic memories, and those who are much more heavily invested in music than I am) who will, of course, have a much larger list in category 1 than me. But for the everyday person who enjoys music without being devoted to it, I would be willing to bet that these four categories ring true. And the key difference, for me, between category 1 and categories 2 or 3 can be summed up by one word: cue.

It is simple really. When I hear certain tunes, my brain is cued to retrieve the lyrics of the song along with the tune. In cognitive psychology terms, the desired memory (the lyrics) has been associated with an appropriate stimulus (the tune of the song) sufficiently during the encoding phase of my memory of the lyrics that, when I hear the tune, the memory of at least some of the lyrics comes to the fore. Even the (for me) one song that sits in category 1 is somewhat like this; although I can recall the lyrics perfectly, I do so by singing the song in my head – the words and the tune (or at least the rhythm of the song) are inseparable in my memory.

I can offer some evidence of this from the popular TV show, “Never Mind the Buzzcocks”. The last round in this show (for those that haven’t seen it) is called “next lines”, where the host (Greg Davies in its most recent incarnation) reads out a line from a song and the panel of contestants have to provide the next line. It is very common for panellists to fail to do this, even when the person who sang the song is on the panel. The cues for them to recall it are simply not there. Many other times, the members of the panel have to sing the song to themselves to cue the memory of the next lines.

In teaching, however, one of our big goals is for students to retrieve information given little or no cue, or in situations where the stimuli are not the same as they were during the encoding phase of the memory to be retrieved. Students learn a particular piece of knowledge in one context, but then are asked to recall and apply it in many different contexts – they are asked to reproduce the song lyrics without the tune. Even worse, they are asked to reproduce the song lyrics whilst a different tune is playing. I know there will be many people out there with experience of the difficulty here – trying to recall the words of a song but being unable to because a different song is playing in the background. But that is precisely what we are often aiming for in teaching: transference of the knowledge or skills we teach into different contexts or domains. And this is a big reason why students find it much more challenging to recall what we teach than to sing along to their favourite songs.

This isn’t to say that there aren’t useful things we can take away from kids’ ability to sing along. If we can identify useful cues for memory retrieval that are perhaps less context-specific, and get our students to associate them with the knowledge to be retrieved, this can be helpful. To this day I can’t hear the word “trigonometry” without thinking of the phrase “Some Officers Have, Curly Auburn Hair, 'Til Old Age” (the mnemonic that I was taught nearly 30 years ago to help remember which trig function is used with each pair of sides in a right-angled triangle). Of course, this is only helpful when I recognise something as requiring trigonometry (which, fortunately, I can do quite well now) – it doesn’t help me if the context is not one where I would think to use trigonometry in the first place.

There are also things we can take away by reflecting on what helps those songs (or in my case, song) get into category 1. In my case, a key one I can identify is that the song tells a story. Much has been written about the power of storytelling in learning, and I think that definitely plays a role in why the lyrics to “Another Day in Paradise” are so memorable to me. Another aspect is the chunked nature of the song (again, something written about extensively in education circles). Each verse is only four lines long, and each line is only between three and ten words – most of them very short words. There are probably other properties of the song that make it more memorable, which a musician could identify and I cannot, but I do think these two have a particularly strong impact.

So, the next time someone says “Why can kids learn the lyrics to so many songs, but can’t remember…”, or maybe tries to use it to justify increasing the “fun” in a lesson, you can answer them with “It is probably because they are only trying to retrieve that knowledge when they have strong cues to help them. We are trying to get them to retrieve and apply that knowledge across many contexts, so it will always be more challenging for us.”

How to revise for GCSE Maths without all the panic

A friend and colleague of mine, Helen Osmond, has produced a wonderful new book to support pupils in revising for GCSE maths.


Titled "How to revise for GCSE Maths without all the panic", this short book is potentially the perfect companion for your child or children who are struggling to revise effectively for GCSE maths.

The book is split into three easy-to-manage chapters. Chapter 1 examines how the brain forms knowledge and memories, and how revision needs to work with this process.

Chapter 2 then moves on to how to set up your revision space and works through standard revision techniques - including (importantly) why they work, how to maximise their effectiveness, and common mistakes people make when using each strategy. There is also a note on the use of AI in revision - how to use it well and what not to do - as well as how strategies might be adapted for neurodivergent pupils.

Chapter 3 then focuses on the foibles of GCSE maths in particular, looking at the different command words used in exam papers, the common mistakes that cost pupils marks, and the basics of interpreting a mark scheme. There is then guidance on which of the strategies in chapter 2 might suit certain maths content, as well as some advice on what to do when you are actually in the exam itself.

The book ends with a useful link which provides access to further free resources to support revision.

As soon as I finished reading my copy I immediately passed it on to my daughter, who is sitting her GCSE exams this year, and told her she should read it. As a pupil guide, I really like this book. It contains lots of great advice and information, but in a simple and easy-to-digest form. Helen writes as if she is addressing the child directly, no doubt drawing on her years of experience of having similar conversations with the pupils she has tutored. I am sure many of the messages in the book have been delivered piecemeal by schools and parents over pupils' GCSE years, but having them all together in a single reference guide could be, in my opinion, invaluable for children to refer to as they decide on and use revision techniques. I just wish I had had a copy a couple of months ago so I could have given it to my daughter then!

The role of 'problems' in learning mathematics.

Yesterday I came across this quote from a podcast interview between Professor Anna Stokke from the University of Winnipeg and Professor Emeritus John Sweller, best known for formulating Cognitive Load Theory.


This seemed too binary to me. That a problem can be either impossible or simple, depending on the schema that a person has, seems to belie the complexity of how learners develop mathematical knowledge. Surely there must be points where solving is difficult but achievable, and this difficulty lessens over time.

One example given in the podcast is a pair of simultaneous equations: x + y = 5 and 2x - y = 8. Now, of course, to someone who knows lots about simultaneous equations, the path to finding the values of x and y here is relatively clear. As Anna said in the podcast, you would add the equations together to eliminate y, find the value of x and then substitute to find y.
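Working the elimination through on the pair as quoted (a small sketch in Python, using Fraction to keep the arithmetic exact - note that, with these particular numbers, the solution happens not to be whole numbers):

```python
from fractions import Fraction

# The pair as quoted: x + y = 5 and 2x - y = 8.
# Adding the two equations eliminates y: 3x = 13, so x = 13/3.
x = Fraction(13, 3)
# Substituting back into x + y = 5 recovers y.
y = 5 - x  # y = 2/3

# Both original equations are satisfied.
assert x + y == 5
assert 2 * x - y == 8
print(x, y)  # 13/3 2/3
```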

Clearly one needs enough knowledge of algebra to even interpret the question. If I don't have some knowledge of the concept of x and y as unknowns here, I won't even understand what the equations themselves mean, never mind what asking me to 'solve the pair of equations' means.

However, there are many other ways to solve this pair of equations. The podcast mentioned trial and error. Although not efficient, trial and improvement is valid. Other numerical methods such as the Gauss-Seidel method are also valid. Alternatively, we could plot the two linear graphs and look for their point of intersection. We could employ matrix approaches involving the inverse matrix or reduction to row echelon form. If I know anything about any of these approaches, the problem is not impossible even though my schema may not contain any knowledge of solving pairs of equations using elimination.
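As an illustration of one of those alternatives, here is a minimal Gauss-Seidel sketch for the same pair. One assumption on my part: the equations need to be taken in the right order for the iteration to converge, since with the original ordering the iteration diverges.

```python
# Gauss-Seidel iteration for x + y = 5 and 2x - y = 8,
# with each equation rearranged for one unknown:
#   2x - y = 8  ->  x = (8 + y) / 2
#   x + y  = 5  ->  y = 5 - x
x, y = 0.0, 0.0  # arbitrary starting guess
for _ in range(60):
    x = (8 + y) / 2  # update x using the latest y
    y = 5 - x        # update y using the just-updated x

# Converges to the exact solution x = 13/3, y = 2/3
# (the error halves on every pass through the loop).
assert abs(x - 13 / 3) < 1e-9
assert abs(y - 2 / 3) < 1e-9
```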

What I think Professors Stokke and Sweller mean by 'impossible' in this case is actually 'unreasonable to expect learners to do using the approach intended by the teacher'. This I have more sympathy with. If I offered that pair of equations to pupils with the intention of them 'discovering' elimination as an approach without having ever manipulated pairs of equations, I don't think many (if any) of them would work out the approach for themselves. I am not, nonetheless, in 100% agreement with Sweller and Stokke's point of view. If I taught pupils to manipulate systems of equations more generally - showing them how to add, subtract, multiply and the like with single equations or pairs of equations - without the goal of 'solving' the pair, and then explained what it meant to solve a pair without modelling or exemplifying the approach, I think it much more likely that some pupils would then 'discover' the elimination approach.

To be fair, having listened to the podcast, I don't think Sweller or Stokke really think it is impossible anyway. The point they seem to be making is that it is simply not a good approach to ask learners to employ if the goal is them learning to recognise and appropriately deploy that strategy. The much more useful approach, which will support more learners in achieving this goal, is to have an expert exemplify and model the approach, and then have learners practise applying what they have seen/studied to an increasingly complex array of carefully chosen and structured problems, supporting the development of increasing fluency. This I do 100% agree with, given that the goal is the learner getting to the point where they recognise and can effectively deploy the strategy.

The bit that I have more of an issue with is the idea that this should be the only goal of a mathematics education - to be taught lots of strategies, how to recognise when to deploy said strategies, and then to deploy them automatically. In my opinion, there does need to be space created for learners at all levels to grapple with uncertainty, deal with competing constraints, and examine the pros and cons of different approaches. There is a phrase, which I first encountered in Colin Foster's MT article, that comes from Japanese "problem solving" lessons: 'the lesson begins when the problem is solved'. As teachers, our goals for a mathematics education must include opportunities for engaging with authentic problems, not simply questions which are very closely related to a single mathematical approach or result that has been recently taught. One can argue that questions like solving the pair of simultaneous equations given above, once pupils have been taught elimination as an approach, cease to be 'problems' in a mathematical sense and become simply questions that should cue a particular recognition and deployment. Indeed, at GCSE, such a question would be considered an AO1, "use and apply standard techniques" question, rather than an AO3, "solve problems within mathematics and in other contexts" question.

Contrast that with this question taken from the Corbett Maths website:


This problem has multiple possible approaches. Yes, they all revolve around having equal amounts to compare - either equal volumes of Cola or equal amounts of money. However, there are a number of different volumes or amounts of money that could be under consideration here: 6 litres, 100 ml, 1 ml, 1p, 10p, £1 or more are all feasible. This is entirely the sort of problem I can see featuring in a Japanese-style lesson, with the teacher introducing the problem and providing any necessary input around scaling of volumes and pricing or the like, before allowing pupils to approach the decision in their own way and generating meaningful discussion about how the different approaches that pupils take compare to each other.

Another issue to consider is the effect that engaging with problems, prior to learning an approach specifically tailored to the problem type, might have on motivation. This is a complex issue. It might be that, in certain circumstances, having to consider problems for which the solution isn't obvious provides a motivation to learn the techniques that will make the problem easier to solve. Conversely, it may be that this negatively impacts pupil motivation if pupils feel the concept is too difficult to grasp due to early exposure to challenging problems. I also recall a phrase from Skemp here: 'well is the enemy of better'. If pupils are able to solve the initial problem using an inefficient but adequate strategy, they may be less motivated to move to a more efficient one.

I think a lot of this is likely to do with how invested pupils are in the initial problem - either because of a positive attitude to maths in general or due to some 'hook' in the problem itself that piques pupil interest. I do believe, however, that the use of problems to motivate a need (or at least usefulness) to engage with new mathematical learning is one that is worth examining more fully.

In my response to the first Maths Horizons report I shared this image which I think contributes to the role of problems in learning maths:


Rather than problems being something that learners only attempt once fluency has been achieved, which is the view in some places, I strongly believe that teachers of mathematics need to recognise the cyclical relationship between developing fluency, reasoning and problem solving. This was based on a session I had done at a LaSalle MathsConf a couple of months earlier, where I shared this anecdote:

A personal anecdote:

It may just be me, but it seems that the pupils I am teaching are less prepared to try and think about an idea.

They seem to be expecting me to do all of the hard work, and show them every little aspect of everything.

I wonder if they have formed an expectation about maths lessons that all they have to do is sit and listen to the teacher and then try and do what they (we) do.

I wonder if this is because of their experience of maths lessons to this point.


Learning is effortful, and mathematical learning should require the employment of reasoning, deduction, conjecture and the like. I think that exposure to problems where the solution isn't obvious may have a role to play in impressing upon pupils that they are expected to think in maths lessons, expected to bring their reasoning skills alongside their prior knowledge to the table, and that mathematics learning is not simply watching the teacher do something and then trying to regurgitate it.

So, what is the role of 'problems' in learning mathematics? I don't think there is a simple answer. They both build and test fluency. They affect motivation and provide points for discussion. They communicate something of what we value in learning mathematics and in seeking a mathematical education. This is probably what makes their role so debated and difficult - 'problems' and how they are used have many roles in the teaching and learning of mathematics, and how and when to use different problems and problem types will depend a lot on the pupils you have in front of you.




ResearchEd Birmingham session: Front-Loaded Feedback and the 'I do, We do, You do'

So today I attended the excellent ResearchEd Birmingham event and re-delivered the session I ran at the ResearchEd National conference in September 2025, about the idea of front-loading opportunities for feedback into questions and tasks so that pupils have the opportunity to get feedback directly from the task about their approaches, and so that their thinking is made more visible. This has prompted me to create my latest website, www.front-loaded-feedback.co.uk. The slides for this presentation are linked at the bottom of the blog.

During the session I had alluded to some thinking about the 'I do, We do, You do' model of instruction related to the question at the bottom of this slide:


I had planned to expand on this thinking towards the end of the session, but due to time constraints I was unable to - so I thought for clarity and posterity I would include it here.

I tend to think about most maths learning episodes (a distinction I draw as the 'lesson' is clearly not a useful unit of time in relation to securing learning) as going through roughly four phases as we attempt to move learners along a continuum from novice towards greater expertise. These phases are summarised in a slide I used in other sessions:

In the first phase learners have little-to-no knowledge of the concept or process that we are introducing, and so they benefit most from exemplification and modelling - any practice attempted in this phase will likely be unsuccessful.

In the second phase learners have (through the exemplification and modelling) gained some inflexible knowledge of the concept or process. Practice in this phase is likely to be error-prone, particularly if it strays too far from what learners have seen during the exemplification/modelling. This is the phase where guided practice is required, with learners still requiring significant support and immediate feedback on their attempts. This is where I see front-loaded feedback questions and tasks as having real value.

The third phase is one in which knowledge is moving from inflexible towards flexible. The goal of practice is now to expand beyond the initial modelling and open up broader knowledge of the concept or process, in order to target key elements of its structure and push learners beyond the comfort of what was initially modelled or exemplified. This is the phase where procedural and conceptual variation in questions and tasks are likely to be most prevalent and useful (although, depending on what is being taught, these could feature in all phases of learning). In this phase of deliberate practice we are still likely to see errors, and so immediate or near-immediate feedback is still a clear requirement.

However, for me, the reason for the errors and the feedback required as a result is subtly different to the guided practice phase. In guided practice, errors happen because learners are still on shaky ground with what was initially introduced, and so feedback needs to identify and focus on what learners haven't quite grasped from the initial instruction and correct that. In the deliberate practice phase, learners should already be confident in mimicry - being able to do what they were originally shown to do, or identify what they were taught to identify - but the errors now come when trying to extend that thinking, going down wrong paths or under-/over-generalising certain properties. The feedback here needs to highlight where thinking needs to change for learners to be able to move forward. A simple example of the distinction is learners being shown how to solve the simple linear equation 3x + 2 = 8, and then being asked to solve the equations 4x + 5 = 13 and 4x - 5 = 13. The first of these two equations is structurally identical to the one modelled and should be part of guided practice. Any errors arising from it can be tackled by directing attention back to the initial modelling or worked example, and either highlighting (or asking pupils to reflect on) where their approach has differed from the approach given. The second requires a deviation from what was modelled; it requires learners to adapt the modelling (assuming the teacher hasn't modelled a specific example of this structure beforehand) and so might be considered useful as part of deliberate practice.

Depending on the complexity of the concept or process, guided practice might be as small as one question or might be several questions. It might involve front-loaded feedback, the use of mini-whiteboards or multiple-choice questions with immediate feedback, and/or backwards faded examples like those on Dave Taylor's excellent website (this is, of course, not an exhaustive list).

The deliberate practice phase, meanwhile, might include something like increasingly difficult questions (another of Dave's websites), completion tables or similar - still completed on mini-whiteboards or otherwise monitored carefully so that feedback and support can be given when needed.

The independent application phase only then comes once pupils have gained that high success rate (a la Rosenshine) in the deliberate practice phase - once learner thinking around the concept or idea is more secure and misconceptions arising from this and the guided practice phase have been dealt with. It will further stretch pupils by bringing in contexts, interleaving and/or interweaving other concepts or processes with the current one, or in general asking for wider applications of the knowledge studied.

The issue I have with the 'I Do, We Do, You Do' instructional approach is that, if we accept that learners will need to go through these, or similar, phases on their journey towards developing expertise, then this model seems too simplistic to capture the range and nuance of practice opportunities that pupils need to engage with to develop the necessary expertise. I see and hear about teachers treating the 'You Do' as part of independent practice, when it is, at best, the first in what should be a series of questions in the deliberate practice phase, and more likely still part of guided practice. Alternatively, I see and hear about teachers moving straight into questions that might be considered more suitable for the deliberate practice phase during the 'We Do', requiring learners to engage with adaptations before they have even got to grips with the concept or process as exemplified/modelled. And this is not to mention that the very structure of 'I Do, We Do, You Do' can be seen to imply the teaching of a process (several stages of doing things), which is clearly not applicable when teaching concepts (where comparing examples and non-examples is generally more suitable) or facts such as 'a full turn is made up of 360 degrees' (where repetition and reframing through choral response or similar might be more beneficial).

To me, there is a real danger that focusing on a structure like this without the conceptual underpinning of what different stages of practice are trying to achieve means that the practice that learners are offered will not lead to them developing the expertise we want them to develop.

For those reading who were in my session - you can probably understand why I didn't feel I had time to get into this in the session! If you have read through all of this to get to the slides (or skipped to the end to get to them) then your patience is rewarded here.




New consultation on accountability looks to shake up Progress 8 - but will it incentivise what it hopes to?

The long-awaited schools white paper, 'Every child achieving and thriving', has been published today. Leading the way are the reforms to the SEND system, as well as the consultation on those reforms, which I know many have been anticipating.

However, as a former assistant headteacher in charge of data, it was the announced consultation on secondary school accountability measures that really caught my eye. The consultation proposes four major changes to the secondary school accountability measures.

Changes to Progress 8

Two of the changes relate to the Progress 8 measure.

Replacing the three EBacc and three open bucket slots in the current Progress 8 measure with two science slots, two 'breadth' slots and two 'choice' slots

This is probably the biggest change announced in the consultation, as we see the final draft of what was originally proposed in the government's response to the Curriculum and Assessment Review in November, with some tweaks and further information.

Ostensibly this is an attempt to reverse the 'decline' in the take-up of arts subjects since the introduction of the EBacc back in 2010. However, opinion remains divided as to how big an impact the EBacc has actually had on arts take-up.

As can be seen in these graphs (which I generated with the help of Google's Gemini AI tool), the only arts subject to experience a significant decline in the last 15 years is design and technology (DT). However, the rate of this decline in Wales (which does not have the EBacc performance measure) is similar to that in England. The decline is much more likely to be attributable to the increased costs to schools of offering DT at GCSE, and the significant fall in the recruitment of design and technology teachers, meaning that some schools will simply not be able to recruit the DT teachers needed to offer DT as a GCSE option. This is not to mention the changes to the design and technology GCSE, the removal of the Food GCSE from the DT umbrella, and the rise in vocational qualifications that mirror different aspects of the design and technology GCSE, all of which will have some impact on the reported take-up of design and technology at GCSE. The other subjects in these graphs all show similar rates of decline across England, Scotland and Wales (with the exception of drama in Scotland), suggesting that wider societal factors are at play here than simply the introduction of the EBacc.

Even with the removal of the EBacc performance measure, it is hard to see how this can do much to improve the take-up of arts subjects. I am sure there are some schools out there that force pupils down an EBacc pathway simply to try and boost their EBacc take-up figures; however, I would suggest the majority of schools will be ensuring as many pupils as possible take an EBacc option because either:
  1. They believe in the messaging from the previous government that these qualifications are truly the gateway qualifications to further academic study, or
  2. Their curriculum and staffing is set up for offering more of the EBacc subjects through KS3 and KS4 than arts subjects.
This second point is not to be underestimated. To offer more creative subjects at GCSE, or to increase take-up, schools need to spend more time at KS3 preparing pupils for GCSEs in these subjects. This means diverting time at KS3 away from other subjects (most likely the humanities) towards these subjects. This requires more teachers, more specialist equipment or larger spaces (in the case of drama and dance) that many schools will not be set up to provide. Smaller schools especially would struggle with the financial burden of these subjects, compared to predominantly classroom-based subjects such as history, geography and RS, if take-up of the arts were to significantly increase, and would almost certainly have to reduce their humanities staffing. These smaller schools are already likely to be reviewing their staffing following the government pledge, made during the aforementioned Curriculum and Assessment Review, to ensure that the three separate science GCSEs are available in every school - if these schools have to find extra money for science teachers and science equipment, they are even more unlikely to be able to fund increases in arts teaching and equipment.

Simplified banding processes

Instead of the current banding process, which sees schools grouped into five bands based on the confidence intervals of their P8 figure, the government is proposing to simply split schools into quintiles based on their P8 figure, so the bottom 20% would be labelled 'well below average', the next 20% 'below average' and so on. This compares to the distribution of scores in 2024 shown below (note the image was actually produced in 2019, but the figures remained the same until 2024).



The government say this is to address issues created by confidence intervals, such as smaller schools having such wide confidence intervals that they can never be anything other than average. 

Whilst I appreciate that the current system is more convoluted, I can't help but feel that the replacement is too simplistic. The government have said that they will mitigate the loss of confidence intervals by publishing three years' worth of data alongside each other, as well as cohort sizes and an explanation of the inherent uncertainty due to cohort sizes; however, it still feels off to me to have all of these categories be the same size. The figures above suggest an almost normal distribution of schools - in a normal distribution approximately 38% of the data is within 0.5 standard deviations of the mean, with about 15% then between 0.5 and 1 standard deviations on each side, and a similar proportion above or below 1 standard deviation from the mean.
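Those proportions can be sanity-checked with Python's standard library (purely a check of the normal-distribution figures quoted above, not any part of the proposed DfE methodology):

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal distribution (mean 0, sd 1)

within_half = nd.cdf(0.5) - nd.cdf(-0.5)  # within 0.5 sd of the mean
shoulder = nd.cdf(1.0) - nd.cdf(0.5)      # between 0.5 and 1 sd (one side)
tail = 1 - nd.cdf(1.0)                    # beyond 1 sd (one side)

print(round(within_half, 2), round(shoulder, 2), round(tail, 2))
# 0.38 0.15 0.16
```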


Whether arrived at using the current methodology or using percentile (as opposed to quintile) measures, this distribution of schools feels right to me.
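Those proportions can be checked directly from the standard normal CDF, which needs nothing beyond the standard library:

```python
from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

within_half = phi(0.5) - phi(-0.5)   # within 0.5 SD of the mean
half_to_one = phi(1.0) - phi(0.5)    # between 0.5 and 1 SD (each side)
beyond_one = 1 - phi(1.0)            # more than 1 SD above the mean

print(f"within 0.5 SD: {within_half:.1%}")           # ~38.3%
print(f"0.5 to 1 SD, each side: {half_to_one:.1%}")  # ~15.0%
print(f"beyond 1 SD, each side: {beyond_one:.1%}")   # ~15.9%
```

So a banding scheme that respected the shape of the distribution would put roughly 38% of schools in the middle band, not 20%.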

New measures introduced

Alongside these changes, the government is suggesting introducing two new measures for school accountability.

New measure for those that didn't meet the expected standard

It has long been recognised that a small number of pupils performing poorly can drastically alter a school's P8 score. The previous government went some way to address this by introducing a cap on how negative a pupil's P8 score can be. However, this government is looking to go further by including, alongside P8, a new measure of progress for those pupils who arrive at secondary school without having met the expected standard in English and maths.

The proposal is to calculate a best-fit progress score across all the subjects that a pupil sits individually - basically calculating a P3, P4, P5 etc. score and allowing the school to take the highest of these. The hope is that this will allow schools to continue to encourage lower prior-attaining pupils to attempt a broad curriculum, whilst letting schools highlight the progress pupils make in some areas even if those pupils don't do as well in other areas, or don't fill all eight of the P8 buckets.
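As I understand it, the calculation would look something like the sketch below. This is a simplified reading: real Progress 8 compares each pupil's points to national estimates for their prior-attainment group and uses a fixed bucket structure, so the `expected` values and the "best n subjects" selection here are illustrative assumptions, not the DfE's actual methodology:

```python
def best_fit_progress(achieved, expected):
    """Return the highest of the P3, P4, ... scores for one pupil, where Pn
    is taken here as mean progress over the pupil's best n subjects.

    achieved: points the pupil actually scored in each subject
    expected: (hypothetical) expected points for the same subjects

    Assumes the pupil entered at least three subjects.
    """
    progress = sorted((a - e for a, e in zip(achieved, expected)), reverse=True)
    n_max = min(len(progress), 8)
    return max(sum(progress[:n]) / n for n in range(3, n_max + 1))

# A pupil strong in three subjects but weaker elsewhere: per-subject progress
# is [+1, +1, +1, -2, -3], so P3 = +1.0, P4 = +0.25, P5 = -0.4
print(best_fit_progress([5, 5, 5, 2, 1], [4, 4, 4, 4, 4]))  # → 1.0
```

The point of the worked example is the asymmetry: under plain P8 this pupil's two weak results would drag the whole score down, whereas the best-fit approach lets the school report the P3 of +1.0.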

I am sure secondary schools will welcome this move as a way of allowing them to highlight the good work that they do with struggling and disengaged learners, and I hope that schools will use this responsibly as a tool to support their planning for pupils that struggle with learning and make sensible decisions about the curriculum and assessment pathway that these pupils will follow.  

New additional achievement measure for high attainers

Alongside the current measures of percentage of pupils achieving grade 5+ and grade 4+ in English and maths, there is a proposal to include a new measure for the proportion of pupils achieving grade 7+ in English and maths. The government says that this should reinforce 'the incentives for schools to provide a rich and stretching education for all children'.

It will be interesting to see how schools respond to this measure. Much of the extra support and intervention that happens for pupils at GCSE is focused on ensuring as many pupils as possible secure grade 4 or grade 5 in English and maths. Whilst this is beneficial for schools in maximising their accountability measures, it is also beneficial for pupils, as these are the grades most typically required to follow A-Level (or other level 3) pathways post-16. Will schools have the capacity to extend their intervention to pushing grade 6 pupils to grade 7 alongside this? It is generally recognised that it is easier to move a pupil from a 6 to a 7 than from a 3 to a 4, so schools without the capacity to focus on both may decide it is easier to maximise the 7+ figure than the 4+ figure. I hope this will not lead to fewer pupils getting the support they need to secure their college pathways.

I do think this will benefit maths in particular, however. For maths it is often the case that post-16 providers don't accept pupils for A-Level maths with less than grade 7, so this could well provide an extra incentive for schools to provide that extra stretch and challenge for pupils to achieve the grades that will allow them to go on and study A-Level mathematics.

All in all, I think it is probably right to show publicly how well the top-performing pupils in a school go on to achieve. However, context must always be taken into account. Where schools are failing to convert pupils with high prior attainment into top grades at GCSE, this needs to be highlighted and challenged. Where schools are taking learners who stood little chance of reaching top grades and ensuring that they do go on to secure them, this also needs to be recognised and celebrated. Progress 8 can help to do this, but can be complicated by other factors.

In my mind, it would be useful to compare these headline attainment figures not only to the local authority and national averages, but to other schools in similar contexts. Perhaps we could have a third comparison figure against schools with a similar disadvantage intake. We know that disadvantage correlates strongly with outcomes (although the government is working hard to change this), so seeing how well a school supports pupils to achieve top grades compared to other schools with similar profiles of disadvantage would allow more schools to be highlighted and recognised for the excellent work that they do in more difficult circumstances.

The missing piece of the puzzle here is how these new measures will feature in Ofsted's process for holding schools to account, particularly in the 'achievement' judgement on the new score cards. Given the focus throughout the latest framework on pupils with SEND or disadvantage, I would expect the measure for those that didn't meet the expected standard (a group in which pupils with SEND and disadvantaged pupils are over-represented) to feature prominently in their thinking.