The role of 'problems' in learning mathematics.

Yesterday I came across this quote from a podcast interview between Professor Anna Stokke from the University of Winnipeg and Professor Emeritus John Sweller, best known for formulating Cognitive Load Theory.


This seemed too binary to me. That a problem can either be impossible or simple depending on the schema that a person has seems to belie the complexity of how learners develop mathematical knowledge. Surely, there must be points where the solving is difficult, but achievable, and this difficulty lessens over time.

One example given in the podcast is a pair of simultaneous equations: x + y = 5 and 2x - y = 8. Now, of course, to someone who knows lots about simultaneous equations, the path to finding the values of x and y here is relatively clear. As Anna said in the podcast, you would add the equations together to eliminate y, find the value of x and then substitute to find y.

Clearly one needs enough knowledge of algebra to even interpret the question. If I don't have some knowledge of the concept of x and y as unknowns here, I won't even understand what the equations themselves mean, never mind what asking me to 'solve the pair of equations' means.

However, there are many other ways to solve this pair of equations. The podcast mentioned trial and error; although not efficient, trial and improvement is a valid approach. Other numerical methods, such as the Gauss-Seidel method, are also valid. Alternatively, we could plot the two linear graphs and look for their point of intersection. We could employ matrix approaches involving the inverse matrix or reduction to row echelon form. If I know anything about any of these approaches, the problem is not impossible, even though my schema may not contain any knowledge of solving pairs of equations using elimination.
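For what it's worth, the pair as given has the (perhaps surprising) non-integer solution x = 13/3, y = 2/3. Here is a minimal Python sketch - my own illustration, not from the podcast - checking this via two of the approaches mentioned, elimination and the inverse matrix:

```python
from fractions import Fraction

# Method 1: elimination. Adding the equations removes y:
#   (x + y) + (2x - y) = 5 + 8  =>  3x = 13
x = Fraction(13, 3)
y = 5 - x

# Method 2: the inverse-matrix approach for the system
#   [1  1][x]   [5]
#   [2 -1][y] = [8]
det = Fraction(1 * (-1) - 1 * 2)   # determinant = -3
x2 = (-1 * 5 - 1 * 8) / det        # first row of the (unscaled) inverse times [5, 8]
y2 = (-2 * 5 + 1 * 8) / det        # second row of the (unscaled) inverse times [5, 8]

assert (x, y) == (x2, y2)
assert x + y == 5 and 2 * x - y == 8
print(x, y)   # 13/3 2/3
```

The non-integer answer is worth knowing before offering the question for trial and improvement.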

What I think Professors Stokke and Sweller mean by 'impossible' in this case is actually 'unreasonable to expect learners to do using the approach intended by the teacher'. This I have more sympathy with. If I offered that pair of equations to pupils with the intention of them 'discovering' elimination as an approach without having ever manipulated pairs of equations, I don't think many (if any) of them would work out the approach for themselves. I am not, nonetheless, in 100% agreement with Sweller and Stokke's point of view. If I taught pupils more general manipulation of systems of equations - showing them how to add, subtract, multiply and so on with single equations or pairs of equations - without the goal of 'solving' the pair, and then explained what it meant to solve a pair without modelling or exemplifying the approach, I think it much more likely that some pupils would then 'discover' the elimination approach.

To be fair, having listened to the podcast, I don't think Sweller or Stokke really think it is impossible anyway. The point they seem to be making is that it is simply not a good approach to ask learners to employ if the goal is for them to learn to recognise and appropriately deploy that strategy. The much more useful approach, which will support more learners in achieving this goal, is to have an expert exemplify and model the approach, and then have learners practise applying what they have seen/studied to an increasingly complex array of carefully chosen and structured problems, supporting the development of increasing fluency. This I do 100% agree with, given that the goal is the learner getting to the point where they recognise and can effectively deploy the strategy.

The bit that I have more of an issue with is the implication that this is the only goal of a mathematics education - to be taught lots of strategies, to recognise when to deploy said strategies, and then to deploy them automatically. In my opinion, there does need to be space created for learners at all levels to grapple with uncertainty, deal with competing constraints, and examine the pros and cons of different approaches. There is a phrase, which I first encountered in Colin Foster's MT article, that comes from Japanese "problem solving" lessons: 'the lesson begins when the problem is solved'. As teachers, our goals for a mathematics education must include opportunities for engaging with authentic problems, not simply questions which are very closely related to a single mathematical approach or result that has been recently taught. One can argue that questions like solving the pair of simultaneous equations given above, once pupils have been taught elimination as an approach, cease to be 'problems' in a mathematical sense and become simply questions that should cue a particular recognition and deployment. Indeed, at GCSE, such a question would be considered an AO1, "use and apply standard techniques" question, rather than an AO3, "solve problems within mathematics and in other contexts" question.

Contrast that with this question taken from the Corbett Maths website:


This problem has multiple possible approaches. Yes, they all revolve around having equal amounts to compare - either equal volumes of Cola or equal amounts of money. However, there are a number of different volumes or amounts of money that could be considered here: 6 litres, 100 ml, 1 ml, 1p, 10p, £1 or more are all feasible. This is entirely the sort of problem I can see featuring in a Japanese-style lesson, with the teacher introducing the problem and providing any necessary input around scaling of volumes and pricing or the like, before allowing pupils to approach the decision in their own way and generating meaningful discussion about how the different approaches pupils take compare to each other.

Another issue to consider is the role that engaging with problems prior to learning an approach specifically tailored to the problem type might have on motivation. This is a complex issue. It might be that, in certain circumstances, having to consider problems for which the solution isn't obvious provides a motivation to learn the techniques that will make the problem easier to solve. Conversely, it may be that this negatively impacts pupil motivation if early exposure to challenging problems leaves pupils feeling the concept is too difficult to grasp. I also recall a phrase from Skemp here: 'well is the enemy of better'. If pupils are able to solve the initial problem using an inefficient but adequate strategy, they may be less motivated to move to a more efficient one.

I think a lot of this is likely to do with how invested pupils are in the initial problem - either because of a positive attitude to maths in general or due to some 'hook' in the problem itself that piques pupil interest. I do believe, however, that the use of problems to motivate a need for (or at least the usefulness of) new mathematical learning is an idea worth examining more fully.

In my response to the first Maths Horizons report I shared this image, which I think speaks to the role of problems in learning maths:


Rather than problems being something that learners only attempt once fluency has been achieved, which is the view in some places, I strongly believe that teachers of mathematics need to recognise the cyclical relationship between developing fluency, reasoning and problem solving. This was based on a session I had done at a LaSalle MathsConf a couple of months earlier, where I shared this anecdote:

A personal anecdote:

It may just be me, but it seems that the pupils I am teaching are less prepared to try and think about an idea.

They seem to be expecting me to do all of the hard work, and show them every little aspect of everything.

I wonder if they have formed an expectation about maths lessons that all they have to do is sit and listen to the teacher and then try and do what they (we) do.

I wonder if this is because of their experience of maths lessons to this point.


Learning is effortful, and mathematical learning should require the employment of reasoning, deduction, conjecture and the like. I think that exposure to problems where the solution isn't obvious may have a role to play in impressing upon pupils that they are expected to think in maths lessons, expected to bring their reasoning skills alongside their prior knowledge to the table, and that mathematics learning is not simply watching the teacher do something and then trying to regurgitate it.

So, what is the role of 'problems' in learning mathematics? I don't think there is a simple answer. They both build and test fluency. They affect motivation and provide points for discussion. They communicate something of what we value in learning mathematics and in seeking a mathematical education. This is probably what makes their role so debated and difficult: 'problems' and how they are used have many roles in the teaching and learning of mathematics, and how and when to use different problems and problem types will depend a lot on the pupils in front of you.




ResearchEd Birmingham session: Front-Loaded Feedback and the 'I do, We do, You do'

So today I attended the excellent ResearchEd Birmingham event and re-delivered the session I ran at the ResearchEd National conference in September 2025. The session is about front-loading opportunities for feedback into questions and tasks, so that pupils can get feedback directly from the task about their approaches and are pushed to make their thinking more visible - an idea which has prompted me to create my latest website, www.front-loaded-feedback.co.uk. The slides for this presentation are linked at the bottom of the blog.

During the session I had alluded to some thinking about the 'I do, We do, You do' model of instruction related to the question at the bottom of this slide:


I had planned to expand on this thinking towards the end of the session, but due to time constraints I was unable to - so I thought for clarity and posterity I would include it here.

I tend to think about most maths learning episodes (a distinction I draw because the 'lesson' is clearly not a useful unit of time in relation to securing learning) as moving through roughly four phases, as we attempt to take learners along a continuum from novice towards greater expertise. These phases are summarised in a slide I used in other sessions:

In the first phase learners have little-to-no knowledge of the concept or process that we are introducing, and so they benefit most from exemplification and modelling - any practice attempted in this phase will likely be unsuccessful.

In the second phase learners have (through the exemplification and modelling) gained some inflexible knowledge of the concept or process. Practice in this phase is likely to be error-prone, particularly if it strays too far from what learners have seen during the exemplification/modelling. This is the phase where guided practice is required, with learners still needing significant support and immediate feedback on their attempts. This is where I see front-loaded feedback questions and tasks as having real value.

The third phase is one in which knowledge is moving from inflexible towards flexible. The goal of practice is now to expand beyond the initial modelling, opening up broader knowledge of the concept or process, targeting key elements of its structure and pushing learners beyond the comfort of what was initially modelled or exemplified. This is the phase where procedural and conceptual variation in questions and tasks are likely to be most prevalent and useful (although, depending on what is being taught, these could feature in all phases of learning). In this phase of deliberate practice we are still likely to see errors, and so immediate or near-immediate feedback is still a clear requirement.

However, for me, the reason for the errors, and the feedback required as a result, is subtly different to the guided practice phase. In guided practice, errors happen because learners are still on shaky ground with what was initially introduced, and so feedback needs to identify and focus on what learners haven't quite grasped from the initial instruction and correct that. In the deliberate practice phase, learners should already be confident in mimicry - able to do what they were originally shown to do, or identify what they were taught to identify - but the errors now come when trying to extend that thinking: going down wrong paths, or under-/over-generalising certain properties. The feedback here needs to highlight where thinking needs to change for learners to move forward. A simple example of the distinction is learners being shown how to solve the simple linear equation 3x + 2 = 8, and then being asked to solve the equations 4x + 5 = 13 and 4x - 5 = 13. The first of these two equations is structurally identical to the one modelled and should be part of guided practice. Any errors arising from it can be tackled by directing attention back to the initial modelling or worked example, and either highlighting (or asking pupils to reflect on) where the pupil's approach has differed from the approach given. The second requires a deviation from what was modelled; it requires learners to adapt from the modelling (assuming the teacher hasn't modelled a specific example of this structure beforehand), and so might be considered useful as part of deliberate practice.

Depending on the complexity of the concept or process, guided practice might be as small as one question or might be several questions. It might involve front-loaded feedback, the use of mini-whiteboards or multiple-choice questions with immediate feedback, and/or backwards faded examples like those on Dave Taylor's excellent website (this is, of course, not an exhaustive list).

The deliberate practice phase, meanwhile, might include something like increasingly difficult questions (another of Dave's websites), completion tables or similar - still completed on mini-whiteboards or otherwise monitored carefully so feedback and support can be given when needed.

The independent application phase only then comes once pupils have gained that high success rate (a la Rosenshine) in the deliberate practice phase - once learner thinking around the concept or idea is more secure and misconceptions arising from this and the guided practice phase have been dealt with. It will further stretch pupils by bringing in contexts, interleaving and/or interweaving other concepts or processes with the current one, or in general asking for wider applications of the knowledge studied.

The issue I have with the 'I Do, We Do, You Do' instructional approach is that, if we accept that learners need to go through these, or similar, phases on their journey towards developing expertise, then the model is too simplistic to capture the range and nuance of practice opportunities that pupils need in order to develop that expertise. I see and hear about teachers treating the 'You Do' as part of independent practice, when it is, at best, the first in what should be a series of questions in the deliberate practice phase, and more likely still part of guided practice. Alternatively, I see and hear about teachers moving straight into questions more suited to the deliberate practice phase during the 'We Do', requiring learners to engage with adaptations before they have even got to grips with the concept or process as exemplified/modelled. And this is not to mention that the very framing of 'I Do, We Do, You Do' can be seen to imply the teaching of a process (several stages of doing things), which is clearly not applicable when teaching concepts (where comparing examples and non-examples is generally more suitable) or facts, like a full turn being made up of 360 degrees (where repetition and reframing through choral response or similar might be more beneficial).

To me, there is a real danger that focusing on a structure like this without the conceptual underpinning of what different stages of practice are trying to achieve means that the practice that learners are offered will not lead to them developing the expertise we want them to develop.

For those reading who were in my session - you can probably understand why I didn't feel I had time to get into this in the session! If you have read through all of this to get to the slides (or skipped to the end to get to them) then your patience is rewarded here.




New consultation on accountability looks to shake up Progress 8 - but will it incentivise what it hopes to?

The long-awaited schools white paper, 'Every child achieving and thriving', has been published today. Leading the way are the reforms to the SEND system, as well as the consultation on those reforms, which I know many have been anticipating.

However, as a former assistant headteacher in charge of data, it was the announced consultation on secondary school accountability measures that really caught my eye. The consultation proposes four major changes to the secondary school accountability measures.

Changes to Progress 8

Two of the changes relate to the Progress 8 measure.

Replacing the three EBacc and three open bucket slots in the current Progress 8 measure with two science slots, two 'breadth' slots and two 'choice' slots

This is probably the biggest change announced in the consultation, as we see the final draft of what was originally proposed in the government's response to the Curriculum and Assessment Review in November, with some tweaks and further information.

Ostensibly, this is an attempt to reverse the 'decline' in the take-up of arts subjects since the introduction of the EBacc back in 2010. However, opinion remains divided as to how big an impact the EBacc has actually had on arts take-up.

As can be seen in these graphs (which I generated with the help of Google's Gemini AI tool), the only arts subject to experience a significant decline in the last 15 years is design and technology (DT). However, the rate of this decline is similar in Wales (which does not have the EBacc performance measure) to that in England. The decline is much more likely to be attributable to the increased costs to schools of offering DT at GCSE, and to the significant fall in the recruitment of design and technology teachers, meaning that some schools simply cannot recruit the DT teachers needed to offer it as a GCSE option. This is not to mention the changes to the design and technology GCSE, the removal of the Food GCSE from the DT umbrella, and the rise in vocational qualifications that mirror different aspects of the design and technology GCSE, all of which will have some impact on the reported take-up of design and technology at GCSE. The other subjects in these graphs all show similar rates of decline across England, Scotland and Wales (with the exception of drama in Scotland), suggesting that wider societal factors, rather than simply the introduction of the EBacc, are at play here.

Even with the removal of the EBacc performance measure, it is hard to see how this can do much to improve the take-up of arts subjects. I am sure there are some schools out there that will force pupils down an EBacc pathway simply to try and boost their EBacc take-up figures; however, I would suggest the majority of schools will be ensuring as many pupils as possible take an EBacc option because either:
  1. They believe in the messaging from the previous government that these qualifications are truly the gateway qualifications to further academic study, or
  2. Their curriculum and staffing is set up for offering more of the EBacc subjects through KS3 and KS4 than arts subjects.
This second point is not to be underestimated. To offer more creative subjects at GCSE, or to increase take-up, schools need to spend more time at KS3 preparing pupils for GCSEs in these subjects. This means diverting time at KS3 away from other subjects (most likely the humanities) towards these subjects. It also requires more teachers, and more specialist equipment or larger spaces (in the case of drama and dance), that many schools will not be set up to provide. Smaller schools especially would struggle with the financial burden of these subjects, compared to predominantly classroom-based subjects such as history, geography and RS, if take-up of the arts were to significantly increase, and would almost certainly have to reduce their humanities staffing. These smaller schools are already likely to be reviewing their staffing following the government pledge, made during the aforementioned Curriculum and Assessment Review, to ensure that the three separate science GCSEs are available in every school - if these schools have to find extra money for science teachers and science equipment, they are even more unlikely to be able to fund increases in arts teaching and equipment.

Simplified banding processes

Instead of the current banding process, which sees schools grouped into five bands based on the confidence intervals of their P8 figure, the government is proposing simply to chop schools into quintiles based on their P8 figure, so the bottom 20% would be 'well below average', the next 20% labelled 'below average' and so on. This compares to the distribution of scores in 2024 shown below (note the image was actually produced in 2019, but the figures remained the same until 2024).



The government say this is to address issues created by confidence intervals, such as smaller schools having such wide confidence intervals that they can never be anything other than average. 

Whilst I appreciate that the current system is more convoluted, I can't help but feel that the replacement is too simplistic. The government have said that they will mitigate the loss of confidence intervals by publishing three years' worth of data alongside each other, as well as cohort sizes and an explanation of the inherent uncertainty due to cohort size. However, it still feels wrong to me to have all of these categories be the same size. The figures above suggest an almost normal distribution of schools - in a normal distribution, approximately 38% of the data lies within 0.5 standard deviations of the mean, with about 15% between 0.5 and 1 standard deviations on each side, and a similar proportion beyond 1 standard deviation on each side.
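Those proportions are straightforward to confirm from the standard normal cumulative distribution function; a quick standard-library Python check (my own, not from the consultation):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

within_half_sd = phi(0.5) - phi(-0.5)   # proportion within 0.5 SD of the mean
half_to_one = phi(1.0) - phi(0.5)       # between 0.5 and 1 SD, on each side
beyond_one = 1 - phi(1.0)               # beyond 1 SD, on each side

print(f"{within_half_sd:.1%} {half_to_one:.1%} {beyond_one:.1%}")  # 38.3% 15.0% 15.9%
```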


Whether arrived at using the current methodology or using percentile (as opposed to quintile) measures, this distribution of schools feels right to me.

New measures introduced

Alongside these changes, the government is suggesting introducing two new measures for school accountability.

New measure for those that didn't meet the expected standard

It has long been recognised that a small number of pupils performing poorly can drastically alter a school's P8 score. The previous government went some way to addressing this by introducing a cap on how negative a pupil's P8 score can be; however, this government is looking to go further by including a new measure of progress alongside P8 for those pupils who arrive at secondary school without having met the expected standard in English and maths.

The proposal is to calculate a best-fit progress score across however many subjects a pupil actually sits - basically calculating a P3, P4, P5 etc. score and allowing the school to take the highest of these. It is hoped that this will allow schools to continue to encourage lower prior-attaining pupils to attempt a broad curriculum, whilst allowing schools to highlight the progress pupils make in some areas even if those pupils don't do as well in others, or don't fill all eight of the P8 buckets.

I am sure secondary schools will welcome this move as a way of allowing them to highlight the good work that they do with struggling and disengaged learners, and I hope that schools will use this responsibly as a tool to support their planning for pupils that struggle with learning and make sensible decisions about the curriculum and assessment pathway that these pupils will follow.  

New additional achievement measure for high attainers

Alongside the current measures of percentage of pupils achieving grade 5+ and grade 4+ in English and maths, there is a proposal to include a new measure for the proportion of pupils achieving grade 7+ in English and maths. The government says that this should reinforce 'the incentives for schools to provide a rich and stretching education for all children'.

It will be interesting to see how schools respond to this measure. Much of the extra support and intervention that happens for pupils at GCSE is focused on ensuring as many pupils as possible secure grade 4 or grade 5 in English and maths. Whilst this is beneficial for schools in maximising their accountability measures, it is also beneficial for pupils, as these are the grades most typically required to follow A-Level (or other Level 3) pathways post-16. Will schools have the capacity to extend their intervention to pushing grade 6 pupils to grade 7 alongside this? I hope this will not lead to fewer pupils getting the support they need to secure their college pathways if schools decide it is easier to maximise the 7+ figure than the 4+ figure (should they lack the capacity to focus on both) - it is generally recognised that it is easier to move a pupil from a 6 to a 7 than from a 3 to a 4.

I do think this will benefit maths in particular, however. For maths it is often the case that post-16 providers don't accept pupils for A-Level maths with less than grade 7, so this could well provide an extra incentive for schools to provide that extra stretch and challenge for pupils to achieve the grades that will allow them to go on and study A-Level mathematics.

All in all, I think it is probably right to show publicly how well the top-performing pupils in a school go on to achieve. However, context must always be taken into account. Where schools are failing to convert high prior attainers into top grades at GCSE, this needs to be highlighted and challenged. Where schools are taking learners that stood little chance of reaching top grades and ensuring that they do go on to secure them, this needs to be recognised and celebrated. Progress 8 can help to do this, but can be complicated by other factors. In my mind, it would be useful to compare these headline attainment figures not only to the local authority and national averages, but to other schools in similar contexts. Perhaps we could have a third comparison figure against schools with a similar disadvantaged intake; we know that disadvantage correlates strongly with outcomes (although the government is working hard to change this), and so seeing how well a school supports pupils to achieve top grades compared to other schools with similar profiles of disadvantage would allow more schools to be highlighted and recognised for the excellent work that they do in more difficult circumstances.

The missing piece of the puzzle here is how these new measures will feature in Ofsted's process for holding schools to account, particularly in their 'achievement' judgement on the new score cards. Given the focus throughout the latest framework on pupils with SEND or disadvantage, I would expect the measure for those that didn't meet the expected standard (a group in which pupils with SEND or disadvantage are over-represented) to feature prominently in their thinking.




An interesting property of linear sequences - inspired by The 1% Club.

The 1% Club is one of my favourite quiz shows. It is the only quiz show I have actually applied to be on (no success, unfortunately), but I play along on the app all the time, and also regularly complete the daily question that comes through the app. Yesterday (27th January 2026) had a very interesting question (from a maths point of view) that sparked a little dive into linear sequences. I resisted posting it yesterday as I didn't want to provide spoilers for any readers who also play along.

So, The 1% Club daily question yesterday was this:

What two-digit number replaces the question marks in this sequence of numbers:

92, 23, 53, 83, 14, 44, ??

What made this interesting was that the way I arrived at the correct answer was very different from the way the app explained how to get there (if you want to try to answer before I reveal the solution then don't scroll down too far!)

.

.

.

.

.

.

.

.

.

.

.

.

The correct answer was 74. The reasoning the app gave was that if you reverse the digits of each number in the list you get the sequence 29, 32, 35, 38, 41 and 44, so the next value would be 47 which, when reversed, gives 74. This makes perfect sense. But it isn't how I arrived at 74.

I (as I am sure many other readers also) noticed that a lot of the jumps were +30 and that those that weren't were -69. There also seemed to be a regularity to when these jumps appeared; a jump of -69 followed by two jumps of +30. Given the jump of -69 from 83 to 14, I reasoned there would be a jump of +30 (although I was wrong about the regularity of the pattern of jumps as the next would actually be another -69).

Of course, once I realised that these two approaches both gave the same answer, I absolutely had to try and decide whether this was a property of this particular set of numbers, or whether it would be true for the reverse digits of all linear sequences made of two digit numbers.

Rather than diving straight into the algebra (that is coming, don't worry), I decided to play with a few more sequences first, to create further examples and see if this sequence was obviously a unique case (a very good problem-solving strategy in general, I find, to allow for pattern spotting).

So I tried 30, 34, 38, 42, 46, 50 becoming 03, 43, 83, 24, 64, 05 - which quickly disabused me of the idea that there was any regularity to when a sequence went up or down - and then I tried 17, 24, 31, 38, 45, 52 becoming 71, 42, 13, 83, 54, 25.

It was at this point that I realised that the two jump values in each reversed sequence were always 99 apart: +30 and -69 in the first, +40 and -59 in the second, +70 and -29 in the third. It took me an embarrassingly long time to recognise that the subtractions were happening when the original linear sequence bridged a multiple of 10, or that if the linear sequence was going up in 3 (say), the reversed sequence should be going up in 30.

I started to explore the algebra at this point, but quickly realised that I was being confounded by the fact that I had only tried differences less than 10 in the original linear sequences. So I tried 26, 39, 52, 65, 78, 91 becoming 62, 93, 25, 56, 87, 19 (which showed me it wasn't as simple as subtractions occurring when the original sequence bridged a multiple of 10; it was really about the units digit becoming smaller - which should have been obvious really) and also 12, 35, 58, 81 becoming 21, 53, 85, 18. This confirmed that the 99 relationship was still a thing - or, more precisely, that each subtraction was the positive difference minus 99.
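For readers who prefer to let a computer do the casework, here is a short Python sketch (my own, written after the fact, with function names of my choosing) that reverses each term of a two-digit linear sequence and confirms that the two jump values in the reversed sequence always differ by 99:

```python
def rev(n):
    """Reverse the digits of a two-digit number (so 30 -> 03, i.e. 3)."""
    return (n % 10) * 10 + n // 10

def reversed_jumps(start, step, terms):
    """Jumps between consecutive digit-reversed terms of a linear sequence."""
    seq = [start + step * k for k in range(terms)]
    rseq = [rev(n) for n in seq]
    return [b - a for a, b in zip(rseq, rseq[1:])]

# (start, step, number of two-digit terms) - including the sequences tried above
for start, step, terms in [(29, 3, 6), (30, 4, 6), (17, 7, 6), (26, 13, 6), (12, 23, 4)]:
    jumps = set(reversed_jumps(start, step, terms))
    assert len(jumps) <= 2
    if len(jumps) == 2:
        assert max(jumps) - min(jumps) == 99   # the two jump values are 99 apart

print(reversed_jumps(29, 3, 6))   # [-69, 30, 30, -69, 30] - the jumps in the original puzzle
```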

At this point I dived properly into the algebra, which I did as follows (again, if you want to try it first then don't scroll down):

.

.

.

.

.

.

.

.

.

.

.

.

(I added some text to show clearly what the algebra implied that I didn't write in my own scribblings).

In terms of this as a task for pupils, I think there would be something interesting in offering KS3 pupils a chance to explore 'reverse linear' sequences - probably at a distance from linear sequences themselves. I think it might reinforce some properties of linear sequences and it would be very interesting to see if they spot the 99 link and how they try and justify it.

I definitely think there would be something in using the proof with a GCSE/Further GCSE/A-Level class, either as an example of constructing a logical proof or as an exercise as part of their practice in creating a deductive proof.

Of course, the question remains about what happens with linear sequences that stray into three-digit numbers (single digits are trivial, as we can just treat them as two-digit numbers with first digit 0). I have answered this question to my own satisfaction and so will leave it as an exercise for the interested reader, with one hint, which comes from when I shared the initial problem with other maths teachers at Twinkl and one of them came up with a third approach to the original problem (equivalent to what I have outlined, and also leading to the correct answer):
"Add 30 each time but if the answer goes over 100 add the 100s digit to the ones digit".



A mathematical curiosity?

In writing my new book 'Practising Maths' I referenced a lovely result (you will have to buy it to see how): sums such as 1 + 2 + 1, 1 + 2 + 3 + 2 + 1, 1 + 2 + 3 + 4 + 5 + 4 + 3 + 2 + 1, etc. all produce square numbers.

If you haven't come across this result before then feel free to have a look at it for a minute (even try and prove it) - if you are familiar with consecutive triangular numbers summing to square numbers, it is closely related.

The curiosity I noticed was that I knew 121 was also square. So I became interested in the fact that 1 + 2 + 1 is square, and that 121 - the same digits read as a single number - is square too. I decided to look into the others, and it turns out they are also square! Well, the ones up to 12345678987654321 are square, anyway.
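Both parts of the observation are quick to verify in Python (my own check; the concatenated-digits pattern only makes sense for n up to 9):

```python
from math import isqrt

for n in range(1, 10):
    digits = list(range(1, n + 1)) + list(range(n - 1, 0, -1))   # 1, 2, ..., n, ..., 2, 1

    total = sum(digits)
    assert total == n * n                    # the sum 1 + 2 + ... + n + ... + 2 + 1 is n squared

    number = int("".join(map(str, digits)))  # e.g. n = 3 gives 12321
    assert isqrt(number) ** 2 == number      # the concatenation is a perfect square too
    assert number == int("1" * n) ** 2       # in fact it is the repunit 11...1 (n ones) squared
```

Each palindromic number is the square of a repunit (e.g. 12321 = 111²), which is one way of seeing where the pattern comes from.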


...

This of course raised a question - is this a coincidence, or a reflection of something deeper? You might like to spend some time exploring and coming to your own conclusion before you read on.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
I guess the truth is a little of both.
If we consider squaring polynomials of increasing order with unit coefficients we get the following:


These are, of course, the same expressions as above, but in base x rather than base 10. So, if we substitute x = 1 into the expressions we get the sums on the left of the above table. However, if we substitute x = 10 into the same expressions, we get the numbers on the right of the table.
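For anyone wanting to reproduce the rows of the table, the coefficients can be generated directly by squaring the polynomial; a short Python sketch (mine, not from the original scribblings):

```python
def square_unit_poly(n):
    """Coefficients of (1 + x + ... + x**(n-1))**2, lowest degree first."""
    coeffs = [0] * (2 * n - 1)
    for i in range(n):
        for j in range(n):
            coeffs[i + j] += 1   # multiplying x**i by x**j contributes to x**(i+j)
    return coeffs

coeffs = square_unit_poly(3)
print(coeffs)                                         # [1, 2, 3, 2, 1]
print(sum(coeffs))                                    # value at x = 1: the sum 1+2+3+2+1 = 9
print(sum(c * 10**k for k, c in enumerate(coeffs)))   # value at x = 10: 12321
```

Note that for n ≥ 10 the middle coefficients exceed 9, so reading them off as base-10 digits would require carrying - which is why the neat palindromic numbers stop at 12345678987654321.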

In terms of a task, we could offer the first few rows of the table to pupils and ask them what they notice/wonder. They might explore when the pattern breaks and why. We might encourage them to write out the numbers using explicit base 10 notation, such as 1 × 100 + 2 × 10 + 1 and see what insights this brings out. Pupils with the necessary algebra skills might even explore the expansions given above. Or we might just show it to pupils as an example of a mathematical curiosity.