Sunday, 16 April 2017

Gradient of lines - a new approach

Recently I have been teaching the idea of gradient to Year 8, and I decided to approach things quite differently. In the past I would move quite quickly through the ideas of gradient as a measure of slope, finding gradients of lines plotted on coordinate axes, then linking gradient and intercept to the equation of a line. In my experience this is a fairly standard approach and one that a lot of teachers use. My problem is that relatively few pupils actually find success with it. It occurred to me that I could do a lot more to secure the concept of gradient, and I decided to spend significantly more time than normal doing this, with some surprising results.

The first thing I did was to talk about different ways of measuring slope. Normally I would only focus on the approach I was interested in, but this time I talked about angles to the horizontal and the tangent function. I talked about road signs using gradients as ratios or percentages. Then I talked about measuring gradient on a square grid. I have used different ways of defining gradient throughout my career, starting with the standard "change in y over change in x" before I realised this definition was more about how to calculate gradient on a set of axes rather than what gradient actually is. I played around with defining gradient using ratios and writing it in the form 1:n, which had some success for a while, but became cumbersome as ideas became more complex. The definition I have settled on for now is "the vertical change for a positive unit horizontal change", or as I paraphrased it for my pupils, "how many squares up for one square right?" The reason I like this definition is that it incorporates the ratio idea, works for square grids that may not include coordinate axes, and I can see how it will help highlight gradient as a rate of change later on.
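For anyone who prefers the symbols, here is a small sketch (my notation, not something I wrote up for the class) of how this definition lines up with the usual rise-over-run calculation:

\[
m = \frac{\Delta y}{\Delta x}, \qquad \text{so when } \Delta x = 1, \quad m = \Delta y .
\]

In other words, the gradient is exactly the vertical change for one positive unit of horizontal change; and on the reading that the ratio 1:n means "one square right for n squares up", it also matches the ratio definition with \( m = n \).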

From here we spent quite a number of lessons learning and practising the act of drawing gradients. We started with positive whole number gradients, drawing one short line and then one longer line, so that we got pictures looking a little like this:
What was really interesting at this point was dealing with the early misconception that the gradient of the right-hand line was larger than that of the left, even though pupils had watched me draw both in precisely the same way. There was an idea, hard to shake, that a longer line meant a steeper gradient; I suspect because the focus was very much on how many squares up the line was going in total. This did give me the opportunity to reinforce the importance of the single square right; this is an idea we had to keep coming back to throughout the topic.
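A quick bit of arithmetic (again, my working rather than anything from the lesson itself) makes the point that extending a line changes how many squares it climbs in total, but not its gradient:

\[
\text{short line: } 1 \text{ right}, \; 2 \text{ up} \;\Rightarrow\; m = \tfrac{2}{1} = 2,
\qquad
\text{longer line: } 3 \text{ right}, \; 6 \text{ up} \;\Rightarrow\; m = \tfrac{6}{3} = 2 .
\]

Both lines go two squares up for every single square right; the longer one simply repeats that unit step more times.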

Once drawing positive integer gradients was secure, we turned our attention to negative integer gradients. Pupils were quick to grasp the idea of negative gradients sloping down instead of up, and I was sensible enough to throw some positive gradient drawing in with the negative gradient drawing so that we didn't get too many problems creeping in at this stage.

With integer gradients well embedded, attention was then turned to unit fractions. There was a great deal of discussion about drawing 'a third of a square up' for a single square right. The beauty of our definition of gradient here was that it allowed us to use a proportional argument to build up to the idea of drawing 3 squares right to go a single square up; if one square right takes you a third of a square up, then 2 right will take you two-thirds up and 3 right will take you three-thirds (i.e. one whole). What very quickly showed up here was a lack of security with the concept of fractions and counting in fractions (this was Year 8 low prior attainers), and so I am sure that some pupils then started adopting this as a procedure. We were then able to build up to non-unit fractions, both positive and negative, all the time drawing one short line and then at least one longer line (in preparation for the time when we would draw lines that span a whole coordinate grid).
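Written out more formally than we ever would in class, the proportional argument for a gradient of one third (an illustrative example of my own, not a worksheet question) boils down to:

\[
m = \tfrac{1}{3}: \quad 1 \text{ right} \to \tfrac{1}{3} \text{ up}, \quad
2 \text{ right} \to \tfrac{2}{3} \text{ up}, \quad
3 \text{ right} \to \tfrac{3}{3} = 1 \text{ up},
\]

so "a third of a square up for one square right" can be drawn as "one square up for three squares right" without changing the gradient, since \( \tfrac{1}{3} = \tfrac{1 \text{ up}}{3 \text{ right}} \).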

It was only after we had really secured the drawing of gradients of all types that we moved on to finding gradients of pre-drawn lines, which was then simply the reverse process, i.e. how many squares up/down for one square right? Again a nice proportional argument was used when the gradient was fractional. By the end of this there were pupils in the bottom set of Year 8 able to find and draw gradients like one and three-fifths.
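As an illustration of where that ended up (my working, using the gradient mentioned above), one and three-fifths unpacks as:

\[
m = 1\tfrac{3}{5} = \tfrac{8}{5}: \quad 1 \text{ right} \to \tfrac{8}{5} \text{ up},
\quad \text{so} \quad 5 \text{ right} \to 8 \text{ up},
\]

and reading a pre-drawn line the other way round, "8 up for 5 right" gives \( \tfrac{8}{5} = 1\tfrac{3}{5} \) up for a single square right.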

The next part of the sequence wasn't nearly as effective. I went back to the idea of linking gradient and intercept to equations, and although pupils were identifying and drawing gradients with ease, the extra ideas of the y-intercept and algebraic equations weren't so thoroughly explored and the kids struggled. I almost feel I would have liked to leave this and come back to it later in the year as an application of the work we had done on gradient; when I design my own mastery scheme I will almost certainly separate these parts and deal with gradient as a concept on its own before looking at algebra applied to straight-line geometry at a different point in the scheme.

My advice to anyone dealing with gradient would be to spend time really exploring it properly rather than rushing to use it to define and draw lines.

Saturday, 15 April 2017

The importance of evidence informed practice

I wanted to title this post the importance of evidence informed practice, with the emphasis in bold, but unfortunately I cannot put bold words in the title. There has been much discussion about this idea on edu-Twitter recently, some of which I have involved myself in, and so I thought I would take the time to flesh out my points more fully in a blog post.

One of the quotes I have seen that created a bit of controversy around this issue was used at the Chartered College of Teaching conference in Sheffield, in a session delivered by John Tomsett, Headteacher of Huntington School in York and author of the "This much I know..." blog and book series. The quote was taken from Sir Kevan Collins, CEO of the Education Endowment Foundation:

"If you're not using evidence, you must be using prejudice."

This quote caused quite a bit of disagreement, with some people very much in favour of the sentiment, and some taking great exception to the provocative language used.

I had an interesting discussion on Twitter about this quote, with my interlocutor seeming to hold the viewpoint that because all children are different, any attempt to quantify our work with them is best avoided. Their argument goes that the perfect evidence-based model for classroom practice is an unobtainable dream, and so the effort to create one is wasted. To me the point of evidence informed practice is not to try to create the perfect evidence-based model, but rather to ensure teachers can learn from the tried and tested approaches of their peers; to stop them falling into traps that people have fallen into before, and to allow teachers to judge the likelihood of success of different possible paths. To bring another famous quote into the mix, "If I have seen further it is by standing on the shoulders of Giants." (Isaac Newton). In the same vein, we don't want every new teacher to have to reinvent the wheel; we want them to be able to learn from those who have faced similar challenges and found solutions (or at least eliminated possible solutions).

One of the accusations that has been levelled at educational researchers is that they are 'experimenting on kids'. This is one of my least favourite arguments against evidence informed practice, as its proponents must either be ignorant of how researchers operate or be feigning ignorance in order to make a point that isn't worth making. At some level everything we try in the classroom has a risk of failure; even the best practitioners don't get 100% understanding from every child in every lesson. The big point here, though, is that no one goes into the classroom with anything other than an expectation that what they are going to do is going to work, and this goes for researchers as much as any other professional, and is true in fields other than education. It would seem that some of the critics of evidence-based practice see researchers as a bunch of whacked-out lunatics wanting to try their crazy, crackpot theories out on unsuspecting pupils. In fact most researchers are following up on promising research that has already been undertaken, and so in theory their ideas should have a greater chance of success than those of a teacher whose view of the classroom is not informed by evidence. Even when researchers are trying totally new approaches, these are tried from a strong background and with a reasonable expectation of success. It is precisely the opposite of the view that some seem to hold, and in fact it is those who don't engage with educational research who are more likely to have some crackpot idea and then not worry so much about its success.

One of the situations I posed on Twitter was that of a teacher new to a school, and therefore taking on new classes. Let us further suppose that said teacher is teaching in a very different setting to that which they are used to; perhaps a change of phase, a change of school style (grammar to comprehensive may well become more prevalent), or even just a change of area (leafy suburb to inner city, say). Now this teacher has two choices in order to prepare for their first day in their new classroom. Their first choice is to read something relevant and useful about the situation they are entering. They could talk to teachers in their network who have experience of their situation, including in the school where they are going to be working. They could inform themselves about the likely challenges, the likely differences, and the ways that people have handled similar transitions successfully in the past, and then use this to make judgements about how they are going to manage this change. Alternatively they could not, either sticking blindly to their old practice, or making up something completely random. I know which one I would call professional behaviour.

When faced with this situation, the person with whom I was having the conversation sidestepped the choice and suggested that all would be well because they have a teaching qualification. Of course this ignores what a teaching qualification aims to do; the whole point of a teaching qualification is to lay down patterns for this sort of professional practice. This is one of the big reasons I was very much against the removal of HEIs from teacher training. The idea of teacher training is to provide dual access: practical experience through school placement, along with the skills to select and access suitable research and evidence from beyond your own experience to supplement the gaps in your own practice. A teaching qualification has to be the starting point of a journey into evidence-informed practice, not the end point. One doesn't emerge from the ITT year as anything approaching the effective teacher they have the potential to become; the only way they will do so is by engaging with the successful practice of other teachers and using this to develop and strengthen their own practice and experience.

One other criticism levelled at those engaging with research and using it as the backbone of their practice is that the outcomes measured in order to test the success of the research are very often the results of high-stakes tests, and that these may not be the most appropriate measures of success. I have some sympathy with this point of view; I can see, for example, why people would baulk at the idea that the impact of using Philosophy for Children can and should be measured by pupils' combined KS2 maths and English scores, which is what is happening in the EEF funded trial. However, if we bring it back a notch we should ask ourselves what we are trying to achieve from the intervention. Ultimately I could argue that the purpose of any intervention in school is to try to make pupils more effective at being pupils, i.e. being able to study and learn from their efforts. Whether the intervention is designed to address gaps in subject knowledge, problems with learning behaviours, or development of a 'soft skill', the eventual intent is the same: that these pupils will be able to take what they have learned and use it to be more successful pupils in the future.

Now I am not going to stand up and say that the way we currently measure outcomes from education is an effective way of doing so, but what I will say is that however we choose to measure outcomes from education, any intervention designed to improve access to education has to be measured in terms of those outcomes. Nor am I necessarily going to say that every single thing that goes on in schools should be about securing measurable outcomes for education (and I know many educators who would make that argument), but then I would argue that those things should not be drawing their funding from education sources. If an intervention is expected to benefit another aspect of a pupil's life, but it is not reasonable to expect a knock-on effect on their education (and when you think about it like that, it becomes increasingly difficult to think up sensible examples of interventions that might fit that bill), then it needs to be funded through the Health budget, or the Work and Pensions budget, or through whichever area the intervention is expected to impact positively.

Schools are messy places, subject to a near-infinite number of variables, very few of which can be controlled. It is virtually impossible to ensure that any improvement in results is due to one specific intervention; often several factors are at play. Does this mean, however, that we shouldn't experiment in the classroom, provided we have a reasonable expectation of success? Does this mean that we shouldn't attempt to quantify any success we have that could, at least in part, be attributed to the change we made? Does this mean that we shouldn't share the details of this process, so that others can adopt and adapt as necessary, and then in turn share their experiences? To me this is precisely how a professional body of knowledge is built up, and so if teachers are going to lay claim to the status of 'professionals' then engagement with this body of knowledge has to be a given (provided they are well supported to do so). If you have the support to access this evidence, and then simply refuse to do so, then I would argue you certainly are using prejudice; either prejudice against the idea of research impacting your practice at all, or prejudice against the teachers/pupils that formed the research from which you might develop. Prejudice has no place in a professional setting, and no teacher should ever allow their prejudices to stand in the way of the success of the pupils in their care.