
The 1 Big Secret to Good Behaviour

Good behaviour is all about self-control. It’s about the self-control to delay the gratification of having a chat/staring out of the window/arguing with Mr Smith/stabbing Jamie with a compass, in favour of the much less immediately appealing orderly learning environment. Students with self-control can resist these temptations in order to follow the rules and learn. But self-control isn’t that simple.

It’s not a fixed trait that you either possess or don’t. Self-control is more like the power in a rechargeable battery: it empties with use, then replenishes when plugged in. One ingenious experiment showed this by getting university students to attempt some impossible geometry puzzles. However, before the puzzles they sat in a waiting room, where the table held a bowl of radishes and a bowl of freshly baked chocolate cookies. Some students were allowed the cookies. Others were instructed to resist the cookies and eat the radishes instead.

The cookie-eaters attempted the puzzles for an average of 20 minutes before giving up. The cookie-resisters only held out for 8 minutes. Resisting the cookies used up their self-control.

Stress depletes self-control.

This means that a salesman who has been stuck in a traffic jam will be less able to deal with a tricky customer, or a parent with a wailing baby will be less able to deal calmly with an overly playful child. It also means that we will often face students whose self-control is already significantly depleted by the time we see them. This might be because it’s the end of the day, and Period 4 French used up the last bit; or it might be because home is a chaotic hothouse of stress at the moment, and there’s no space to recharge. Either way, their ability to make good decisions has been depleted.

Depleted self-control means poor behaviour.

The tragedy is that the students who most need to value each minute of their education are often those whose self-control has depleted the most. Yet they will be the least able to resist temptations and behave well. It’s not that they don’t have self-control – they’ve just used it all up.

Students need to bypass the self-control system.

There’s no way of instantly replenishing their depleted reserves of self-control. They need to bypass the decision-making system, the one that sets up two options and asks them to choose. That system demands self-control, and there’s none left.

Habit is the cheat that unlocks good behaviour.

When you act out of habit you don’t need to stop and think, or to weigh up options and make a decision. You just act. Habits are driven by a different part of the brain (they’re tucked away in the basal ganglia, just above the top of the brain stem), and by a different neurological system. If students behave out of habit, then depleting self-control stops being a problem.

Habits are formed of cues, routines, and rewards.

Charles Duhigg’s book The Power of Habit teaches us about the habit loop. He says that every habit starts with a cue from the environment, is followed by a routine of behaviour, and culminates in a reward for completion.

For example, a habit of entering the classroom might begin with a cue of being greeted at the door by your teacher, followed by a routine of fetching your folder and immediately beginning the Do Now, and then finished with a reward of being verbally recognised and of completing the first task successfully. A student could choose to do this by using up self-control, or they could launch into autopilot as soon as they are greeted at the door – when habit kicks in and takes over.
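Purely as an illustration, the loop is simple enough to write down as a data structure. Here is a minimal sketch of the classroom-entry example (the cue/routine/reward fields follow Duhigg’s terms; everything else is my own framing):

```python
from dataclasses import dataclass

@dataclass
class Habit:
    cue: str      # the environmental trigger that starts the loop
    routine: str  # the behaviour that then runs on autopilot
    reward: str   # the payoff that reinforces the loop

# The classroom-entry habit described above, as a cue-routine-reward triple.
entering_the_classroom = Habit(
    cue="greeted at the door by the teacher",
    routine="fetch folder, begin the Do Now immediately",
    reward="verbal recognition and early success on the first task",
)
print(entering_the_classroom)
```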

The 1 Big Secret to good behaviour is to build habits.

In all schools, but particularly the most challenging, students will come to you with their self-control depleted. They will choose a course of bad behaviour, unless they have a habit of good behaviour that takes over. But there is no one habit called good behaviour – it’s a set of lots of small habits that deal with different cues.

One type of habit is the classroom routine.

Unless you work in an exceptionally well-organised school, you build this yourself. Decide the routine you want the class to have, then decide on a simple and clear cue as well as an appropriate reward. Once established, this habit will make sure your students perform standard tasks just as you wish, regardless of their self-control situation at that time.

Another is the ‘coping strategy’.

Because school isn’t all about predictable situations, students need habits that give them routines for the unexpected. A classic example is ‘counting to ten’ (cue = anger, routine = stop and count to ten, reward = increased calm/reduced risk of regretting a rash action). These are harder to design and teach, particularly for the lone teacher. However if done well they are the most transferable habits, and most useful for students’ futures.

Self-control depletes, habit rescues.

The more stressful situations a student has been through, or difficult decisions they’ve had to make, the lower their self-control will be. This makes it harder for them to behave well if relying on them to make good choices. It is this phenomenon that lies behind much bad behaviour. Habit can rescue students from this problem. It takes over their behaviour, avoiding the need for delayed gratification or tough decisions. If the right habits are in place, depleted self-control is no longer a problem.

How to build habits of good behaviour – coming soon…

Next week’s blog will be on building habits for classroom routines. Then in the coming weeks I’ll cover some different behaviour habits for more general situations, to help students around school and outside of it.

Good, But Not There Yet – A verdict on the NAHT report on assessment

I tried to discuss this on Twitter with Sam Freedman, but as his blog title points out, sometimes 140 characters isn’t enough…

The NAHT recently released the findings of their commission on assessment. They have attempted to set out a general framework for assessing without levels, including 21 recommendations, their principles of assessment, and a design checklist for system-building. All in all the report is a good one, capturing some of the most important principles for an effective system of assessment. However there are some significant problems to be fixed.

Firstly, the report relies on ‘objective criteria’ to drive assessment, without recognising that criteria cannot be objective without assessments bringing them to life. Secondly, the report places a heavy emphasis on the need for consistency without recognising the need for schools to retain the autonomy to innovate in both curriculum and assessment. Thirdly, the report advocates assessment that forces students into one of only three boxes (developing, meeting or exceeding), instead of allowing for a more accurate spectrum of possible states.

Here are my comments on some of the more interesting aspects of the report.

Summary of recommendations

4. Pupils should be assessed against objective and agreed criteria rather than ranked against each other.
This seems eminently sensible – learning is not a zero sum game. The potential problem with this, however, is that ‘objective criteria’ are very rarely objective. In “Driven by Data”, Paul Bambrick-Santoyo makes a compelling case that criteria alone are not enough, as they are always too ambiguous on the level of rigour demanded. Instead, criteria must be accompanied by sample assessment questions that demonstrate the required level of rigour. So whilst I agree with the NAHT’s sentiment here, I’d argue that a criteria-based system cannot be objective without clear examples of assessment to set the level of rigour.

5. Pupil progress and achievement should be communicated in terms of descriptive profiles rather than condensed to numerical summaries (although schools may wish to use numerical data for internal purposes).
Dylan Wiliam poses three key questions that are at the heart of formative assessment. 

  • Where am I? 
  • Where am I going? 
  • How am I going to get there?

A school assessment system should answer these three questions, and a system that communicates only aggregated numbers does not. Good assessment should collect data at a granular level so that it serves teaching and learning. Aggregating this data into summary statistics is an important, but secondary, purpose.

7. Schools should work in collaboration, for example in clusters, to ensure a consistent approach to assessment. Furthermore, excellent practice in assessment should be identified and publicised, with the Department for Education responsible for ensuring that this is undertaken.
The balance between consistency and autonomy will be the biggest challenge of the post-levels assessment landscape. Consistency allows parents and students to compare between schools, and will be particularly important for students who change schools during a key stage. Autonomy allows schools the freedom to innovate and design continually better systems of assessment from which we all can learn. I worry that calls for consistency will degenerate into calls for homogeneity and a lowest-common-denominator system of assessment.

18. The use by schools of suitably modified National Curriculum levels as an interim measure in 2014 should be supported by government. However, schools need to be clear that any use of levels in relation to the new curriculum can only be a temporary arrangement to enable them to develop, implement and embed a robust new framework for assessment. Schools need to be conscious that new curriculum is not in alignment with the old National Curriculum levels.
Can we please stick the last sentence of this to billboards outside every school? I really don’t think this message has actually hit home yet. Students in Year 7 and 8 are still being given levels that judge their performance on a completely irrelevant scale. This needs to stop, soon. I worry that this recommendation, which seems sensible at first, will lead to schools just leaving levels in place for as long as possible. Who’s going to explain to parents that Level 5 now means Level 4 and a bit (we think, but we haven’t quite worked it out yet, so just bear with us)?

Design Checklist

Assessment criteria are derived from the school curriculum, which is composed of the National Curriculum and our own local design.
As above, it’s not a one way relationship from curriculum to assessment – the curriculum means little without assessment shedding light on what criteria and objectives actually mean. The difference between different schools’ curricula is another reason that the desired consistency becomes harder to achieve.

Each pupil is assessed as either ‘developing’, ‘meeting’ or ‘exceeding’ each relevant criterion contained in our expectations for that year.
This is my biggest problem with the report’s recommendations. Why constrain assessment to offering only three possible ‘states’ in which a student can be? In homage to this limiting scale, I have three big objections:

  1. Exceeding doesn’t make sense. The more I think about ‘exceeding’, the less sense it makes. If you’ve exceeded a criterion, haven’t you just met the next one? Surely it makes more sense to simply record that you have met an additional criterion than to try to capture that information ambiguously by stating that you have ‘exceeded’ something lesser. For the student who is exceeding expectations, recording it in this way serves little formative purpose. The assessment system records that they’ve exceeded some things, but not how. It doesn’t tell them which ‘excess’ criteria they have met, or how to exceed even further. If it does do this because it records additional criteria as being met, what was the point of the exceeding grade in the first place?

    I’m also struggling to see how you measure that a criterion has been exceeded. To do this you’d need questions on your assessment that measure more than the criterion being assessed. Each assessment would also have to measure something else, something in excess of the current criterion. The implication of all this is that when you’re recording a mark for one criterion, you’re also implicitly recording a mark for the next. Why do this? Why not just record two marks separately?

    The NAHT report suggests using a traffic light monitoring system. Presumably green is for exceeding, and amber is for meeting. Why is meeting only amber? That just means expectations were not high enough to start with.

  2. Limiting information. The system we use in our department (see more here) records scores out of 100. My ‘red’ range is 0-49, ‘amber’ is 50-69, and ‘green’ is 70-100. I have some students who have scored 70-75 on certain topics. Yes, they got into the green zone, but they’re only just there. So when deciding to give out targeted homework on past topics, I’ll often treat a 70-75 score like a 60-70 score, and make sure they spend time solidifying their 70+ status. Knowing where a student lies within a range like ‘meeting’ is incredibly valuable. It’s probably measured in the assessment you’d give anyway. Why lose it by only recording 1, 2 or 3? A short sketch below makes that loss concrete.

  3. One high-stakes threshold. Thresholds always create problems. They distort incentives, disrupt measurement and have a knack for becoming far more important than they were ever intended to be. This proposed system requires teachers to decide if students are ‘developing’ or ‘meeting’. There is no middle ground. This threshold will inevitably be used inconsistently.

    The first problem is that ‘meeting’ a criterion is really difficult to define. All teachers would need to look for a consistent level of performance. If left to informal assessment there is no hope of consistency. If judged by formal assessment, then we should keep the full picture rather than squashing a student’s performance into the boxes of meeting or developing.

    The second problem is that having one high-stakes threshold creates lots of dreadful incentives for teachers. Who wouldn’t be tempted to mark as ‘meeting’ the student who’s worked really hard and not quite made it, rather than putting them in a category with the student who couldn’t care less and didn’t bother trying? And what about the incentive to just mark a borderline student as ‘meeting’ rather than face the challenges of acknowledging that they’re not? The farce of the C/D borderline may just be recreated.

A better system expects a range of performance, and prepares to measure it. A Primary School system I designed had five possible ‘states’, whereas the Secondary system we use is built on percentages. By capturing a truer picture of student performance we can guide teaching and learning in much greater detail.
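To make that information loss concrete, here is a minimal sketch using the band boundaries from our department system described above (the student names and scores are invented):

```python
def band(score: int) -> str:
    """Collapse a percentage score into red/amber/green bands."""
    if score < 50:
        return "red"
    if score < 70:
        return "amber"
    return "green"

scores = {"Asha": 72, "Ben": 94, "Cara": 68}

# Banding discards exactly what targeted homework needs: Asha (72) and
# Ben (94) both come out "green", yet only Asha needs consolidation work.
for name, score in scores.items():
    print(f"{name}: {score} -> {band(score)}")
```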

Conclusion

I agree with most of the NAHT’s report, and am glad to see another strong contribution to the debate on assessment. However, there are three main amendments that need to be made:

  1. Acknowledge the two-way relationship between curriculum and assessment, and that criteria from the curriculum are of little use without accompanying assessment questions to bring them to life.
  2. Consider the need for autonomy alongside the desire for consistency, lest we degenerate into a national monopoly that quashes innovation in assessment.
  3. Remove the three ‘states’ model and encourage assessment systems that capture and use more information to represent the true spectrum of students’ achievements.

Trying is Risky

This blog is about the most powerful pedagogical lesson I’ve ever learned.

In my first year of teaching I had to write an essay about two underperforming students I taught. I chose two Year 9 boys, both of whom had potential but whose behaviour was stopping them from achieving. I followed the behaviour policy, experimented with all the standard behaviour advice, and had great support from more senior staff, but their learning just wasn’t good enough. In my frustration with the lack of help from the recommended education literature I turned to a reliable old friend: game theory.

The Model

When coming into a lesson students can make one of two choices: to exert effort, or not to exert effort. In a school with a solid behaviour policy the students who choose not to exert effort may avoid work, complete only the bare minimum, or not spend enough time thinking to remember. In a school without a solid behaviour policy they may cause carnage.

The lesson they are coming into can be one of two things: it can be a good lesson, or it can be a bad lesson. A good lesson is one where a student will learn if they exert effort; a bad lesson is one where they may not.

These two sets of options give us a two by two matrix like this:

                        Good lesson                         Bad lesson
  Exert effort          academic success, social success    academic failure, social failure
  Don’t exert effort    academic failure, social success    academic failure, social success

Each pair of inputs produces an outcome along two dimensions: the student’s level of academic success and their level of social success.

Consider the student’s choice. If they choose to exert effort, they will get either the best or the worst outcome. If the lesson is a good one then they will be both academically and socially successful, having learned in class and appeared capable/talented in front of their peers. However if the lesson is a bad one then they will be both an academic and a social failure. They will not only have failed in learning, but by trying and failing they will be embarrassed as an incapable or unintelligent person.

If a student chooses not to exert effort they receive a certain outcome – academic failure and social success. They have no chance of succeeding academically as they do not try to learn, however their rejection of learning guarantees that they never try and fail – their social status is secure.

So how does a student make their choice? It depends on how likely they think the lesson is to be a good one. Call the student’s perceived probability of the lesson being good p. If p is high, then they’re more likely to choose to exert effort, as it’s more likely they will get the best available outcome.

Risk Aversion

Imagine p = 0.5; that is, the probability of the lesson being good is 50%. In this case, would a student choose to exert effort (gambling between the best and worst outcomes) or not to exert effort (accepting a certain, albeit mediocre, outcome)? Most students would, quite rationally, opt not to exert effort. The reason is that they’re risk averse. They’d much rather choose a strategy that guarantees them an okay outcome than a strategy that gambles between a good outcome and a bad one.

Because students are risk averse, p will have to be a high value before they would consider taking the risk of trying in class. Otherwise they’d rather settle for the poor yet certain outcome of academic failure complemented by social success.

The goal for teachers is making p as high as possible so that all students, no matter how risk averse they may be, exert effort in school.
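The risk-aversion argument can be made precise with a toy expected-utility calculation. In the sketch below the payoff numbers and the utility function are illustrative assumptions, chosen only to show how the effort threshold rises as risk aversion grows:

```python
# Illustrative payoffs for the outcomes in the matrix above (numbers are mine).
BEST, WORST, CERTAIN = 10.0, 0.0, 4.0  # effort+good, effort+bad, no effort

def utility(x: float, risk_aversion: float) -> float:
    """Concave utility: the more risk averse, the less a gamble is worth."""
    return x ** (1.0 - risk_aversion)

def exerts_effort(p: float, risk_aversion: float) -> bool:
    """Effort is chosen when the gamble beats the certain mediocre outcome."""
    gamble = p * utility(BEST, risk_aversion) + (1 - p) * utility(WORST, risk_aversion)
    return gamble > utility(CERTAIN, risk_aversion)

# Find the smallest p at which each student would bother trying.
for ra in (0.0, 0.5, 0.8):
    threshold = next(p / 100 for p in range(101) if exerts_effort(p / 100, ra))
    print(f"risk aversion {ra}: exerts effort once p > {threshold:.2f}")
```

With these numbers a risk-neutral student tries once p passes 0.4, while a strongly risk-averse one holds out until p is above 0.8.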

What makes p?

Remember that p is the student’s perception of the probability that the lesson will make sure they learn, if they exert effort. It’s not a measure of how good the lesson actually is, or anything to do with the actual quality of teaching. All that matters for the decision to exert effort is the student’s perception. This can be affected by a huge number of variables way beyond the teacher’s control. A very non-exhaustive list is:

  • the student’s self-esteem (p is low if “I can’t do it”)
  • the student’s prior experience of the subject (p is low if “I’ve never been able to learn this”)
  • stereotypes around learning (p is low if “people like me don’t do well at this”)
  • the school culture (p is low if “our school’s no good at this”)

Teacher quality plays a part (p is also low if “this teacher’s rubbish”), but is by no means the whole picture, and is often not the dominant factor.

Raising p

Students reason by induction. Just as they believe the sun will rise tomorrow because it has always risen before, they believe they’ll do badly in Maths because they’ve always done badly before. Raising p is about breaking this damaging chain of reasoning, and the only way to do that is by forcing them to experience success. This means planning your lesson to make sure that if they exert any effort at all, they will have some measurable success.
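If students really do reason by induction, you can caricature that reasoning as a running success rate. The update rule below is my own illustrative assumption, but it shows why the chain is so hard to break: after a long run of failure, a single success barely moves p, which is why the engineered successes have to be relentless:

```python
def estimate_p(successes: int, attempts: int) -> float:
    """Naive inductive estimate of p, lightly smoothed so 0 attempts isn't fatal."""
    return (successes + 1) / (attempts + 2)

history = [0] * 20            # twenty lessons of experienced failure
print(f"start: p = {estimate_p(sum(history), len(history)):.2f}")

for lesson in range(1, 11):   # ten lessons engineered to guarantee success
    history.append(1)
    print(f"after success {lesson}: p = {estimate_p(sum(history), len(history)):.2f}")
```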

A personal tale

At the start of January I took over a new class, who were pretty disengaged from Maths. Our first lesson wasn’t great – they came in expecting to do badly, and largely met their expectations. p was low. Our lessons since then have been an all-out war of attrition to raise p, and to make sure they believe that if they exert effort they absolutely will succeed. My p-raising lessons have a very distinct structure:

  1. Clearly defined, ambitious lesson objective that seems daunting and will be rewarding if met.
  2. Sub-skills or steps broken down, almost list-like.
  3. Super-clear, often rehearsed explanation of the first step.
  4. Guided practice on mini-whiteboards until everyone can do it.
  5. Independent (timed) practice in books.
  6. Short assessment to prove to them they have achieved that step.
  7. Repeat 3-6 for next steps.
  8. Final assessment to prove to them they have achieved the whole skill.
  9. Repetition of my p-raising mantra – that everything in Maths looks scary and confusing at first, but easy once you’ve learned it.

If this looks remarkably like archetypal Direct Instruction, that’s because it is. The aim of these lessons is not to excite or engage in the popular sense. The aim is to convince all students that if they try, then they will learn. Discovery and inquiry have their place, but not when building confidence in fragile learners. Right now, I can’t risk any student not understanding at the end of the lesson.

I worry that too often teachers are encouraged to deal with disengaged classes by engaging them in expert-type activities that leave them too open to the risk of failure, and entrench many students’ pre-existing beliefs that they will not learn even if they try. I emphatically aim to build up to meaningful mathematical inquiry with all my students, but only when they have the confidence to cope with the very real prospect of failure in this.

A Warning

Teaching a student whose p is low is very different to teaching a student whose p is high. The former needs nurturing, confidence-building treatment where they are protected from failure and practically forced to succeed. The latter needs to build their confidence by trying, failing and trying again. Where one type of student needs a tight structure, the other often needs a more open one. The trick is in identifying each type of student, and teaching appropriately to both.

Conclusion

Trying is risky. Lots of students quite rationally decide not to bother in their lessons, because the evidence they have tells them the probability of them doing well isn’t high enough. They’d rather take the certain path of failing academically, but with the social kudos of never having tried. To tackle this disengagement we need to take the risk out of trying. Turning around disengagement means relentlessly ensuring that every lesson ends in success, until confidence is built sufficiently high that trying no longer seems risky.

Innovation Day

How does your school innovate?

At Google, employees have their famed 20% time, where they work on projects of their choice that fall outside the scope of their usual job.

In Drive Daniel Pink tells the story of Atlassian, a software company who run quarterly FedEx days. On each of these days employees have 24 hours to work on any project of their choosing that relates to the company’s products.

Institutional innovation seems common in the computing sector – so why is it not in education?

Barriers to Innovation

  1. Hierarchy – most schools are built on a hierarchical structure. They will differ in how rigid this is, but I’ve not heard of many where dissent and challenge are actively encouraged. Innovation is a process of “creative destruction”. It can only take place where the hierarchy allows elements of the status quo to be challenged and creatively destroyed.
  2. Time – innovation takes time. Unlike a software company, schools cannot opt to stop working for a day, or afford to reduce timetables by 20% on account of speculative endeavours.
  3. Orthodoxy – almost all teachers have been trained in the same dominant orthodoxy, and are used to being told that particular strategies are ‘right’. As a profession we have, until quite recently, been discouraged from thinking independently and challenging orthodoxy.
  4. Silos – teachers tend to work in silos. Whether they be classrooms, departments, or even whole schools, the physical and organisational structures of education encourage teachers to work in silos rather than cross boundaries into other areas.

The Desire to Innovate

Schools are full of creative potential. Teachers know their students and their needs better than anyone else, and are best placed to drive the ideas and initiatives needed to improve their life chances. These barriers to innovation must be overcome. At WA we strongly believe that the best ideas will come from staff, and have begun trying to shape our culture of innovation.

Innovation Day

The first inset day this January was our first Innovation Day. Every member of staff was given the day to work on an idea or project of their choice. The only constraints were that:

  • It must contribute to the mission of the school.
  • It must have an impact beyond an individual teacher’s practice.
  • It cannot assume funding from the school budget.

Staff were in one of two streams. Developers submitted a project in advance, and got the day to work on bringing it closer to fruition. Over twenty-five projects were submitted, involving over sixty staff.



Innovators began the day with problem-solving around our five strategic priority areas, looking for the biggest underlying barriers and ways to combat them. Groups formed around good ideas, and they spent the rest of the day building these into more concrete plans.



The range and quality of innovations was incredible. To give a quick flavour we had:

  • A cross-curricular think tank founded, to develop stronger links and synergies across subjects.
  • A community World Cup programme designed, to open the school up as a community hub and take the opportunity to enthuse children of all ages about different subjects through football.
  • A programme for improving questioning developed, using a specially designed structure of lesson observation to pick out key successes and areas for development.
  • A new programme to develop presentation skills to be delivered through tutor time.
  • An improved induction programme for vulnerable students to make sure they settle in and succeed to the best of their abilities.
  • And about thirty more!

Lifting Barriers

Innovation Day worked because it lifted the above barriers.

  1. Hierarchy – we explicitly said that anything goes, and no area of the school was off limits. To make this easier, senior leaders did not join in the rooms with other staff so that challenge could flow more freely. Instead groups that wanted to seek the advice of leadership booked consultation slots to go through their ideas.
  2. Time – we freed up one day. One day is not enough, but it is a start!
  3. Orthodoxy – all teachers were encouraged to challenge orthodoxy. Innovators had displays of prompts for their problem-solving, including things such as a table of Hattie’s effect sizes, and Prof Rob Coe’s great scatter graph. These prompted a challenge to some of the orthodoxies we have grown used to accepting.
  4. Silos – staff chose the groups they worked in, but never fell back into silos during the day. Developers were roomed with projects tackling similar problems from different departments, and innovators were mixed from the start. Activities such as Idea Speed Dating created opportunities for further discussion outside of traditional school silos.


Lifting these barriers, just for a day, unleashed a huge amount of creative energy and has led to fantastic innovations to improve our students’ futures. Our challenge now is further minimising barriers in the longer term, so that innovation becomes part of our culture rather than an annual event.

Why homework is bad for you

Laura McInerny’s third touchpaper problem is:

“If you want a student to remember 20 chunks of knowledge from one lesson to the next, what is the most effective homework to set?”

After a day of research at the problem-solving party, I came to this worrying conclusion:

Setting homework to remember knowledge from one lesson to the next could actually be bad for their memory.

So stop setting homework on what you did in that lesson – at least until you’ve read this post.

Components of Memory

Bjork says that memories have two characteristics – their storage strength and their retrieval strength. Storage strength describes how well embedded a piece of information is in the long-term memory, while retrieval strength describes how easily it can be accessed and brought into the working memory. The most remarkable implication of Bjork’s research surrounds how storage strength is built.


Storage and Retrieval strength – courtesy of Kris Boulton


Retrieval as a ‘memory modifier’

Good teaching of a piece of information can get it into the top left hand quadrant, where retrieval strength is high but storage strength is low. Once a chunk of knowledge is known (in the high retrieval sense of knowing), its storage strength is not developed by thinking on it further. Rather storage strength is enhanced by the act of retrieving that chunk from the long-term memory. This is really important. Extra studying doesn’t improve retention. Memory is improved by the act of retrieval.

The ‘Spacing Effect’

Recalling a chunk of knowledge from the long-term memory strengthens its storage strength. However for this to be effective, the chunk’s retrieval strength must have diminished. ‘Recalling’ a chunk ten minutes after you’ve studied isn’t going to be very effective, as your brain doesn’t have to search around for such a recent memory. Only when a memory’s retrieval strength is low will the act of recall increase storage strength. This gives rise to the spacing effect – the well-established phenomenon that distributing practice across time builds stronger memories than massing practice together. 

Rohrer & Taylor (2006) go a step further and compare overlearning (additional practice at the time of first learning) with distributed practice. They find no effect of overlearning, and ‘extremely large’ effects of distributed practice on future retention.

Optimal intervals

There is an optimal point at which to recall a memory in order to maximise its storage strength. At this point, the memory’s retrieval strength has dropped enough for the act of retrieval to significantly increase storage strength, but not so much as to prevent it from being accurately recalled. Choosing the correct point can improve future recall by up to 150% (Cepeda, et al., 2009).

Most studies into optimal spacing share a common design. Subjects learn a set of information at a first study session. There is then a gap before a second study session where they retrieve the learned information. Before a final test there is a retention interval (RI) of a fixed time period. Studies such as Cepeda, et al (2008) show that the optimal gap is a function of the length of the RI, and that longer RIs demand longer gaps between study periods. However, this function is not a linear one – shorter RIs have optimal gaps of 20-40% of the RI, whereas longer RIs have optimal gaps of 5-10%.

Better too long than not long enough

Cepeda et al’s 2008 study looks at four RIs: 7, 35, 70, and 350 days. The optimal gaps for maximising future recall were 1, 11, 21 and 21 days respectively, and these gaps improved recall by 10%, 59%, 111% and 77%.
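Laying those four data points side by side shows the pattern (figures transcribed from the study as quoted above):

```python
# (retention interval, optimal gap, improvement in recall), from Cepeda et al. (2008)
results = [(7, 1, 0.10), (35, 11, 0.59), (70, 21, 1.11), (350, 21, 0.77)]

for ri, gap, boost in results:
    print(f"RI {ri:>3} days: optimal gap {gap:>2} days "
          f"({gap / ri:.0%} of RI), recall +{boost:.0%}")
```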



Perhaps their most important finding is the shape of the curves relating the gap to the future retention. For all RIs these curves begin climbing steeply, reach a maximum, and then decline very slowly or plateau. The implication is that when setting a gap between study periods it is better to err on the side of making it too long than risk making it too short. Too long an interval will have only small negative effects. Too short an interval is catastrophic for storage strength.

Why homework could be bad

Homework is usually set as a continuation of classwork, where students complete exercises that evening on what they learned in school that day. This constitutes a short gap between study sessions of less than a day. We know that where information is to be retained for a week, the optimal gap is a day, and that where this is not possible it is better to leave a longer gap than a shorter one. For longer RIs, the sort of periods we want students to remember knowledge for, the optimal gap can be longer than a week.

Therefore, if you want students to remember twenty chunks of knowledge for longer than just one lesson to the next, the best homework to set is no homework!

Setting homework prematurely actually harms the storage strength of the information learned that day, by stopping students from reaching the optimal gap before retrieval. In this case, students who don’t do their homework are better off than the ones who do!

Why I might be wrong, and what we need to do next

There is not enough good evidence on how to stagger multiple study sessions with multiple gaps. For example, we do not know where it would be best to place a third study session, only a second. However, we do know that retrieval is a memory modifier, so additional retrieval should strengthen memories as long as the gap is large enough for retrieval strength to have diminished. Given that retrieving newly learned information after a gap of one day is good for storage strength, it may be that studying with gaps of, say, 1, 3, 10 and 21 days is better for storage strength than a solitary second study session after 21 days, where the RI is long (350 days or greater). In that case, for teachers who only have one or two lessons a week, homework could help them make up the optimal gaps by providing study sessions between lessons.

The optimal arrangement of multiple gaps is a priority for research. We need to better understand how these should be staged, so that we can begin to set homework schedules that support memory rather than undermine it. Until then, only set homework on previously learned knowledge, and better to err on the side of longer delays. My students will be getting homework on old topics only from now on.
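As a sketch of what homework on old topics only might look like in practice, here is a minimal review scheduler built on the speculative 1, 3, 10, 21-day gaps from the previous paragraph (the gaps are illustrative, not established optima, and the helper names are mine):

```python
from datetime import date, timedelta

REVIEW_GAPS = [1, 3, 10, 21]  # days after first study; speculative, see above

def review_dates(first_studied: date) -> list[date]:
    """Dates on which a topic should reappear as homework."""
    return [first_studied + timedelta(days=g) for g in REVIEW_GAPS]

def topics_due(today: date, first_studied_by_topic: dict[str, date]) -> list[str]:
    """Previously learned topics whose next review falls in today's homework."""
    return [topic for topic, studied in first_studied_by_topic.items()
            if today in review_dates(studied)]

topics = {"fractions": date(2014, 3, 3), "angle rules": date(2014, 3, 10)}
print(topics_due(date(2014, 3, 13), topics))  # ['fractions', 'angle rules']
```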

Bibliography

Joe Kirby on memory this weekend
EEF Neuroscience Literature Review
Dunlosky, et al., 2013. Improving Students’ Learning With Effective Learning Techniques: Promising Directions From Cognitive and Educational Psychology
Rohrer & Taylor, 2006. The Effects of Overlearning and Distributed Practise on the Retention of Mathematics Knowledge
Cepeda, et al., 2009. Optimizing Distributed Practice
Cepeda, et al., 2008. Spacing Effects in Learning: A Temporal Ridgeline of Optimal Retention
Everything Kris Boulton writes

Performance Related Pay

On performance related pay I am a believer in principle but a sceptic in practice. After reading Policy Exchange’s report published yesterday, “Reversing the Widget Effect“, I remain so. However I am coming to believe that PRP can be rescued, and that a more flexible and transparent system could help teachers to improve by improving the quality of professional development in schools.


This is a heated topic of conversation, and far too closely tied to mistrust of the political establishment and insinuations about privatising education. This much is evidenced by the disparity between two recent polls on PRP: when YouGov asked on behalf of Policy Exchange, 89% of teachers were in favour of PRP in principle; when YouGov asked on behalf of the NUT in a survey about the government’s reforms, 81% were against PRP. Context here is king, and separating PRP from opinions about Michael Gove’s personal integrity is essential if we’re to have any semblance of rational debate.

PRP in Principle

The foreword to Matthew Robb’s report is written by George Parker, a former US union leader turned advocate of PRP. Branded a traitor by teaching unions in the States, Parker recounts a lightbulb moment he had after delivering a speech at a “high poverty primary school”. He writes that:

“Afterwards, a little girl came up to me and hugged me, and said that no-one had ever said that before. No-one had ever been fighting for them to get a better education. And in the car on the way back, I realised: you lied. You lied to that little girl. Because I didn’t really care about her, and getting good teachers in front of her. In fact, I’d just spent $10,000 to overturn a firing and keep a bad teacher in that school – a bad teacher I would not want anywhere near my own granddaughter…”


The PX Report devotes a lot of time to addressing this ‘in principle’ case: that it is almost morally wrong to reward poor or mediocre performance in the same way as good and excellent performance. I strongly agree with their argument here. We should be doing everything possible to ensure that all children receive the best education, and as the biggest determinant of that is the teacher they have, we should be putting all of our effort into improving teaching. If tying together pay and accountability makes even a marginal difference to student outcomes, then in principle we should accept PRP.

The Status Quo is Inadequate

The first step in Robb’s argument is that the apparently performance related status quo has ceased to reward performance. He references a report finding no relationship between the Ofsted quality of teaching grade a school is given and the average teaching salary in that school, and shows us the distribution of pay bands within schools of different Ofsted ratings. This evidence is damning. A pay system that has no relationship with performance is wasting taxpayers’ money.



Nor can it be argued that experience or tenure is a good proxy for performance. Do First Impressions Matter?, a recent paper by Atteberry, Loeb and Wyckoff, shows that of teachers whose first year performance is in the lowest quintile, 62% remain in the bottom two quintiles five years later. More worryingly they show that although the gap between the top and bottom quintiles closes, this is not just because the bottom quintile get better but because the top quintile actually get worse, with those in between largely stagnating.



With no evidence to suggest that the current system either is or should be working as we desire, in principle we should be looking for a new one.

The In-Principle Argument for PRP

There seems to me to be a reasonable causal chain, backed up by evidence, from well-implemented PRP to better student outcomes. PRP raises teachers’ extrinsic motivation, causing them to exert greater effort. This leads to more deliberate practice, which in turn leads to better student outcomes.



i. Raising extrinsic motivation
As Robb recognises, “it is not in doubt that for the majority of teachers, the primary motivation is to help their pupils progress”. Nonetheless even the most virtuous of teachers can be influenced to some extent by external factors, of which pay is one. The actual evidence on the relationship between teacher pay and teacher effectiveness is mixed. Few teachers cite pay as a motivation for entering the teaching profession, yet many cite it as a reason for leaving. Comparative international studies show that countries where teacher pay is higher have better student outcomes, but they do not conclusively show that a performance aspect of this pay is significant.

This is definitely the weakest link in the PRP causal chain. The most robust element of Robb’s argument is that higher pay, through PRP, would attract and retain good teachers who would otherwise either not enter teaching or leave it. This is undoubtedly a positive effect, but I question whether it alone is enough to warrant the effort that implementing PRP would require. Rather, I am compelled by Dylan Wiliam’s argument that improving the quality of entrants into the teaching profession will take a long time to have a relatively small effect, and therefore that “the key to improvement of educational outcomes is investment in teachers already working in our schools”. I am unaware of any evidence suggesting that there would be a sufficiently large influx of suitably talented new teachers under a new pay regime to undermine Wiliam’s argument.

More compelling, but less well evidenced, is the claim that PRP could increase the extrinsic motivation of teachers in schools. Nonetheless it seems to me that building teacher performance into the formal accountability proceedings of a school, tied to a teacher’s progression up the pay scale, cannot fail to increase the incentives for teachers to improve their performance. Not only this, but it places a much greater pressure on the school to improve its teachers (more on this later on). I believe, as I will argue later, that even if the impact on the motivation of teachers were to be minimal (although much evidence does suggest otherwise, as Robb discusses), the impact on school processes would be enough to drive the improvement we seek.

ii. Deliberate practice
The second causal leap in the above chain is that increased motivation leads to increased deliberate practice. Much has been written about the role of deliberate practice in improving performance across domains. The canonical violinists study showed how practice, not talent, was the determinant of a great violinist, and although more recent evidence has shown the role of innate talent in some physical pursuits, deliberate practice still reigns in most other domains. Teaching, for example, is one of these, as discussed in Alex Quigley’s blog on applying deliberate practice to become a better teacher.

If deliberate practice improves teaching quality then the leap to better student outcomes is a straightforward one. Robb references research showing that the difference between a teacher in the 25th percentile and a teacher in the 75th percentile is 0.4 GCSE points per subject, whilst the difference between the 5th and 95th percentiles is 1 whole GCSE point per subject.

The causal chain from PRP to better student outcomes works in principle, and as George Parker argues, we have a moral obligation to take that very seriously indeed.

PRP in Practice

Robb’s argument for PRP hinges on a school’s ability to accurately measure teacher performance. Using the results of the Measures of Effective Teaching (MET) project, Robb dismisses the claim that teaching quality cannot accurately be measured. He does so too hastily.

The MET results are certainly positive, and have taught us a great deal about measuring effective teaching. Of particular interest for me was the significant predictive power of student surveys, something I’m confident would not be particularly popular with teaching unions. Robb argues, based on the MET results, that an appropriately weighted basket of measures, preferably averaged over two years, would be sufficiently accurate to determine a teacher’s pay.

I am less convinced.  Robb’s report includes a table (below) comparing teacher effectiveness by quintile in two consecutive years. It finds that “the variance is such that only half the teachers assessed as being in the lowest quintile of performance in one year are in the lowest two quintiles the following year – and a third of those assessed as being in the top quintile in one year have moved to the lowest two quintiles as well!”



Even the most reliable measure in the MET study (an equally weighted basket of state test results, observations, and student surveys) only had a reliability of 0.76, and this is using observations where observers have been specially trained and certified in a far more rigorous system than anything commonly used in Britain. Indeed Wiliam quotes research showing that to achieve a reliability of 0.9 in assessing teacher quality from observation a teacher would have to be observed teaching six different classes by five independent observers. This is hardly a viable proposition.
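Wiliam’s six-classes-by-five-observers figure (thirty observations) is roughly what the Spearman-Brown prophecy formula predicts if a single observed lesson relates only weakly to a teacher’s underlying quality. A sketch, where the single-observation reliability of 0.23 is my illustrative assumption rather than a figure from the research:

```python
def observations_needed(r_single: float, r_target: float) -> float:
    """Spearman-Brown: independent measurements needed to reach r_target,
    given the reliability r_single of a single measurement."""
    return r_target * (1 - r_single) / (r_single * (1 - r_target))

# With single-lesson reliability around 0.23 (illustrative), reaching 0.9
# takes about 30 observations -- six classes seen by five observers each.
for r in (0.23, 0.4, 0.6):
    print(f"single-observation reliability {r}: "
          f"need {observations_needed(r, 0.9):.0f} observations")
```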

Although Robb is willing to write off these difficulties by arguing for averages over greater periods of time, or focusing on extreme performance, neither of these are good enough solutions to the reliability problem. As he himself argues, for PRP to be workable it needs “a solid performance evaluation system that teachers support”. A system where a third of teachers fluctuate from the top to the bottom each year is neither solid, nor likely to be supported.

Squaring the Circle: Professional Development Targets

Although I am sceptical of PRP as suggested in the Policy Exchange report because of its reliance on unreliable measures of teacher quality, I am reluctant to throw away the potential to improve student outcomes through the use of pay reform. The clearest lever by which this would work is improving professional development.

Wiliam identifies that teachers, on the whole, stop improving after two or three years in the profession. He suspects, as do I, that this is strongly linked to the poor availability of good-quality feedback for teachers post-qualification. Deliberate practice is hard without feedback. Where we differ is on how to improve the feedback cycle for teachers to better support good quality deliberate practice. Wiliam so far is relying on the goodwill of schools. Although this might be enough for some schools, it will not be enough for all. PRP could be the way to radically improve the support schools give their staff in order to become more effective teachers. The combination of upward pressure from teachers demanding the support they need to improve, and downward pressure from regulators demanding an improvement in more accurately measured teacher quality, is significant and powerful enough to change the face of professional development in most schools.

i. Upward pressure from teachers
As Robb argues, teachers who are judged on their performance will demand better feedback, coaching and training. They will insist on frequent, good-quality feedback that helps them to improve, and schools will be compelled to provide this. Once a teacher is given appropriate feedback they are much more able to improve through a cycle of deliberate practice, and to therefore improve the performance of the students they teach.

ii. Downward pressure from administrators
Robb writes that “The implementation of performance-related pay will require Heads and senior managers to undertake more rigorous performance evaluations of their staff…[this] will also force managers to more explicitly acknowledge the range of teacher performance in their school and act on it.” Once a school has explicitly measured the quality of teaching in the school as part of a more rigorous framework, they will be compelled – by Ofsted and by governors – to do more to improve it.

My question is whether a system of PRP can be designed that replaces the attempted measurement of objective performance with more of a focus on development. Could we, for example, set and more accurately measure specific targets related to a teacher’s improvement, rather than try to measure their ethereal ‘effectiveness’? Poorly measured effectiveness is not transparent, so does not help a teacher to improve. The measure fails Robb’s own criterion. Drawing up a set of clear but demanding targets, on the basis of student performance data, (better) observation and student surveys would provide transparent objectives for teachers to meet. The involvement of pay would cause teachers to demand, and schools to offer, the support and feedback needed for deliberate practice, which in turn would improve student outcomes.

Conclusion

Performance related pay works in principle. It has great potential to improve student outcomes by encouraging and supporting deliberate practice amongst teachers. However systems attempting to measure teacher effectiveness are not sufficiently reliable for pay to be based on. Their unreliability would create confusion and unpopularity, which undermine the central arguments for PRP. A better system is for schools to take advantage of PRP powers to strengthen performance management, and use clear, demanding and evidence-based targets to improve teacher effectiveness. By combining teachers’ increased extrinsic motivation and schools’ increased pressure to provide good-quality support, teachers will become more effective and student outcomes will improve.

A Curriculum That Works

This is the second of three posts reflecting on my first term as Curriculum Lead for Maths. The last post, on our new post-levels assessment system, can be found here.

A New Key Stage 3 Curriculum

The curriculum is so much more than a statement of what is to be taught and when. It embodies a school’s vision for its students and its philosophy of learning. I can look at a school’s mathematics curriculum and tell you all about the person who wrote it – their expectations of students, their hopes for their futures, their beliefs about how to get there. The curriculum is the embodiment of all these things, and it is crucial to get it right.

To write a curriculum you must begin from a vision of the mathematicians you want your students to become. Mine is that I want our students to become “knowledgeable problem-solvers who relish the challenge Mathematics offers”. They should, at the end of their time with us, be able to independently tackle an unfamiliar mathematical problem and create a meaningful solution to it.

I am mindful, when considering this vision, of the roaring debate around discovery and project-based learning, and how it can fall foul of Willingham’s novice-expert distinction. My view is this:

Knowing that students begin as novices, the purpose of education is to make them into experts.

The curriculum needs to train students. We cannot assume expert qualities of them from the start and plunge them into investigations where many or most students will fail to learn. Similarly, we cannot dogmatically write off any activity involving discovery, investigation or project work. I emphatically want my students to be capable of expert investigation when they leave school, and so our curriculum must explicitly prepare them for that. The last part of this post in particular looks at how we manage this in a practical way.

From the vision of how students should leave school, I drew up three design principles:

1) The curriculum must develop fluency.
2) The curriculum must develop conceptual understanding.
3) The curriculum must teach students to solve problems.

Principle 1: Developing Fluency

When I wrote the last iteration of our school’s KS3 Maths curriculum, I abandoned the traditional spiral structure and opted for a depth before breadth approach. We probably halved the amount of content covered in a year, as we wanted to give students the time they needed to develop fluency. This year we’ve cut it again. Each of the six terms covers a maximum of three ‘topics’, most of which are closely linked. Terms in Year 7, for example, look like this:

  1. Mental addition and subtraction; Decimal addition and subtraction; Rounding
  2. Mental multiplication and division; Decimal multiplication and division; Factors and multiples
  3. Understanding fractions; Operations with fractions
  4. Generalising with algebra (expressions and functions only)
  5. Properties of 2D shapes; Angle rules
  6. Equivalence between fractions, decimals and percentages

Smaller concepts that tie closely with the big ones above are taught alongside them. For example, perimeter is taught in Term 1 alongside addition, while area and the mean average are both taught in Term 2 alongside multiplication and division.

Since we give so much time to teaching each mathematical skill, we expect a high degree of fluency. To take the National Curriculum’s definition, fluency is students’ ability to “recall and apply their knowledge rapidly and accurately to problems“. It means not just being able to do something, but being able to reliably do it well and quickly. I would add that a necessary condition for fluency in a skill or operation is that it is embedded in your long-term memory.

This is exceptionally valuable in mathematics. A student may learn to be able to multiply decimals, but not become fluent in it. When multiplying decimals they have to slow down, to stop and think, and may make mistakes. This means that in Term 4 when they are learning to substitute into formulae with decimal numbers, they will face two severe problems. Firstly, their working memory will be occupied thinking about multiplying decimal numbers together, and not about substituting into formulae. Secondly, their reduced pace will mean that they have less exposure to substituting into formulae in each lesson. Overall they will spend less time thinking about the new concept they are supposed to be learning, and will learn it less well as a consequence (after all, “memory is the residue of thought”).

Developing fluency then, means lots of practice time with well thought out problems. Practice has got a bad reputation in mathematics, with too many people having been turned off maths by pages of repetitive textbook questions. My response is that practice need not mean making maths dry or uninspiring. Our students appreciate the value of practice as something that gives them the skills to do fun maths, and to achieve things they are proud of. Practice is invaluable, but can be dangerous if not used alongside meaningful and motivating problems.

If fluency is about rapid and accurate recall of knowledge, then Kris Boulton will tell you that fluency depends on high storage strength and high retrieval strength. A depth-focused curriculum gives us storage strength, but could easily sabotage retrieval strength if knowledge is not revisited. This is probably our biggest area to work on. The curriculum includes notes about what content to revisit when (thanks Kris!) and our assessments presume previous content as mastered prior knowledge. However we haven’t yet found a more structured way of revisiting content consistently across classes.

Principle 2: Conceptual Understanding

It is not enough for a curriculum to say what to teach. A meaningful curriculum also says how to teach it. At WA we’re big believers in the Singaporean approach of concrete-pictorial-abstract (CPA), and use this to structure our teaching. One of the reasons mathematical understanding in Britain is historically so poor is that students have been immediately confronted with abstract representations, representations well separated from any concrete reality, without enough support in understanding them.

A favourite example of mine is ratio. I meet strikingly few students who can answer a question of the following type correctly:

“Bill and Ben share sunflower seeds in the ratio 3:2. If Ben has 20 sunflower seeds, how many does Bill have?”

I’d love to do some research and rely on less anecdotal evidence, but I’d guess that more British 16-year-olds would say 12 than would say the correct answer of 30. Why? Because they were taught ratio in a completely abstract way, where they learned to apply a method but didn’t ever receive the support needed to understand the concept of ratio.

In our curriculum, however, the pictorial bar model is central to teaching ratio. In fact I don’t teach my students an abstract method (they’re perfectly capable of coming up with it for themselves by doing the bars mentally and writing down calculations). For the unfamiliar, a bar model to represent the above problem would look like this:

  Bill  [    ][    ][    ]  = ?
  Ben   [    ][    ]        = 20

Students draw the ratio, label what they know, work out the size of each block and then the size of Bill’s bar. I am yet to find a student who doesn’t understand this method, and who can’t do considerably harder problems using it. This is the benefit of having a pictorial representation to help students understand the concept they are learning, and to soften the jump into pure abstract. Every topic in our curriculum comes with CPA guidance to develop strong conceptual understanding in all students.
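The arithmetic that students extract from the bars is mechanical enough to write down directly. Here is a minimal sketch of the bar-model logic for this problem type (the function and variable names are mine):

```python
from fractions import Fraction

def unknown_share(ratio_unknown: int, ratio_known: int, known_amount: int) -> Fraction:
    """Bar-model logic: find the size of one block, then count the other bar's blocks."""
    block = Fraction(known_amount, ratio_known)  # Ben's 2 blocks total 20, so a block is 10
    return ratio_unknown * block                 # Bill's bar is 3 blocks long

print(unknown_share(3, 2, 20))  # 30
```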

Also key to developing conceptual understanding are links between areas of mathematics. I am eternally frustrated by how students see maths as broken down into small discrete chunks that have little or no relationship with one another. Even when we have topics that are just different representations of identical concepts (sequences and linear graphs, for example), few British students will ever see them as linked. At the core of our curriculum then is a sequence carefully designed to make every concept learned useful to a later one. More than this, it guides teachers to make links, and uses assessment to make sure students are comfortable making these.

Principle 3: Problem-solving

Mathematics is essentially the study of problem-solving. The process of mathematical abstraction has been followed for millennia because it is so useful for generalising and solving what the National Curriculum calls “some of history’s most intriguing problems”. If our students are to become the experts we want them to be when they leave, we need to train them in problem-solving now.

For me, problem-solving is a skill to be taught, and it should be taught like any other. Adept problem-solvers have not come to be so through innate talent, but because they have seen the solutions to many problems before and are able to spot similarities and apply familiar techniques. Our curriculum aims to teach students the most powerful problem-solving techniques by exposing them to a carefully selected sequence of problems, some of which are taught and some of which are independently worked on.

Each term has a problem-solving focus. For example, Term 1 was “Working systematically”. Students began with a problem where they had to work out how many different possible orders there were for a two-course and then a three-course set menu at a restaurant. They began by using ordered lists to write out combinations, before speculating on general rules and checking them against new possibilities. Through a range of different problems over the term, students learned (a) how to work systematically in different contexts, and (b) the value of doing so.
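The counting itself is a product of options. As a quick check – with invented dishes standing in for the real menu – a few lines of Python reproduce what the students’ ordered lists found:

    # Systematically listing menu orders, as the students did by hand.
    # The dishes here are invented for illustration.
    from itertools import product

    starters = ["soup", "salad"]
    mains = ["pie", "pasta", "curry"]
    desserts = ["cake", "fruit"]

    print(len(list(product(starters, mains))))            # 2 x 3 = 6 two-course orders
    print(len(list(product(starters, mains, desserts))))  # 2 x 3 x 2 = 12 three-course orders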

Conclusion

Our curriculum has definitely met the three design principles set out, and is working well for our students. Depth before breadth has given them time to become fluent, to develop conceptual understanding and to solve problems. They see the value in mathematics as they’re exposed to interesting and meaningful problems, but this is done in a deliberate and structured way to make sure they are learning throughout. By applying the concrete-pictorial-abstract principle to every topic, we make sure that all students can interact with the concepts they’re learning and develop their understanding to a deeper level.

For me, we have two key things to work on after Christmas. Firstly, the revisiting of prior knowledge. We need to keep retrieval strength high, and must find a more structured way of doing this. Secondly, developing the guidance we give for teaching, particularly around drawing links between areas of maths. Although this happens well it is not yet a big enough part of our formal curriculum documents, which risks it slipping away in future.

An Assessment System That Works

I’ve been fairly absent from blogging/Twitter since the summer – an inevitable consequence of taking up a few new roles amidst the discord of new systems and specifications emerging from gov.uk with increasing regularity. But I don’t mean that as a complaint. Much that was there was broken, and much that is replacing it is good. Although life in the present discord is manic and stressful, it is also a time of incredible opportunity to improve on what went before, and to rework many of the systems in teaching that went unquestioned in schools for too long.

This Christmas I’m stopping to reflect on the term gone by, and on our efforts to improve three areas: Assessment, Curriculum, and Teaching & Learning. There are many failures, many ideas that failed to translate from paper to practice, but also a good number of successes to learn from and develop in January.

A Blank Slate

KS3 SATs died years ago. National Curriculum levels officially die in September, but can be ‘disapplied’ this year. With tests and benchmarks gone, there is a blank slate in KS3 assessment. This is phenomenally exciting. Levels saturated schools with problems – they were a set of ‘best fit’ labels, good only for summative assessment, that got put at the heart of systems for formative assessment. No wonder they failed.

At WA we decided to try building a replacement system, trialled in Maths, that could ultimately achieve what termly reporting of NC levels never could. We began with three core design principles:

1) It has to guide teaching and learning (it must answer the question “what should I do tonight to get better at Maths?”).
2) It has to be simple for everyone to understand.
3) It has to prepare students for the rigour of tougher terminal exams and challenging post-16 routes.

Principle 2 led us to an early decision – we wanted a score out of 100. This would be easy for everyone to understand, and by scoring out of 100 rather than out of a small number we are less likely to have critical thresholds where students’ scores bunch and where disproportionate effort is concentrated. Scoring out of 100, we felt, would always encourage a bit more effort at the margin, in a way that GCSEs, with their eight grades, fail to do.

Principle 1 led us to another early decision – we needed data on each topic students learn. Without this, the system would descend into level-like ‘best fit’ mayhem, where students receive labels that don’t help them to progress. Yet there’s a tension here between Principles 1 and 2. Taken to its extreme, Principle 1 would have data on everything, separated at an incredibly granular level. However this would soon become too unwieldy to understand, and would ultimately render the system unused.

For me, Principle 3 ruled out using old SATs papers and past assessment material. These were tied to an old curriculum that did not adequately assess many of the skills we expect of our students. They also left too much of assessment to infrequent high-stakes testing, which does not encourage the work ethic and culture of study we value.

These three principles guided our discussions to the system we have now been running since September.

Our System

The Maths curriculum in Years 7-9 (featured in the next post) has been broken down into topics – approximately 15 per year. Each topic is individually assessed and given a score out of 100, computed from three elements: an in-class quiz, homework results, and an end-of-term test. Students then get an overall percentage score, averaged across all of the topics they have studied so far. This means that for each student we have an indication of their overall proficiency at Maths, as well as detailed information on their proficiency in each individual topic. This is recorded by students, stored by teachers, and reported to parents six times a year.
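To make the mechanics concrete, here is a minimal Python sketch of the arithmetic. The 30/20/50 weighting is purely illustrative – one plausible way of combining the three elements, not our actual formula:

    # Illustrative weighting only: quiz 30%, homework 20%, test 50%.
    def topic_score(quiz, homework, test):
        """Combine three percentage marks into one topic score out of 100."""
        return round(0.3 * quiz + 0.2 * homework + 0.5 * test)

    def overall_score(topic_scores):
        """A student's overall score: the mean of every topic studied so far."""
        return round(sum(topic_scores) / len(topic_scores))

    # A student three topics into Year 7:
    topics = [topic_score(80, 70, 75), topic_score(60, 90, 55), topic_score(95, 85, 90)]
    print(topics, overall_score(topics))  # [76, 64, 90] 77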

Does it work?

Principle 1: Does it guide teaching and learning?

Lots of strategies have been put in place to make sure that it does. For example, the in-class quiz is designed to be taken after the material in a topic has been covered but before teaching time is over. The results guide reteaching in the following lessons, so that students can then retake another quiz on that topic and increase their score. Teachers also produce termly action plans from their data analysis, highlighting the actions needed to support particular students as well as adjustments needed to combat problematic whole-class trends.

Despite this, we haven’t yet developed a culture of assessment scores driving independent study. Our vision is that students know exactly what they have to do each evening to improve at Maths, and I believe that this system will be integral to achieving that. We need a bigger drive to actively develop that culture, rather than expecting it to come organically.

Extract from the Year 7 assessment record sheet.

I’m also concerned that assessment at this level has not yet become seen as a core part of teaching and learning. Teachers are dedicated in their collection and recording of data, and have planned some brilliant strategies for extending their students’ progress. But it still just feels like an add-on, something additional to teaching rather than at the heart of it. One of our goals as a department next term must be to embed assessment data further into teaching; not to be content with it assisting from the side.

Principle 2: Is it easy to understand?

Unequivocally yes. Feedback from parents, tutors and students has been resoundingly positive. Each term we report each student’s overall score, as well as their result for each topic studied that term. One question for the future is how to make all past data accessible to parents, as by Year 9 there will be 40+ topics’ worth of information recorded.

Principle 3: Is it rigorous enough?

By making the decision to produce our own assessments from scratch we allowed ourselves to set the level of rigour. I like to think that if anything we’ve set it too high. We source and write demanding questions to really challenge students, and to prepare them to succeed in the toughest of exams. A particular favourite question of mine was asking Year 8 to find the Lowest Common Multiple of pqr and pq^2, closely rivalled by giving them the famed Reblochon cheese question from a recent GCSE paper.
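(For the record, the LCM question yields to taking the highest power of each factor that appears in either expression:

    LCM(pqr, pq^2) = p × q^2 × r = pq^2r

– the q^2 from the second expression, and everything else to the first power.)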

The Reblochon cheese question – a Year 8 favourite.


Following the advice of Paul Bambrick-Santoyo (if you haven’t read Leverage Leadership then go to a bookshop now) we made all assessments available when teachers began planning to teach each topic. This has been a great success, and I’ve really seen the Pygmalion effect in action. By transparently raising the bar in our assessments, teachers have raised it in their lessons; and students have relished the challenge.

Verdict

This assessment system works. It clearly tells students, teachers and parents where each individual is doing well and where they need to improve. Nothing is obscured by a ‘best fit’ label, yet the data is still easy to understand. Freeing ourselves from National Curriculum levels freed us from stale SATs papers and their lack of ambition. Instead we set assessments that challenge students at a higher level – a challenge they have met. The next step is making data and assessment a core part of teaching. Just as NC levels were once part of every lesson (in an unhelpful, labelling way), the results of assessment should now be central to planning and delivering each lesson.

The Practice Gap: Quantity

At its core, the achievement gap is just a practice gap. Children from more advantaged socio-economic backgrounds have a greater quantity of academic practice, and its effect is compounded by the higher quality of that practice.
We know that, on average, children from wealthier backgrounds spend longer engaged in academic pursuits than their less wealthy peers. We also know that the growth of knowledge is exponential. Once a gap has emerged it will grow, even if experiences after that point are identical. This means that even a small practice gap will grow into a big achievement gap.
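To see why, treat knowledge as a single quantity that compounds – a simplification, but a useful one. If two children’s knowledge grows at the same rate r from today onwards, the gap between them compounds at exactly the same rate:

    K_A(t) − K_B(t) = (K_A(0) − K_B(0)) × (1 + r)^t

Identical experiences preserve the ratio between them, but the absolute distance is multiplied year after year.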
The first step to closing the practice gap is to close the gap in quantity of practice. This blog is about the role of lessons in closing that gap. Its aim is to provide general principles that increase the quantity of practice time within a lesson. 
1) Every second counts
The cumulative effect of wasted minutes is tremendously destructive. Consider a student who arrives two minutes late to the start of each lesson, takes two minutes to begin working, and manages to waste another two minutes ‘packing away’ at the end. Ignoring any other down time during a lesson, this student would lose the equivalent of 19 school days each year – practically a month of schooling. Every second counts.
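The arithmetic behind that figure, assuming five lessons a day across a 190-day school year:

    6 minutes × 5 lessons × 190 days = 5,700 minutes = 95 hours ≈ 19 five-hour school days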
The classroom that closes the practice gap eliminates lost minutes. It considers as late a student who is late to begin working, because being on time is about more than arriving at the classroom door. Transitions are tight, and every logistical operation is rehearsed to military efficiency. Teacher instructions are precise and concise, with non-verbal cues being used wherever possible. Accepting wasted seconds is accepting a practice gap.
2) Scarcity motivates
Give a student an hour to complete a task, and you can be damn well sure they’ll take an hour. They’ll crawl along with heroic inefficiency, working with the enthusiasm of a sloth on sedatives. Give the same student the same task with a finite, even daringly short time limit, and they’ll swing into action. A student’s mood should not determine the pace of their work. You should.
Every task set without a time limit is a blank cheque drawn on your most precious resource. The scarcity of limited time forces students to work efficiently and push themselves to achieve before their opportunity has passed. I like to generate scarcity by having a timer on display throughout my lessons, constantly counting down the seconds until the task must be completed. I also find that round amounts of time have far less effect than unusually specific ones. Five minutes is shorthand for “a little while”; six minutes is a reasoned and deliberate limit. The teacher who has calculated a specific maximum time is the teacher who won’t waste a second.
3) Speed matters
It is not good enough just to be able to perform a task. Students have to be able to perform it quickly, and without occupying too much of their working memory. Barry Smith taught me to call this “overlearning”, and it has changed the way I teach. A student has learned a skill or fact well enough when performing or recalling it places sufficiently small demands on their working memory that they can study something else at the same time. Otherwise, why bother? Without that spare capacity, students will never be able to operate in an unfamiliar situation, or draw links across topics and subjects. They have to be able to do the thing you’ve been teaching them and learn something new at the same time.
A great measure for this is speed. Directly measuring whether an operation has entered a student’s ‘muscle memory’, or its cognitive equivalent, is a tough problem. Monitoring their speed can be an effective proxy. Better still, speed is easily measured by students and can give them a tangible number with which to prove the progress they make. This motivating effect spurs them on to practise more and achieve even lower times.
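As a toy illustration – a sketch, not a tool we actually use in class – a few lines of Python are enough to turn a times-table drill into a single time to beat:

    # Toy sketch: ten quick-fire questions against the clock, producing
    # one number the student can record and try to beat next session.
    import random
    import time

    start = time.time()
    for _ in range(10):
        a, b = random.randint(2, 12), random.randint(2, 12)
        while input(f"{a} x {b} = ").strip() != str(a * b):
            print("Not quite - try again:")
    print(f"10 questions in {time.time() - start:.1f} seconds. Beat that next time!")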
That said, speed should be used with caution. It is not appropriate for all skills, and is a poor measure for non-routine or creative tasks. It is also risky because speed is easily ranked, and can turn practice into a competition against each other rather than against the clock. When well managed, however, speed is an excellent way of increasing the quantity of practice for routine skills that need embedding in long term memory.
4) Target mastery
It doesn’t matter what students have done; it matters what they’ll be able to do next. Students are too used to seeing a task as an end in itself. They complete 20 questions for homework because their homework is 20 questions. The practice required is limited and invariant. The job is done when the questions are done.
Learning needs to shift from the past tense into the future tense. The goal of learning is to be able to face a future challenge, not to have completed a past one. By changing the objective of your class to focus on what students have to master, their quantity of practice will increase. Their motivation changes: they think about the skills they have mastered rather than whether they have hit their quota of questions. They are more likely to enter a state of flow, and to practise for the right amount of time. Tweaking your classroom to expect and reward mastery rather than task completion can revolutionise your students’ attitudes and significantly increase the quantity of their practice.
Conclusion
The gap in quantity of practice is a big one, and it starts early – by age 3 there is already a 22% gap between socio-economic groups in the proportion of children watching more than 3 hours of TV a day. It is fed by a wide range of influences, many beyond the class teacher’s control. But by placing these principles at the heart of your classroom, you can make a significant impact in closing the practice gap for your students.

A Natural Theory of Group Work

Theory: group work is a natural phenomenon that comes about when motivated people independently realise that they need to share their ideas with others and open their own ideas up for scrutiny.

I’ve been teaching problem-solving this year. Every week my Year 7 and 8 classes get two problems. The first is a ‘taught’ problem: they explore the problem and try to solve it, but receive extensive help along the way, such as the steps of a strategy or worked examples of a very similar problem. The second is a ‘practice’ problem: similar to the taught problem, but different in at least one respect. They receive much less help here, as it is a chance to practise what they learned from the taught problem.
The aim of this is to prepare students to solve interesting problems later on in their lives, be it in academia or the ‘real world’. And in both these cases, interesting problems are more often than not solved in groups.
Since beginning teaching I’ve tried many different ways of doing group work. I’ve allocated different roles, used word frames, scripts and strange restrictions to try and bring about the group dynamics I wanted, but it’s never quite worked.
So for problem-solving lessons I’ve done it differently. 
  1. Students get given a problem, and begin working on it – on their own and in silence. There’s a time limit on this, and a timer on the board. They will not be allowed to discuss with another person for somewhere between 6 and 10 minutes.
  2. Once the time limit has passed, students are allowed to discuss with a partner and compare strategies. Typically they’ll have to report back in some way for me to assess how they’re doing.
  3. Once pairs have converged on strategies they’re allowed to discuss with other pairs to reach consensus in a bigger group.
This has led to the most productive group work I’ve ever seen, and I’m convinced it’s because of the silent working at the start.
This period of silence gets every student familiar with the problem. It’s so long that they have to start thinking about it; to sit and wait for someone else in the group to give them the answer would be too boring for even the most hardened work avoider. So after accepting that they may as well start thinking, they get interested. An intrinsic motivation to solve it kicks in, and students want to figure it out.
It’s this desire that leads to productive group work. I’ve found in these lessons that groups come together largely organically. In one lesson this week the silence went on for about ten minutes after the timer stopped – nobody was ready to discuss. Then some people grew more certain of their ideas, were ready to compare them and subject them to scrutiny, and started pairing up. Once pairs had agreed joint strategies and started making real progress, they wanted to check with others and formed small workgroups, each pair taking on a different angle but pooling the results.

It was how truly productive groups work. People, motivated to solve a problem, who realise that their best chance lies in subjecting their ideas to scrutiny and working together.