Stupidity has a knack of getting its way.
Most schools these days routinely monitor students’ exercise books in an attempt to extrapolate the quality of teaching. In some ways this is positive and reflects the growing recognition that we can tell much less than we might believe about teaching quality by observing lessons. On the whole I’m in favour of looking at students’ work, but, predictably, book monitoring goes wrong for pretty much the same reasons lesson observation doesn’t work.
The thing is, there’s nothing wrong with observing lessons, work scrutiny or any of the other practices used to peer inside the black box of teaching quality; the problems stem from how the information gleaned is then used. If I observe a lesson with a checklist of criteria I will be viewing the lesson through a set of predefined parameters which will distort what I see. The same applies if I conduct a work scrutiny with something like this:
It’s not that I think Ross McGill’s approach is unhelpful per se – in fact, as he explains in his post, the idea that this monitoring should be accompanied by conversation with students is probably useful – but if you use the pro forma he suggests, the best you can expect is to find what you’re looking for.
What, you might wonder, is wrong with finding what you’re looking for?
Let’s consider the Learning Policy Ross mentions in his blog:
1. Teachers must have a secure overview of the starting points, progress and context of all students.
2. Marking must be primarily formative including use of a yellow box which is clear about what students must act upon and selective marking, where relevant.
3. Marking and feedback must be regular.
4. The marking code must be used.
Number 1 is a clear statement of what the role of a teacher entails and as such seems an excellent way to hold teachers to account. But this is undermined by predetermining what good looks like in points 2-4. Why must a yellow box be used? Why can’t marking be irregular? What’s the reasoning for one marking code being superior to another? This sort of thing results in teachers marking books not for students’ benefit, but for the convenience of auditors. This isn’t a learning policy, it’s managerialism and it is to be resisted. Rather than creating unnecessary workload, it would be better to simply say, “We trust you to have a secure overview of the starting points, progress and context of all students and how you go about doing that is up to you.”
It comes down to whether you’re more interested in getting what you want or trusting people to do what is best. Instead of looking for items on a checklist we should be looking at what is there and asking questions about why it’s there and what it represents. As I’ve argued before, accountability only works if those being held to account are prompted to try to be their best instead of trying to look good. When teachers are told what good looks like they know that anything that deviates from this expectation is likely to be viewed with suspicion and subject to misunderstanding. The safe option is to cover your back, give the observer what they want and regularly festoon your books with yellow boxes.
The point is, none of this matters. The only thing worth checking for is the quality of students’ work. As such, if you really feel you need a pro forma to fill in, I suggest it looks something like this:
Now, let’s consider the evidence. Teacher 1’s students have produced work which is untidy and lacking in quality. Teacher 2’s classes, on the other hand, have produced some great stuff, but it hasn’t been marked. Teacher 3’s classes are turning out rubbish work which is also going unmarked and the students of Teacher 4 are working well and their work is being marked. What does this tell you? Which outcome do you prefer? What assumptions are you in danger of making? What questions would you want to ask?
The last two cases present few difficulties. It seems reasonable to have a word with Teacher 3 and suggest her books need marking. Even if we charitably assume there are other methods by which the teacher might be giving feedback, clearly they’re not working. In the case of Teacher 4, both teacher and students seem to be doing exactly what’s expected and required. Case closed.
But what about the first two teachers? What has our scrutiny actually revealed? I’d want to have a chat with Teacher 2 to find out how this magic is being worked. It would be interesting to compare students’ work across subjects to see whether they’re all simply highly motivated young chaps who do what’s required despite feckless teachers. I might want to speak to some of the students to ask about the conditions under which their work was produced and to find out whether they have been receiving feedback through means other than marking. But, if the work is good, the last thing I want to do is tell off the teacher.
Teacher 1, though, is a cause for concern. Despite the work being marked, it’s just not up to snuff. Is this because her students blithely ignore their teacher’s earnest efforts? Might it be that the presence of marking isn’t providing useful feedback? If the teacher is working hard to mark, but the quality of work isn’t improving, maybe the teacher needs some support? Or perhaps the situation will right itself given time and should just be earmarked for further monitoring. It should always be remembered that treating teachers equally is fundamentally unfair.
Both of these cases reveal circumstances where book monitoring could go wrong. It’s far harder to assess the quality of work than it is the quality of marking, and so we have an entirely natural tendency to do what’s easier. If we’re just looking to see whether a marking policy has been followed, Teacher 1 might get a gold star, despite the poor quality of work. And I can well imagine a scenario where Teacher 2 is forced to comply with a marking policy despite her students’ successes.
Another related point is about who’s doing the book monitoring. McGill makes the point in his post that it should be subject leaders, and this is generally sound advice. The last thing we want is school leaders with a subject specialism in, say, DT, quality assuring maths books. I was once told by a PE teacher that the work my Year 7 class had been doing wasn’t challenging enough. When I asked why, he told me this was because they’d been studying The Lady of Shalott, a poem he’d seen being taught in a primary school he’d visited. “Hmm,” I replied. “That’s odd, because Tennyson’s poetry is on the A level specification and I’m about to start studying it with my Year 13 class.”
Who cares if marking is regular or in line with a policy as long as the work students produce is of a fantastic quality? And if the work is ropey, only a fool would be happy if the marking meets expectations.
This should be the standard against which we hold teachers to account: Is the work great? If the answer’s no, then whatever they’re doing isn’t working. But if the answer’s yes, no other question need be asked.
If Venus de Milo did feedback – what reach she could have had by Andy Day
Is book sampling valid? by Greg Ashman
Evidence? What evidence? by Toby French
The post The problem with book monitoring appeared first on David Didau: The Learning Spy.