Assessment and Accountability

I have considered the suitability of the above image and what I mean to say with it. What I think needs to be discussed and questioned more is the way in which accountability is encroaching on how assessment works (or doesn’t) in our schools. I am not for a minute suggesting that any teacher has a deliberate agenda to assess their students unfairly, but I am suggesting that external accountability pressures can push many teachers to adapt their practice. This is important for teachers to consider because, while they respond to these pressures, we are also trying to design assessments whose purpose is to inform our students’ learning. How often do we get to think about the true purpose of our assessments, the fundamental principle of good assessment?

Accountability in schools is currently spiralling out of control: linear progress is often expected of all students, and teachers are questioned or judged when assessment results show students off course to meet attainment targets. Much of the data held on students relies upon predictions of attainment, or estimations of which students are on track to meet targets. The use of assessment data has shifted away from informing teachers where interventions would have the greatest effect, and towards deciding which teachers should be put under further scrutiny or heavier-handed performance management. Accountability’s stranglehold over data now takes up so much time that it is becoming increasingly challenging for teachers actually to implement the teaching interventions needed to act upon it.

Now I am not saying all accountability activities in school need to be negative. Teachers, just like learners, need to evaluate the success of their teaching and think about which CPD activities might improve outcomes for their students. From the perspective of senior leadership, accountability is a way of measuring the impact of CPD and looking for where the next intervention might be needed. These positive uses of accountability do require a sense of trust and a belief that all teachers can improve with the right feedback and training. The type of accountability which is unhelpful is that which looks to weed out ‘unsuccessful’ teachers and replace them, or which acts out of distrust. This type of accountability will not only leave teachers feeling anxious about the judgements made about them, it will make them act out of desperation to appear faultless. This anxiety can make teachers act in ways they would not normally choose when exercising their professional autonomy.

One way in which teachers’ practice can be altered by accountability is in their deployment and interpretation of assessment. As discussed earlier, one of the fundamental pillars of good assessment is the thinking behind the purpose of the assessment taking place. Sometimes in our classrooms we might deploy an assessment to meet a data-input deadline or to report a grade, but we should ask how valuable that information really is. I have known teachers rush the teaching of a particular topic, or delay moving on to one, because they know a data drop is fast approaching. This is where assessment for accountability interrupts or stops learning.

Accountability can interfere with the purpose of an assessment, attempting to measure the quality of teaching, or even the likelihood of reaching a GCSE target grade in two years’ time! If we aimed to design an assessment for these purposes, what questions might we ask? What level of Bloom’s taxonomy would need to be reached to satisfy an external marker that the teacher had performed adequately? Such assessments are obviously very difficult to design, yet GCSE exam papers are used for this very task every year.

I would recommend that, where possible, student assessment and accountability are kept separate. Classroom assessment is an excellent tool for judging where students are working and informing them of what needs to be done in the next stages of learning. It is a terrible measure of the overall effectiveness of a teacher or a school, and using it for accountability should be exposed as a useless exercise wherever it exists. It also drives teachers to use external assessments to shape their curriculum, and influences the design of all assessments aiming to improve learning. As Daisy Christodoulou explains in the conclusion of her book on assessment for learning (Christodoulou, 2016):

‘Indirect measures are easily distorted and corrupted, so we have to be careful in the way we use exams and the way we prepare for them’ 

 

Christodoulou, D. (2016) Making Good Progress? The Future of Assessment for Learning. Oxford: Oxford University Press.

Small School CPD – David vs Goliath

Leading Continuous Professional Development (CPD) in a school can be daunting. Multiple organisations and consultants have their services snapped up by large Multi-Academy Trusts (MATs), which can then roll them out across a number of schools through trust-wide policies. These ‘economies of scale’ are not available to smaller schools, so their smaller communities are disadvantaged by not having the same access to these professional resources. Whilst I am not blaming large schools or MATs for using their size to their advantage, I do see room for a discussion of how smaller schools can gain access to these resources.

So what are the options for smaller schools who need expert input into their CPD but struggle to afford even the most reasonable of consultant fees? One strategy is in-house training in its various forms, where the expertise of our best practitioners is shared within or between schools. This sharing of best practice can be an excellent form of CPD and will often be specific to the students you serve in your local area. One drawback, however, is that it does not import any new skills or practice into your school; but assuming there is a pool of excellent practice in one area and a deficit in another, a positive change should occur.

The standard for teachers’ professional development document (http://tinyurl.com/z6jzoff) contains two standards (numbers 2 and 3) which indicate the use of external expertise and evidence as essential for effective professional development:

  • 2. Professional development should be underpinned by robust evidence and expertise.
  • 3. Professional development should include collaboration and expert challenge.

I would suggest these standards argue that internal coaching alone is not sufficient for diverse training across the many teaching skills in a school. Smaller schools may also have fewer staff able to provide expertise and training, making it much less likely they can run internal coaching programmes. Schools of this size will therefore need to be creative to secure the same access to external expertise on their limited budgets.

One creative method is for small schools to club together to pay for an external consultant to run an INSET for their combined staff. This is a useful way to arrange subject-specific training for small groups of staff which would simply be too expensive for a single school. Problems can occur, however, where schools have different training needs due to differences in exam boards, the ages being taught or context. Where these differences exist, working collaboratively at a local level may be no more advantageous than participating in CPD at a national level.

On a national level, the EEF has many projects which continually recruit schools to participate as treatment schools in evaluations of interventions. These interventions are being appraised for effectiveness, which also creates opportunities for expert challenge and a guarantee of evidence-informed practice. Where these projects exist and match a school’s improvement needs, they can be a fantastic resource for CPD.

If any other small schools are struggling to source challenging, relevant but also cost-effective CPD on their dwindling budgets, please comment below with solutions you have found.

 

 

In defense of constructive feedback

The EEF Review – A marked improvement?

 

The Education Endowment Foundation (EEF) in the UK has published a review of current research on marking and feedback. The review brings some welcome insights into practices teachers could adopt, and those they could reject without negative impact. Below I summarise the findings and the take-home messages for teachers.

Thoroughness

Thoroughness revolves around the frequency of marking and how deeply work is marked. The review found that teachers could abandon ‘tick and flick’ without a noticeable fall in the effectiveness of their marking. So if you are a teacher who feels they should ‘acknowledge work’ with ticks and indiscriminate ‘good’s dotted around the page, you can abandon this for two reasons.

The first is that if you are ticking the work, you are in effect claiming you have read and approved it. But have you? If there are mistakes in the work and you have only skim-read it, you may not pick them up. You have then ticked the work, suggesting to the pupil it is correct, when it isn’t. So what did you mean? Perhaps what you meant wasn’t worth writing at all.

Secondly, good-quality feedback is undeniably time-consuming. Let’s all abandon the strategies, no matter how ingrained, that simply add no value to our practice. Dropping the traditional ‘tick and flick’ is a time-saving change you can make today!

Frequency and Speed

Another element reviewed by the EEF / Oxford review is the speed with which marking is returned to students after work is completed. The report found little quality evidence on this, which is unfortunate, as it is something many students would describe as important to them. It did find that work returned in the lesson following its completion had a positive impact. It is worth mentioning, however, that the report advised that the quality and precision of marking should always come before the urge to return poorly marked work quickly.

Grading

Some small-scale studies reviewed the effects of grading as feedback. There was no evidence that grading student work was effective, except for one study in Sweden which found a small positive effect for girls. This was explained as the girls appreciating the validation of their often underestimated abilities.

Grading work alongside written feedback can often hamper the progress made as a result of that feedback, because students focus on their performance in terms of the grade rather than on the comments for improvement. Indeed, withholding grades and giving written feedback alone may cost students little.

Pupil Response

The report found that, in general, students can find acting upon feedback difficult. Sometimes, as subject specialists, we may use terms whose meaning we take for granted but which students struggle to understand. The question, though, is how can we best facilitate pupil response?

Dialogic marking is an emerging practice in which students are asked to converse with the teacher in their exercise books. This form of feedback has not been researched to any great extent, so its effectiveness cannot yet be known. The report does recommend that if this practice is to be used, students should be given dedicated reflection time in the lesson.

Corrections

When corrections were considered in the report, it was particularly interesting that coded feedback was found to be as beneficial as written comments. This could be a really time-efficient way for teachers to give feedback to their students.

Conclusion

In conclusion, it is clear the review has many positive messages for teachers. If we can continue to move the workload and effectiveness discussion around feedback forward, then feedback can be made to work for students and teachers, not the other way around.

 

Timing effective feedback

image

Hattie (2009) rates feedback with an effect size of 0.79. To put that into perspective, quality of teaching is rated at a little over half that figure, with an effect size of 0.44. Clearly, then, feedback is a hugely important part of teaching and formative assessment. What I wonder is: can all feedback be equally effective, or is there a way of maximising the effect of how we respond to students’ work?
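For readers unfamiliar with the statistic, an effect size such as Hattie’s 0.79 is a standardised mean difference (Cohen’s d): the gap between two group means divided by their pooled standard deviation. The short Python sketch below shows the calculation; the two sets of test scores are invented purely for illustration and are not drawn from Hattie’s data.

```python
from statistics import mean, stdev
from math import sqrt

def cohens_d(group_a, group_b):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    s_a, s_b = stdev(group_a), stdev(group_b)
    # Pooled SD weights each group's sample variance by its degrees of freedom
    pooled = sqrt(((n_a - 1) * s_a**2 + (n_b - 1) * s_b**2) / (n_a + n_b - 2))
    return (mean(group_a) - mean(group_b)) / pooled

# Invented scores: a class receiving rich feedback vs a comparison class
feedback_group = [68, 72, 75, 70, 74, 71]
control_group = [62, 65, 67, 63, 66, 64]
print(round(cohens_d(feedback_group, control_group), 2))  # prints 3.18
```

An effect size of 0.79 therefore means the average student in the feedback condition sits roughly 0.8 standard deviations above the average of the comparison group.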

One factor affecting the efficacy of

Formative or summative: what is the difference?

The definition issue

Since its initial proposal, formative assessment has most often been justified and clarified by comparison with summative assessment. These comparisons are not always helpful, however, as they tend to focus upon the uses of the assessments rather than the actual processes involved. In fact, when it comes to process, summative and formative assessments have much in common, so it can be more confusing to try to define them through comparison. Further still, some have tried to use a single assessment for both purposes, which can add confusion for both students and teachers when trying to validate any information the assessment produces.

Their purpose

I have posted criticism before about colleagues who adopt a false form of formative assessment by tying it on to the end of a summative test. I think, and research has shown, that assessments are most valid when they are aimed at as few purposes as possible. This does not mean that exam questions cannot be used formatively; it simply means that an exam designed to produce a grade, and conducted under exam conditions, may not be the best tool to inform learners or teachers about how to move learning on. Formative assessment is not about the tools used in the assessment part of the process, however; it is about what is done with them.

The design

Designing formative assessment should be done with future teaching in mind. It should be used to direct teaching and improve it in a direction that would not have been taken had the assessment not been done. I have had many conversations recently with colleagues who have been eager to try new formative assessment tools, but have not always understood why they are using them or what they could do with the assessment information afterwards. One basic example is marking. Many teachers are clear that marking is a useful tool for feeding back to students, but it is also a remarkable tool for lesson planning. This kind of measure of student learning from their books can be extremely illuminating about what has been understood well and what hasn’t. Using this information, a highly effective intervention can be planned for the following lesson.

I hope more people will think about the intentions of their assessments and what they can do for their teaching, as opposed to what tools they could use to demonstrate they are ‘doing formative assessment’. Having said that, new tools generating new information that I can use to inform my teaching are always an excellent discovery on WordPress!

Mike

Learning for assessment?

image

Formative assessment and assessment for learning have many things in common. The most obvious of these, regretfully, is the word ‘assessment’. This has led many people to conduct formative assessment as a series of exams, used to track the progress of their students over a period of learning. This is an effective way of identifying learners who are struggling and learners who are performing well. Many teachers I have met intend to use this data to create interventions for their students, but often find they have reluctant students (those that ‘failed’) or unmotivated students (those that did better than their competitors).

The photograph above shows an attempt at this type of ‘formative assessment’, where a colleague clearly intends to write some feedback to their examinee on the front of an exam paper. I can’t help but wonder how much time the student will spend reflecting on the feedback they receive from their teacher, as opposed to comparing their grade with those of their classmates. Butler (1988) conducted a well-controlled study in Israel comparing students receiving feedback with grades and students receiving only written comments. She found that students who received grades alongside written feedback did not subsequently perform as well as their peers who received only written comments.

If this form of assessment and feedback does not result in improvements in learning, then surely it cannot be construed as formative assessment? I hope I don’t have to see any more exam papers with the school’s marking policy applied across them in an attempt to deliver the prescribed regular feedback, regardless of whether it has an impact. I also wish that teachers who do this would have empathy for the student who is struggling in their subject and now receives a barrage of grades: all the work they have put into revision and completing an exam paper given the summary judgement of a G.

Exams are incredibly useful at certain times and students do appreciate being able to benchmark their progress. Please do correct someone though if you see them confusing this process with formative assessment.

 

  1. Butler, R. (1988). Enhancing and undermining intrinsic motivation: the effects of task-involving and ego-involving evaluation on interest and performance. British Journal of Educational Psychology, 58(1), pp. 1–14.

Intentions for Learning?

image

What do I want them to learn?

I think that to non-teachers this question must sound like a rather obvious, simplistic idea to dwell upon for any length of time, and even more so for it to be something teachers have difficulty elaborating. I am currently deciding how to ensure teachers within my school can decide both what makes a good learning intention and what makes a poor one. We are undergoing a two-year project improving our use of Formative Assessment in our school. My main explanation for colleagues at present is that assessment becomes formative when they use it to alter and adapt their teaching to meet the learning needs of the students in front of them. An analogy for this is the use of road signs along a journey: a sign every so often that helps you take the right turning or motorway junction is much more beneficial than a sign at the end of a long journey telling you that you are in the wrong place.

Learning intentions are sometimes more commonly called learning objectives. They are meant to be generic statements relating to the transferable skills we are trying to get our students to demonstrate. Writing them can be a more manageable challenge for some colleagues than others, especially in practical subjects which may be developing niche skills that do not relate easily to other subject areas. Learning intentions also help us to evaluate what our students are learning, by defining the skills we are looking for them to demonstrate. From here, it is possible to build a picture of the competencies that we wish our students to exhibit.

Formative assessment works only when the course of teaching is steered by information gained from assessments along the course of learning. For this assessment and redirection to take place, however, a clear course needs to be planned out. Within my school training I have explained this as planning a trip to London, where you need to navigate using road signs along the way. London is where we intend to end up, but to get there we must plan and acknowledge what to look for along the journey, so that we know we are headed in the right direction. I think teachers have grown accustomed to levels and grades acting in place of these signs, telling them whether or not students are learning. In this strategy, underperforming students must work harder, whilst students achieving a ‘target grade’ must clearly be learning, right?

Measuring whether or not students are learning can be extremely challenging and often imprecise. Aiming instead to gauge whether learners have learnt enough to move on to the next challenge is more achievable. So too is finding reliable signs of students who are struggling to continue on the planned course of learning. This is where planned learning outcomes and defined success criteria can enable a good teacher to be an effective assessor. These planned checkpoints along the journey can make sure we end up in London, rather than our learners unexpectedly arriving in New York and being told to just work harder.

Mike