S-21 Tuol Sleng – My LRTT experience

On Sunday 6th August I entered the secret Khmer Rouge torture camp known as S-21, or Tuol Sleng. A site of extreme contrast, S-21 had been converted from a school into a centre for forced confessions and barbaric torture. We were warned about what might be inside. Truthfully, different people reacted in different ways to the horrific scenes before us; having each other was the only thing getting us through.

Building A was to our left as we entered the compound. The barbed wire and boarded-up air vents were scars from the perverse transformation that had taken place. I walked over to Building A, led by the audio guide's warnings of the interrogation and imprisonment that had taken place there. The bustling sounds of Phnom Penh made the exterior of the building feel firmly set in the past; walking into the building dragged me back there.

The interrogation rooms were bare, with filthy floors and walls still displaying the crimes committed within them. I can't quite describe the smell: not distinct, but present enough to repel me. These rooms still contained the beds where interrogators would force confessions from their victims. The stark reality of these atrocities was not hidden but displayed in photographs and in the blood stains on the walls. We absorbed the atmosphere in these rooms of horror, classrooms facing out to the bustling city.

Leaving this building was a relief, but we were aware it was only a foreshadowing of what we were about to see. Outside the building, a gym climbing frame stood adjacent to a large wooden frame with pots underneath. The children's apparatus had served as entertainment for the building's first inhabitants before the Khmer Rouge's reappropriation. The gallows, standing around nine feet high, were used to hang people by their wrists until they passed out; they were then resuscitated by being submerged in human faeces before the torture continued. A striking contrast in setting and emotions continued as we moved through to the next building.

Building B was even more defaced and warped than the one before. Coarse cells had been constructed from bricks and cement, with few lucky enough to include a window. Inside these cells men, women and children were held. Silence was enforced with beatings should a sleeping prisoner dare move and rattle a chain. On the second floor, the many stories of Khmer Rouge members and their victims made for further horrors. The fear and despair evoked by the letters of those taken in by propaganda, and the anger at Khmer Rouge members yet to face justice, created turbulent thoughts. To read of injustice being committed and of closure not being given, all in the same room, made me feel sick and angry.

Building C only evoked more feelings of anger and despair for the darkest depravities contained inside. For me personally, the stories of children being separated from their parents, and of what they must have witnessed, tugged at my insides. Walking through these rooms of horror, with victims staring out from their mugshots, I was not alone. Being surrounded by friends I could seek comfort from was a welcome relief from the audio guide. Listening to accounts of the harrowing mistreatment of fellow human beings, and imagining myself or my family in these rooms, it was easy to drown in the emotion. Building C contained the tools of torture, stories of families brutally destroyed and, finally, a room with the skulls and bones of victims. The connection with the past was tangible and, unlike in previous museums I have visited, I felt those people present in front of me. In contrast, comforting those around me and sharing the experience provided a tie to the present and a shield against feeling lost in the depths of how horrific humanity can be. The horrors of those buildings will never leave me, and neither will the bonds I have with the fellows I shared them with.

After exiting Building C we were met with the monument commemorating those who had died in S-21. Pausing here allowed us to consider what had happened there, what still happens today and whether this might be repeated in the future. The funds to build the monument were donated by the German government, an act of kindness and kinship between the two countries. Walking around it we found Marquis with two cheerful individuals selling books. It transpired that they were survivors of the camp, selling their own life stories. These books were poignant accounts of what had taken place in the buildings we had just walked through. One of the survivors pointed to a photograph in his own book and said, 'That is Pol Pot, he killed my wife'. This man was friendly and greeted us with a smile, an experience hugely different to the one that greeted him on arrival.

‘That is Pol Pot, he killed my wife’

S-21 survivor

Leaving S-21 we travelled through the city to the site of the killing fields outside Phnom Penh. The killing fields were once a Chinese burial ground, where the dead were brought by loved ones for respectful burial. In continuation of their previous projects, the Khmer Rouge defiled the land by using it for torture and death. Moving through the fields we were quick to discover evidence of atrocities still lying in the earth, alongside symbols of love and respect from visitors who had left tributes to those who died.

Just beyond the initial sites of mass burial was a large, beautiful tree covered in more tributes and set in a tranquil glade. Looking at its size and the shade it gave, it was a welcome sight for anyone hiding from the midday sun. Reading the sign nearby, it became clear that what happened there just 40 years ago did not reflect the beauty we could see that day. This was the killing tree, used to end the lives of babies and small children in the most horrific and inhumane way imaginable. The vision of the Khmer Rouge leaders was to start a brand new, simplistic society with all 'undesirable' individuals removed. Their rationale that the children of those individuals should also be killed was summed up in the phrase 'When pulling out weeds, remove their roots and all'. This propaganda message summed up for me the brutality and sheer insanity of the people behind this regime.


 ‘When pulling out weeds, remove their roots and all’

Khmer Rouge propaganda

In the centre of the killing fields was a Buddhist stupa containing the skulls of many victims. By the time we reached it, moral support really was needed, as the day had been a very reflective, dark and immersive experience. Once again, what really made the difference here was the fellowship of the people I was surrounded by. The tower of skulls bore scientific markings describing how these men, women and children had been massacred with brutal tools. Laying a flower and incense with another compassionate friend allowed us to pay our respects to those inside, and around us, in the best way we could. The faces of the victims inside, staring out of the display case, made any beauty of the flowers and incense fade away quickly. These were human beings, with loves, hopes and, finally, fears, who had their lives needlessly and tragically taken away.
One moment of the day lacking juxtaposition was our meal after the killing fields. As the strangers we had been two weeks ago, we might not have known how to approach each other in our different moods of contemplation. As a fellowship, a group of friends, we now knew that the best remedy for those fields of horror was to choose kindness and replace the negative feelings with the love between us all as friends. In a place permanently scarred by brutality, I was surrounded by the warmth and friendship of fantastic human beings. I am so grateful to the people who helped me feel grounded that day. I know that whenever I reflect on the darkness and sad events of that day, I will also be reminded of the love and the fantastic feelings I found in the friends surrounding me!


Assessment and Accountability

I have considered the suitability of the above image and what I mean to say with it. What I think needs to be discussed and questioned more in our schools is the way in which accountability is encroaching on how assessment works (or doesn't). Now, I am not for a minute suggesting that any teacher has a deliberate agenda to unfairly assess their students, but I am suggesting that external accountability pressures can push many teachers to adapt their practice. This is important for teachers to consider because, whilst we respond to these pressures, we are also trying to design assessments whose purpose is to inform our students' learning. How often do we get to think about the true purpose of our assessments, the fundamental principle of good assessment?

Accountability in schools is currently spiralling out of control, with linear progress often expected for all students, and teachers questioned or judged when assessment results show students not on course to meet attainment targets. Much of the data for students relies upon predictions of attainment or estimates of which students are on track to meet targets. This has moved away from using assessment data to inform teachers about where interventions would have the greatest effect, and towards deciding which teachers need to be put under further scrutiny or heavier-handed performance management. Accountability's stranglehold over data now takes up so much time that it is becoming increasingly difficult for teachers to actually implement the teaching interventions the data calls for.

Now, I am not saying all accountability activities in school need to be negative. Teachers, just like learners, need to evaluate the success of their teaching and think about which CPD activities might improve outcomes for their students. From the perspective of senior leadership, accountability is a way of measuring the impact of CPD and looking for where the next intervention might be needed. These positive uses of accountability do require a sense of trust and a belief that all teachers can improve with the right feedback and training. The type of accountability which is unhelpful is that which looks to weed out 'unsuccessful' teachers and replace them, or which acts out of distrust. This type of accountability will not only leave teachers feeling anxious about the judgements made about them, it will make them act out of desperation to appear faultless. This anxiety can make teachers act in ways they would not normally choose when exercising their professional autonomy.

One way in which teacher’s practice can be altered by accountability is in their deployment and interpretation of assessment. As discussed earlier, one of the fundamental pillars of good assessment is the thinking behind the purpose of the assessment taking place. Sometimes in our classrooms, we might deploy an assessment to meet a data input deadline or for reporting a grade, but we should ask how valuable is this information? I have known teachers have to rush teaching a particular topic or delay moving on to a topic because they know one of these data drops is fast approaching. This is where assessment for accountability interrupts or stops learning.

Accountability can interfere with the purpose of an assessment and attempt to measure the quality of teaching, or even attempt to measure the likelihood of a student reaching a GCSE target grade in two years' time! If we aimed to design an assessment for these purposes, what questions might we ask? What level of Bloom's taxonomy would need to be reached to satisfy an external marker that the teacher had performed adequately? Such questions are obviously very difficult to design, yet GCSE exam papers are used for this very task every year.

I would recommend that, where possible, student assessment and accountability are best kept separate. Classroom assessment is an excellent tool for judging where students are working and informing them of what needs to be done in the next stages of learning. It is a terrible measure of the overall effectiveness of a teacher or a school, and using it for accountability purposes should be exposed as a useless exercise wherever it exists. It also drives teachers to use external assessments to shape their curriculum and influences the design of all assessments that aim to improve learning. As Daisy Christodoulou explains in the conclusion of her book on assessment for learning (Christodoulou, 2016):

‘Indirect measures are easily distorted and corrupted, so we have to be careful in the way we use exams and the way we prepare for them’ 

 

Christodoulou, D. (2016) Making Good Progress? The Future of Assessment for Learning. Oxford: Oxford University Press.

Small School CPD – David vs Goliath

Leading Continuous Professional Development (CPD) in a school can be daunting. There are multiple organisations and consultants whose services are snapped up by large Multi-Academy Trusts (MATs), which can then roll them out across a number of schools through trust-wide policies. These 'economies of scale' are not available to smaller schools, so their smaller communities are disadvantaged by not having the same access to these professional resources. Whilst I am not blaming large schools or MATs for using their size to their advantage, I do see room for a discussion of how smaller schools can gain access to these resources.

So what are the options for smaller schools that need expert input into their CPD but struggle to afford even the most reasonable of consultant fees? One strategy is in-house training in its different varieties, where the expertise of our best practitioners is shared across or between schools. This sharing of best practice can be an excellent form of CPD and will often be specific to the students you serve in your local area. One drawback, however, is that it doesn't import any new skills or practice into your school; but assuming there is a pool of excellent practice in one area and a deficit in another, a positive change should still occur.

The Standard for Teachers' Professional Development document (http://tinyurl.com/z6jzoff) contains two standards (numbers 2 and 3) which identify the use of external expertise and evidence as essential for effective professional development:

  • 2. Professional development should be underpinned by robust evidence and expertise.
  • 3. Professional development should include collaboration and expert challenge.

I would suggest these standards argue that, for diverse training across multiple teaching skills in a school, internal coaching alone is not sufficient. Smaller schools may also have fewer staff able to provide expertise and training, making it much less likely they can run internal coaching programmes. Schools of this size will therefore need to be creative to make sure they can access the same external expertise on their limited budgets.

One such creative method is for small schools to club together to pay for an external consultant to deliver INSET to their combined staff. This is a useful way for small groups of staff to gain subject-specific training which would simply be too expensive for a single school to arrange. Problems can occur, however, where schools have different training needs due to differences in exam boards, the ages being taught or context. Where these differences occur, working collaboratively at a local level may be no more advantageous than participating in CPD at a national level.

At a national level, the EEF have many projects which continually recruit schools to participate as treatment schools in evaluations of interventions. These interventions are being appraised for effectiveness, which also creates opportunities for expert challenge and guarantees evidence-informed practice. Where these projects exist and match a school's improvement needs, they can be a fantastic resource for CPD.

If any other small schools are struggling to source challenging, relevant and cost-effective CPD on dwindling budgets, please comment below with solutions you have found.

 

 

In defence of constructive feedback

The EEF Review – A marked improvement?

 

The Education Endowment Foundation (EEF) in the UK has published a review of current research on marking and feedback. The review brings some welcome insights into practices teachers could adopt, and those they could reject without negative impact. Below I summarise the findings and the take-home messages for teachers.

Thoroughness

Thoroughness revolves around the frequency of marking and how in-depth it is. The review found that teachers could abandon 'tick and flick' without a noticeable fall in the effectiveness of their marking. So, if you are a teacher who feels they should 'acknowledge work' with ticks and indiscriminate 'good's dotted around the page, you can abandon this practice for two reasons.

The first is that by ticking the work you are, in reality, suggesting you have read it and approve. But do you? If there are mistakes in the work and you have only skim-read it, you may not pick them up. You have then ticked the work, suggesting to the pupil it is correct when it isn't. So what did you mean? Perhaps what you meant wasn't worth writing at all.

Secondly, good-quality feedback is definitely time-consuming, so let's all abandon the strategies, no matter how ingrained, that simply add no value to our practice. Dropping the traditional 'tick and flick' is a time-saving change you can make today!

Frequency and Speed

Another element reviewed by the EEF / Oxford review is the speed with which marked work is returned to students after completion. The report found little quality evidence on this, which is unfortunate, as it is something many students would describe as important to them. It did find that work given back in the lesson following its completion had a positive impact. It is worth mentioning, however, that the quality and precision of the marking should always be considered before rushing to return poorly marked work as quickly as possible.

Grading

There were some small-scale studies which reviewed the effects of grading as feedback. There was no evidence that grading student work was effective, except for one study in Sweden which found a small positive effect for girls. This was explained as the girls appreciating the validation of their often underestimated abilities.

Grading work alongside written feedback can often hamper the progress made as a result of that feedback. This is because students focus on their performance in terms of the grade rather than on the comments for improvement. Indeed, withholding grades and giving feedback alone may have little negative impact on students.

Pupil Response

The report found that, in general, students can find acting upon feedback difficult. Sometimes, as subject specialists, we may use terms we consider obvious, but students may struggle to understand what is being asked of them. The question, though, is how can we best facilitate pupil response?

Dialogic marking is an emerging practice where students are asked to converse with the teacher in their exercise books. This form of feedback has not been researched to any great extent, so its effectiveness cannot yet be known. The report does recommend that if this practice is to be effective, students should be given dedicated reflection time in the lesson.

Corrections

When corrections were considered in the report, it was particularly interesting that coded feedback was found to be as beneficial as written comments. This could be a really time-efficient way for teachers to give feedback to their students.

Conclusion

In conclusion, it is clear the review has many positive messages for teachers. If we can continue to move forward the discussion about the workload and effectiveness of feedback, then feedback can be made to work for students and teachers, not the other way around.

 

Timing effective feedback


Hattie (2009) rates feedback with an effect size of 0.79. To put that into perspective, quality of teaching is rated at 0.44, a little over half that effect size. Clearly, then, feedback is a hugely important part of teaching and formative assessment. What I wonder is: can all feedback be equally effective, or is there a way of maximising the effect of how we respond to students' work?
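(A quick aside for anyone unfamiliar with effect sizes: they are usually reported as standardised mean differences, most commonly Cohen's d. The formula below is the general definition rather than the exact method behind Hattie's synthesis.)

\[ d = \frac{\bar{x}_{\text{intervention}} - \bar{x}_{\text{comparison}}}{s_{\text{pooled}}} \]

Read this way, an effect size of 0.79 means the average student receiving the intervention ends up roughly 0.79 of a standard deviation ahead of the average student in the comparison group.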

One factor affecting the efficacy of

Formative or summative, what is the difference?

The definition issue

Since its initial proposal, formative assessment has most often been justified and clarified by comparison with summative assessment. These comparisons are not always helpful, however, as they tend to focus upon the use of the assessments rather than the actual processes involved. In fact, when it comes to process, summative and formative assessments have much in common, so it can be more confusing to try to define them through comparison. Further still, some have tried to use a single assessment for both of these purposes, which can add confusion for both students and teachers when trying to validate any information from the assessment.

Their purpose

I have posted criticism before about colleagues who adopt a false form of formative assessment by tacking it on to the end of a summative test. I think, and research has shown, that assessments are most valid when they are aimed at as few purposes as possible. This does not mean that exam questions cannot be used formatively; it simply means that an exam aimed at producing a grade and conducted under particular conditions may not be the best tool to inform learners or teachers about how to move learning on. Formative assessment is not about the tools used in the assessment part of the process; it is about what is done with them.

The design

Designing formative assessment should be done with future teaching in mind. It should be used to direct teaching and improve it in a direction that would not have been taken had the assessment not happened. I have had many conversations recently with colleagues who have been eager to try new formative assessment tools, but who have not always understood why they are using them or what they could do with the assessment information afterwards. One basic example is marking. Many teachers are clear that marking is a useful tool for giving feedback to students, but it is also a remarkable tool for lesson planning. This kind of measure of student learning, taken from their books, can be extremely illuminating about what has been understood well and what hasn't. Using this information, a highly effective intervention can be planned for the following lesson.

I hope more people will think about the intentions of their assessments and what they can do for their teaching, as opposed to what tools they could use to demonstrate they are 'doing formative assessment'. Having said that, new tools generating new information that I can use to inform my teaching are always an excellent discovery on WordPress!

Mike

Learning for assessment?

[Photograph: the front of an exam paper with space for the teacher's written feedback, discussed below]

Formative assessment and assessment for learning have many things in common. The most obvious of them, regretfully, is the word 'assessment'. This has led to many people conducting formative assessment as a series of exams used to track the progress of their students over a period of learning. This is an effective way of identifying learners who are struggling and learners who are performing well. Many teachers I have met intend to use this data to create interventions for their students, but often find they have reluctant students (those who 'failed') or unmotivated students (those who did better than their competitors).

The photograph above shows an attempt at this type of 'formative assessment', where a colleague clearly intends to write some feedback to the examinee on the front of an exam paper. I can't help but wonder how much time the student will spend reflecting on the feedback they receive from their teacher, as opposed to comparing their grade with those of their classmates. Butler (1988) conducted a well-controlled study in Israel comparing students who received grades alongside feedback with students who received only written comments. She found that students who received grades alongside written feedback did not subsequently perform as well as their peers who received only written comments.

If this form of assessment and feedback does not result in improvements in learning, then surely it cannot be construed as formative assessment? I hope I don't have to see any more exam papers with the school's marking policy applied across them in an attempt to deliver the prescribed regular feedback, regardless of whether it has an impact. I also wish that the teachers who do this would have empathy for the student who is struggling in their subject and now receives a barrage of grades telling them that all the work they put into revising and completing an exam paper has been given the summary judgement of a G.

Exams are incredibly useful at certain times and students do appreciate being able to benchmark their progress. Please do correct someone though if you see them confusing this process with formative assessment.

 

  1. Butler, R. (1988) Enhancing and undermining intrinsic motivation: the effects of task-involving and ego-involving evaluation on interest and performance. British Journal of Educational Psychology, 58(1), pp. 1–14.