
Topic 3 - Quality and Accountability

I found it very interesting to browse this issue of the Journal of Education Policy. Reading about the development and use of data in much of Europe, it is easy to see what has paved the way for many policy changes here in Australia. It is a little scary to consider how the government can simply ignore the failures that have been observed overseas and push blindly on with the reforms it favours. Caldwell (2010, p. 50) cites Sir Ken Robinson on England: ‘Education is being strangled persistently by the culture of standardised testing. The irony is that these tests are not raising standards except in some very particular areas, and at the expense of most of what really matters in education’ (Robinson, 2009). Wouldn’t you think more notice would be taken of what is happening, or not happening in the case of publishing school-specific data, in a country like Finland, which tops the PISA results?

The seemingly bloody-minded push for ‘data-driven’ improvements once again fails to listen to and be guided by research. I really agreed with a comment made by a QAE officer interviewed for the study in Finland: ‘With evaluation... If it’s accurately focussed and accurately used, it produces knowledge that’s useful for management’ (p. 174). NAPLAN data has many limitations: it is not accurately focussed, and it is not always used accurately. ‘With the current NAPLAN design, where there is only one annual test of 40 questions per subject area, student scores contain large margins of error. NAPLAN results do not provide sufficiently accurate information on student performance, student progress or school performance’ (Wu, 2010, p. 25).

I can’t really answer this question. I can only surmise that the people who commissioned and developed the NAPLAN tests have in some way measured the quality and validity of the results. As a classroom administrator of the test, I have serious doubts about its quality. To me, one test on one day, under unfamiliar ‘testing’ conditions, does not truly reflect the capabilities of every child. I know children who have achieved terrible results in NAPLAN, yet are capable of much more in their normal classroom situation. With any multiple-choice test, there is the question of how much the child knows as opposed to how well the child guesses. We have had some lucky guessers over the years!

Having worked as a marker for the writing section of NAPLAN last year, I also have concerns about some aspects of that process. Despite the markers being well trained, the final mark is still open to personal interpretation. I believe some of the students at my school were marked very harshly, while others were marked very generously. It’s a bit like Babbie’s discussions of ‘objectivity’: it is very hard to put your personal ideas and preferences aside. All of these things make me suspicious of NAPLAN results being published for all to see.
As John Graham states in his editorial for Professional Voice, ‘The message of the website (My School) was that a school’s quality can be accurately equated with the performance of its students on literacy and numeracy tests held over three days in May’ (p. 15). Isn’t this a worrying trend? There must surely be fairer, broader ways of gathering data than a single standardised test. We are now heading towards a National Curriculum that has no attached method of assessment; if an effective set of outcomes could be devised to match the learning opportunities within this curriculum, along with appropriate tasks that assess understanding, could this form the basis of measurable data? I believe more effort needs to be put into exploring this and other alternative ways of gathering quality data.

‘Policies that see data and accountability as the primary means of generating improvement are assuming that the central problem is one of motivation, and with the right incentives people could and would improve their performance. Such an assumption is not consistent with research on work performance in most fields (Pfeffer and Sutton 2006; Amabile and Kramer 2010)’ (cited in Levin, p. 741). Yes, it is important for an element of accountability to be present in education, just as it is in any other area of society. The difference with education is that the role of the teacher is not the only factor in educational improvement for students. When striving to improve educational outcomes across the board, policy makers need to look broadly at a number of issues such as race, family circumstance and social pressure. There are an increasing number of children in our society who do well just to turn up at school each day, let alone try to learn something! Teaching is not the sole responsibility of the classroom teacher. If a child who gets support at home is compared to a child who doesn’t, a big difference could be seen in results and potential for improvement.
* **So who does decide what constitutes quality in the evidence presented?**
* **Are there other ways of identifying quality on the scale needed for government policy to be effectively determined? How do "quality" and "accountability" relate to each other?**

I believe accountability in education should be more about the teacher, and other stakeholders, being able to articulate what knowledge they have about a child’s learning achievements and needs (whether this comes from NAPLAN or other summative or formative assessment within the classroom), and what intervention or extension strategies they have in place to assist that child. A child who has not reached department-set benchmarks may still have improved a great deal, and both he or she and the teacher deserve to be recognised for that fact. Benchmarks are different for everyone! A final word from Levin (2010): ‘Good information about current performance is necessary for improvement, but improvement depends on people knowing what they need to do in order to get better results and on their having the desire and ability to do those things (what I have called “will” and “skill” (Levin 2008))’ (p. 740).

References

Andersen, V; Dahler-Larsen, P and Strømbæk Pedersen, C (2009) Quality assurance and evaluation in Denmark. Journal of Education Policy, Vol. 24, No. 2, March 2009, pp. 135–147

Caldwell, B (2010) The Impact of High-Stakes Test-Driven Accountability. Professional Voice, Volume 8, Issue 1, pp. 49–54. http://www.aeuvic.asn.au/pv_8_1_complete.pdf

Croxford, L; Grek, S and Shaik, F (2009) Quality assurance and evaluation (QAE) in Scotland: promoting self-evaluation within and beyond the country. Journal of Education Policy, Vol. 24, No. 2, March 2009, pp. 179–193

Graham, J (2010) Editorial: The Trouble with My School. Professional Voice, Volume 8, Issue 1, pp. 7–12. http://www.aeuvic.asn.au/pv_8_1_complete.pdf

Grek, S; Lawn, M; Lingard, B and Varjo, J (2009) North by northwest: quality assurance and evaluation processes in European education. Journal of Education Policy, Vol. 24, No. 2, March 2009, pp. 121–133

Levin, B (2010) Governments and education reform: some lessons from the last 50 years. Journal of Education Policy, Vol. 25, No. 6, November 2010, pp. 739–747

Ozga, J (2009) Governing education through data in England: from regulation to self-evaluation. Journal of Education Policy, Vol. 24, No. 2, March 2009, pp. 149–162

Segerholm, C (2009) ‘We are doing well on QAE’: the case of Sweden. Journal of Education Policy, Vol. 24, No. 2, March 2009, pp. 195–209

Simola, H; Rinne, R; Varjo, J; Pitkänen, H and Kauko, J (2009) Quality assurance and evaluation (QAE) in Finnish compulsory schooling: a national model or just unintended effects of radical decentralisation? Journal of Education Policy, Vol. 24, No. 2, March 2009, pp. 163–178

Wu, M (2010) The Inappropriate use of NAPLAN data. Professional Voice, Volume 8, Issue 1, pp. 21–26. http://www.aeuvic.asn.au/pv_8_1_complete.pdf