Tina Wu

Ahead of the introduction of automated marking for NAPLAN’s writing section, there are fears that teaching methods will change to train students to write specifically for the computer.

The Australian Curriculum, Assessment and Reporting Authority (ACARA) introduced the concept of automated marking for writing assessments in 2015, and the formal process is set to begin in 2018 with a double-marking scheme in which each response is assessed by both a computer and a human marker. The writing section of NAPLAN asks students to write either a narrative or a persuasive text.

The decision was made in the hope of reducing turnaround times for exam results, so that students receive their marks within weeks instead of months.

However, there is concern about the risks involved in the change to automated marking. According to Dr Les Perelman of the Massachusetts Institute of Technology, whose research was commissioned by the NSW Teachers Federation, teaching will eventually be tailored toward writing specifically for the computer, flattening the complexity of the writing process for students.

“[Dr Perelman] very, very comprehensively covered the concern around robot marking generating a particular set of criteria,” says Gary Zadkovich, Acting President of the Federation.

“Teaching and learning over time would be skewed towards meeting the criteria that robots use and learning will actually suffer and deteriorate as a result.”

Dr Robyn Cox, Associate Professor of Literacy Education at the Australian Catholic University, also says that automated marking may hold particular implications for textbook and software manufacturers.

“We may well have a new set of books saying ‘how to prepare your child for their writing to be marked by a computer’,” Cox says.

Despite this, research conducted by ACARA found that automated marking systems score student writing at the same standard as human markers, if not better.

The study compared the grades awarded to 339 essays by human markers and by four independent Automated Essay Scoring (AES) systems. The results showed that the AES systems awarded scores similar to those of their human counterparts.

More than 1000 human-scored writing tasks were first used to “train” the systems on the marking criteria applied by human markers.
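ACARA’s report does not detail how the AES systems work internally, but “training” on human-scored scripts typically means fitting a statistical model to reproduce the scores human markers have already awarded. The sketch below is a minimal, hypothetical illustration of that idea only, assuming a simple TF-IDF regression in Python; the essays, scores and model choice are invented for illustration and do not describe ACARA’s or any vendor’s actual system.

    # Hypothetical sketch only: not ACARA's actual scoring model.
    # "Training" here means fitting a model to reproduce scores that
    # human markers have already awarded to a set of essays.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import Ridge

    # Invented stand-ins for human-scored training essays.
    train_essays = [
        "The old house creaked as the storm rolled in across the bay.",
        "School uniforms should be optional because students value choice.",
        "dog ran. it was fun. we play.",
    ]
    human_scores = [5.0, 4.0, 1.0]  # invented scores from human markers

    # Turn each essay into numeric features, then fit a regression
    # that maps those features onto the human-awarded scores.
    vectoriser = TfidfVectorizer()
    features = vectoriser.fit_transform(train_essays)
    model = Ridge().fit(features, human_scores)

    # Once "trained", the model can score a new essay automatically.
    new_essay = ["Persuasive writing needs a clear thesis and evidence."]
    print(round(model.predict(vectoriser.transform(new_essay))[0], 1))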

“NAPLAN analytical marking rubrics are explicitly designed to assess differential and combined contributions of lexical and semantic features in writing,” ACARA researchers write in a report from 2015.

“The current approach to marking of NAPLAN writing is well positioned to utilise the advancements in the field of automated scoring models.”

Mohan Dhall, CEO of the Australian Tutoring Association, says it will take time for educators to understand the full effects of automated marking.

“Any testing regime, be it in-school or out-of-school, has implications for the development of writing,” Dhall says.

“It is too early yet to see what the implications of this decision will be. However, evidence should be used as a critique, rather than anecdotes and scaremongering based on opinion.”