
Original scientific paper

https://doi.org/10.15516/cje.v21i1.2922

Evaluating Essay Assessment: Teacher-Developed Criteria versus Rubrics. Intra/Inter Reliability and Teachers’ Opinions

Veda Aslim-Yetis (ORCID: orcid.org/0000-0002-0435-1217), Anadolu University, Faculty of Education


Full text: English PDF (454 kB), pages 103–155


Full text: Croatian PDF (454 kB), pages 103–155



Abstract

Rater reliability plays a key role in essay assessment, which has to be valid, reliable, and effective. The aims of this study are: to determine intra- and inter-rater reliability variations based on two sets of grades that five teachers/raters produced by assessing argumentative essays written by 10 students learning French as a foreign language, first according to criteria they had developed themselves and then according to a rubric; to understand the criteria they used in the assessment process; and to record what the raters/teachers, who used rubrics for the first time within the scope of this study, think about rubrics. The quantitative data revealed that intra-rater reliability between the grades assigned using teacher-developed criteria and those assigned using the rubric is low, that inter-rater reliability is likewise low for grades based on teacher-developed criteria, and that inter-rater reliability is more consistent for assessments completed using the rubric. Qualitative data obtained during individual interviews showed that raters employed different criteria. In the second round of individual interviews, following the use of rubrics, raters noted that rubrics helped them become more objective, contributed positively to the assessment process, and can be used to support students' learning and to enhance teachers' instruction.

Keywords

evaluation; mixed-method research design; writing

Hrčak ID: 220699

URI: https://hrcak.srce.hr/220699

Publication date: March 27, 2019

Article data in other languages: Croatian
