
Original scientific article

https://doi.org/10.15516/cje.v21i1.2922

Evaluating Essay Assessment: Teacher-Developed Criteria versus Rubrics. Intra/Inter Reliability and Teachers’ Opinions

Veda Aslim-Yetis (ORCID: orcid.org/0000-0002-0435-1217); Anadolu University, Faculty of Education


Full text: English, PDF, 454 Kb, pp. 103-155

Full text: Croatian, PDF, 454 Kb, pp. 103-155


Abstract

Rater reliability plays a key role in essay assessment, which must be valid, reliable, and effective. The aims of this study were: to determine intra- and inter-rater reliability based on two sets of grades that five teachers/raters produced while assessing argumentative essays written by 10 students learning French as a foreign language, first according to criteria the teachers had developed themselves and then according to a rubric; to understand the criteria they used in the assessment process; and to record what the raters/teachers, who used rubrics for the first time within the scope of this study, think about rubrics. The quantitative data revealed that intra-rater reliability between the grades assigned with the teacher-developed criteria and those assigned with the rubric was low, that inter-rater reliability was likewise low for the grades based on teacher-developed criteria, and that inter-rater reliability was more consistent for assessments completed with the rubric. Qualitative data obtained during individual interviews showed that raters employed different criteria. During the second round of individual interviews, following the use of the rubric, raters noted that the rubric helped them become more objective, contributed positively to the assessment process, and can be used to support students' learning and to enhance teachers' instruction.

Keywords

evaluation; mixed-method research design; writing

Hrčak ID:

220699

URI

https://hrcak.srce.hr/220699

Publication date:

27 March 2019

Article data in other languages: Croatian
