An Alternative Method Used in Evaluating Agreement among Repeat Measurements by Two Raters in Education

Authors: Semra Erdoğan, Gülhan Örekici Temel, Hüseyin Selvi, İrem Ersöz Kaya

DOI: 10.12738/estp.2017.1.0357  OnlineFirst published on November 23, 2016

Abstract

Taking more than one measurement of the same variable opens the door to contamination from multiple error sources, acting both singly and in combination through interactions. Therefore, examining the internal consistency of scores from a measurement tool is not sufficient on its own; inter-rater or intra-rater agreement must also be established to ensure reliability. The biggest problem when conducting agreement analyses on measurement results is deciding which statistical method to use. Disagreement between measurements obtained by different methods on the same individual has been suggested to be analogous to disagreement between repeated measurements obtained by the same method on the same individual. Building on this idea, a new approach is proposed for estimating and interpreting an agreement coefficient between raters or methods. To that end, the following question is addressed: When the dependent/predicted variable has two categories (such as successful-unsuccessful, sick-healthy, positive-negative, exists-does not exist) and two raters each take repeated measurements, how does the method perform in terms of disagreement functions and the individual-agreement coefficient across different numbers of repeated measurements and different sample sizes?
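The individual-agreement setting described above, two raters each rating the same subjects several times on a binary scale, can be illustrated with a small simulation. The sketch below uses one common formulation of the coefficient: the ratio of mean intra-rater squared disagreement to inter-rater squared disagreement (in the spirit of Barnhart and colleagues' coefficient of individual agreement). The paper's exact disagreement functions may differ; the error rates, sample size, and number of replicates here are arbitrary assumptions chosen only for illustration.

```python
import numpy as np

def msd_between(a, b):
    """Mean squared deviation between replicate ratings of two raters.

    a, b: integer arrays of shape (n_subjects, k); all k*k replicate
    pairs are compared within each subject, then averaged over subjects.
    """
    diffs = a[:, :, None] - b[:, None, :]          # shape (n, k, k)
    return np.mean(diffs.astype(float) ** 2)

def msd_within(a):
    """Mean squared deviation among one rater's own replicates.

    Only distinct replicate pairs (j < j') are compared, so a replicate
    is never compared with itself.
    """
    k = a.shape[1]
    diffs = a[:, :, None] - a[:, None, :]
    rows, cols = np.triu_indices(k, 1)             # pairs j < j'
    return np.mean(diffs[:, rows, cols].astype(float) ** 2)

def individual_agreement(x, y):
    """Illustrative individual-agreement coefficient for two raters.

    Ratio of average intra-rater disagreement to inter-rater
    disagreement; values near 1 mean the raters disagree with each
    other no more than each disagrees with itself.
    """
    return (msd_within(x) + msd_within(y)) / (2.0 * msd_between(x, y))

# Simulated data: n subjects with a latent binary status; each rater
# flips the true label independently with an assumed 10% error rate.
rng = np.random.default_rng(0)
n, k = 100, 3
true_status = rng.random(n) < 0.5
x = ((rng.random((n, k)) < 0.1) ^ true_status[:, None]).astype(int)
y = ((rng.random((n, k)) < 0.1) ^ true_status[:, None]).astype(int)

psi = individual_agreement(x, y)
print(f"individual agreement coefficient: {psi:.3f}")
```

With equal error rates for both raters, intra- and inter-rater disagreement are comparable, so the coefficient should land near 1; the study's question of how such an estimate behaves under different replicate counts and sample sizes can be explored by varying `k` and `n`.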

Keywords
Reliability, Methods comparison, Disagreement function, Inter-method agreement, Intra-method agreement, Individual agreement coefficient
