"An inter-coder agreement coefficient measures the extent to which data can be trusted to represent the phenomena of analytical interest that one hopes to analyze in place of the raw phenomena" (Krippendorff, 2019).

ATLAS.ti currently offers three methods to test inter-coder agreement:

- Simple percent agreement
- The Holsti index
- Krippendorff's family of alpha coefficients

All methods can be used for two or more coders. Because the percent agreement measure and the Holsti index do not take chance agreement into account, we recommend the Krippendorff alpha coefficients for scientific reporting.

Percentage agreement is the simplest measure of inter-coder agreement. It is calculated as the number of times a set of ratings agree, divided by the total number of units of observation that are rated, multiplied by 100. Its benefits are that it is simple to calculate and that it can be used with any type of measurement scale.

Let's take a look at the following example: there are ten segments of text, and two coders only needed to decide whether a code applies or does not apply. Coder 1 and coder 2 agree 6 out of 10 times, so percent agreement is 60%.

This calculation, however, does not account for chance agreement between ratings. If the two coders did not read the data and simply coded the 10 segments at random, we would expect them to agree a certain percentage of the time by chance alone. The question is: how much higher is the 60% agreement than the agreement that would occur by chance? The agreement expected by mere chance is 56% = (9.6 + 1.6)/20. If you are interested in the calculation, take a look at Krippendorff (2004, p. …). The 60% agreement does not look so impressive after all.
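To make the arithmetic concrete, here is a minimal Python sketch. The two rating vectors are illustrative assumptions, not data from the original example: any pair in which coder 1 applies the code to 8 segments, coder 2 applies it to 6, and the two agree on 6 reproduces the figures quoted above (60% observed agreement, and 56% expected by chance, since 2 × 10 × 0.8 × 0.6 = 9.6 and 2 × 10 × 0.2 × 0.4 = 1.6 matched values out of 20).

```python
# Illustrative ratings (assumed): 1 = code applies, 0 = code does not apply.
coder1 = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]  # applies the code to 8 segments
coder2 = [1, 1, 1, 1, 1, 0, 0, 0, 1, 0]  # applies the code to 6 segments

def percent_agreement(a, b):
    """Share of segments on which both coders made the same decision."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def expected_chance_agreement(a, b):
    """Agreement expected if both coders decided at random while keeping
    their observed rates of applying the code."""
    p1, p2 = sum(a) / len(a), sum(b) / len(b)
    return p1 * p2 + (1 - p1) * (1 - p2)  # 0.8*0.6 + 0.2*0.4 = 0.56

print(percent_agreement(coder1, coder2))          # 0.6  -> 60%
print(expected_chance_agreement(coder1, coder2))  # 0.56 -> 56%
```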
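Since the post recommends Krippendorff's alpha for scientific reporting, it may help to see why that coefficient judges this example so harshly. The sketch below computes the nominal, two-coder, no-missing-data case of alpha; it is a simplified illustration under the same assumed ratings as above, not ATLAS.ti's implementation. For binary data the result depends only on the number of disagreements and the value totals, so any ratings matching the quoted figures give the same alpha.

```python
from collections import Counter

# Same assumed ratings as in the sketch above.
coder1 = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]
coder2 = [1, 1, 1, 1, 1, 0, 0, 0, 1, 0]

def krippendorff_alpha_nominal(a, b):
    """Krippendorff's alpha = 1 - D_o / D_e for two coders, nominal
    data, and no missing values."""
    n = 2 * len(a)                        # total number of coded values
    # Observed disagreement: each disagreeing unit yields two mismatched
    # ordered value pairs in the coincidence matrix.
    d_o = 2 * sum(x != y for x, y in zip(a, b)) / n
    # Expected disagreement from the value totals n_c:
    # D_e = sum over c != k of n_c * n_k / (n * (n - 1))
    counts = Counter(a + b)               # here: {1: 14, 0: 6}
    d_e = sum(counts[c] * counts[k]
              for c in counts for k in counts if c != k) / (n * (n - 1))
    return 1 - d_o / d_e

print(round(krippendorff_alpha_nominal(coder1, coder2), 3))  # 0.095
```

An alpha of roughly 0.1 is far below any conventional threshold for reliable coding, which is exactly the point of the example: 60% raw agreement can conceal near-chance performance.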