Hello,
I have a pre-segmented .eaf file that two annotators annotated separately, using the same set of 5 codes. When I check agreement manually, 71% of the segments have the same code, so I would expect a fairly high Cohen's kappa.
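For reference, this is roughly how I understand the calculation: kappa corrects the observed agreement (the 71% above) for the agreement expected by chance from each annotator's code frequencies. A minimal sketch, with purely hypothetical code lists (not our actual data):

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two annotators' codes, paired by segment."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    # Observed agreement: fraction of segments with identical codes.
    p_o = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Chance agreement, from each annotator's marginal code frequencies.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: 4 segments, agreement on 3 of them
print(cohens_kappa(["1", "2", "3", "3"], ["1", "2", "3", "4"]))  # ≈ 0.67
```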
However, when I run the calculation in ELAN, each of my colleague's codes appears twice for some reason. For example, we have a code named "3", and in the annotation statistics I see two separate "3" codes, even though "3" is typed correctly and always in the same way.
In short, I have a total of 5 codes while my colleague ends up with 10, which leads to a very low inter-annotator agreement.
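In case it helps to diagnose: I wondered whether two visually identical codes might differ by an invisible character (a trailing space, for instance), which would make them count as distinct values. This is only a guess on my part; a quick Python check I imagine could reveal it:

```python
# Hypothetical check: two codes that look the same on screen
code_a = "3"
code_b = "3 "  # identical on screen, but with a trailing space

# They compare as different strings, so they would be counted
# as two separate codes.
print(code_a == code_b)  # False

# repr() makes any hidden whitespace visible.
print(repr(code_a), repr(code_b))  # '3' '3 '
```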
Do you have any idea why this happens?
Thank you,
Marta
