I’m in the process of creating a template to code gestures for my study, but I’m a little confused about the best way to set up my coding system in ELAN so that I can later calculate reliability between coders. I will have 2-3 research assistants working with me, and I will be coding reliability on 10-20% of the subject video files. I want to make sure I set up the template in a manner that allows me to calculate a reliability score (preferably kappa).
Do you have any recommendations for how to set up a template to allow for this? Ideally, I’d like each coder not to see what the other coders have coded, and then to compare their work after they have coded all of the gestures from a particular subject.
I read on the forum about Merge Transcriptions. Is this the best solution for this?
Thanks for reading my question. I really appreciate it!
Yes, although it is possible to hide tiers in ELAN, it is probably better to have two templates, one for rater 1 and one for rater 2. The tier setup in both files should correspond, but the tier names should be (slightly) different, e.g. by using suffixes like _R1 and _R2. When scoring has finished, the two files can indeed be merged into one file.
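If you would rather not rename the tiers by hand, something along these lines could generate the rater-specific copy for you. This is a rough sketch, not an ELAN feature: it assumes the standard EAF XML layout, where each TIER element carries a TIER_ID attribute and child tiers point to their parent via PARENT_REF, and the file names are just placeholders.

```python
# Minimal sketch (not an ELAN tool): copy an .eaf file while adding a
# suffix to every tier name, so rater 2 gets tiers like "Gesture_R2".
import xml.etree.ElementTree as ET

def suffix_tiers(in_path, out_path, suffix="_R2"):
    tree = ET.parse(in_path)
    root = tree.getroot()
    for tier in root.iter("TIER"):
        # Rename the tier itself...
        tier.set("TIER_ID", tier.get("TIER_ID") + suffix)
        # ...and keep parent references consistent for dependent tiers.
        if tier.get("PARENT_REF") is not None:
            tier.set("PARENT_REF", tier.get("PARENT_REF") + suffix)
    tree.write(out_path, xml_declaration=True, encoding="UTF-8")

# Hypothetical usage:
# suffix_tiers("gestures_subject01.eaf", "gestures_subject01_R2.eaf")
```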
There is a “compare annotators” function in ELAN which calculates the ratio between the overlap of two corresponding annotations (AND) and their total duration (OR). Calculation of kappa is planned but not implemented yet.
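To illustrate what that ratio amounts to, here is a tiny sketch of my own (it mirrors the idea, not ELAN’s actual code): for two corresponding annotations given as (begin, end) times in milliseconds, divide the overlapping duration (AND) by the total span covered by both (OR).

```python
# Overlap ratio between two annotations, each given as (begin_ms, end_ms).
def overlap_ratio(a, b):
    and_dur = max(0, min(a[1], b[1]) - max(a[0], b[0]))  # overlap (AND)
    or_dur = max(a[1], b[1]) - min(a[0], b[0])            # total span (OR)
    return and_dur / or_dur if or_dur > 0 else 0.0

print(overlap_ratio((1000, 3000), (1500, 3500)))  # -> 0.6
```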
-Han
Oh, I’m chipping in here as this is exactly my problem. I have two separate EAF files, both originating from the same template, i.e. the tier names are identical. The first file was created by annotator 1 and the second one by annotator 2. I now want to compare both annotators, and apparently need to merge the files. As far as I understood, I need to rename at least one set of tiers (say, the ones corresponding to annotator 2) and then merge the files.
I tried the multiple file export (selected tiers as EAF) but ended up with two new files, not a merged one.
How do I combine the annotations of two different files into one to do the annotator comparison?
Many thanks.
Thomas
There is not really an elegant solution for this, currently. Two possible approaches are:
- change the tier names in the file(s) of rater 2 (if there is more than one file, you can use File->Multiple File Processing->Edit Multiple Files) and then merge the transcriptions (File->Merge Transcriptions). This takes quite a few steps, and changing the tier names breaks the correspondence with the template.
- export the (selection of) tiers to tab-delimited text (possibly from multiple files) and re-organize the rows and columns so that corresponding annotations from rater 1 and rater 2 end up in the same row; see the sketch after this list. How well this works depends on how the raters have been coding.
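As a rough sketch of that second approach (again, nothing built into ELAN): read both raters’ tab-delimited exports, pair annotations by their best temporal overlap, and compute Cohen’s kappa on the paired category labels. The column order assumed here (tier, begin in ms, end in ms, annotation value, no header row) and the file names are assumptions; adjust them to whatever options you selected in ELAN’s export dialog. The pairing step is deliberately greedy and simplistic.

```python
import csv
from collections import Counter

def read_export(path):
    # Assumed export layout: tier \t begin_ms \t end_ms \t value, no header.
    with open(path, newline="", encoding="utf-8") as f:
        rows = csv.reader(f, delimiter="\t")
        return [(int(r[1]), int(r[2]), r[3]) for r in rows]

def overlap(a, b):
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

def pair_labels(r1, r2):
    # For each annotation of rater 1, take the rater-2 annotation that
    # overlaps it most; keep only pairs with some temporal overlap.
    pairs = []
    for a in r1:
        best = max(r2, key=lambda b: overlap(a, b), default=None)
        if best is not None and overlap(a, best) > 0:
            pairs.append((a[2], best[2]))
    return pairs

def cohens_kappa(pairs):
    n = len(pairs)
    po = sum(x == y for x, y in pairs) / n          # observed agreement
    c1 = Counter(x for x, _ in pairs)
    c2 = Counter(y for _, y in pairs)
    pe = sum(c1[k] * c2.get(k, 0) for k in c1) / (n * n)  # chance agreement
    return 1.0 if pe == 1 else (po - pe) / (1 - pe)

# Hypothetical usage:
pairs = pair_labels(read_export("subject01_R1.txt"), read_export("subject01_R2.txt"))
print(cohens_kappa(pairs))
```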
Thanks, that’s good information. We have changed the tier names, and now we have both coders’ annotations in the same file. But how do we compare them? Do we have to do it manually?
