Question seeking ideas/feedback/suggestions for dealing with multimodal data
I have some annotations in multiple tiers that I would like to export in some form that can be used for statistical analysis.
In addition to the usual transcription and translation tiers, I have three tiers: speech_vp, body_vp, and hands_vp. Each annotation in those tiers carries a numerical value that is an ID for a particular character in a narrative.
What I want to find out is how and when the ID tags do or don't line up across the tiers. Here's a screenshot:
So let's say I want to use the speech_vp tier as my anchor point. Take the highlighted portion, in which a single annotation in the speech_vp tier is selected: in the other tiers, I can see that the character ID value changes several times in the body_vp tier and once in the hands_vp tier.
I need some way to capture this data 'in the aggregate' so I can try to quantify these co-occurrences.
I’ve been experimenting with various export options, including CSV files, but haven’t yet produced anything immediately useful.
So I'm just here fishing for any suggestions anyone might have. I realize that the way the data should be organized depends on the kinds of questions I want to investigate, but at the moment I'm exploring which kind of export output gives me data that is easy to work with in a spreadsheet or R.
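In case it helps make the question concrete, here is a minimal sketch of the kind of aggregation I have in mind, in Python. It assumes each annotation can be flattened into a (tier, start_ms, end_ms, character_id) row, roughly what a tab-delimited export might give; the toy data, column layout, and function names are all hypothetical, not an actual export format.

```python
# Sketch: count character-ID co-occurrences across tiers, anchored to
# speech_vp annotations. Rows are (tier, start_ms, end_ms, character_id);
# the data below is invented toy data, not a real export.
from collections import Counter

annotations = [
    ("speech_vp", 0,    3000, "1"),
    ("speech_vp", 3000, 6000, "2"),
    ("body_vp",   0,    1000, "1"),
    ("body_vp",   1000, 2500, "3"),
    ("body_vp",   2500, 6000, "2"),
    ("hands_vp",  0,    2000, "1"),
    ("hands_vp",  2000, 6000, "2"),
]

def overlaps(a, b):
    """True if time intervals a = (start, end) and b = (start, end) overlap."""
    return a[0] < b[1] and b[0] < a[1]

def cooccurrences(rows, anchor_tier="speech_vp"):
    """For each annotation on the anchor tier, count which (tier, ID)
    pairs on the other tiers overlap it in time."""
    anchors = [r for r in rows if r[0] == anchor_tier]
    others = [r for r in rows if r[0] != anchor_tier]
    table = []
    for _, a_start, a_end, a_id in anchors:
        counts = Counter(
            (tier, cid) for tier, s, e, cid in others
            if overlaps((a_start, a_end), (s, e))
        )
        table.append(((a_start, a_end, a_id), counts))
    return table

for (start, end, anchor_id), counts in cooccurrences(annotations):
    print(start, end, anchor_id, dict(counts))
```

The output is one row per speech_vp annotation, with a tally of the IDs that co-occur on the other tiers, which could then be written back out as a long-format CSV for a spreadsheet or for R.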

