The traditional transcript export works well for most of our work, and the computation of gaps (silence durations) is a wonderful feature. However, if you output too many tiers (e.g., in a multimodal setup), they all get entered into the computation of the silence durations. It would be helpful if there were a way to restrict/focus the computation of silence durations to specific tiers (e.g., speech tiers) while still being able to export other tiers (e.g., gesture) as well.
Jeff Higginbotham
