Associating participants with audio files

This is a question about Transcription Mode.

I make recordings of conversations in which each participant is recorded on a separate audio device. If the participants in a recording are A, B, and C, the linked files are AB_video.mp4, A_audio.wav, B_audio.wav, C_audio.wav, etc.

When processing these files, my team segments them in Segmentation Mode one speaker at a time, i.e. we listen to and segment all of A_audio.wav, then all of B_audio.wav, then all of C_audio.wav. No problem there since we only work with one audio file at a time.

When we start transcribing in Transcription Mode, however, we want to listen to A’s turn 1, then B’s turn 2, then C’s turn 3, etc., rather than listening to only a single participant. This creates an issue because there is no way to change the audio playback from the Transcription Mode screen. Instead, every time a user needs to switch from A’s audio to B’s audio, they have to go back to Annotation Mode or Segmentation Mode to access the audio controls and switch manually. This costs a lot of time spent changing between modes and is confusing for new users.

So, I’d like to ask:

  • Is there any way to associate an audio file with a participant, so that when that participant’s tiers are played in Transcription Mode, only that audio plays? For example, associating the media file A_audio.wav with the tiers A_transcription, A_translation, etc., so that only A_audio.wav plays when those tiers are selected in Transcription Mode. (See the sketch after this list for how our files currently hang together.)

  • If not, is there any way to change the audio playback to a different file without leaving Transcription Mode?
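
For concreteness, here is roughly how one of our files hangs together. This is only a sketch using the pympi-ling library (the master file name is made up; the media and tier names are the examples from above); it prints the document-level media links and the per-tier participant attributes, and as far as I can tell nothing in the EAF ties a tier to a specific media file:

```python
# Sketch using pympi-ling (pip install pympi-ling).
import pympi

eaf = pympi.Elan.Eaf("ABC_conversation.eaf")  # hypothetical file name

# Media files are linked at the document level, not per tier:
for md in eaf.media_descriptors:
    print(md.get("MEDIA_URL"), md.get("MIME_TYPE"))

# Participants are attributes of the individual tiers:
for tier in eaf.get_tier_names():
    print(tier, eaf.get_parameters_for_tier(tier).get("PARTICIPANT"))
```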

Hi,
I think the short answer is: “no” and “no”.
I’m not sure I understand the situation correctly. If there isn’t too much overlap between the turns of the three speakers, you could, in the Controls tab in Annotation Mode, mute the sound of the video and set the volume of each wave file to 100 (or 75, or 50), and then switch to Transcription Mode. When transcribing a turn of participant X, it would then mainly be the audio belonging to that speaker that is heard, even though none of the wave files is muted. Or is the situation not that simple?

-Han

Hi Han,

I appreciate the reply. On most of our recordings, the participants are close enough together that the turns of one can be heard on the others’ tracks, just not very clearly. As a result, setting the volume sliders to 50/75/100 for every participant creates an echo-like effect that makes the audio more difficult to transcribe.

Finding a way around this issue is a high priority for me, because the individual devices really improve the transcribability of the audio in situations with loud environmental noise, so I’ve been encouraging students to use them as well. The only disadvantage of the method is this compatibility issue with ELAN.

I’d be happy to share some of my example EAFs if that would help you understand the problem more clearly.

Best wishes
Amalia

Hi Amalia,

OK, I understand: in that case (multiple speakers can be heard on each track, causing an echo effect) the suggested workaround cannot be used.
I don’t know of another workaround that would make the workflow you want possible. There is probably no better way than transcribing the tier for A first, then the one for B, etc., either in separate files first or in Transcription Mode with a single tier per run.
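
If you go the separate-files route, something along these lines could prepare one file per participant. This is only a sketch using the pympi-ling library, not a built-in ELAN feature; the file names, and the assumption that tier names start with the participant code (A_transcription, A_translation, …), are taken from the examples in this thread:

```python
# Sketch using pympi-ling (pip install pympi-ling); names are assumptions
# based on the examples in this thread.
import pympi

SOURCE_EAF = "ABC_conversation.eaf"  # hypothetical master file
PARTICIPANTS = {"A": "A_audio.wav", "B": "B_audio.wav", "C": "C_audio.wav"}

for participant, wav in PARTICIPANTS.items():
    eaf = pympi.Elan.Eaf(SOURCE_EAF)

    # Keep only this participant's tiers (assumes the participant code
    # prefixes every tier name, as in A_transcription, A_translation).
    for tier in list(eaf.get_tier_names()):
        if not tier.startswith(participant + "_"):
            eaf.remove_tier(tier)

    # Link only this participant's audio, so Transcription Mode plays
    # the right track.
    eaf.media_descriptors = []
    eaf.add_linked_file(wav, mimetype="audio/x-wav")

    eaf.to_file(participant + "_only.eaf")
```

The resulting files can then be transcribed independently and merged back afterwards with ELAN’s File > Merge Transcriptions function.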

I can add the wish to be able to associate a participant with a particular media file to the request list (although I don’t immediately see how this could best be implemented).

You’re welcome to send me (han.sloetjes@mpi.nl) a link to some of your example files, but I think I now have a good idea of what the materials and intended workflow look like.

Best,
Han