How to know which text line is being spoken?

Hello, I’m producing a game with a lot of voiceover using the FMOD Unreal integration. I often have multi instruments with a pool of voiceover assets related to the same situation (for example, an enemy yelling at the player) but with different text content. Is there a way for me to know which text line is being spoken (i.e., which asset is being played) so I can manage the related subtitle?

There’s no easy way to get which playlist entry in a multi instrument is playing. We therefore recommend not using multi instruments for this purpose.

For dialogue, it’s often better to use programmer instruments, as they allow your game’s code to decide which audio asset to play, and so can also be used to determine which subtitle to display. More information on using programmer instruments to play dialogue can be found in the Dialogue and Localization chapter of the FMOD Studio User Manual.
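To illustrate the idea of letting game code drive both the audio choice and the subtitle, here is a minimal sketch. All key names, line pools, and subtitle strings below are hypothetical placeholders, not part of any FMOD API; the selected `audioKey` is what you would hand to the programmer instrument (e.g. via a programmer sound callback), while the matching `subtitle` goes to your UI at the same moment.

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// One spoken line: the audio asset identifier to hand to the programmer
// instrument, and the subtitle text to display alongside it.
struct VoiceLine {
    std::string audioKey;   // hypothetical audio table entry key
    std::string subtitle;   // subtitle for the current language
};

// Pool of lines for one situation, e.g. "enemy yells at player".
// Because the game picks the line itself, it always knows which
// subtitle matches the asset being played.
const std::vector<VoiceLine>& enemyYellLines() {
    static const std::vector<VoiceLine> lines = {
        {"enemy_yell_01", "I see you!"},
        {"enemy_yell_02", "Over here!"},
        {"enemy_yell_03", "You can't hide!"},
    };
    return lines;
}

// Select a line from the pool (index could come from a random roll or
// game logic). The returned struct carries both the asset key and the
// subtitle, so the two can never go out of sync.
VoiceLine pickEnemyYell(std::size_t index) {
    const auto& lines = enemyYellLines();
    return lines[index % lines.size()];
}
```

In practice you would pass `pickEnemyYell(...).audioKey` to the programmer sound callback and send `.subtitle` to your subtitle system in the same frame.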


Thank you @joseph for your reply. I know the (localized) programmer sound feature very well and have used it successfully in many projects. But now I need to create an interactive VO system using multi instruments, parameters that change lines, and so on. So I need multi instruments, scatterer instruments, etc., and we need to keep track of the spoken line to display the correct subtitle. The next question would be: how do you manage an interactive dialogue with such features using localized assets? Programmer sounds can’t easily be integrated into a complex interactive system, so I suppose we need to do it ourselves, using parameters to select the language.

Thank you.

Have you considered using localized audio tables, as described in the Dialogue and Localization chapter of the FMOD Studio User Manual? They allow you to record multiple different localized versions of the same line, and play the one corresponding to the language you specify at runtime, without needing to add additional parameters to your events.

Yes Joseph, thanks; as I said, I know that feature very well. The problem I have is implementing audio table assets in an interactive event that contains scatterer instruments and other advanced features on the enemy voice to add depth during gameplay. I think it cannot be done, because it is impossible to integrate the programmer sound system into a scatterer instrument or a multi instrument. Is this right?

Programmer instruments can be placed in the playlists of multi instruments and scatterer instruments, and will work as expected. You won’t be able to distinguish between multiple different programmer instruments in the same playlist, but that shouldn’t be a problem, as your game’s code will be determining which line to play in any case.

Thanks Joseph, I did some tests using programmer sounds inside multi and scatterer instruments, and it works very well! I see that I can also select a specific audio table entry for every programmer sound in the scatterer; this can be done by setting the programmer sound name to the entry key.
This is really useful for managing specific voiceover events that use multi/scatterer instruments and need to be localized. Awesome!
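Since the programmer sound name is the audio table entry key, that same key can also index a localized subtitle table on the game side. Below is a minimal sketch of such a lookup; all entry keys, language codes, and strings are hypothetical placeholders, and the table structure is an assumption about how one might organize the data, not an FMOD feature.

```cpp
#include <cassert>
#include <map>
#include <string>

// Localized subtitles indexed by audio table entry key, then by
// language code. The same entry key used to name the programmer sound
// selects the subtitle, so audio and text stay in sync.
using SubtitleTable =
    std::map<std::string, std::map<std::string, std::string>>;

const SubtitleTable& subtitles() {
    static const SubtitleTable table = {
        {"enemy_yell_01", {{"en", "I see you!"},  {"it", "Ti vedo!"}}},
        {"enemy_yell_02", {{"en", "Over here!"}, {"it", "Da questa parte!"}}},
    };
    return table;
}

// Look up the subtitle for an entry key in the requested language,
// falling back to English when no localized string exists, and to an
// empty string when the key is unknown.
std::string subtitleFor(const std::string& entryKey,
                        const std::string& lang) {
    auto line = subtitles().find(entryKey);
    if (line == subtitles().end()) return "";
    auto text = line->second.find(lang);
    if (text == line->second.end()) text = line->second.find("en");
    return text != line->second.end() ? text->second : "";
}
```

A design note: keeping the fallback language in one place means a missing translation degrades to English rather than to a blank subtitle.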

Now I need to manage subtitles for these special events, but I will open a new thread for that.