Guys, maybe someone can help:
I’m working in FMOD with UE5.4.
In our game the main character will have a voiceover in 2 languages.
That part is clear: you make an event with two tracks, one per language, add a parameter, and mute one track or the other depending on the selected language.
The difficulty is that the hero’s speech is duplicated by subtitles, and they must disappear at the moment the speech ends.
When you work with one language, UE can take the length of the asset in seconds from the event, so you can easily synchronize the voiceover and subtitles. But when the event contains two different voices on different tracks, it reports the length of the longest asset in the event, not the one that is currently active, so synchronization breaks. Is there any way to fix this? Making separate events for each language is not an option.
Thanks in advance!
Apologies for the delayed response.
There are ways of using parameters to control localized audio like this, but as you’ve noted, subtitling can become an issue. There are potential workarounds, but the intended method for handling localized dialogue is to assign the assets to an audio table, and then use programmer sounds in the engine to play assets from that table based on the current language.
By placing a programmer instrument in an event in Studio, you can use the FMOD_STUDIO_EVENT_CALLBACK_CREATE_PROGRAMMER_SOUND callback type to receive a callback when the programmer sound is about to play an asset, and use a localized audio table to select the asset to play based on the current language.
When the programmer sound asset stops playing, the FMOD_STUDIO_EVENT_CALLBACK_DESTROY_PROGRAMMER_SOUND callback fires, which you can use to stop displaying the current subtitle. It is also possible to use the length of the actual asset being played, retrieved with Sound::getLength, to determine how long to display the subtitle.
This only requires a single event, so it satisfies your need to avoid separate events for different languages. I would recommend giving programmer sounds a shot, and letting me know if you run into any issues or if there’s any reason this method doesn’t work for you.
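To make the callback flow above concrete, here is a rough sketch of what it might look like with the FMOD Studio C++ API, loosely following the pattern of the SDK's programmer sound example. The `DialogueContext` struct, `dialogueKey` value, and the `ShowSubtitle`/`HideSubtitle` calls are placeholders for your own game code, not FMOD API:

```cpp
#include "fmod_studio.hpp"
#include "fmod.hpp"

// Hypothetical user data attached to the event instance via setUserData():
// the key of the current line in the localized audio table, plus the
// systems needed to create the sound.
struct DialogueContext
{
    FMOD::Studio::System* studioSystem;
    FMOD::System*         coreSystem;
    const char*           dialogueKey; // e.g. a key from your audio table
};

FMOD_RESULT F_CALLBACK DialogueEventCallback(FMOD_STUDIO_EVENT_CALLBACK_TYPE type,
                                             FMOD_STUDIO_EVENTINSTANCE* event,
                                             void* parameters)
{
    auto* instance = reinterpret_cast<FMOD::Studio::EventInstance*>(event);
    DialogueContext* context = nullptr;
    instance->getUserData(reinterpret_cast<void**>(&context));

    if (type == FMOD_STUDIO_EVENT_CALLBACK_CREATE_PROGRAMMER_SOUND)
    {
        auto* props = static_cast<FMOD_STUDIO_PROGRAMMER_SOUND_PROPERTIES*>(parameters);

        // Resolve the key against whichever localized audio table is loaded.
        FMOD_STUDIO_SOUND_INFO info;
        context->studioSystem->getSoundInfo(context->dialogueKey, &info);

        FMOD::Sound* sound = nullptr;
        context->coreSystem->createSound(info.name_or_data,
                                         FMOD_LOOP_NORMAL | FMOD_CREATECOMPRESSEDSAMPLE |
                                         FMOD_NONBLOCKING | info.mode,
                                         &info.exinfo, &sound);
        props->sound = reinterpret_cast<FMOD_SOUND*>(sound);
        props->subsoundIndex = info.subsoundindex;

        // The real asset's length can drive the subtitle duration. Caveats:
        // with FMOD_NONBLOCKING the sound may still be opening here (check
        // Sound::getOpenState first), and for banked audio tables the asset
        // is a subsound at info.subsoundindex, so you may need to query the
        // subsound's length instead.
        unsigned int lengthMs = 0;
        sound->getLength(&lengthMs, FMOD_TIMEUNIT_MS);
        // ShowSubtitle(context->dialogueKey, lengthMs); // your game code
    }
    else if (type == FMOD_STUDIO_EVENT_CALLBACK_DESTROY_PROGRAMMER_SOUND)
    {
        auto* props = static_cast<FMOD_STUDIO_PROGRAMMER_SOUND_PROPERTIES*>(parameters);
        reinterpret_cast<FMOD::Sound*>(props->sound)->release();
        // HideSubtitle(); // your game code
    }
    return FMOD_OK;
}
```

You would register the callback with Studio::EventInstance::setCallback and attach the context with Studio::EventInstance::setUserData before starting the event.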
Thank you so much! We will try now
Hello. We haven’t fully figured out yet how to properly implement localization using event callbacks, audio tables, and programmer sounds. We will definitely look into this. However, I wanted to ask if there might be an easier way. Up until this point, we haven’t localized our game into other languages. Each dialogue already has its own event, which is wired up in code. I am using getLength() to wait for a certain amount of time and then destroy the subtitle. Ideally, we would like to find a solution that would allow us to:
- Simply add another audio track in another language to the event.
- Somehow get the duration of that particular sound in code.
I read a forum post from 2022 which mentioned that it’s not possible to get the length of an individual audio track without using a callback. But that was two years ago. Is there any workaround now? For example, could we use parameters (create one parameter per localization and get the duration for the specific parameter value)? I understand that with more than two localizations, all of these sounds will be loaded into memory. But for now we are working on a demo, and I have a feeling that implementing localization the “correct” way would mean significantly changing the engine code and reworking all the existing events. Please correct me if I’m wrong.
Hi,
Thank you for sharing the information.
What version of UE and the integration are you using?
Could you elaborate on this please?
As of the latest version of FMOD, there is still no direct way to retrieve the length of an individual audio track without using callbacks.
You could consider recursively searching a ChannelGroup and all of its sub-groups to find the currently active sound, and then retrieve its length. Please refer to:
Channel::getChannelGroup
ChannelGroup::getNumChannels
ChannelGroup::getNumGroups
Channel::getCurrentSound
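A sketch of that recursive search, assuming the FMOD Core C++ API: you would obtain the starting group from the event with Studio::EventInstance::getChannelGroup, then call Sound::getLength on the result. `FindActiveSound` is a hypothetical helper name:

```cpp
#include "fmod.hpp"

// Depth-first search of a ChannelGroup hierarchy for the first channel
// that is currently playing, returning its Sound (or nullptr). Note that
// channels on a muted track may still report as playing, so you may need
// an extra filter such as Channel::getAudibility().
FMOD::Sound* FindActiveSound(FMOD::ChannelGroup* group)
{
    int numChannels = 0;
    group->getNumChannels(&numChannels);
    for (int i = 0; i < numChannels; ++i)
    {
        FMOD::Channel* channel = nullptr;
        if (group->getChannel(i, &channel) != FMOD_OK) continue;
        bool playing = false;
        channel->isPlaying(&playing);
        if (playing)
        {
            FMOD::Sound* sound = nullptr;
            channel->getCurrentSound(&sound);
            if (sound) return sound;
        }
    }
    int numGroups = 0;
    group->getNumGroups(&numGroups);
    for (int i = 0; i < numGroups; ++i)
    {
        FMOD::ChannelGroup* sub = nullptr;
        if (group->getGroup(i, &sub) != FMOD_OK) continue;
        if (FMOD::Sound* sound = FindActiveSound(sub)) return sound;
    }
    return nullptr;
}
```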
You could indeed create a parameter sheet to control the language switch and automate the volume of each track based on the parameter value (0 = English, 1 = Japanese, 2 = Chinese, etc.), then call Studio::EventInstance::setParameterByName from code to set the target language index.
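For example, assuming a parameter named "Language" (substitute whatever your parameter sheet is actually called):

```cpp
#include "fmod_studio.hpp"

// "Language" is an assumed parameter name; adapt it to your project.
void SetDialogueLanguage(FMOD::Studio::EventInstance* instance, int languageIndex)
{
    // 0 = English, 1 = Japanese, 2 = Chinese, etc.
    instance->setParameterByName("Language", static_cast<float>(languageIndex));
}
```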
For simple subtitle timing, you could consider using Studio::EventInstance::getTimelinePosition along with getLength() to manually check whether the current event has finished. This approach could serve as a temporary solution for a demo release, but please note that it is less efficient and less precise than using callbacks.
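A minimal polling check along those lines might look like this, where `clipLengthMs` is assumed to be the length (in milliseconds) you obtained for the active language’s asset, and the function is called from your tick/update loop:

```cpp
#include "fmod_studio.hpp"

// Returns true once the event's timeline has passed the known clip length.
// clipLengthMs is whatever duration you determined for the active asset.
bool HasDialogueFinished(FMOD::Studio::EventInstance* instance, int clipLengthMs)
{
    int positionMs = 0;
    instance->getTimelinePosition(&positionMs);
    return positionMs >= clipLengthMs;
}
```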
You are correct that loading multiple localized audio tracks into memory could be an issue and might impact your game’s code structure, but for a demo release this might be acceptable.
There’s a similar post that might help you understand how to set up localization with programmer sounds and audio tables if you decide to implement that in the future: Parameter to swap between programmer instruments
You could also consider having a look at our callback scripting example for reference.
Hope this helps, let me know if you have any questions.