Localized Audio and Unity Integrations

Hi,

Thanks in advance for your support :hugs:

I am working on localizing audio into 6 languages, in conjunction with the Unity Localization package. I am using FMOD 2.02 and Unity 2021.3.
I have followed the documentation here https://www.fmod.com/docs/2.02/unity/examples-programmer-sounds.html and I can play localized audio files using localized audio tables and programmer instruments.

However, I have some issues with this approach. It seems that, when working with audio tables:

  • I cannot use StudioEventEmitter for good editor support
  • I cannot preview and select events from a browser using EventReference
  • I cannot use a multi instrument for randomized variants
  • I cannot use Unity Timeline Clips
  • I cannot trim the events easily like with individual timeline events
  • The implementation in Unity feels very low level (with GCHandle.Alloc & co.)

What I was hoping for when I started working on this topic, was that I could take my existing bank with all the narrator-comment events I already have set up and turn it into a localized bank. I was hoping I could define the respective source files for each event & language. I was hoping that in Unity I could then load the respective bank and do everything else as usual.

Instead, I lose most of the FMOD integration, like timeline clips and preview in the editor, and have to rework my banks and Unity code to a very different workflow.

Am I missing something here? Do I have other options to realize localized audio and keep some of the Unity integrations and tools?

FMOD is fantastic! I highlighted some parts for easier reading, not to be passive-aggressive :slight_smile:

Hi!

Thanks for so clearly laying out your issues with the current way the FMOD Unity integration handles localized audio and programmer sounds. A new feature/improvement we’re currently tracking is some kind of Audio Table instrument which would simplify handling localized audio by:

  • Placing more of the complexity at design-time in Studio instead of in-engine
  • Simplifying the process needed to actually set audio table keys (i.e. a single API function instead of the rigmarole of setting a callback)

This would address a lot of your issues, and as such I’ve noted your interest in it internally.

As for your other points:

  • You can still use StudioEventEmitter to handle your event instances when setting a programmer sound callback, but you will need an additional script to access the underlying EventInstance with StudioEventEmitter.EventInstance and set a callback on it

  • When using Unity Timeline clips, if you set a callback on an event’s description in a different script with Studio::EventDescription::setCallback, the callback will be applied to subsequently created event instances, including those created by Unity Timeline clips

  • Would you mind elaborating on exactly what you mean by “I cannot preview and select events from a browser using EventReference”?
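In case it helps, here is a rough sketch of the first point, adapted from the programmer-sounds example in the docs linked above. The class name, field names, and the example key are placeholders, and note that attaching the callback after Play() may be too late if the programmer instrument sits at the very start of the event:

```csharp
using System;
using System.Runtime.InteropServices;
using UnityEngine;

// Hypothetical companion script: sits next to a StudioEventEmitter and hooks a
// programmer-sound callback onto the emitter's EventInstance at play time.
public class LocalizedEmitterKey : MonoBehaviour
{
    public FMODUnity.StudioEventEmitter emitter; // assigned in the inspector
    public string tableKey = "NPC_Greeting";     // hypothetical audio table key

    private FMOD.Studio.EVENT_CALLBACK callback;

    void Start()
    {
        callback = new FMOD.Studio.EVENT_CALLBACK(ProgrammerSoundCallback);

        // Play first so the emitter creates its instance, then attach the
        // callback to that instance.
        emitter.Play();
        FMOD.Studio.EventInstance instance = emitter.EventInstance;

        // Pin the key string so the static callback can read it via user data.
        GCHandle keyHandle = GCHandle.Alloc(tableKey);
        instance.setUserData(GCHandle.ToIntPtr(keyHandle));
        instance.setCallback(callback,
            FMOD.Studio.EVENT_CALLBACK_TYPE.CREATE_PROGRAMMER_SOUND
            | FMOD.Studio.EVENT_CALLBACK_TYPE.DESTROY_PROGRAMMER_SOUND
            | FMOD.Studio.EVENT_CALLBACK_TYPE.DESTROYED);
    }

    [AOT.MonoPInvokeCallback(typeof(FMOD.Studio.EVENT_CALLBACK))]
    static FMOD.RESULT ProgrammerSoundCallback(
        FMOD.Studio.EVENT_CALLBACK_TYPE type, IntPtr instancePtr, IntPtr parameterPtr)
    {
        var instance = new FMOD.Studio.EventInstance(instancePtr);
        instance.getUserData(out IntPtr userData);
        GCHandle keyHandle = GCHandle.FromIntPtr(userData);

        switch (type)
        {
            case FMOD.Studio.EVENT_CALLBACK_TYPE.CREATE_PROGRAMMER_SOUND:
            {
                var key = (string)keyHandle.Target;
                var parameter = (FMOD.Studio.PROGRAMMER_SOUND_PROPERTIES)
                    Marshal.PtrToStructure(parameterPtr,
                        typeof(FMOD.Studio.PROGRAMMER_SOUND_PROPERTIES));

                // Resolve the key against the loaded (localized) audio table
                // and create the sound. NONBLOCKING means the sound may still
                // be loading when the instrument fires.
                FMODUnity.RuntimeManager.StudioSystem.getSoundInfo(key,
                    out FMOD.Studio.SOUND_INFO info);
                FMODUnity.RuntimeManager.CoreSystem.createSound(info.name_or_data,
                    FMOD.MODE.LOOP_NORMAL | FMOD.MODE.CREATECOMPRESSEDSAMPLE
                    | FMOD.MODE.NONBLOCKING | info.mode,
                    ref info.exinfo, out FMOD.Sound sound);

                parameter.sound = sound.handle;
                parameter.subsoundIndex = info.subsoundindex;
                Marshal.StructureToPtr(parameter, parameterPtr, false);
                break;
            }
            case FMOD.Studio.EVENT_CALLBACK_TYPE.DESTROY_PROGRAMMER_SOUND:
            {
                var parameter = (FMOD.Studio.PROGRAMMER_SOUND_PROPERTIES)
                    Marshal.PtrToStructure(parameterPtr,
                        typeof(FMOD.Studio.PROGRAMMER_SOUND_PROPERTIES));
                new FMOD.Sound(parameter.sound).release();
                break;
            }
            case FMOD.Studio.EVENT_CALLBACK_TYPE.DESTROYED:
                keyHandle.Free(); // unpin the key string
                break;
        }
        return FMOD.RESULT.OK;
    }
}
```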

Hello Louis,

Thank you for your prompt response!
Indeed, your tracked improvements would be much appreciated :slight_smile:

StudioEventEmitter
What I meant by not being able to use StudioEventEmitter and EventReference is that I cannot browse the bank through the Unity inspector and play the files to select the right one.
My current workflow is to prepare events for each audio file in FMOD Studio and then select them with the search function provided by the Unity inspector for fields of type EventReference.
With audio tables, I have to know the file name or key and pass it as user data. This way I cannot tell in the editor whether the file actually exists and is the correct one, and if I rename the file, playback breaks.
It would be great to see and select from the available keys of an Audio Table in Unity.

Timeline
Interesting. I will try to register the callback this way. But how do I pass the key to specify the audio file? Different clips will refer to different audio files, after all.

[EDIT]
I have managed to adapt the FMODEventPlayable so that it passes user data and registers the callback. In play mode it actually plays the audio, but for the editor I’d also have to adapt the FMOD EditorUtils. I also get warnings, and overall I feel uncomfortable using this in production.

Multi-instrument
Is there a solution for playing randomized audio files? I can, of course, write a script to pass a randomized filename/key, but the available set will depend on the language. Some events might have 2 English variants but 3 French ones. Will I have to code a solution in Unity for this?

So that I can help diagnose whether any of them would be potential issues, what warnings are you receiving?

Unfortunately yes, you will need to code a solution in Unity for playing randomized audio files.

I’ve added both this and Unity Timeline support for programmer sound callbacks to our internal feature/improvement tracker.

So that I can help diagnose whether any of them would be potential issues, what warnings are you receiving?

I got two warnings, but I’ve reverted my attempt, so I cannot paste them.
One was about the sample not being loaded in time. I can fix that in the same way as was discussed elsewhere, by creating the sound ahead of time and passing it through user data. For this I have created a struct including the FMOD.Sound object and the FMOD.Studio.SOUND_INFO. Does that make sense?
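Roughly, the struct I mean looks like this (just a sketch, the names are mine):

```csharp
// Sketch: the sound is created ahead of time and the pair is handed to the
// programmer-sound callback via user data, so CREATE_PROGRAMMER_SOUND only
// has to assign the preloaded handle instead of creating the sound itself.
public struct PreloadedProgrammerSound
{
    public FMOD.Sound Sound;
    public FMOD.Studio.SOUND_INFO Info;

    public static PreloadedProgrammerSound Load(string tableKey)
    {
        FMODUnity.RuntimeManager.StudioSystem.getSoundInfo(tableKey,
            out FMOD.Studio.SOUND_INFO info);
        FMODUnity.RuntimeManager.CoreSystem.createSound(info.name_or_data,
            FMOD.MODE.LOOP_NORMAL | FMOD.MODE.CREATECOMPRESSEDSAMPLE
            | FMOD.MODE.NONBLOCKING | info.mode,
            ref info.exinfo, out FMOD.Sound sound);
        return new PreloadedProgrammerSound { Sound = sound, Info = info };
    }
}
```

In the callback I would then just assign `parameter.sound = preloaded.Sound.handle` and `parameter.subsoundIndex = preloaded.Info.subsoundindex`. Since the sound is created with NONBLOCKING, I assume I should also check Sound::getOpenState before triggering playback.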

The other warning appeared when the timeline ended before the clip had finished. I can probably handle this too, but I won’t be able to rebuild the preview capabilities of the usual utilities anyway, so at this point I think it makes more sense to trigger playback through events instead.

Unfortunately yes, you will need to code a solution in Unity for playing randomized audio files.

Got it. I built a ScriptableObject wrapper. I think this is something that would make a lot of sense in the Unity package.
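For reference, the rough shape of my wrapper (a sketch; the type and field names are placeholders):

```csharp
using UnityEngine;

// Sketch of a ScriptableObject asset: one per dialogue line, listing the
// available audio-table keys per locale so a random variant can be picked.
// Since variant counts differ per language (e.g. 2 English, 3 French),
// each locale keeps its own key list.
[CreateAssetMenu(menuName = "Audio/Localized Dialogue Line")]
public class LocalizedDialogueLine : ScriptableObject
{
    [System.Serializable]
    public struct VariantSet
    {
        public string locale;   // e.g. "en", "fr"
        public string[] keys;   // audio table keys for this locale's variants
    }

    public VariantSet[] variants;

    // Pick a random key for the given locale; fall back to the first set
    // so a missing locale still produces something playable.
    public string RandomKey(string locale)
    {
        foreach (var set in variants)
            if (set.locale == locale && set.keys.Length > 0)
                return set.keys[Random.Range(0, set.keys.Length)];
        return variants.Length > 0 && variants[0].keys.Length > 0
            ? variants[0].keys[0] : null;
    }
}
```

The key returned by RandomKey is then passed as user data to the programmer sound callback as usual.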

I’ve added both this and Unity Timeline support for programmer sound callbacks to our internal feature/improvement tracker.

Thank you!


That makes perfect sense.

That warning could be a number of things, some harmless and some not so much, but it is likely related to not releasing the sound at the appropriate time. Either way, I agree that triggering playback through events may be better for you given the current state of things.