Handling a large voiced dialogue database with FMOD


I’m wondering if anyone has put together a tutorial on how to handle large amounts of dialogue (localized in several different languages) with FMOD and Unity?

I’ve read that using programmer sounds is the way forward due to memory constraints, but what would the end-to-end process be, from setting things up in FMOD Studio all the way to integration? How would we set things up to handle multiple versions of a line of dialogue recorded in multiple languages?

We’re writing a custom dialogue database in Unity that we’re going to hook up to our FMOD events but any help would be much appreciated.


Justin French
Founder / CEO / The Audio Guy
Dream Harvest Games

As of the time of writing (January of 2017), we are not aware of any detailed tutorial on building a dialogue system such as the one you describe.

To be honest, you are unlikely to find such a tutorial. The API calls required to make use of programmer sound modules are detailed in the FMOD Studio programmer’s API documentation; however, the choice of which audio file to play when a programmer sound module is triggered is something that must be done in your game’s code rather than in FMOD Studio (which is why they’re called ‘programmer’ sound modules). The optimal method of organising and selecting between audio files in your game’s code is entirely dependent on the requirements of your particular game; there is no one-size-fits-all solution.


Thanks Joseph,

So I’m presuming that I create template events for the different types of dialogue, such as Narrative Cutscene, In Game, Walla, etc., and then make use of programmer sounds to trigger individual dialogue assets (while doing a check to see which language has been selected) through a selected event at a given point in time?

Yep, that’s how programmer sounds are intended to be used.
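As a rough illustration of the language-check half of that setup, a custom dialogue database like the one you describe could map each line ID to one audio file per language and resolve the path in game code before triggering the event. Everything here (the `Language` enum, the class and method names) is hypothetical, not part of FMOD’s API:

```csharp
using System.Collections.Generic;

// Hypothetical language set for this example.
public enum Language { English, French, Japanese }

// Minimal sketch of a dialogue database: each line ID maps to one
// audio file path per language. The resolved path is what you would
// hand to the programmer sound callback.
public class DialogueDatabase
{
    private readonly Dictionary<string, Dictionary<Language, string>> lines =
        new Dictionary<string, Dictionary<Language, string>>();

    public Language CurrentLanguage { get; set; } = Language.English;

    public void AddLine(string lineId, Language language, string audioFilePath)
    {
        if (!lines.TryGetValue(lineId, out var perLanguage))
        {
            perLanguage = new Dictionary<Language, string>();
            lines[lineId] = perLanguage;
        }
        perLanguage[language] = audioFilePath;
    }

    // Returns the audio file for the current language, falling back to
    // English if the line has not been localised yet.
    public string Resolve(string lineId)
    {
        var perLanguage = lines[lineId];
        return perLanguage.TryGetValue(CurrentLanguage, out var path)
            ? path
            : perLanguage[Language.English];
    }
}
```

At playback time the game code would call something like `Resolve("intro_greeting")` and pass the returned path into whichever method starts the template event with a programmer sound, so the language check lives entirely in your database rather than in FMOD Studio.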