Localized audio tables, bank management

Hey there!

I was wondering if it’s okay to put every audio file (containing voice overs, probably around 20k per language…) into a single bank, without splitting them into separate banks/audio tables for each scene in the game, and to load that bank (EN, DE, or whatever language) on the first scene of the game, using it in every other scene as well?

I can’t really afford to create multiple banks for the separate scenes of the game, since most of the files could be needed pretty much anywhere in the game. Organizing the files that way would cost me a lot of time, and time is tight.

So far, for what we need, it seems the best approach would be to not load the sample data for this bank and instead stream most of the files, or even all of them, including the shorter ones?

It seems Building Metadata and Assets to Separate Banks only helps reduce download size when the game is patched, but the docs also say “Separate metadata and sample banks require slightly more resource overhead”, which I don’t understand. Does this mean it would affect the performance of the game in any way, even in the slightest?

What do you think is the best approach in this scenario?

Thanks a lot in advance!

There isn’t anything inherently wrong with loading all your files into one audio table; it depends on how you go about organizing your audio files. Memory-wise, it will only use the amount needed for the samples loaded in using getSoundInfo.


The separation of metadata and samples only applies to regular banks, not to audio table banks, as the audio table bank technically is the sample bank.

Having multiple separate audio table banks is useful for tight memory requirements, download/install size management, or general organization. Depending on the size of your game, the benefits of multiple banks over a single one might be negligible, but it’s hard to tell without seeing the game first. Since you can’t really afford the time to organize into multiple banks, I wouldn’t worry too much about it.

Streaming assets is mostly helpful for longer audio files. It’s the difference between loading an entire 10 MB audio file up front and loading chunks of that 10 MB at a time. If your audio files are long (the default streaming threshold is 10 seconds or longer) and don’t need to start with exact timing (such as for lip syncing or musical beats), then streaming will be fine.
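To make the streamed-vs-loaded distinction concrete, here is a minimal sketch using the FMOD Core API. The file path is a made-up example; `coreSystem` is assumed to be an already-initialized `FMOD::System*`:

```cpp
// Assumes FMOD Core is initialized and coreSystem is a valid FMOD::System*.
// "vo/long_monologue.ogg" is a hypothetical asset path.
FMOD::Sound* sound = nullptr;

// FMOD_CREATESTREAM decodes the file in chunks at play time,
// so only a small buffer lives in memory at once.
coreSystem->createSound("vo/long_monologue.ogg",
                        FMOD_CREATESTREAM,
                        nullptr, &sound);

// For short assets, FMOD_CREATESAMPLE (fully decoded into memory) or
// FMOD_CREATECOMPRESSEDSAMPLE (kept compressed in memory) avoids the
// per-voice streaming overhead.
```

Each streaming sound keeps a file handle and decode buffer alive while it plays, which is why streaming everything, including very short files, trades memory savings for extra I/O and CPU work per voice.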

Thank you very much for the clarifications! :slight_smile:

When using this method, all event content, sound structure data, and media files are stored in one bank that is loaded into memory at the same time.

Loading a bank containing an Audio Table only loads the lookup metadata, i.e. the data that allows getSoundInfo to work. The metadata for the Sounds and the sample data itself are loaded by you when you call System::createSound. The first createSound call for an audio table loads all the table metadata, but only the sample data for the sound you selected.
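The two-step flow described above can be sketched as follows. This assumes an initialized `FMOD::Studio::System*` named `studioSystem` with the audio table bank already loaded; the key `"VO_line_042"` is a hypothetical audio table entry:

```cpp
// Step 1: look up the entry. This touches only the table's lookup
// metadata; no sample data is loaded yet.
FMOD_STUDIO_SOUND_INFO info = {};
studioSystem->getSoundInfo("VO_line_042", &info);

// Step 2: create the sound yourself via the Core API. This is the call
// that actually loads (or streams, depending on info.mode) the sample data.
FMOD::System* coreSystem = nullptr;
studioSystem->getCoreSystem(&coreSystem);

FMOD::Sound* sound = nullptr;
coreSystem->createSound(info.name_or_data,
                        info.mode | FMOD_NONBLOCKING,  // load asynchronously
                        &info.exinfo,
                        &sound);
```

In a typical programmer-instrument setup, the resulting `FMOD::Sound*` (and `info.subsoundindex`) is handed back to the event inside the `FMOD_STUDIO_EVENT_CALLBACK_CREATE_PROGRAMMER_SOUND` callback.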