Multiple Banks

Quick question here: what is the general rationale for using multiple banks, as opposed to just one that houses every sound event you could want? Instinctively that seems like a bad idea, but I don’t specifically know why it would be a problem. Would it help save memory to split it up, or would it just be an organizational thing?

Many banks won't need to be loaded in every game area, so splitting things up saves you memory.

For example: a current project has a lot of interactive music, but it also has numerous cut-scene animatics and short films that break up the levels, so I'll use those cinematics as breakpoints to switch out banks and save memory by only having the pertinent game music loaded as needed. (If all goes well, that is.)

That makes sense, I’ll figure out a system for my sound designer to load and unload banks as needed in the game. Thanks!
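To make the load/unload pattern concrete, here's a minimal sketch of the bookkeeping such a system might do when the game changes areas. This is illustrative only, not the FMOD API itself: in a real project the load/unload steps would go through the FMOD Studio API (bank loading and `Bank::unload` calls), and the class and bank names below are hypothetical.

```python
# Sketch of per-area bank bookkeeping. The load/unload steps stand in for
# real FMOD Studio API calls; all names here are hypothetical.
class BankManager:
    def __init__(self):
        self.loaded = set()

    def set_area(self, wanted):
        """Swap to exactly the set of banks the new area needs."""
        for bank in sorted(self.loaded - wanted):   # drop banks no longer needed
            self.loaded.discard(bank)               # real code: unload the bank here
        for bank in sorted(wanted - self.loaded):   # bring in newly needed banks
            self.loaded.add(bank)                   # real code: load the bank file here

    def is_loaded(self, bank):
        return bank in self.loaded


mgr = BankManager()
mgr.set_area({"Common", "Level1_Music"})
# A cut scene acts as a breakpoint: swap level music for cinematic audio.
mgr.set_area({"Common", "Cinematic1"})
print(mgr.is_loaded("Level1_Music"))  # → False (unloaded at the breakpoint)
print(mgr.is_loaded("Cinematic1"))    # → True
```

The point of swapping at cinematic breakpoints is that the load/unload cost is hidden behind a moment when precise interactive timing doesn't matter.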

I haven’t delved too deeply into Studio yet, but in FMOD Ex/Designer I’ve used multiple banks for the following reasons. Some of these may not apply to Studio:

  • To have a mix of compression types. I’ve had trouble getting seamless loops/transitions in the music system with mp3/mp2, so I’ve used ADPCM for any wav files that were troublesome.

  • Any 5.1 streaming sounds (i.e. 6-channel wav files) should be in their own bank, or else the streaming buffer size for stereo files sharing the same bank will default to the 6-channel wav buffer size (i.e. 4 or 5 times more RAM usage than the stereo streaming buffer). This may not be an issue if you don’t have a lot of streaming sounds.

  • If you’ve got a lot of streaming sounds, and some are lower priority than others (such that you’d be okay with the lower-priority sounds sometimes not playing), you can move the lower-priority streaming sounds to their own bank and limit its “Max Streams” to something reasonable. This way you can have a lower overall number of streams by choking/limiting the low-priority bank. Also, in FMOD Ex/Designer, if high- and low-priority sounds share the same bank, the bank allocates streams on a first-come, first-served basis (i.e. it doesn’t care if a higher-priority sound requests a stream; if all the streams are taken, it won’t give one up). Therefore, by splitting up the high- and low-priority streams, you can better ensure that the high-priority sounds always get a stream, without setting “Max Streams” to something unreasonably high.
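A toy model makes the first-come, first-served behavior easy to see. This is a sketch under stated assumptions, not FMOD code: the pool sizes and sound names are made up, and the only property modeled is that a full pool refuses new requests rather than evicting an existing stream.

```python
# Toy model of first-come, first-served stream allocation, as described
# above for FMOD Ex/Designer. All names and numbers are illustrative.
class StreamPool:
    def __init__(self, max_streams):
        self.max_streams = max_streams
        self.active = 0

    def request(self):
        """Grant a stream if one is free; never evicts an existing stream."""
        if self.active < self.max_streams:
            self.active += 1
            return True
        return False  # pool full: even a high-priority request is refused


# One shared bank with 4 streams: low-priority sounds grab them all first.
shared = StreamPool(4)
for _ in range(4):
    shared.request()          # low-priority ambience fills the pool
print(shared.request())       # high-priority dialogue is refused → False

# Split banks: low-priority capped at 2, high-priority keeps its own 2.
low, high = StreamPool(2), StreamPool(2)
for _ in range(4):
    low.request()             # ambience can only choke its own pool
print(high.request())         # dialogue still gets a stream → True
```

Splitting the banks doesn't make the low-priority sounds any better behaved; it just fences off their greed so the high-priority bank always has headroom.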

Just to reiterate, what I’ve written may be moot in Studio. For instance, I haven’t seen a “Max streams” setting in Studio yet.


Here’s a screenshot of where the Max Voices setting is:

That’s not the same.

Your screenshot is showing the max voice polyphony of an event.

What Capybara is looking for is the Studio equivalent of the “Max Streams” value that was set up in the soundbank properties in Designer.

Ah, my mistake.