Performance downsides to using lots of events?

Hello,

I’m working on a project which will have hundreds of individual sound cues occurring during dialogue. We already have a system in place to input individual events, but as the number of required sounds grows I’m wondering if a switch to programmer events makes sense.

Is there a noticeable impact on computing resources in having hundreds of individual one-shot events (essentially one per clip) versus a small number of programmer events? In both scenarios the same amount of audio assets/content would be loaded and played back.

I’m not entirely sure how your usage of individual events differs from your usage of programmer instruments. If you have hundreds of events playing at the same time, FMOD should be able to handle that regardless of whether they use regular instruments or programmer instruments. If possible, pre-rendering multiple sounds into a single audio file will also help with performance.

For further information, you can view the performance benchmarks for individual platforms in our documentation:

https://www.fmod.com/resources/documentation-api?version=2.02&page=platforms-win.html#performance-reference

Hey Richard, thanks for the reply! To clarify, I’m not talking about layering multiple clips to form a sound or playing lots of sounds simultaneously; my question is whether there is a memory or processing consideration in using:

a) 1 programmer event referencing 100 sound files
-vs-
b) 100 individual FMOD events, each with one sound file

All of these are short, individual audio files that support (written) dialogue text on-screen.

Right now I’m using option b. There are creative examples out there of ways to use transitions to avoid using a ton of events, which makes me think that using a lot of events should be avoided where possible.

In this example the sound data to be loaded and played back is the same in both scenarios. It just feels like I may be doing something inefficient by creating so many unique events for the same function (‘at conversation node X, play Y’). In the end, though, these are just two ways to achieve the same result; I just want to make sure I’m not doing something that wouldn’t be recommended.

I see what you mean now. There really would not be any difference performance-wise between having a single programmer instrument event used to play one of 100 sound files and having 100 events that each reference a single sound file. Both require instantiating an event, and both have the same I/O usage of loading a single sound file.

The main difference between the two would be how they are set up and utilised. The simple event would require the path to the event, which will depend entirely on how you set up your FMOD Studio project. The programmer instrument event would require the key specified in the audio table. So really it would come down to which method of calling these sounds fits your workflow better.
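For reference, here is a rough sketch of how the two call sites differ using the FMOD Studio C++ API. It assumes an already-initialised Studio system and loaded banks, and the event paths ("event:/Dialogue/Line_042", "event:/Dialogue/Programmer") and audio table key ("Line_042") are just placeholder names for this example:

```cpp
#include <fmod_studio.hpp>
#include <fmod.hpp>

// --- Option b: one event per dialogue line, looked up by event path ---
void PlayDialogueEvent(FMOD::Studio::System* studio, const char* eventPath)
{
    FMOD::Studio::EventDescription* desc = nullptr;
    studio->getEvent(eventPath, &desc);          // e.g. "event:/Dialogue/Line_042"

    FMOD::Studio::EventInstance* instance = nullptr;
    desc->createInstance(&instance);
    instance->start();
    instance->release();                         // destroyed once playback stops
}

// --- Option a: one programmer event, line chosen by audio table key ---
struct ProgrammerSoundContext
{
    FMOD::Studio::System* studio;
    const char* key;                             // audio table key, e.g. "Line_042"
};

FMOD_RESULT F_CALLBACK DialogueCallback(FMOD_STUDIO_EVENT_CALLBACK_TYPE type,
                                        FMOD_STUDIO_EVENTINSTANCE* event,
                                        void* parameters)
{
    auto* instance = reinterpret_cast<FMOD::Studio::EventInstance*>(event);

    if (type == FMOD_STUDIO_EVENT_CALLBACK_CREATE_PROGRAMMER_SOUND)
    {
        auto* props = static_cast<FMOD_STUDIO_PROGRAMMER_SOUND_PROPERTIES*>(parameters);

        ProgrammerSoundContext* context = nullptr;
        instance->getUserData(reinterpret_cast<void**>(&context));

        // Resolve the audio table key to the sound data in the loaded bank.
        FMOD_STUDIO_SOUND_INFO info;
        context->studio->getSoundInfo(context->key, &info);

        FMOD::System* core = nullptr;
        context->studio->getCoreSystem(&core);

        FMOD::Sound* sound = nullptr;
        core->createSound(info.name_or_data,
                          FMOD_CREATECOMPRESSEDSAMPLE | FMOD_NONBLOCKING | info.mode,
                          &info.exinfo, &sound);

        props->sound = reinterpret_cast<FMOD_SOUND*>(sound);
        props->subsoundIndex = info.subsoundindex;
    }
    else if (type == FMOD_STUDIO_EVENT_CALLBACK_DESTROY_PROGRAMMER_SOUND)
    {
        auto* props = static_cast<FMOD_STUDIO_PROGRAMMER_SOUND_PROPERTIES*>(parameters);
        reinterpret_cast<FMOD::Sound*>(props->sound)->release();
    }
    return FMOD_OK;
}

void PlayDialogueLine(FMOD::Studio::System* studio, ProgrammerSoundContext* context)
{
    FMOD::Studio::EventDescription* desc = nullptr;
    studio->getEvent("event:/Dialogue/Programmer", &desc);   // the single shared event

    FMOD::Studio::EventInstance* instance = nullptr;
    desc->createInstance(&instance);
    instance->setUserData(context);
    instance->setCallback(DialogueCallback,
                          FMOD_STUDIO_EVENT_CALLBACK_CREATE_PROGRAMMER_SOUND |
                          FMOD_STUDIO_EVENT_CALLBACK_DESTROY_PROGRAMMER_SOUND);
    instance->start();
    instance->release();
}
```

Error checking is omitted for brevity, and the context struct needs to stay alive until the callback has fired. As you can see, the work per playback is essentially the same either way; the difference is whether the lookup happens by event path or by audio table key.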
