Audio clip length and performance

Hi all,

I’m trying to plan out my audio pipeline and have a question about pre-slicing audio clips. The target platform is iPhone, and the clips are set as streaming FADPCM.

I have an event with 4 nested event tracks. Each event track has 16 instruments on separate tracks, with clips that are around 3 minutes long.

Only one of these 16 tracks will play at a time (for a total of 4 playing clips, one on each event track), and the instruments are switched between during playback. Sometimes they lose synchronicity as they are enabled. I’m enabling them with parameters that change each instrument’s volume from -inf dB to 0 dB; player actions trigger these parameters.
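For context, here’s roughly how I’m driving these parameters from code. This is just a minimal sketch using the FMOD Studio C++ API; the event path and parameter name are placeholders for the real ones, and bank loading is omitted:

```cpp
// Minimal sketch (FMOD Studio C++ API). The event path and parameter
// name are placeholders, and bank loading is omitted for brevity.
#include "fmod_studio.hpp"
#include "fmod_errors.h"
#include <cstdio>

void ERRCHECK(FMOD_RESULT result)
{
    if (result != FMOD_OK)
        std::printf("FMOD error: %s\n", FMOD_ErrorString(result));
}

int main()
{
    FMOD::Studio::System *system = nullptr;
    ERRCHECK(FMOD::Studio::System::create(&system));
    ERRCHECK(system->initialize(64, FMOD_STUDIO_INIT_NORMAL, FMOD_INIT_NORMAL, nullptr));

    // ... load banks here; getEvent needs the relevant banks loaded ...

    FMOD::Studio::EventDescription *description = nullptr;
    ERRCHECK(system->getEvent("event:/Music/MainTheme", &description)); // placeholder path

    FMOD::Studio::EventInstance *instance = nullptr;
    ERRCHECK(description->createInstance(&instance));
    ERRCHECK(instance->start());

    // On a player action: drive the parameter that automates the target
    // instrument's track volume from -inf dB up to 0 dB.
    ERRCHECK(instance->setParameterByName("ActiveInstrument", 3.0f)); // placeholder name

    ERRCHECK(system->update()); // called once per game tick in practice

    // ... game loop ...

    ERRCHECK(system->release());
    return 0;
}
```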

Is there a benefit to slicing the clips into smaller (1-bar) sections?
I’m imagining it would force the clips to all play in sync at the start of the next bar.
Would this really help with the sync issue?
Is there also a benefit to memory usage (or is this dealt with under the hood)?

The sync issue is resolved if I don’t virtualise the tracks (e.g. by setting them to -70 dB rather than -infinity dB), but this means all 64 tracks are actually playing (albeit quietly) and, I assume, are held in memory.

Or is there a more elegant way to handle this situation?

Thanks for anyone’s thoughts!

Another user recently reported a similar issue, i.e. tracks with volume automation/modulation applied to them desynchronize when subject to virtualization. Does this sound like the same behaviour you’re experiencing? See: Mixing different loops/tracks to "compose" a soundtrack at runtime in a Unity game

A desync when virtualization occurs shouldn’t be happening. Could I get you to upload your FMOD Studio project (or a stripped down version where you can still reproduce the issues) to your FMOD user profile so that I can take a closer look? Note that you’ll have to register a project with us to do so.

After doing some digging and talking with the development team, I can confirm that this is a known issue with the virtualization system, and that it has been fixed in our next major release. That said, depending on how far along in development you are, I wouldn’t necessarily recommend upgrading to it when it becomes available, as we will first be releasing preview builds rather than a full release.

As you’ve mentioned, you can work around this by not fully muting the tracks, thereby not causing them to virtualize. An improvement on this, and the recommended workaround, is to set your event priority to “Highest”, which will prevent the muted tracks from virtualizing at all. Obviously, this means the muted tracks will still incur resource overhead, but ultimately this is the best solution available, so I would recommend implementing your events this way and then evaluating performance from there.
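For reference, if you ever need to apply this per-instance at runtime rather than via the Priority macro in the Studio tool, something like the following should work. This is just a sketch using the event properties API, where a channel priority of 0 is the most important:

```cpp
#include "fmod_studio.hpp"

// Sketch: the runtime equivalent of setting the event's Priority macro to
// "Highest" in FMOD Studio. Channel priority 0 is the most important.
FMOD_RESULT makeHighestPriority(FMOD::Studio::EventInstance *instance)
{
    return instance->setProperty(FMOD_STUDIO_EVENT_PROPERTY_CHANNELPRIORITY, 0.0f);
}
```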

If your assets are set to stream, they won’t incur much memory overhead, since they are streamed piecemeal on demand, but they will incur increased CPU and I/O overhead. If you’re finding that having all of them set to stream is too expensive, you may wish to set them to not stream and see how much memory they consume instead. I would recommend reading over our glossary entry on Sounds, and the links to each sound type within, to understand how asset loading/playback methods impact performance differently.
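To illustrate the trade-off at the Core API level (with Studio events the stream/in-memory choice is made per-asset in the Studio tool, so this is just a sketch, and the file path is a placeholder):

```cpp
#include "fmod.hpp"

// Sketch (FMOD Core API) of the two loading modes being compared. With
// Studio events the stream/in-memory choice is made per-asset in the
// Studio tool; the file path here is a placeholder.
void loadBothWays(FMOD::System *core)
{
    FMOD::Sound *streamed = nullptr;
    FMOD::Sound *inMemory = nullptr;

    // Streamed: small memory footprint, decoded piecemeal from disk,
    // at the cost of extra CPU and I/O for every playing stream.
    core->createSound("music_clip.wav", FMOD_CREATESTREAM, nullptr, &streamed);

    // Compressed sample: the whole compressed asset is held in memory
    // and decoded during playback; more memory, far less I/O.
    core->createSound("music_clip.wav", FMOD_CREATECOMPRESSEDSAMPLE, nullptr, &inMemory);
}
```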


Thanks for looking into this. It’s all super useful information. I’m right at the start of building out the audio pipeline, so I will look out for the next release (2.03).
