Mixing different loops/tracks to "compose" a soundtrack at runtime in a Unity game

Hello,

This is more of a request for some guidance than a concrete question, but here it is:

We are making a Unity game in which one of the key mechanics is the player mixing and matching different kinds of tracks/loops to build a dynamic soundtrack at runtime (similar to a launchpad, or the game Incredibox). The player selects from five categories, each representing one part of the overall composition.

The issue we have run into with FMOD is that keeping these tracks in sync has been a challenge. If we assign each track to a separate FMOD event and start them all at the same time, they play slightly out of sync with each other. We’re currently trying a different approach (keeping all of the tracks in one FMOD event), but it is proving complicated and the end result is not always great.

So I wanted to ask a more general question, in case we’re approaching this the wrong way. How would you suggest we implement such a mechanic? Is it best to use a separate FMOD event for each loop, or should we keep all the tracks in one FMOD event and mute the unused ones (we’ve had issues hitting the active voice limit in the build, and having so many muted tracks seems wasteful)? Is there a way to dynamically load and swap these tracks at runtime? More broadly, is there an intended/obvious way to use FMOD to “compose” music at runtime from separate, interchangeable sets of predefined loops?

Thanks to anyone who takes the time to give us some advice/suggestions.

Event instances started in the same FMOD update should start in sync, but a few things can delay the start. The usual culprit is that the sample data your events play isn’t loaded when the event instance is started. If the sample data isn’t already loaded, the Studio system will load it automatically when you create the event instance, but if you start the instance before that loading has finished, playback will be delayed until the data is available.

In Unity, you can force sample data loading in FMOD → Edit Settings → Initialization, or by checking “Preload Sample Data” in a Studio Bank Loader component. If you’re directly using the Studio API, see the Studio Guide section on Sample Data Loading for more information.
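For reference, here is a minimal sketch of what manual preloading could look like in FMOD for Unity: a coroutine requests the event’s sample data up front and only starts the instance once loading has finished. The event path is a placeholder, and you’d want to add your own error handling.

```csharp
using System.Collections;
using UnityEngine;
using FMOD.Studio;
using FMODUnity;

public class PreloadedMusicStarter : MonoBehaviour
{
    // Placeholder event path - replace with the path of your own event.
    [SerializeField] private string musicEventPath = "event:/Music/Composition";

    private EventInstance musicInstance;

    private IEnumerator Start()
    {
        // Ask Studio to load the event's sample data up front.
        EventDescription description = RuntimeManager.GetEventDescription(musicEventPath);
        description.loadSampleData();

        // Wait until the sample data is resident so the instance's start
        // isn't delayed by loading.
        LOADING_STATE loadingState;
        do
        {
            yield return null;
            description.getSampleLoadingState(out loadingState);
        }
        while (loadingState != LOADING_STATE.LOADED && loadingState != LOADING_STATE.ERROR);

        description.createInstance(out musicInstance);
        musicInstance.start();
    }

    private void OnDestroy()
    {
        if (musicInstance.isValid())
        {
            musicInstance.stop(FMOD.Studio.STOP_MODE.ALLOWFADEOUT);
            musicInstance.release();
        }
    }
}
```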

If you need to keep multiple pieces of audio or multiple events exactly in sync, playing them all from a single event’s timeline is the usual approach, as FMOD can guarantee exact scheduling and synchronization in that case. You can do this either by placing all of your audio assets in a single event, or by splitting your assets into individual events and then playing those events from a parent event’s timeline.

Either way, the simplest way to swap between loops would be to use parameters to mute/unmute tracks as needed. As far as “wastefulness” goes, the only major impact having many muted tracks should have is on the memory required to load the banks where all the assets are stored. Muted tracks should virtualize, saving on CPU usage and not contributing to the active voice count.
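To make the parameter approach concrete, here is a minimal sketch, assuming FMOD for Unity, a single event containing all the loops on separate tracks, and one 0/1 parameter per track that you have mapped to that track’s volume in FMOD Studio. The event path and parameter names are placeholders.

```csharp
using FMOD.Studio;
using FMODUnity;
using UnityEngine;

public class SoundtrackMixer : MonoBehaviour
{
    // Placeholder path to a single event that holds every loop on its own track.
    [SerializeField] private string compositionEventPath = "event:/Music/Composition";

    private EventInstance composition;

    private void Start()
    {
        composition = RuntimeManager.CreateInstance(compositionEventPath);
        composition.start();
    }

    // parameterName is a placeholder such as "DrumsActive", assumed to be
    // automated onto the corresponding track's volume inside FMOD Studio.
    public void SetLayerActive(string parameterName, bool active)
    {
        composition.setParameterByName(parameterName, active ? 1f : 0f);
    }

    private void OnDestroy()
    {
        composition.stop(FMOD.Studio.STOP_MODE.ALLOWFADEOUT);
        composition.release();
    }
}
```

With this setup the timeline never stops; selecting or deselecting a category only changes which tracks are audible, so everything stays locked to the same timeline position.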

You mentioned that you’ve tried something like this already, and that the end result is not always great - are you running into any specific issues?

Hello again,
Sorry for the delayed reply; we wanted to run enough tests to give an adequate answer.
The main problem is that we start and stop tracks using FMOD parameters (essentially muting and unmuting them), and this causes them to fall out of sync even though all of the tracks live in a single event as separate instruments. In that case, preloading sample data makes no difference as far as we can tell.
You mentioned that muted tracks should virtualize, but in our testing virtualization turned out to be one of the biggest culprits behind the desynchronization.
Another problem was an insufficient real channel limit and codec count in the FMOD settings. With the default settings we’d experience tracks not being audible when they should be. For example, if we used an FMOD parameter to set one instrument to -80 dB, started the loop, and then gradually raised its volume to 0 dB via the same parameter, it would sometimes stay silent until the event reached the end of its loop (at which point it would instantly jump to full volume). We’re fairly sure this is because there are more instruments in that single event than there are real channels available, but that doesn’t explain why, even with most of them at -80 dB, the audible tracks don’t take over a real channel until the very end of the loop.

Now, as far as “wastefulness” goes: we currently have our tracks mostly in sync by setting the FMOD event to the highest priority (so the instruments within it don’t virtualize) and by increasing the real channel limit and codec counts in the FMOD settings. Unfortunately, that leaves us with an unreasonably high number of real channels active at all times (even though only a few of the instruments are set above -80 dB), and we can’t scale this further because FMOD doesn’t allow raising the maximum number of real channels/codecs past 256 (and we’re not even sure what the performance implications of having so many tracks playing at all times would be).
Here’s a link to a video (https://youtu.be/S7MV3RFwhuM) where you can hear tracks getting out of sync (both in the build and in FMOD Studio), as well as tracks not being audible until the end of the loop.

No problem about the delay, and thanks for doing the testing and laying out your findings so clearly. The video is especially helpful for conveying the issue, but the problem looks complex enough that I can’t diagnose it from this information alone.

If possible, can I get you to upload a packaged copy (File → Package Project) of your FMOD Studio project with all metadata/binaries included to your FMOD User profile so that I can take a closer look? If you could also record a profiler session where you reproduce the issue, that would be helpful for me to understand the behavior you’re seeing on your end. If I could get your FMOD Studio version as well, that’d be great.

Just to follow up on this: I can confirm that this is a known issue, and it has been fixed for our next major release. The recommended workaround in the meantime is to set your event’s priority to “Highest”, or to set your tracks to -79 dB instead of fully muting them, which will stop them from virtualizing.
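If the volume of each track is driven by a parameter value in dB, the -79 dB workaround can also be enforced from game code by never letting the parameter reach full silence. A minimal sketch, with a hypothetical helper and parameter naming:

```csharp
using FMOD.Studio;

public static class LayerVolume
{
    // Keep "muted" tracks just above FMOD's silence threshold so they are
    // not virtualized (assumes the parameter value is the track volume in dB).
    private const float MutedDb = -79f;

    public static void SetLayerVolumeDb(EventInstance instance, string parameterName, float volumeDb)
    {
        float clamped = volumeDb < MutedDb ? MutedDb : volumeDb;
        instance.setParameterByName(parameterName, clamped);
    }
}
```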

Hey Louis,

Thanks for looking into this issue. We’re glad to hear that a fix is coming. Do you happen to have a date for when the next major release will be out?

Unfortunately we can’t offer any date or timeline at present, but we hope to have some more news on it soon.